WorldWideScience

Sample records for models introduce mixtures

  1. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, in order to detect unwanted program effects in time and deal with them, as well as to reinforce the positive elements of the project's impact. In short, different evaluation models are used in order to minimize the losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  2. Pointer Sentinel Mixture Models

    OpenAIRE

    Merity, Stephen; Xiong, Caiming; Bradbury, James; Socher, Richard

    2016-01-01

Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words, even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models, which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier…
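The mixture described in the abstract can be sketched in a few lines of numpy. This is an illustration under assumed shapes and a fixed gate value, not the authors' implementation; in the paper the gate comes from a learned sentinel attention score.

```python
import numpy as np

def pointer_sentinel_mixture(p_vocab, context_ids, p_ptr, g):
    """Gated mixture of a softmax vocabulary distribution and a pointer
    distribution over words in the recent context.

    p_vocab     : (V,) softmax distribution over the vocabulary
    context_ids : (L,) vocabulary ids of the recent context words
    p_ptr       : (L,) attention distribution over context positions
    g           : scalar in [0, 1], mass given to the softmax component
                  (a fixed value here; learned from a sentinel in the paper)
    """
    p = g * np.asarray(p_vocab, dtype=float)
    # Scatter-add the pointer mass onto the vocabulary ids it points at
    # (duplicate context ids accumulate correctly with np.add.at).
    np.add.at(p, context_ids, (1.0 - g) * np.asarray(p_ptr, dtype=float))
    return p

p_vocab = np.array([0.7, 0.2, 0.1])   # softmax over a 3-word vocabulary
context = np.array([2, 2, 1])         # recent context: word ids 2, 2, 1
p_ptr   = np.array([0.5, 0.3, 0.2])   # attention over the 3 context positions
p = pointer_sentinel_mixture(p_vocab, context, p_ptr, g=0.6)
print(p)        # word 2, rare under the softmax, gains pointer mass
print(p.sum())  # still a valid distribution (sums to 1)
```

Because both components are distributions and the gate is convex, the output is again a proper distribution, which is the point of the sentinel construction.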

  3. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal distribution…

  4. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  6. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among…

  7. Introducing and modeling inefficiency contributions

    DEFF Research Database (Denmark)

    Asmild, Mette; Kronborg, Dorte; Matthews, Kent

    2016-01-01

Whilst Data Envelopment Analysis (DEA) is the most commonly used non-parametric benchmarking approach, the interpretation and application of DEA results can be limited by the fact that radial improvement potentials are identified across variables. In contrast, Multi-directional Efficiency Analysis (MEA) determines improvement potentials in each variable separately… This paper introduces so-called inefficiency contributions, which are defined as the relative contributions from specific variables to the overall levels of inefficiency. A statistical model for distinguishing the inefficiency contributions between subgroups is proposed, and the method is illustrated on a data set on Chinese banks.

  8. Multilevel Mixture Factor Models

    Science.gov (United States)

    Varriale, Roberta; Vermunt, Jeroen K.

    2012-01-01

    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  9. Hierarchical mixture models for assessing fingerprint individuality

    OpenAIRE

    Dass, Sarat C.; Li, Mingfei

    2009-01-01

The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individuality…

  10. The Supervised Learning Gaussian Mixture Model

    Institute of Scientific and Technical Information of China (English)

    马继涌; 高文

    1998-01-01

The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters in the model are derived only from the training samples of one class, without taking into account the sample distributions of other classes; hence, its recognition accuracy is sometimes not ideal. This paper introduces an approach for estimating the parameters of a GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM, and an experimental example has shown its effectiveness. The experimental results show that the recognition accuracy obtained by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of a Multilayer Perceptron (MLP).
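The per-class (unsupervised) GMM baseline that the paper improves upon can be sketched with scikit-learn: fit one GMM per class on that class's samples only, then classify by comparing class-conditional log-likelihoods. This is an illustrative sketch with made-up data, not the SLGMM algorithm itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated synthetic classes in 2-D (invented for the demo).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))   # class 0 samples
X1 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))   # class 1 samples

# Baseline GMM classifier: each class's GMM is trained only on its own
# samples, ignoring the other class (the limitation SLGMM addresses).
gmm0 = GaussianMixture(n_components=2, random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=2, random_state=0).fit(X1)

def classify(x):
    # Assign each point to the class whose GMM gives the higher log-likelihood.
    scores = np.stack([gmm0.score_samples(x), gmm1.score_samples(x)])
    return np.argmax(scores, axis=0)

test = np.array([[0.1, -0.2], [2.9, 3.2]])
print(classify(test))   # → [0 1]
```

SLGMM's contribution, per the abstract, is to adjust these per-class parameter estimates using the other classes' sample distributions; that discriminative update is not reproduced here.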

  11. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current…

  12. Introducing Waqf Based Takaful Model in India

    Directory of Open Access Journals (Sweden)

    Syed Ahmed Salman

    2014-03-01

Objective – Waqf is a unique feature of the socioeconomic system of Islam in a multi-religious and developing country like India. India is a country rich in waqf assets, and the history of waqf in India can be traced back 800 years. Most researchers suggest how waqf can be used as a tool to mitigate the poverty of Muslims. India has the third largest Muslim population after Indonesia and Pakistan. However, the majority of Muslims belong to the low-income group and are in need of help. It is believed that waqf can be utilized for the betterment of the Indian Muslim community. Among the available uses of waqf assets, the main objective of this paper is to introduce a waqf-based takaful model in India. In addition, how this proposed model can be adopted in India is highlighted. Methods – Library research is applied, since this paper relies on secondary data, by thoroughly reviewing the most relevant literature. Result – India, as a country rich in waqf assets, should fully utilize these resources to help Muslims through takaful. Conclusion – In this study, we have proposed a waqf-based takaful model combining the concepts of mudarabah and wakalah for India. We recommend this model based on the background and situation of the country. Since we have not tested the viability of this model in India, future research should continue with this testing. Keywords: Waqf, Takaful, Poverty, India

  13. Introducing tropical lianas in a vegetation model

    Science.gov (United States)

Verbeeck, Hans; De Deurwaerder, Hannes; Brugnera, Manfredo di Procia e.; Krshna Moorthy Paravathi, Sruthi; Pausenberger, Nancy; Roels, Jana; Kearsley, Elizabeth

    2016-04-01

Tropical forests are essential components of the Earth system and play a critical role in land surface feedbacks to climate change. These forests are currently experiencing large-scale structural changes, including an increase in liana abundance and biomass. This liana proliferation might have large impacts on the carbon cycle of tropical forests. However, no global vegetation model currently accounts for lianas. The TREECLIMBERS project (ERC starting grant) aims to introduce lianas into a vegetation model for the first time. The project attempts to reach this challenging goal by performing a global meta-analysis of liana data and by collecting new data in South American forests. These new and existing datasets form the basis of a new liana plant functional type (PFT) that will be included in the Ecosystem Demography model (ED2). This presentation will give an overview of the current progress of the TREECLIMBERS project. Liana inventory data collected in French Guiana along a forest disturbance gradient show the relation between liana abundance and disturbance. Xylem water isotope analysis indicates that trees and lianas can rely on different soil water resources. New modelling concepts for liana PFTs will be presented, and in-situ leaf gas exchange and sap flow data are used to parameterize water and carbon fluxes for this new PFT. Finally, ongoing terrestrial LiDAR observations of liana-infested forest will be highlighted.

  14. Essays on Finite Mixture Models

    NARCIS (Netherlands)

    A. van Dijk (Bram)

    2009-01-01

Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the…

  16. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah

    2015-05-01

We present the properties of a bathtub-curve reliability model, introduced by Idée and Pierrat (2010), that has both sufficient adaptability and a minimal number of parameters. This model is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model, with a squared-error loss function and non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data points relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  17. [Comparison of two spectral mixture analysis models].

    Science.gov (United States)

    Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wang, Li-Ming

    2009-10-01

A spectral mixture analysis experiment was designed to compare the spectral unmixing performance of linear spectral mixture analysis (LSMA) and constrained linear spectral mixture analysis (CLSMA). In the experiment, red, green, blue and yellow were printed on a coarse album as four end members. Thirty-nine mixed samples were made according to each end member's percentage in one pixel. A field spectrometer was then positioned over the center of each mixed sample to measure the spectra one by one. The percentage of each end member in the pixel was retrieved using the LSMA and CLSMA models. Finally, the normalized mean squared error between the retrieved and true percentages was calculated to compare the two models' unmixing performance. Results showed that, using all bands in the spectrum, the total error of LSMA was 0.30087 and that of CLSMA was 0.37552, so the LSMA error was 0.075 lower. After band selection, the total error of LSMA was 0.28095 and that of CLSMA was 0.29805, so the LSMA error was 0.017 lower. Therefore, whether all or selected bands were used, the accuracy of LSMA was better than that of CLSMA: during spectrum measurement, errors caused by the instrument or the operator were introduced into the model, so the measured data could not meet the strict requirements of CLSMA, which reduced its accuracy. Furthermore, the total error of LSMA using selected bands was 0.02 less than that using all bands, and the total error of CLSMA using selected bands was 0.077 less than that using all bands. So, within the same model, spectral unmixing using selected bands, which reduces the correlation of the end members' spectra, was superior to using all bands.
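Unconstrained LSMA, as described above, amounts to ordinary least squares against the end-member matrix. A minimal numpy sketch follows; the end-member spectra are invented for illustration (the paper's measured spectra are not reproduced), and CLSMA's sum-to-one and non-negativity constraints are only noted, not implemented.

```python
import numpy as np

# Linear spectral mixture model: a mixed pixel spectrum is x = E @ a, where
# E is the (bands x end members) end-member matrix and a holds the abundance
# fractions. LSMA solves ordinary least squares for a; CLSMA additionally
# enforces sum(a) == 1 and a >= 0, which makes it sensitive to measurement
# error in E and x.
E = np.array([[0.9, 0.1, 0.2],     # band 1 reflectance of 3 end members
              [0.2, 0.8, 0.3],     # band 2
              [0.1, 0.2, 0.9],     # band 3
              [0.4, 0.5, 0.1]])    # band 4

a_true = np.array([0.5, 0.3, 0.2])            # true abundances (sum to 1)
x = E @ a_true                                 # noise-free mixed spectrum

a_hat, *_ = np.linalg.lstsq(E, x, rcond=None)  # unconstrained LSMA inversion
nmse = np.sum((a_hat - a_true) ** 2) / np.sum(a_true ** 2)
print(np.round(a_hat, 6))   # recovers the abundances exactly (no noise)
print(nmse)                 # normalized mean squared error, ~0
```

With measurement noise added to `x`, the recovered abundances deviate and the normalized MSE becomes the comparison metric used in the experiment.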

  18. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  19. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.;

    2006-01-01

The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented… The suggested models are intended for incorporation into an existing analysis tool, CyNC, based on the MATLAB/Simulink framework for graphical system analysis and design.

  20. Introducing Seismic Tomography with Computational Modeling

    Science.gov (United States)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should therefore involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge while simultaneously focusing on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) easy and intuitive creation of mathematical models using just standard mathematical notation; 2) simultaneous exploration of images, tables, graphs and object animations; 3) attribution of mathematical properties expressed in the models to animated objects; and finally 4) computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which enable students to grasp the fundamentals of seismic tomography easily. The simulations make the lecture more interactive and give students the opportunity to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  1. On the mixture model for multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Manninen, M.; Taivassalo, V. [VTT Energy, Espoo (Finland). Nuclear Energy; Kallio, S. [Aabo Akademi, Turku (Finland)

    1996-12-31

    Numerical flow simulation utilising a full multiphase model is impractical for a suspension possessing wide distributions in the particle size or density. Various approximations are usually made to simplify the computational task. In the simplest approach, the suspension is represented by a homogeneous single-phase system and the influence of the particles is taken into account in the values of the physical properties. This study concentrates on the derivation and closing of the model equations. The validity of the mixture model is also carefully analysed. Starting from the continuity and momentum equations written for each phase in a multiphase system, the field equations for the mixture are derived. The mixture equations largely resemble those for a single-phase flow but are represented in terms of the mixture density and velocity. The volume fraction for each dispersed phase is solved from a phase continuity equation. Various approaches applied in closing the mixture model equations are reviewed. An algebraic equation is derived for the velocity of a dispersed phase relative to the continuous phase. Simplifications made in calculating the relative velocity restrict the applicability of the mixture model to cases in which the particles reach the terminal velocity in a short time period compared to the characteristic time scale of the flow of the mixture. (75 refs.)
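In standard notation, the mixture quantities referred to in the abstract can be written as follows (a sketch consistent with the description, not a transcription of the report's equations):

```latex
% Mixture density and mass-averaged mixture velocity over phases k,
% with volume fractions \alpha_k:
\rho_m = \sum_k \alpha_k \rho_k, \qquad
\mathbf{u}_m = \frac{1}{\rho_m} \sum_k \alpha_k \rho_k \mathbf{u}_k

% Mixture continuity equation, which resembles its single-phase counterpart
% but is written in terms of the mixture density and velocity:
\frac{\partial \rho_m}{\partial t}
  + \nabla \cdot \left( \rho_m \mathbf{u}_m \right) = 0
```

The closure problem the report discusses then reduces to expressing each dispersed phase's velocity relative to the continuous phase algebraically, which is valid only when the particles reach terminal velocity quickly compared to the flow's characteristic time scale.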

  2. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic, cadmium and lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic, cadmium and lead. Oxidative damage and antioxidant response were determined. Metal mixture treatment induced an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affected the transforming capacity: while only a diminution was found in the initiation phase, a total block of the transforming capacity was observed in the promotion phase. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  3. Introducing AORN's new model for evidence rating.

    Science.gov (United States)

    Spruce, Lisa; Van Wicklin, Sharon A; Hicks, Rodney W; Conner, Ramona; Dunn, Debra

    2014-02-01

    Nurses today are expected to implement evidence-based practices in the perioperative setting to assess and implement practice changes. All evidence-based practice begins with a question, a practice problem to address, or a needed change that is identified. To assess the question, a literature search is performed and relevant literature is identified and appraised. The types of evidence used to inform practice can be scientific research (eg, randomized controlled trials, systematic reviews) or nonresearch evidence (eg, regulatory and accrediting agency requirements, professional association practice standards and guidelines, quality improvement project reports). The AORN recommended practices are a synthesis of related knowledge on a given topic, and the authorship process begins with a systematic review of the literature conducted in collaboration with a medical librarian. At least two appraisers independently evaluate the applicable literature for quality and strength by using the AORN Research Appraisal Tool and AORN Non-Research Appraisal Tool. To collectively appraise the evidence supporting particular practice recommendations, the AORN recommended practices authors have implemented a new evidence rating model that is appropriate for research and nonresearch literature and that is relevant to the perioperative setting.

  4. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  5. Spatial mixture multiscale modeling for aggregated health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-09-01

    One of the main goals in spatial epidemiology is to study the geographical pattern of disease risks. For such purpose, the convolution model composed of correlated and uncorrelated components is often used. However, one of the two components could be predominant in some regions. To investigate the predominance of the correlated or uncorrelated component for multiple scale data, we propose four different spatial mixture multiscale models by mixing spatially varying probability weights of correlated (CH) and uncorrelated heterogeneities (UH). The first model assumes that there is no linkage between the different scales and, hence, we consider independent mixture convolution models at each scale. The second model introduces linkage between finer and coarser scales via a shared uncorrelated component of the mixture convolution model. The third model is similar to the second model but the linkage between the scales is introduced through the correlated component. Finally, the fourth model accommodates for a scale effect by sharing both CH and UH simultaneously. We applied these models to real and simulated data, and found that the fourth model is the best model followed by the second model.

  6. Identifiability of large phylogenetic mixture models.

    Science.gov (United States)

    Rhodes, John A; Sullivant, Seth

    2012-01-01

Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  7. Periphyton responses to nutrient and atrazine mixtures introduced through agricultural runoff

    Science.gov (United States)

Agricultural runoff often contains pollutants with potential antagonistic impacts on periphyton, such as nutrients and atrazine. The individual influence of these pollutants on periphyton has been extensively studied, but their impact when introduced in a more realistic scenario of multiple agricultural…

  8. Flexible Rasch Mixture Models with Package psychomix

    Directory of Open Access Journals (Sweden)

    Hannah Frick

    2012-05-01

Measurement invariance is an important assumption in the Rasch model, and mixture models constitute a flexible way of checking for a violation of this assumption by detecting unobserved heterogeneity in item response data. Here, a general class of Rasch mixture models is established and implemented in R, using conditional maximum likelihood estimation of the item parameters (given the raw scores) along with flexible specification of two model building blocks: (1) mixture weights for the unobserved classes can be treated as model parameters or based on covariates in a concomitant variable model; (2) the distribution of raw score probabilities can be parametrized in two possible ways, either using a saturated model or a specification through mean and variance. The function raschmix() in the R package psychomix provides these models, leveraging the general infrastructure for fitting mixture models in the flexmix package. Usage of the function and its associated methods is illustrated on artificial data as well as empirical data from a study of verbally aggressive behavior.

  9. Lattice Model for water-solute mixtures

    OpenAIRE

    Furlan, A. P.; Almarza, N. G.; M. C. Barbosa

    2016-01-01

A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties…

  10. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  11. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM

    1998-01-01

We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo-likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample design…

  12. Population mixture model for nonlinear telomere dynamics

    Science.gov (United States)

    Itzkovitz, Shalev; Shlush, Liran I.; Gluck, Dan; Skorecki, Karl

    2008-12-01

    Telomeres are DNA repeats protecting chromosomal ends which shorten with each cell division, eventually leading to cessation of cell growth. We present a population mixture model that predicts an exponential decrease in telomere length with time. We analytically solve the dynamics of the telomere length distribution. The model provides an excellent fit to available telomere data and accounts for the previously unexplained observation of telomere elongation following stress and bone marrow transplantation, thereby providing insight into the nature of the telomere clock.
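A toy simulation in the spirit of this abstract (a sketch under assumed parameters, not the authors' analytical model): the population mixes dividing cells, whose telomeres shorten by a fixed amount per division, with cells that have dropped below a cessation threshold and stopped dividing, so the population mean telomere length decays over time.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, delta, threshold = 10000, 50.0, 4000.0
# Initial telomere lengths in base pairs (values invented for the demo).
lengths = rng.normal(10000.0, 500.0, size=n_cells)

means = []
for t in range(60):
    means.append(lengths.mean())
    dividing = lengths > threshold                    # senescent cells stop
    divides = dividing & (rng.random(n_cells) < 0.5)  # half divide per step
    lengths = lengths - delta * divides               # shorten on division

print(round(means[0]), round(means[-1]))  # mean length decreases over time
```

As ever more cells fall below the threshold, the rate of decline slows, which is the mixture effect behind the model's exponential (rather than linear) decrease in mean telomere length.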

  13. Self-assembly models for lipid mixtures

    Science.gov (United States)

    Singh, Divya; Porcar, Lionel; Butler, Paul; Perez-Salas, Ursula

    2006-03-01

Solutions of mixed long and short (detergent-like) phospholipids, referred to as "bicelle" mixtures in the literature, are known to form a variety of different morphologies based on their total lipid composition and temperature, in a complex phase diagram. Some of these morphologies have been found to orient in a magnetic field, and consequently bicelle mixtures are widely used to study the structure of soluble as well as membrane-embedded proteins using NMR. In this work, we report on the low-temperature phase of the DMPC and DHPC bicelle mixture, where there is agreement on the discoid structures but where molecular packing models are still being contested. The most widely accepted packing arrangement, first proposed by Vold and Prosser, had the lipids completely segregated in the disk: DHPC in the rim and DMPC in the disk. Using data from small-angle neutron scattering (SANS) experiments, we show how the radius of the planar domain of the disks is governed by the effective molar ratio qeff of lipids in the aggregate, and not by the molar ratio q (q = [DMPC]/[DHPC]) as has been understood previously. We propose a new quantitative (packing) model and show that in this self-assembly scheme, qeff is the real determinant of disk sizes. Based on qeff, a master equation can then scale the radii of disks from mixtures with varying q and total lipid concentration.

  14. Hard-sphere kinetic models for inert and reactive mixtures

    Science.gov (United States)

    Polewczak, Jacek

    2016-10-01

I consider stochastic variants of a simple reacting sphere (SRS) kinetic model (Xystris and Dahler 1978 J. Chem. Phys. 68 387-401, Qin and Dahler 1995 J. Chem. Phys. 103 725-50, Dahler and Qin 2003 J. Chem. Phys. 118 8396-404) for dense reacting mixtures. In contrast to line-of-center models of chemically reactive mixtures, in the SRS kinetic model the microscopic reversibility (detailed balance) can easily be shown to be satisfied, and thus all mathematical aspects of the model can be fully justified. In the SRS model, the molecules behave as if they were single mass points with two internal states. Collisions may alter the internal states of the molecules, and this occurs when the kinetic energy associated with the reactive motion exceeds the activation energy. Reactive and non-reactive collision events are considered to be hard-sphere-like. I consider a four-component mixture A, B, A*, B*, in which the chemical reactions are of the type A + B ⇌ A* + B*, with A* and B* being distinct species from A and B. This work extends joint work with George Stell to kinetic models of dense inert and reactive mixtures. The idea of introducing a smearing-type effect in the collisional process results in a new class of stochastic kinetic models for both inert and reactive mixtures. In this paper, important new mathematical properties of such systems of kinetic equations are proven. New results for the stochastic revised Enskog system for inert mixtures are also provided.

  15. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
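    As a sketch of the structure this abstract describes, the snippet below samples from a three-component Gaussian mixture and checks that the sample mean matches the weight-averaged component means. The weights, means and standard deviations are invented for illustration and are not the fitted HRV parameters from the paper.

```python
import random

# Illustrative three-component Gaussian mixture (weights, means and
# standard deviations are made up for demonstration, not fitted HRV values).
weights = [0.5, 0.3, 0.2]
means = [0.04, 0.15, 0.30]
sds = [0.015, 0.04, 0.06]

def sample_mixture(n, seed=0):
    """Draw n samples: pick a component by weight, then draw from its Gaussian."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for w, m, s in zip(weights, means, sds):
            acc += w
            if u < acc:
                out.append(rng.gauss(m, s))
                break
        else:
            # Guard against floating-point round-off in the cumulative weights.
            out.append(rng.gauss(means[-1], sds[-1]))
    return out

xs = sample_mixture(50000)
sample_mean = sum(xs) / len(xs)
# The mixture mean is the weight-averaged component mean.
mixture_mean = sum(w * m for w, m in zip(weights, means))
```

    The same two-stage draw (component label, then observation) underlies any finite mixture model, whatever the component densities.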

  16. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose;

    2013-01-01

    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection... for some individuals, in order to minimize this loss in the discriminatory power. The distribution of the continuous antibody response against MAP has been obtained for healthy, MAP-infected and MAP-infectious cows of different age groups. The overall power of the milk-ELISA to discriminate between healthy...

  17. Video compressive sensing using Gaussian mixture models.

    Science.gov (United States)

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

  18. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...

  19. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  1. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

    The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.

  2. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    Science.gov (United States)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed-particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop-and-roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and were free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered

  3. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    Accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be due to the fact that CO2 has a large quadrupole moment which the classical models do not explicitly account for. In this thesis, in an attempt to obtain a physically more consistent model, the cubic-plus-association (CPA) EoS is extended to include quadrupolar interactions. The new quadrupolar CPA (qCPA) can be used with the experimental value of the quadrupole moment... Both models performed satisfactorily and predicted the general behavior of the systems, but qCPA used fewer adjustable parameters to achieve similar predictions. It has been demonstrated that qCPA is a promising model which, compared to CPA, systematically improves the predictions of the experimentally determined phase...

  4. Mixture latent autoregressive models for longitudinal data

    CERN Document Server

    Bartolucci, Francesco; Pennoni, Fulvia

    2011-01-01

    Many relevant statistical and econometric models for the analysis of longitudinal data include a latent process to account for the unobserved heterogeneity between subjects in a dynamic fashion. Such a process may be continuous (typically an AR(1)) or discrete (typically a Markov chain). In this paper, we propose a model for longitudinal data which is based on a mixture of AR(1) processes with different means and correlation coefficients, but with equal variances. This model belongs to the class of models based on a continuous latent process, and thus it has a natural interpretation in many contexts of application, but it is more flexible than other models in this class, reaching a goodness-of-fit similar to that of a discrete latent process model, with a reduced number of parameters. We show how to perform maximum likelihood estimation of the proposed model by the joint use of an Expectation-Maximisation algorithm and a Newton-Raphson algorithm, implemented by means of recursions developed in the hidden Mark...
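    The latent structure described above can be sketched as follows: each subject is assigned to a latent AR(1) component with its own mean and autocorrelation but a common stationary variance. The component parameters and panel dimensions below are invented for illustration, not taken from the paper.

```python
import random

# Two latent AR(1) components with different means and correlations but
# equal stationary variance, as in the abstract (parameters illustrative).
params = [  # (mixing weight, mean, autocorrelation)
    (0.6, 0.0, 0.9),
    (0.4, 2.0, 0.3),
]
sigma2 = 1.0  # common stationary variance shared by both components

def simulate_subject(T, rng):
    # Draw the subject's latent component by its mixing weight.
    u, acc = rng.random(), 0.0
    for w, mu, rho in params:
        acc += w
        if u < acc:
            break
    # Innovation variance chosen so the stationary variance equals sigma2.
    innov_sd = (sigma2 * (1.0 - rho * rho)) ** 0.5
    y = rng.gauss(mu, sigma2 ** 0.5)   # start from the stationary distribution
    path = [y]
    for _ in range(T - 1):
        y = mu + rho * (y - mu) + rng.gauss(0.0, innov_sd)
        path.append(y)
    return path

rng = random.Random(1)
panel = [simulate_subject(100, rng) for _ in range(200)]  # 200 subjects, T = 100
grand_mean = sum(sum(p) for p in panel) / (200 * 100)
```

    Estimating such a model would then alternate between inferring each subject's component (E-step) and updating the component parameters (M-step), as the abstract's EM/Newton-Raphson scheme does.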

  5. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    ...i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and numbers of dynamic components and includes the static one-component model as a special case. We exploit that the WMM...

  6. Introducing A Hybrid Data Mining Model to Evaluate Customer Loyalty

    Directory of Open Access Journals (Sweden)

    H. Alizadeh

    2016-12-01

    Full Text Available The main aim of this study was to introduce a comprehensive model of bank customers' loyalty evaluation based on the assessment and comparison of different clustering methods' performance. This study also pursues the following specific objectives: (a) using different clustering methods and comparing them for customer classification, (b) finding the effective variables in determining customer loyalty, and (c) using different collective classification methods to increase the modeling accuracy and comparing the results with the basic methods. Since loyal customers generate more profit, this study aims at introducing a two-step model for classification of customers and their loyalty. For this purpose, various methods of clustering such as K-medoids, X-means and K-means were used, the last of which outperformed the other two when compared using the Davies-Bouldin index. Customers were clustered using K-means and the members of the resulting four clusters were analyzed and labeled. Then, a predictive model was run based on demographic variables of customers using various classification methods such as DT (Decision Tree), ANN (Artificial Neural Networks), NB (Naive Bayes), KNN (K-Nearest Neighbors) and SVM (Support Vector Machine), as well as their bagging and boosting variants, to predict the class of loyal customers. The results showed that bagging-ANN was the most accurate method in predicting loyal customers. This two-stage model can be used in banks and financial institutions with similar data to identify the type of future customers.

  7. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
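    The maximum likelihood fit of a two-component normal mixture is usually computed with the EM algorithm; a minimal sketch on synthetic data is below (the paper's actual data are stock-market and rubber prices, which are not reproduced here, and the true parameters used to generate the data are invented).

```python
import math
import random

# Synthetic two-component data: 30% N(-2, 1) and 70% N(3, 1).
rng = random.Random(0)
data = ([rng.gauss(-2.0, 1.0) for _ in range(300)] +
        [rng.gauss(3.0, 1.0) for _ in range(700)])

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Crude initial guesses for (weight, means, standard deviations).
w, mu1, mu2, sd1, sd2 = 0.5, min(data), max(data), 1.0, 1.0
for _ in range(200):  # EM iterations
    # E-step: responsibility of component 1 for each observation.
    r = [w * normal_pdf(x, mu1, sd1) /
         (w * normal_pdf(x, mu1, sd1) + (1 - w) * normal_pdf(x, mu2, sd2))
         for x in data]
    # M-step: responsibility-weighted means, variances and mixing weight.
    n1 = sum(r)
    n2 = len(data) - n1
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
    sd1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
    sd2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
    w = n1 / len(data)
```

    Each EM iteration is guaranteed not to decrease the likelihood, which is why the procedure serves as the standard route to the maximum likelihood estimates the abstract relies on.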

  8. Mixture Model and MDSDCA for Textual Data

    Science.gov (United States)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  9. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Science.gov (United States)

    Holmes, Ian; Harris, Keith; Quince, Christopher

    2012-01-01

    We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. This data can be represented as a frequency matrix giving the number of times each taxa is observed in each sample. The samples have different size, and the matrix is sparse, as communities are diverse and skewed to rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities', and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework' (http://code.google.com/p/microbedmm/). This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogenous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle (AKP)' applied to microbial communities: disturbed states having many more configurations than undisturbed. 
We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with a more variable
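    The generative process described above (choose a metacommunity, draw taxa probabilities from its Dirichlet component, then draw counts multinomially) can be sketched as follows; the mixing weights, hyperparameters and sequencing depth are invented for illustration.

```python
import random

rng = random.Random(42)
components = [  # (mixing weight, Dirichlet hyperparameters over 4 taxa)
    (0.7, [10.0, 2.0, 1.0, 1.0]),   # e.g. a homogeneous, one-genus-dominated cluster
    (0.3, [2.0, 2.0, 2.0, 2.0]),    # a more variable community
]

def dirichlet(alpha):
    """Sample taxa probabilities via normalized Gamma draws."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_community(depth):
    # Choose the latent mixture component (the 'metacommunity').
    u, acc = rng.random(), 0.0
    for w, alpha in components:
        acc += w
        if u < acc:
            break
    p = dirichlet(alpha)            # taxa probabilities for this sample
    counts = [0] * len(p)
    for _ in range(depth):          # multinomial sampling, one read at a time
        v, c = rng.random(), 0.0
        for i, pi in enumerate(p):
            c += pi
            if v < c:
                counts[i] += 1
                break
        else:
            counts[-1] += 1         # guard against floating-point round-off
    return counts

# A frequency matrix like the one the abstract describes: samples x taxa.
table = [sample_community(1000) for _ in range(20)]
```

    Fitting the model inverts this process: given the count table, one infers the component assignments and hyperparameters, which is what the authors' evidence-framework software does.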

  10. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Directory of Open Access Journals (Sweden)

    Ian Holmes

    Full Text Available We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. This data can be represented as a frequency matrix giving the number of times each taxa is observed in each sample. The samples have different size, and the matrix is sparse, as communities are diverse and skewed to rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities', and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework' (http://code.google.com/p/microbedmm/). This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogenous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle' (AKP) applied to microbial communities: disturbed states having many more configurations than undisturbed. 
We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with

  11. Mixtures of multiplicative cascade models in geochemistry

    Directory of Open Access Journals (Sweden)

    F. P. Agterberg

    2007-05-01

    Full Text Available Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, smaller blocks resulting from dividing larger blocks into parts have concentration values with constant ratios that are scale-independent. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, which is another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of the background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.
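    A de Wijs-style cascade with a fixed dispersion ratio can be simulated in a few lines: each block is repeatedly split in two, with fractions (1+d)/2 and (1-d)/2 of its concentration going to the halves. The value of d and the number of levels below are illustrative; drawing d at random at each split would give the stochastic variant the text mentions.

```python
# Multiplicative cascade of de Wijs: after n levels the cell values follow
# the discrete logbinomial distribution described in the abstract.
d = 0.4                          # dispersion parameter (illustrative)
values = [1.0]                   # start from the overall mean concentration
for _ in range(12):              # 12 subdivision levels -> 2**12 = 4096 cells
    values = [v * f for v in values for f in (1 + d, 1 - d)]

mean = sum(values) / len(values)     # mass is conserved: mean stays at 1.0
peak = max(values)                   # heavy upper tail: (1 + d)**12
```

    Adding the outputs of two such cascades with different d (or different means) simulates the mixture of background and anomaly distributions discussed above.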

  12. Empirical profile mixture models for phylogenetic reconstruction

    National Research Council Canada - National Science Library

    Si Quang, Le; Gascuel, Olivier; Lartillot, Nicolas

    2008-01-01

    Motivation: Previous studies have shown that accounting for site-specific amino acid replacement patterns using mixtures of stationary probability profiles offers a promising approach for improving...

  13. Statistical Compressed Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2011-01-01

    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution, and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...

  14. Modeling methods for mixture-of-mixtures experiments applied to a tablet formulation problem.

    Science.gov (United States)

    Piepel, G F

    1999-01-01

    During the past few years, statistical methods for the experimental design, modeling, and optimization of mixture experiments have been widely applied to drug formulation problems. Different methods are required for mixture-of-mixtures (MoM) experiments in which a formulation is a mixture of two or more "major" components, each of which is a mixture of one or more "minor" components. Two types of MoM experiments are briefly described. A tablet formulation optimization example from a 1997 article in this journal is used to illustrate one type of MoM experiment and corresponding empirical modeling methods. Literature references that discuss other methods for MoM experiments are also provided.

  15. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  16. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    CERN Document Server

    Scalas, E

    2006-01-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.
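    A single compound Poisson component of such a model is easy to simulate: trades arrive after exponential waiting times and each trade moves the log-price by a random jump. The rate and jump scale below are illustrative; the full model would mix several such components (e.g. different rate/jump regimes).

```python
import random

rng = random.Random(7)
rate = 2.0        # trade intensity (events per unit time), illustrative
jump_sd = 0.01    # scale of individual log-price jumps, illustrative

def simulate_ticks(horizon):
    """Return (time, log_price) pairs for one compound Poisson path on [0, horizon]."""
    t, x = 0.0, 0.0
    path = [(0.0, 0.0)]
    while True:
        t += rng.expovariate(rate)       # exponential waiting time to next trade
        if t > horizon:
            return path
        x += rng.gauss(0.0, jump_sd)     # price jump at the trade
        path.append((t, x))

path = simulate_ticks(1000.0)
n_trades = len(path) - 1                 # expected about rate * horizon trades
```

    Comparing summary statistics of many such simulated paths against empirical tick data is the kind of Monte Carlo check of stylized facts the abstract reports.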

  17. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    Science.gov (United States)

    Scalas, Enrico

    2007-10-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.

  18. [Model for introducing or revitalizing the final monograph].

    Science.gov (United States)

    Saupe, Rosita; Wendhausen, Agueda Lenita Pereira; Machado, Heloisa Beatriz

    2004-01-01

    The requirement set by the Curricular Guidelines for a Course Conclusion Work in the nursing area is an innovation for the majority of courses in Brazil. Only a few courses have introduced this type of study as a result of their forward-looking vision. This requirement has demanded an effort from the universities, to ensure that these studies do not only represent an academic exercise, but also an institutional quality indicator and a possible contribution to the solution of social problems. Our proposed model includes: defining lines of research, gathering researchers per by area of interest, organizing research groups and centers, defining the preferred types of studies, planning operational agendas, carrying out a follow-up on their introduction and encouraging their publication.

  19. Introducing the Leadership in Enabling Occupation (LEO) model.

    Science.gov (United States)

    Townsend, Elizabeth A; Polatajko, Helene J; Craik, Janet M; von Zweck, Claudia M

    2011-10-01

    Occupational therapy is a broad profession yet access to services remains restricted and uneven across Canada. Access to the potential breadth of occupational therapy is severely restrained by complex supply, retention, and funding challenges. To improve access to occupational therapy, widespread leadership is needed by all practitioners. This brief report introduces the Leadership in Enabling Occupation (LEO) Model, which displays the inter-relationship of four elements of everyday leadership as described in "Positioning Occupational Therapy for Leadership," Section IV, of Enabling Occupation II: Advancing a Vision of Health, Well-being and Justice through Occupation (Townsend & Polatajko, 2007). All occupational therapists have the power to develop leadership capacity within and beyond designated leadership positions. LEO is a leadership tool to extend all occupational therapists' strategic use of scholarship, new accountability approaches, existing and new funding, and workforce planning to improve access to occupational therapy.

  20. Species Tree Inference Using a Mixture Model.

    Science.gov (United States)

    Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens

    2015-09-01

    Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic

  1. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, de la Garza phenomenon, Loewner order domination, Equivalence theorems for different optimality criteria and standard optimality results for single variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, Darroch-Waller model, log- contrast model, mixture-amount models, random coefficient models and multi-response model.  Robust mixture designs and mixture designs in blocks have been also reviewed. Moreover, some applications of mixture desig...

  2. Introducing Earth Sciences Students to Modeling Using MATLAB Exercises

    Science.gov (United States)

    Anderson, R. S.

    2003-12-01

    While we subject our students to math and physics and chemistry courses to complement their geological studies, we rarely allow them to experience the joys of modeling earth systems. Given the degree to which modern earth sciences relies upon models of complex systems, it seems appropriate to allow our students to develop some experience with this activity. In addition, as modeling is an unforgivingly logical exercise, it demands the student absorb the fundamental concepts, the assumptions behind them, and the means of constraining the relevant parameters in a problem. These concepts commonly include conservation of some quantity, the fluxes of that quantity, and careful prescription of the boundary and initial conditions. I have used MATLAB as an entrance to this world, and will illustrate the products of the exercises we have worked. This software is platform-independent, and has a wonderful graphics package (including movies) that is embedded intimately as one-to-several line calls. The exercises should follow a progression from simple to complex, and serve to introduce the many discrete tasks within modeling. I advocate full immersion in the first exercise. Example exercises include: growth of spatter cones (summation of parabolic trajectories of lava bombs); response of thermal profiles in the earth to varying surface temperature (thermal conduction); hillslope or fault scarp evolution (topographic diffusion); growth and subsidence of volcanoes (flexure); and coral growth on a subsiding platform in the face of sealevel fluctuations (coral biology and light extinction). These exercises can be motivated by reading a piece in the classical or modern literature that either describes a model, or better yet serves to describe the system well, but does not present a model. I have found that the generation of movies from even the early simulation exercises serves as an additional motivator for students. 
We discuss the models in each class meeting, and learn that there
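    One of the exercises listed above (fault scarp evolution by topographic diffusion) can be sketched as an explicit finite-difference model; the grid size, diffusivity and run length are illustrative choices, not values from the course, and Python stands in for the MATLAB used in class.

```python
# 1-D topographic diffusion of a fault scarp, dz/dt = kappa * d2z/dx2,
# solved with an explicit finite-difference scheme.
kappa = 1.0                      # topographic diffusivity (illustrative)
dx = 1.0                         # node spacing
dt = 0.2 * dx * dx / kappa       # stable explicit step (dt <= dx**2 / (2*kappa))

# Initial condition: a unit-height scarp in the middle of a 100-node profile.
z = [1.0 if i < 50 else 0.0 for i in range(100)]

for _ in range(5000):            # march the profile forward in time
    # Curvature (flux divergence) at the interior nodes, from the old profile.
    curv = [kappa * (z[i - 1] - 2 * z[i] + z[i + 1]) / (dx * dx)
            for i in range(1, len(z) - 1)]
    for i, c in enumerate(curv, start=1):
        z[i] += dt * c           # update interior nodes; ends held fixed

# The scarp's maximum slope decays as the profile smooths.
crest_slope = max(abs(z[i + 1] - z[i]) for i in range(len(z) - 1)) / dx
```

    The exercise makes the conservation reasoning explicit: the update is exactly "change of elevation = divergence of sediment flux", with boundary conditions prescribed at the profile ends.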

  3. Introducing Students to Gas Chromatography-Mass Spectrometry Analysis and Determination of Kerosene Components in a Complex Mixture

    Science.gov (United States)

    Pacot, Giselle Mae M.; Lee, Lyn May; Chin, Sung-Tong; Marriott, Philip J.

    2016-01-01

    Gas chromatography-mass spectrometry (GC-MS) and GC-tandem MS (GC-MS/MS) are useful in many separation and characterization procedures. GC-MS is now a common tool in industry and research, and increasingly, GC-MS/MS is applied to the measurement of trace components in complex mixtures. This report describes an upper-level undergraduate experiment…

  5. Learning High-Dimensional Mixtures of Graphical Models

    CERN Document Server

    Anandkumar, A; Kakade, S M

    2012-01-01

    We consider the problem of learning mixtures of discrete graphical models in high dimensions and propose a novel method for estimating the mixture components with provable guarantees. The method proceeds mainly in three stages. In the first stage, it estimates the union of the Markov graphs of the mixture components (referred to as the union graph) via a series of rank tests. It then uses this estimated union graph to compute the mixture components via a spectral decomposition method. The spectral decomposition method was originally proposed for latent class models, and we adapt this method for learning the more general class of graphical model mixtures. In the end, the method produces tree approximations of the mixture components via the Chow-Liu algorithm. Our output is thus a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. When the union graph has sparse node separators, we prove that our method has sample and computational complexities scaling as poly(p, ...

  6. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, a number that grows rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate the ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
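To make the term count concrete: a full second-order (Scheffé) mixture model for q components has q linear terms x_i plus q(q-1)/2 cross-products x_i*x_j, giving q(q+1)/2 terms in total. A minimal sketch (illustrative code, not from the study):

```python
from itertools import combinations

def scheffe_quadratic_terms(q):
    """List the terms of a full second-order (Scheffe) mixture model:
    q linear terms x_i plus q(q-1)/2 cross-products x_i*x_j,
    i.e. q(q+1)/2 terms in total."""
    linear = [f"x{i}" for i in range(1, q + 1)]
    cross = [f"x{i}*x{j}" for i, j in combinations(range(1, q + 1), 2)]
    return linear + cross

# For the 10-component waste glass study, the full model has 55 terms:
for q in (3, 5, 10):
    print(q, len(scheffe_quadratic_terms(q)))  # 3 -> 6, 5 -> 15, 10 -> 55
```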

  7. A stochastic evolutionary model generating a mixture of exponential distributions

    CERN Document Server

    Fenner, Trevor; Loizou, George

    2015-01-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [FENN15] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
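The survival function of such an exponential mixture is S(t) = Σ_i w_i exp(-λ_i t). A minimal sketch of evaluating it, with illustrative weights and rates rather than the values fitted in the paper:

```python
import numpy as np

def mixture_survival(t, weights, rates):
    """Survival function of an exponential mixture:
    S(t) = sum_i w_i * exp(-rate_i * t), with the weights summing to 1."""
    t = np.asarray(t, dtype=float)
    return sum(w * np.exp(-lam * t) for w, lam in zip(weights, rates))

# Illustrative two-component mixture: a fast- and a slow-decaying group.
weights, rates = [0.7, 0.3], [1.0, 0.05]
print(mixture_survival([0.0, 1.0, 10.0], weights, rates))  # S(0) = 1.0
```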

  8. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  9. Detection of unobserved heterogeneity with growth mixture models

    OpenAIRE

    Jost Reinecke; Luca Mariotti

    2009-01-01

    Latent growth curve models as structural equation models are extensively discussed in various research fields (Duncan et al., 2006). Recent methodological and statistical extensions have focused on the consideration of unobserved heterogeneity in empirical data. Muthén extended the classical structural equation approach by mixture components, i.e., categorical latent classes (Muthén 2002, 2004, 2007). The paper will discuss applications of growth mixture models with data from one of the first panel…

  10. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for aspartame/acesulfame-K mixtures…

  11. Modeling and interpreting biological effects of mixtures in the environment: introduction to the metal mixture modeling evaluation project.

    Science.gov (United States)

    Van Genderen, Eric; Adams, William; Dwyer, Robert; Garman, Emily; Gorsuch, Joseph

    2015-04-01

    The fate and biological effects of chemical mixtures in the environment are receiving increased attention from the scientific and regulatory communities. Understanding the behavior and toxicity of metal mixtures poses unique challenges for incorporating metal-specific concepts and approaches, such as bioavailability and metal speciation, in multiple-metal exposures. To avoid the use of oversimplified approaches to assess the toxicity of metal mixtures, a collaborative 2-yr research project and multistakeholder group workshop were conducted to examine and evaluate available higher-tiered chemical speciation-based metal mixtures modeling approaches. The Metal Mixture Modeling Evaluation project and workshop achieved 3 important objectives related to modeling and interpretation of biological effects of metal mixtures: 1) bioavailability models calibrated for single-metal exposures can be integrated to assess mixture scenarios; 2) the available modeling approaches perform consistently well for various metal combinations, organisms, and endpoints; and 3) several technical advancements have been identified that should be incorporated into speciation models and environmental risk assessments for metals.

  12. Simulation of rheological behavior of asphalt mixture with lattice model

    Institute of Scientific and Technical Information of China (English)

    杨圣枫; 杨新华; 陈传尧

    2008-01-01

    A three-dimensional (3D) lattice model for predicting the rheological behavior of asphalt mixtures was presented. In this model, asphalt mixtures were described as a two-phase composite material consisting of asphalt sand and coarse aggregates distributed randomly. Asphalt sand was regarded as a viscoelastic material and the aggregates as an elastic material. The rheological response of asphalt mixture subjected to different constant stresses was simulated. The calibrated overall creep strain shows a good approximation to experimental results.

  13. A class-adaptive spatially variant mixture model for image segmentation.

    Science.gov (United States)

    Nikou, Christophoros; Galatsanos, Nikolaos P; Likas, Aristidis C

    2007-04-01

    We propose a new approach for image segmentation based on a hierarchical and spatially variant mixture model. According to this model, the pixel labels are random variables and a smoothness prior is imposed on them. The main novelty of this work is a new family of smoothness priors for the label probabilities in spatially variant mixture models. These Gauss-Markov random field-based priors allow all their parameters to be estimated in closed form via the maximum a posteriori (MAP) estimation using the expectation-maximization methodology. Thus, it is possible to introduce priors with multiple parameters that adapt to different aspects of the data. Numerical experiments are presented where the proposed MAP algorithms were tested in various image segmentation scenarios. These experiments demonstrate that the proposed segmentation scheme compares favorably to both standard and previous spatially constrained mixture model-based segmentation.
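For reference, the standard spatially invariant Gaussian mixture baseline that such spatially variant models extend can be fitted to pixel intensities with plain EM. The sketch below uses synthetic 1-D data and omits the paper's smoothness priors on the label probabilities; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "image": pixel intensities drawn from two classes. The
# paper's spatially variant label priors are replaced here by plain
# mixing weights, i.e. this is only the standard GMM baseline.
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.8, 0.05, 500)])

K = 2
w = np.full(K, 1.0 / K)          # mixing weights
mu = np.array([0.0, 1.0])        # deterministic initial means
var = np.full(K, 0.1)            # component variances
for _ in range(50):
    # E-step: responsibility of each component for each pixel
    dens = (w * np.exp(-(pixels[:, None] - mu) ** 2 / (2 * var))
            / np.sqrt(2 * np.pi * var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = resp.sum(axis=0)
    w = nk / len(pixels)
    mu = (resp * pixels[:, None]).sum(axis=0) / nk
    var = (resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / nk

labels = resp.argmax(axis=1)     # hard segmentation of the pixels
print(np.round(np.sort(mu), 2))  # means near the true values 0.2 and 0.8
```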

  14. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  15. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    …2011, and compute dollar losses and implied standard deviation losses. We compare our results to those of existing mixture models and other benchmarks like component models and jump models. Using the model confidence set test, the overall dollar root mean squared error of the best performing benchmark…

  16. Proper Versus Improper Mixtures in the ESR Model

    CERN Document Server

    Garola, Claudio

    2011-01-01

    The interpretation of mixtures is problematic in quantum mechanics (QM) because of nonobjectivity of properties. The ESR model restores objectivity reinterpreting quantum probabilities as conditional on detection and embodying the mathematical formalism of QM into a broader noncontextual (hence local) framework. We have recently provided a Hilbert space representation of the generalized observables that appear in the ESR model. We show here that each proper mixture is represented by a family of density operators parametrized by the macroscopic properties characterizing the physical system $\\Omega$ that is considered and that each improper mixture is represented by a single density operator which coincides with the operator that represents it in QM. The new representations avoid the problems mentioned above and entail some predictions that differ from the predictions of QM. One can thus contrive experiments for distinguishing empirically proper from improper mixtures, hence for confirming or disproving the ESR...

  17. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.

  18. Stochastic downscaling of precipitation with neural network conditional mixture models

    Science.gov (United States)

    Carreau, Julie; Vrac, Mathieu

    2011-10-01

    We present a new class of stochastic downscaling models, the conditional mixture models (CMMs), which builds on neural network models. CMMs are mixture models whose parameters are functions of predictor variables. These functions are implemented with a one-layer feed-forward neural network. By combining the approximation capabilities of mixtures and neural networks, CMMs can, in principle, represent arbitrary conditional distributions. We evaluate the CMMs at downscaling precipitation data at three stations in the French Mediterranean region. A discrete (Dirac) component is included in the mixture to handle the "no-rain" events. Positive rainfall is modeled with a mixture of continuous densities, which can be either Gaussian, log-normal, or hybrid Pareto (an extension of the generalized Pareto). CMMs are stochastic weather generators in the sense that they provide a model for the conditional density of local variables given large-scale information. In this study, we did not look for the most appropriate set of predictors, and we settled for a decent set as the basis to compare the downscaling models. The set of predictors includes the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses sea level pressure fields on a 6 × 6 grid cell region surrounding the stations plus three date variables. We compare the three distribution families of CMMs with a simpler benchmark model, which is more common in the downscaling community. The difference between the benchmark model and CMMs is that positive rainfall is modeled with a single Gamma distribution. The results show that CMM with hybrid Pareto components outperforms both the CMM with Gaussian components and the benchmark model in terms of log-likelihood. However, there is no significant difference with the log-normal CMM. In general, the additional flexibility of mixture models, as opposed to using a single distribution, allows us to better represent the
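The benchmark model described above, a point mass at zero for "no-rain" events plus a single Gamma density for positive rainfall, can be sketched directly; the parameter values below are illustrative, not fitted ones:

```python
import math

def rain_density(y, p_dry=0.6, shape=0.8, scale=5.0):
    """Benchmark precipitation density: a point mass at zero ("no rain")
    plus a single Gamma pdf for positive rainfall amounts.
    All parameter values are illustrative, not fitted."""
    if y == 0.0:
        return p_dry                          # discrete (Dirac) component
    gamma_pdf = (y ** (shape - 1) * math.exp(-y / scale)
                 / (math.gamma(shape) * scale ** shape))
    return (1.0 - p_dry) * gamma_pdf          # continuous positive-rain part

print(rain_density(0.0))   # probability of a dry day: 0.6
print(rain_density(2.5))   # density at 2.5 mm of rain
```

The CMMs in the paper replace the single Gamma with a mixture of continuous densities whose weights and parameters are neural-network functions of the large-scale predictors.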

  19. A Lattice Boltzmann Model of Binary Fluid Mixture

    CERN Document Server

    Orlandini, E; Yeomans, J M; Orlandini, Enzo; Swift, Michael R.

    1995-01-01

    We introduce a lattice Boltzmann model for simulating an immiscible binary fluid mixture. Our collision rules are derived from a macroscopic thermodynamic description of the fluid in a way motivated by the Cahn-Hilliard approach to non-equilibrium dynamics. This ensures that a thermodynamically consistent state is reached in equilibrium. The non-equilibrium dynamics is investigated numerically and found to agree with simple analytic predictions in both the one-phase and the two-phase region of the phase diagram.

  20. Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity

    Science.gov (United States)

    Christian, J. M.

    2017-09-01

    Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.

  1. Count data modeling and classification using finite mixtures of distributions.

    Science.gov (United States)

    Bouguila, Nizar

    2011-02-01

    In this paper, we consider the problem of constructing accurate and flexible statistical representations for count data, which we often confront in many areas such as data mining, computer vision, and information retrieval. In particular, we analyze and compare several generative approaches widely used for count data clustering, namely multinomial, multinomial Dirichlet, and multinomial generalized Dirichlet mixture models. Moreover, we propose a clustering approach via a mixture model based on a composition of the Liouville family of distributions, from which we select the Beta-Liouville distribution, and the multinomial. The novel proposed model, which we call multinomial Beta-Liouville mixture, is optimized by deterministic annealing expectation-maximization and minimum description length, and strives to achieve a high accuracy of count data clustering and model selection. An important feature of the multinomial Beta-Liouville mixture is that it has fewer parameters than the recently proposed multinomial generalized Dirichlet mixture. The performance evaluation is conducted through a set of extensive empirical experiments, which concern text and image texture modeling and classification and shape modeling, and highlights the merits of the proposed models and approaches.
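A plain multinomial mixture, the simplest of the generative models compared above, can be fitted to count data with EM. The sketch below uses synthetic "documents" and deterministic initial values; it is a simpler stand-in, not the paper's multinomial Beta-Liouville variant:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic count data: "documents" of 50 word counts over a 3-word
# vocabulary, drawn from two multinomial components.
p_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.2, 0.7]])
X = np.vstack([rng.multinomial(50, p_true[0], size=100),
               rng.multinomial(50, p_true[1], size=100)])

# EM for a plain K-component multinomial mixture.
K = 2
w = np.full(K, 1.0 / K)
p = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])               # deterministic initialisation
for _ in range(100):
    # E-step: responsibilities (multinomial coefficients cancel row-wise)
    log_lik = np.log(w) + X @ np.log(p).T
    resp = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: mixing weights and component word probabilities
    w = resp.mean(axis=0)
    counts = resp.T @ X
    p = counts / counts.sum(axis=1, keepdims=True)

print(np.round(p, 2))  # rows approximately recover the two true components
```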

  2. Introducing the Collaborative Learning Modeling Language (ColeML)

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe

    2014-01-01

    …with a few basic concepts, 2) the language should make possible a visual graphic representation of the model, 3) elements of the model should be able to change status during the articulation, 4) the system should accept unfinished models, 5) models should be able to be built by integrating other models… and differentiating teaching. Technology can help respond to these challenges (Brush & Saye, 2008; Bundsgaard, 2009, 2010; Ge, Planas, & Er, 2010; Helic, Krottmaier, Maurer, & Scerbakov, 2005; Daniel Schneider & Synteta, 2005; D. Schneider, Synteta, & Frété, 2002), but platforms are very expensive to build from the ground up. If these platforms are to find their way into everyday teaching and learning, they have to be easy and cheap to develop. Thus there is a need for easy-to-use application programming platforms. This paper argues that a visual modeling programming language would be an important part…

  3. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    …framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index…

  4. Statistical Compressive Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2010-01-01

    A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the k-best term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the k-best term approximation with probability one, and the ...

  5. An Active Learning Exercise for Introducing Agent-Based Modeling

    Science.gov (United States)

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  6. Novel mixture model for the representation of potential energy surfaces

    Science.gov (United States)

    Pham, Tien Lam; Kino, Hiori; Terakura, Kiyoyuki; Miyake, Takashi; Dam, Hieu Chi

    2016-10-01

    We demonstrate that knowledge of chemical physics on a materials system can be automatically extracted from first-principles calculations using a data mining technique; this information can then be utilized to construct a simple empirical atomic potential model. By using unsupervised learning of the generative Gaussian mixture model, physically meaningful patterns of atomic local chemical environments can be detected automatically. Based on the obtained information regarding these atomic patterns, we propose a chemical-structure-dependent linear mixture model for estimating the atomic potential energy. Our experiments show that the proposed mixture model significantly improves the accuracy of the prediction of the potential energy surface for complex systems that possess a large diversity in their local structures.

  7. Finite mixture varying coefficient models for analyzing longitudinal heterogenous data.

    Science.gov (United States)

    Lu, Zhaohua; Song, Xinyuan

    2012-03-15

    This paper aims to develop a mixture model to study heterogeneous longitudinal data on the treatment effect of heroin use from a California Civil Addict Program. Each component of the mixture is characterized by a varying coefficient mixed effect model. We use the Bayesian P-splines approach to approximate the varying coefficient functions. We develop Markov chain Monte Carlo algorithms to estimate the smooth functions, unknown parameters, and latent variables in the model. We use modified deviance information criterion to determine the number of components in the mixture. A simulation study demonstrates that the modified deviance information criterion selects the correct number of components and the estimation of unknown quantities is accurate. We apply the proposed model to the heroin treatment study. Furthermore, we identify heterogeneous longitudinal patterns.

  8. Introducing BioSARN - an ecological niche model refinement tool.

    Science.gov (United States)

    Heap, Marshall J

    2016-08-01

    Environmental niche modeling outputs a biological species' potential distribution. Further work is needed to arrive at a species' realized distribution. The Biological Species Approximate Realized Niche (BioSARN) application provides the ecological modeler with a toolset to refine environmental niche models (ENMs). These tools include soil and land class filtering, niche area quantification, and novelties such as enhanced temporal corridor definition and output to a high spatial resolution land class model. BioSARN is exemplified with a study on Fraser fir, a tree species with strong land class and edaphic correlations. Soil and land class filtering caused the potential distribution area to decline by 17%. Enhanced temporal corridor definition permitted distinction of current, continuing, and future niches, and thus niche change and movement. Tile quantification analysis provided further corroboration of these trends. BioSARN does not substitute for other established ENM methods. Rather, it allows experimenters to work with their preferred ENM, refining it using their knowledge and experience. Output from lower spatial resolution ENMs to a high spatial resolution land class model is a pseudo high-resolution result. Still, it may be the best that can be achieved until wide-range high spatial resolution environmental data and accurate high-precision species occurrence data become generally available.

  9. Phylogenetic mixture models can reduce node-density artifacts.

    Science.gov (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark

    2008-04-01

    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the…

  10. Community Detection Using Multilayer Edge Mixture Model

    CERN Document Server

    Zhang, Han; Lai, Jian-Huang; Yu, Philip S

    2016-01-01

    A wide range of complex systems can be modeled as networks with corresponding constraints on the edges and nodes, which have been extensively studied in recent years. Nowadays, with the progress of information technology, systems that contain the information collected from multiple perspectives have been generated. The conventional models designed for single perspective networks fail to depict the diverse topological properties of such systems, so multilayer network models aiming at describing the structure of these networks emerge. As a major concern in network science, decomposing the networks into communities, which usually refers to closely interconnected node groups, extracts valuable information about the structure and interactions of the network. Unlike the contention of dozens of models and methods in conventional single-layer networks, methods aiming at discovering the communities in the multilayer networks are still limited. In order to help explore the community structure in multilayer networks, we...

  11. Modeling Biodegradation Kinetics on Benzene and Toluene and Their Mixture

    Directory of Open Access Journals (Sweden)

    Aparecido N. Módenes

    2007-10-01

    The objective of this work was to model the biodegradation kinetics of the toxic compounds toluene and benzene as pure substrates and in a mixture. As a control, the Monod and Andrews models were used. To predict substrate interactions, more sophisticated inhibition and competition models, and the SKIP (sum kinetics with interaction parameters) model, were applied. The models were evaluated against experimental data on Pseudomonas putida F1 activity published in the literature. For parameter identification, the global particle swarm optimization (PSO) method was applied. The simulation results show that the biodegradation of a pure toxic substrate is best described by Andrews' model. The biodegradation of a mixture of toxic substrates is best modeled with the modified competitive inhibition and SKIP models. The developed software can be used as a toolbox of kinetics models for industrial wastewater treatment, for process design and optimization.
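The two baseline kinetic laws named above have simple closed forms: Monod, μ(S) = μ_max·S/(K_s + S), and Andrews (Haldane), which adds a substrate-inhibition term, μ(S) = μ_max·S/(K_s + S + S²/K_i). A minimal sketch with illustrative parameters, not the fitted P. putida F1 values:

```python
def monod(S, mu_max, Ks):
    """Monod specific growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

def andrews(S, mu_max, Ks, Ki):
    """Andrews (Haldane) rate with substrate inhibition:
    mu = mu_max * S / (Ks + S + S**2 / Ki)."""
    return mu_max * S / (Ks + S + S ** 2 / Ki)

# Illustrative parameters (not the fitted Pseudomonas putida F1 values):
print(monod(10.0, 0.8, 2.0))            # ~0.667: approaches mu_max at high S
print(andrews(10.0, 0.8, 2.0, 50.0))    # ~0.571: inhibition lowers the rate
```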

  12. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing results to those from using an interaction term in linear regression. The research questions which each model answers, their…

  13. Introducing a critical dialogical model for vocational teacher education

    Directory of Open Access Journals (Sweden)

    Daniel Alvunger

    2016-02-01

    The purpose of this article is to conceptualise and present what is referred to as a critical dialogical model for vocational teacher education that takes into account the interaction between theory/research and practice/experiential knowledge. The theoretical framework for the model is based on critical hermeneutics and the methodology of dialogue seminars, with the aim of promoting the development of a 'critical self' among vocational teacher students. The model enacts an interface between theory and practice in which a number of processes are identified: a reflective-analogical process, a critical-analytical process, and an interactive critical self-building process. To include a theoretical argument concerning the issue of content, the concept of 'learning capital' and its four sub-categories, curricular capital, instructional capital, moral capital, and venture capital, is used. We point to content-related aspects of student learning and how a critical self has the potential to promote various kinds of 'capital' and capacity building that may be of importance in the future working life of the vocational teacher student.

  14. Introducing Human APOE into Aβ Transgenic Mouse Models

    Directory of Open Access Journals (Sweden)

    Leon M. Tai

    2011-01-01

    Full Text Available Apolipoprotein E (apoE) and apoE/amyloid-β (Aβ) transgenic (Tg) mouse models are critical to understanding apoE-isoform effects on Alzheimer's disease risk. Compared to wild type, apoE−/− mice exhibit neuronal deficits, similar to apoE4-Tg compared to apoE3-Tg mice, providing a model for Aβ-independent apoE effects on neurodegeneration. To determine the effects of apoE on Aβ-induced neuropathology, apoE−/− mice were crossed with Aβ-Tg mice, resulting in a significant delay in plaque deposition. Surprisingly, crossing human-apoE-Tg mice with apoE−/−/Aβ-Tg mice further delayed plaque deposition, which eventually developed in apoE4/Aβ-Tg mice prior to apoE3/Aβ-Tg. One approach to address the hAPOE-induced temporal delay in Aβ pathology is an additional insult, like head injury. Another is crossing human-apoE-Tg mice with Aβ-Tg mice that have rapid-onset Aβ pathology. For example, because 5xFAD mice develop plaques by 2 months, the prediction is that human-apoE/5xFAD-Tg mice will develop plaques around 6 months, 12 months before other human-apoE/Aβ-Tg mice. Thus, tractable models for human-apoE/Aβ-Tg mice continue to evolve.

  15. Multi-resolution image segmentation based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    Tang Yinggan; Liu Dong; Guan Xinping

    2006-01-01

    Mixture-model-based image segmentation assumes that image pixels are independent and ignores the positional relationships between pixels; as a result, it is not robust to noise and often leads to misclassification. A new segmentation method, called the multi-resolution Gaussian mixture model method, is proposed. First, an image pyramid is constructed and a son-father link relationship is built between the levels of the pyramid. Then the mixture model segmentation method is applied to the top level. The segmentation result on the top level is passed top-down to the bottom level according to the son-father link relationship between levels. The proposed method considers not only local but also global information of the image; it suppresses the effect of noise and obtains better segmentation results. Experimental results demonstrate its effectiveness.
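The pixel-independence problem criticized above can be demonstrated with a minimal, hypothetical sketch: an intensity-only Gaussian mixture segmentation (scikit-learn) of a noisy synthetic two-region image, which misclassifies isolated pixels precisely because it ignores spatial context:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic image: two regions with different mean intensities plus noise
truth = np.zeros((64, 64), dtype=int)
truth[:, 32:] = 1
image = np.where(truth == 1, 0.7, 0.3) + rng.normal(0.0, 0.15, truth.shape)

# Intensity-only mixture segmentation: pixel positions are ignored
gmm = GaussianMixture(n_components=2, random_state=0).fit(image.reshape(-1, 1))
labels = gmm.predict(image.reshape(-1, 1)).reshape(64, 64)

# Map the lower-mean component to label 0 before scoring
if gmm.means_[0, 0] > gmm.means_[1, 0]:
    labels = 1 - labels
accuracy = float((labels == truth).mean())  # noise causes isolated errors
```

The scattered per-pixel errors this produces are exactly what the multi-resolution variant attacks by propagating labels down an image pyramid.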

  16. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA-profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent actual mixing proportions can be rediscovered in the amplifier output, and thereby the question of confidence in separate DNA-profiles suggested by an output is addressed.

  17. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    Science.gov (United States)

    2008-10-01

    diffusion, including metabolic loss via the cytochrome P-450 system, described by non-linear Michaelis-Menten kinetics as shown in the following... point. Inhalation and iv were the dose routes for the rat study. The modelers used saturable (Michaelis-Menten) kinetics as well as a second... Michaelis-Menten liver metabolic constants for n-decane have been measured (Km = 1.5 mg/L and Vmax = 0.4 mg/hour) using rat liver slices in a vial

  18. Hidden Markov Models with Factored Gaussian Mixtures Densities

    Institute of Scientific and Technical Information of China (English)

    LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua

    2004-01-01

    We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models (HMMs), which uses factorial learning in the HMM framework. We derive the re-estimation formulas for estimating the factorized parameters with the Expectation Maximization (EM) algorithm. We conduct several experiments comparing the performance of this model structure with Factorial Hidden Markov Models (FHMMs) and HMMs; some conclusions and promising empirical results are presented.

  19. A stochastic evolutionary model generating a mixture of exponential distributions

    Science.gov (United States)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
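A mixture of exponentials and its survival function, of the kind matched to the query data above, can be sketched as follows (weights and rates are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-component exponential mixture (weights and rates are illustrative)
weights = np.array([0.7, 0.3])
rates = np.array([1.0, 0.1])

# Sample lifetimes: choose a component, then draw from its exponential
comp = rng.choice(2, size=100_000, p=weights)
lifetimes = rng.exponential(scale=1.0 / rates[comp])

def survival(t):
    # S(t) = sum_k w_k * exp(-lambda_k * t)
    return float(np.sum(weights * np.exp(-rates * t)))

empirical = float(np.mean(lifetimes > 5.0))
theoretical = survival(5.0)
```

The slow component dominates the tail, which is how an exponential mixture captures the heterogeneous lifetimes seen in survival data.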

  20. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct classification probabilities when the classification probabilities vary according to a normal distribution on the logit scale or according to a Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  1. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2017-02-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for an iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid mass conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. The validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ε turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with a staggered SIMPLE algorithm.

  2. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2016-11-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for an iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid mass conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. The validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, including values similar to those of an industrial-scale steel converter. The k-ε turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with a staggered SIMPLE algorithm.

  3. Robust estimation of unbalanced mixture models on samples with outliers.

    Science.gov (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-01

    Mixture models are often used to compactly represent samples from heterogeneous sources. In the real world, however, samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be produced by high-density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator that trims the outliers based on component-wise confidence-level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, the other containing structural magnetic resonance images of brains with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method can robustly estimate unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples in which the outlier fraction cannot be estimated in advance.
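The idea of trimming likely outliers before re-estimating the mixture can be illustrated with a simplified sketch; note this trims by overall likelihood, not by the component-wise confidence-level ordering the paper proposes, and all data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Unbalanced two-component sample contaminated with uniform outliers
inliers = np.concatenate([
    rng.normal(0.0, 1.0, size=(900, 1)),   # dominant component
    rng.normal(8.0, 1.0, size=(100, 1)),   # minor component
])
outliers = rng.uniform(-30.0, 30.0, size=(100, 1))
X = np.concatenate([inliers, outliers])

# Simplified trimmed EM: alternately fit a mixture and discard the
# lowest-likelihood fraction of points as presumed outliers.
trim_fraction = 0.10
kept = X
for _ in range(5):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(kept)
    loglik = gmm.score_samples(X)
    kept = X[loglik > np.quantile(loglik, trim_fraction)]

means = sorted(gmm.means_.ravel().tolist())
```

Without the trimming loop, the broad uniform contamination drags the fitted means and inflates the covariances; with it, the estimates land near the true component centers.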

  4. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    varying higher-order moments of the risk-neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best-performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis, this difference increases to 69%.

  5. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek

    2012-04-01

    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we equipped the package also with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model fitting in all modeling variants. The package can also be applied to selecting the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  6. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.
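For context, the normal variance-mean mixture has the standard form (the notation here is assumed, not drawn from the abstract):

```latex
X = \mu + \beta Z + \sqrt{Z}\,\varepsilon,
\qquad \varepsilon \sim N(0,\sigma^{2}), \quad Z \sim G,
```

so that conditionally $X \mid Z = z \sim N(\mu + \beta z, \sigma^{2} z)$; in the semiparametric setting the mixing distribution $G$ is left unspecified and estimated nonparametrically.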

  7. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  8. Detecting Social Desirability Bias Using Factor Mixture Models

    Science.gov (United States)

    Leite, Walter L.; Cooper, Lou Ann

    2010-01-01

    Based on the conceptualization that social desirable bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  9. An integral equation model for warm and hot dense mixtures

    CERN Document Server

    Starrett, C E; Daligault, J; Hamel, S

    2014-01-01

    In Starrett and Saumon [Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one "atom" in a plasma is determined using a density functional theory based average-atom (AA) model, and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e. mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  10. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict...

  11. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative ¹⁸F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was…

  12. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for the general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly obtained expression for the slip velocity between phases allows for the computation of the sediment motion without the need to solve the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for the turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational effort required by the complete two-fluid model.

  13. Adaptive mixture observation models for multiple object tracking

    Institute of Scientific and Technical Information of China (English)

    CUI Peng; SUN LiFeng; YANG ShiQiang

    2009-01-01

    Multiple object tracking (MOT) poses many difficulties to conventional well-studied single object tracking (SOT) algorithms, such as severe expansion of the configuration space, high complexity of motion conditions, and visual ambiguities among nearby targets, among which the visual ambiguity problem is the central challenge. In this paper, we address this problem by embedding adaptive mixture observation models (AMOM) into a mixture tracker implemented in the Particle Filter framework. In AMOM, the extracted multiple features for appearance description are combined according to their discriminative power between ambiguity-prone objects, where the discriminability of features is evaluated by online entropy-based feature selection techniques. The introduction of AMOM can help to surmount the incapability of the conventional mixture tracker in handling object occlusions, while retaining its merits of flexibility and high efficiency. The final experiments show significant improvement in MOT scenarios compared with other methods.

  14. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  15. Generalized Observables, Bell's Inequalities and Mixtures in the ESR Model for QM

    CERN Document Server

    Garola, Claudio

    2010-01-01

    The extended semantic realism (ESR) model proposes a new theoretical perspective which embodies the mathematical formalism of standard (Hilbert space) quantum mechanics (QM) into a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide in this review an overall view of the present status of our research on this topic. We obtain, in a new and shorter way, a mathematical representation of the generalized observables introduced by the ESR model and a generalization of the projection postulate of elementary QM. Based on these results we prove that the Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality, a modified BCHSH inequality and quantum predictions hold together in the ESR model because they refer to different parts of the picture of the physical world supplied by the model. Then we show that a new mathematical representation of mixtures must be introduced in the ESR model which does not coincide with the standard representation in QM and avoids some deep p...

  16. The physical model for research of behavior of grouting mixtures

    Science.gov (United States)

    Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav

    2016-06-01

    The paper describes a physical model designed to verify the behavior of grouting mixtures when applied below the underground water level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extent of grouting in this environment is based on measurements of humidity and temperature, using combined sensors located within preinstalled special measurement probes around the grouting needle. Humidity was measured by a combined capacity sensor DTH-1010; temperature was gathered by an NTC thermistor. The humidity sensors recorded the time at which the grouting mixture reached each sensor location; the NTC thermistors measured temperature changes over time from the start of injection. This allowed a 3D map of the distribution of the grouting mixture through the environment to be developed. The measurement itself was carried out by a purpose-built primary measurement module capable of connecting four humidity and temperature sensors. This module also converts these physical signals into unified analogue signals, which are brought to the analogue inputs of a programmable automation controller (PAC), the WinPAC-8441. This controller handles the measurement itself, as well as archiving and visualization of all data. A detailed description of the complete measurement system and an evaluation in the form of 3D animations and graphs are given in the full paper.

  17. Landmine detection using mixture of discrete hidden Markov models

    Science.gov (United States)

    Frigui, Hichem; Hamdi, Anis; Missaoui, Oualid; Gader, Paul

    2009-05-01

    We propose a landmine detection algorithm that uses a mixture of discrete hidden Markov models. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification could be achieved through clustering in the parameter space or in the feature space. However, this approach is inappropriate, as it is not trivial to define a meaningful distance metric for model parameters or sequence comparison. Our proposed approach is based on clustering in the log-likelihood space, and has two main steps. First, one HMM is fit to each of the R individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an R×R log-likelihood distance matrix that is partitioned into K groups using a hierarchical clustering algorithm. In the second step, we pool the sequences, according to the cluster they belong to, into K groups, and we fit one HMM to each group. The mixture of these K HMMs is used to build a descriptive model of the data. An artificial neural network is then used to fuse the outputs of the K models. Results on large and diverse Ground Penetrating Radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mine and one model for the background.
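The first step, clustering sequences in log-likelihood space, can be sketched with a toy stand-in; for brevity a per-sequence categorical model replaces the discrete HMM, but the R×R log-likelihood matrix and the hierarchical clustering follow the same recipe:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)

# Toy "sequences": runs of discrete symbols from two different sources.
# A per-sequence categorical model stands in for the per-sequence HMM.
def sample(p, n_seq, length):
    return [rng.choice(len(p), size=length, p=p) for _ in range(n_seq)]

seqs = sample([0.8, 0.1, 0.1], 5, 200) + sample([0.1, 0.1, 0.8], 5, 200)

def fit(seq, n_symbols=3, alpha=1.0):
    counts = np.bincount(seq, minlength=n_symbols) + alpha  # smoothing
    return counts / counts.sum()

def loglik(model, seq):
    return float(np.sum(np.log(model[seq])))

models = [fit(s) for s in seqs]
# R x R matrix: entry (i, j) = log-likelihood of sequence j under model i
L = np.array([[loglik(m, s) for s in seqs] for m in models])

# Symmetrize into a pseudo-distance and cut the dendrogram into K=2 groups
D = -(L + L.T) / 2.0
D -= D.min()
np.fill_diagonal(D, 0.0)
labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                  t=2, criterion="maxclust")
```

The key point is that distances are defined on likelihoods rather than on model parameters, so no metric over parameter space is ever needed.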

  18. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
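The core idea, replacing a single (bivariate) Gaussian with a Gaussian mixture when the flux profile deviates from an elliptical shape, can be sketched with scikit-learn on synthetic flux samples (all values illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic heliostat "flux spot": two overlapping Gaussian lobes, a
# shape that a single bivariate Gaussian cannot capture well.
spot = np.concatenate([
    rng.multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 0.3]], 3000),
    rng.multivariate_normal([1.5, 0.5], [[0.3, 0.0], [0.0, 1.0]], 3000),
])

single = GaussianMixture(n_components=1, random_state=0).fit(spot)
mixture = GaussianMixture(n_components=2, random_state=0).fit(spot)

# Mean log-likelihood per sample: the mixture fits the lobed spot better
score_single = single.score(spot)
score_mixture = mixture.score(spot)
```

In practice the samples would come from flux measurement or ray tracing rather than a synthetic generator, but the fitting step is the same.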

  19. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Gonzalo Vegas-Sanchez-Ferrero

    2012-01-01

    Full Text Available Several statistical models have been proposed in the literature to describe the behavior of speckles. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails when describing the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal arises from tissues of different natures. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the superior performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
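A minimal numerical ML fit of a Generalized Gamma can be sketched with scipy's generic optimizer, standing in for the estimation methodology proposed in the paper (parameter values are illustrative):

```python
import numpy as np
from scipy import stats

# Sample from a Generalized Gamma with known shapes (a, c), then fit by
# numerical maximum likelihood; scipy's generic optimizer is a stand-in
# for the robust estimation scheme of the paper.
a_true, c_true = 2.0, 1.5
data = stats.gengamma.rvs(a_true, c_true, size=20_000, random_state=42)

# Fix location at 0; initial shape guesses of 1.0 start the optimizer
a_hat, c_hat, loc, scale = stats.gengamma.fit(data, 1.0, 1.0, floc=0)

# With c = 1 the Generalized Gamma reduces to the ordinary Gamma
reduces = stats.gengamma.pdf(1.3, 2.0, 1.0)
gamma_pdf = stats.gamma.pdf(1.3, 2.0)

# Sanity check: a converged ML fit should reproduce the sample mean
fitted_mean = stats.gengamma.mean(a_hat, c_hat, loc=loc, scale=scale)
```

Since no closed-form ML estimates exist, everything rests on the numerical optimizer; this is exactly the fragility the paper's dedicated estimation methodology is meant to address.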

  20. Modeling, clustering, and segmenting video with mixtures of dynamic textures.

    Science.gov (United States)

    Chan, Antoni B; Vasconcelos, Nuno

    2008-05-01

    A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g. fire, steam, water, vehicle and pedestrian traffic, etc.). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g. optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.

  1. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance, and the normalized L2 distance for this application. Although the normalized L2 distance was slightly inferior to the Kullback-Leibler distance with respect to classification performance, it has the advantage of obeying the triangle inequality, which allows for efficient searching.
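
    Of the three distances compared above, the Kullback-Leibler distance between two Gaussian mixtures has no closed form and is typically estimated by Monte Carlo sampling. A minimal sketch for 1-D mixtures (the function names are illustrative, not from the paper):

```python
import numpy as np

def gmm_logpdf(x, weights, means, vars_):
    """Log-density of a 1-D Gaussian mixture at points x."""
    x = np.asarray(x)[:, None]
    comp = (-0.5 * np.log(2 * np.pi * vars_)
            - 0.5 * (x - means) ** 2 / vars_)
    return np.log(np.sum(weights * np.exp(comp), axis=1))

def gmm_sample(rng, n, weights, means, vars_):
    """Draw n samples from a 1-D Gaussian mixture."""
    ks = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[ks], np.sqrt(vars_[ks]))

def mc_kl(p, q, n=50_000, seed=0):
    """Monte Carlo estimate of KL(p || q) for two 1-D GMMs."""
    rng = np.random.default_rng(seed)
    x = gmm_sample(rng, n, *p)
    return np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([0.5, 0.5]), np.array([-1.5, 1.5]), np.array([0.5, 0.5]))
kl_pq = mc_kl(p, q)
# KL is non-negative and zero only for identical mixtures
```

    Since KL(p || q) is asymmetric and violates the triangle inequality, a symmetrized variant is often used in practice, whereas the normalized L2 distance between Gaussian mixtures admits a closed form via products of Gaussians.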

  2. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.

    Science.gov (United States)

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P

    2017-04-01

    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
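
    The probabilistic (soft) cluster assignment at the heart of an approach like GEMA comes from the E-step responsibilities of the EM algorithm. A simplified 1-D sketch (GEMA itself operates on 3-D atomic positions; all names here are illustrative):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture; returns parameters and
    per-point posterior responsibilities (soft cluster assignments)."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # deterministic init
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var, r

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 300)])
w, mu, var, r = em_gmm_1d(x)
# points near a component mean get responsibility close to 1 for it
```

    The responsibilities `r` are exactly the probabilistic assignments the abstract describes: a point midway between two clusters receives intermediate responsibilities rather than a hard label.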

  3. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    Science.gov (United States)

    2007-09-01

    Report documentation-page residue; the surviving fragments describe the Translated Poisson Mixture Model (TPMM) applied to clustering a point cloud containing a spiral and a plane, with each point colored and marked according to its classification by the different algorithms, and attribute a result to the statistical nature of the R-TPMM.

  4. Analysis of Forest Foliage Using a Multivariate Mixture Model

    Science.gov (United States)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.

    1997-01-01

    Wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths that have been associated with nitrogen bonds.
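
    The unmixing step can be sketched as a multivariate least-squares regression of measured absorbance spectra on known composition fractions. The data below are synthetic and the setup only illustrates the linear mixture model, not the authors' procedure:

```python
import numpy as np

# Hypothetical setup: the absorbance of each sample is a linear mixture
# of component spectra, weighted by the sample's composition fractions.
rng = np.random.default_rng(0)
n_wave, n_comp, n_samp = 50, 3, 40
S_true = np.abs(rng.normal(size=(n_comp, n_wave)))    # component spectra
C = rng.dirichlet(np.ones(n_comp), size=n_samp)       # known fractions
A = C @ S_true + rng.normal(scale=1e-3, size=(n_samp, n_wave))  # absorbance

# Multivariate regression: estimate the component spectra by solving
# the overdetermined system C @ S = A in the least-squares sense.
S_hat, *_ = np.linalg.lstsq(C, A, rcond=None)
```

    With more samples than components and low measurement noise, the recovered spectra closely match the true ones; real leaf data would add constraints such as non-negativity.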

  5. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
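
    The "conditioned model" feature rests on the standard conditional-Gaussian identity, applied per mixture component (in a full GMM the component weights are also reweighted; that step is omitted here). A sketch for a single Gaussian, with illustrative names:

```python
import numpy as np

def condition_gaussian(mu, cov, idx_known, x_known):
    """Condition a multivariate Gaussian on known values of some
    coordinates; returns the mean/covariance of the remaining ones."""
    idx_known = np.asarray(idx_known)
    idx_free = np.setdiff1d(np.arange(len(mu)), idx_known)
    S_ff = cov[np.ix_(idx_free, idx_free)]
    S_fk = cov[np.ix_(idx_free, idx_known)]
    S_kk = cov[np.ix_(idx_known, idx_known)]
    gain = S_fk @ np.linalg.inv(S_kk)          # regression coefficients
    mu_c = mu[idx_free] + gain @ (x_known - mu[idx_known])
    cov_c = S_ff - gain @ S_fk.T               # Schur complement
    return mu_c, cov_c

mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
mu_c, cov_c = condition_gaussian(mu, cov, [1], np.array([1.0]))
# → mu_c ≈ [0.8], cov_c ≈ [[0.36]]
```

    Note how conditioning on a correlated coordinate both shifts the mean and shrinks the variance, which is what makes conditioned mixture models useful for prediction.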

  6. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    Full Text Available A mixture model can estimate the proportion of cured patients and the survival function of uncured patients. In this study, a mixture model is developed for cure-rate analysis based on missing data. Several methods are available for analyzing missing data; one of them is the EM algorithm. This method is based on two steps: (1) the expectation step and (2) the maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values in four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat from step 2 using the new model. The study shows that in the EM algorithm the log-likelihood for the missing data increases after every iteration; consequently, the likelihood sequence converges whenever it is bounded above.
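
    The monotonicity property stated above can be checked numerically by tracking the log-likelihood across EM iterations. The sketch below uses a simple 1-D Gaussian mixture to illustrate the general EM guarantee, not the cure-rate model itself:

```python
import numpy as np

def em_mixture_loglik(x, k=2, iters=30):
    """EM for a 1-D Gaussian mixture, recording the log-likelihood at
    each iteration to illustrate its monotone increase."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    var = np.full(k, np.var(x))
    history = []
    for _ in range(iters):
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        history.append(np.sum(np.log(dens.sum(axis=1))))  # current loglik
        r = dens / dens.sum(axis=1, keepdims=True)        # E-step
        nk = r.sum(axis=0)                                # M-step
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return np.array(history)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
ll = em_mixture_loglik(x)
# each EM iteration can only increase the log-likelihood
```

    Since the recorded sequence is non-decreasing, it converges as soon as the log-likelihood is bounded above, which is the convergence argument the abstract makes.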

  7. Some covariance models based on normal scale mixtures

    CERN Document Server

    Schlather, Martin

    2011-01-01

    Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.

  8. Induced polarization of clay-sand mixtures. Experiments and modelling.

    Science.gov (United States)

    Okay, G.; Leroy, P.

    2012-04-01

    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-montmorillonite), in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100% in volume of the sand-clay mixture) and salinities (distilled water, 0.1 g/L, 1 g/L, and 10 g/L NaCl solution). Induced polarization measurements were performed with a cylindrical four-electrode sample holder associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the CEC of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the frequency dependence of the quadrature conductivity is stronger for sand-kaolinite mixtures than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents. The experimental data show good agreement with the values predicted by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model also includes the effect of the grain size distribution upon the complex conductivity spectra.

  9. Sand - rubber mixtures submitted to isotropic loading: a minimal model

    Science.gov (United States)

    Platzer, Auriane; Rouhanifar, Salman; Richard, Patrick; Cazacliu, Bogdan; Ibraim, Erdin

    2017-06-01

    The volume of scrap tyres, an undesired urban waste, is increasing rapidly in every country. Mixing sand and rubber particles as a lightweight backfill is one of the possible alternatives to avoid stockpiling them in the environment. This paper presents a minimal model aiming to capture the evolution of the void ratio of sand-rubber mixtures undergoing an isotropic compression loading. It is based on the idea that, submitted to a pressure, the rubber chips deform and partially fill the porous space of the system, leading to a decrease of the void ratio with increasing pressure. Our simple approach is capable of reproducing experimental data for two types of sand (a rounded one and a sub-angular one) and up to mixtures composed of 50% of rubber.

  10. The Spectral Mixture Models: A Minimum Information Divergence Approach

    Science.gov (United States)

    2010-04-01

    Report front-matter residue; the surviving fragments note that various information criteria, such as the Akaike and Bayesian information criteria, have been proposed for comparing spectral mixture models, that results are questionable if the model does not fit the data, and that developing a metric for model fitness is beyond the scope of the discussion.

  11. Determining of migraine prognosis using latent growth mixture models

    Institute of Scientific and Technical Information of China (English)

    Bahar Tasdelen; Aynur Ozge; Hakan Kaleagasi; Semra Erdogan; Tufan Mengi

    2011-01-01

    Background This paper presents a retrospective study to classify patients into treatment subtypes according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, as it accounts for population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. Methods The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. Results While the three-group model was appropriate for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache, the four-group model was more suitable for the duration (low, medium, high, extremely high). The severity of headache increased in patients with nausea, vomiting, photophobia, and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Conclusions Nausea, vomiting, and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  12. Background Subtraction with Dirichlet Process Mixture Models.

    Science.gov (United States)

    Haines, Tom S F; Tao Xiang

    2014-04-01

    Video analysis often begins with background subtraction. This problem is often approached in two steps: a background model followed by a regularisation scheme. A model of the background allows it to be distinguished on a per-pixel basis from the foreground, whilst the regularisation combines information from adjacent pixels. We present a new method based on Dirichlet process Gaussian mixture models, which are used to estimate per-pixel background distributions. It is followed by probabilistic regularisation. Using a non-parametric Bayesian method allows per-pixel mode counts to be automatically inferred, avoiding over-/under-fitting. We also develop novel model learning algorithms for continuous update of the model in a principled fashion as the scene changes. These key advantages enable us to outperform the state-of-the-art alternatives on four benchmarks.
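
    For contrast with the Dirichlet-process approach, the classical per-pixel mixture background model it improves upon (Stauffer & Grimson) can be sketched as a simple online update. The learning-rate handling below is deliberately simplified and the constants are illustrative:

```python
import numpy as np

def update_pixel_model(value, w, mu, var, alpha=0.05, match_thresh=2.5):
    """One online update of a per-pixel Gaussian mixture background model
    (Stauffer & Grimson style, simplified): match the new value to a
    component within match_thresh std devs, else replace the weakest."""
    d = np.abs(value - mu) / np.sqrt(var)
    owner = np.zeros_like(w)
    if np.any(d < match_thresh):
        j = int(np.argmin(d))           # matched component
        owner[j] = 1.0
        mu[j] += alpha * (value - mu[j])
        var[j] += alpha * ((value - mu[j]) ** 2 - var[j])
    else:
        j = int(np.argmin(w))           # replace the least-supported one
        owner[j] = 1.0
        mu[j], var[j] = value, 30.0 ** 2
    w[:] = (1 - alpha) * w + alpha * owner
    w /= w.sum()
    return j

w = np.array([0.7, 0.3])
mu = np.array([100.0, 200.0])
var = np.array([25.0, 25.0])
for v in [101, 99, 102, 100, 98]:
    update_pixel_model(v, w, mu, var)
# the component near 100 gains weight and tracks the background
```

    The Dirichlet-process formulation in the paper removes the need to fix the number of components per pixel, which this classical scheme requires.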

  13. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods can't be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
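
    For a single binary decision, the detection step described above reduces to choosing a threshold that minimizes the error probability under two Gaussian hypotheses. The thesis derives an iterative algorithm for this; the sketch below instead uses a plain grid search to illustrate the same objective (all parameter values are illustrative):

```python
import numpy as np
from math import erf, sqrt

def q(x):
    """Gaussian tail probability P(Z > x) for standard normal Z."""
    return 0.5 * (1 - erf(x / sqrt(2)))

def error_prob(t, p0, mu0, s0, p1, mu1, s1):
    """P(error) for threshold t: decide '1' when the observation
    exceeds t (assumes mu1 > mu0)."""
    return p0 * q((t - mu0) / s0) + p1 * (1 - q((t - mu1) / s1))

def best_threshold(p0, mu0, s0, p1, mu1, s1):
    """Grid search for the error-minimizing threshold."""
    ts = np.linspace(mu0, mu1, 10_001)
    errs = [error_prob(t, p0, mu0, s0, p1, mu1, s1) for t in ts]
    return ts[int(np.argmin(errs))]

t_star = best_threshold(0.5, 0.0, 1.0, 0.5, 4.0, 1.0)
# equal priors and variances → optimal threshold at the midpoint, 2.0
```

    With unequal priors or variances (as caused by ISI in the thesis's mixture model), the optimal threshold shifts away from the midpoint, which is why an adaptive threshold is needed.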

  14. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures was presented with CPA. In this study the phase behavior of CO2-containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated, and both vapor–liquid and liquid–liquid phase equilibria are considered. When associating compounds (water, alcohols and glycols) are considered, the importance of cross-association is investigated. The cross-association is accounted for either via combining rules or using a cross-solvation energy obtained from experimental spectroscopic or calorimetric data or from ab initio calculations. In both cases two

  15. Modeling human mortality using mixtures of bathtub shaped failure distributions.

    Science.gov (United States)

    Bebbington, Mark; Lai, Chin-Diew; Zitikis, Ricardas

    2007-04-07

    Aging and mortality are usually modeled by the Gompertz-Makeham distribution, where the mortality rate accelerates with age in adult humans. The resulting parameters are interpreted as the frailty and the decrease in vitality with age. This fits life data from 'westernized' societies well, where the data are accurate, of high resolution, and show the effects of high-quality post-natal care. We show, however, that when the data are of lower resolution and contain considerable structure in the infant mortality, the fit can be poor. Moreover, the Gompertz-Makeham distribution is consistent with neither the force of natural selection nor the recently identified 'late life mortality deceleration'. Although actuarial models such as the Heligman-Pollard distribution can, in theory, achieve an improved fit, the lack of a closed form for the survival function makes fitting extremely arduous, and the biological interpretation can be lacking. We show that a mixture, assigning mortality to exogenous or endogenous causes and using the reduced additive and flexible Weibull distributions, models human mortality well over the entire life span. The components of the mixture are asymptotically consistent with the reliability and biological theories of aging. The relative simplicity of the mixture distribution makes feasible a technique where the curvature functions of the corresponding survival and hazard rate functions are used to identify the beginning and the end of various life phases, such as infant mortality, the end of the force of natural selection, and late-life mortality deceleration. We illustrate our results with a comparative analysis of Canadian and Indonesian mortality data.
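
    The bathtub shape of such a mixture hazard is easy to reproduce: mixing a decreasing-hazard Weibull (infant mortality) with an increasing-hazard Weibull (senescence) yields a hazard that falls, rises, and then decelerates as the frailer subpopulation dies out. The parameters below are arbitrary illustrations, not the reduced additive/flexible Weibull fits of the paper:

```python
import numpy as np

def weibull_sf(t, k, lam):
    """Weibull survival function with shape k and scale lam."""
    return np.exp(-(t / lam) ** k)

def weibull_pdf(t, k, lam):
    """Weibull density with shape k and scale lam."""
    return (k / lam) * (t / lam) ** (k - 1) * weibull_sf(t, k, lam)

def mixture_hazard(t, w, ks, lams):
    """Hazard of a Weibull mixture: f(t) / S(t)."""
    f = sum(wi * weibull_pdf(t, ki, li) for wi, ki, li in zip(w, ks, lams))
    s = sum(wi * weibull_sf(t, ki, li) for wi, ki, li in zip(w, ks, lams))
    return f / s

# decreasing-hazard component (infant mortality, shape < 1) mixed with an
# increasing-hazard component (senescence, shape > 1)
t = np.linspace(0.01, 5, 500)
h = mixture_hazard(t, [0.3, 0.7], [0.5, 4.0], [0.5, 3.0])
# hazard falls early, rises in mid-life, then decelerates late
```

    The late-life deceleration falls out of the mixture automatically: as the high-hazard subpopulation is depleted, the mixture hazard bends back toward the hazard of the more robust component, which is the behavior the abstract contrasts with Gompertz-Makeham.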

  16. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures. "If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work." -Short Book Reviews (Publication of the International Statistical Institute) "The text contains many examples with worked solutions and, with its extensive coverage of the subject matter, will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments." -Journal of the Royal S

  17. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic elements to the baseline algorithm: the learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated

  18. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Full Text Available Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  19. Categorization of Digital Games in English Language Learning Studies: Introducing the SSI Model

    Science.gov (United States)

    Sundqvist, Pia

    2013-01-01

    The main aim of the present paper is to introduce a model for digital game categorization suitable for use in English language learning studies: the Scale of Social Interaction (SSI) Model (original idea published as Sundqvist, 2013). The SSI Model proposes a classification of commercial off-the-shelf (COTS) digital games into three categories:…

  20. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters, and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five methods presented, one, the Gaussian mixture reduction proposed by Runnalls, easily outperformed the others. This method iteratively reduces the size of a GMM by successively merging pairs of component densities, with pairs selected for merger using a Kullback-Leibler based metric. Using Runnalls's method of reduction, we
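
    The merging step in Runnalls-style mixture reduction replaces a pair of components with the single Gaussian that preserves the pair's total weight, mean, and second moment. A 1-D sketch of that moment-preserving merge (the full method works on multivariate components and scores candidate pairs with a Kullback-Leibler based cost):

```python
import numpy as np

def merge_components(w1, mu1, var1, w2, mu2, var2):
    """Moment-preserving merge of two 1-D Gaussian components, as used
    in Runnalls-style Gaussian mixture reduction."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    mu = a1 * mu1 + a2 * mu2
    # merged variance = weighted variances + spread of the two means
    var = a1 * var1 + a2 * var2 + a1 * a2 * (mu1 - mu2) ** 2
    return w, mu, var

w, mu, var = merge_components(0.5, -1.0, 1.0, 0.5, 1.0, 1.0)
# → w = 1.0, mu = 0.0, var = 1.0 + 1.0 = 2.0
```

    The spread term a1*a2*(mu1 - mu2)**2 is what keeps the merged component's second moment equal to that of the pair it replaces.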

  1. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2017-02-01

    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.

  2. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation

    Science.gov (United States)

    Shi, X.; Zhao, Q. H.

    2017-09-01

    For image segmentation methods based on the Gaussian Mixture Model (GMM), there are two problems: (1) the number of components is usually fixed, i.e., a fixed number of classes, and (2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, a GMM models the distribution of pixel intensities in the RS image, with the number of components treated as a random variable, and a prior distribution is built for each parameter. To improve noise resistance, a Gibbs function models the prior distribution of the GMM weight coefficients. The posterior distribution is built according to Bayes' theorem, and RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation of the RS image is obtained. Experimental results show that the proposed algorithm converges to the optimal number of classes and yields an ideal segmentation.

  3. Refining personality disorder subtypes and classification using finite mixture modeling.

    Science.gov (United States)

    Yun, Rebecca J; Stern, Barry L; Lenzenweger, Mark F; Tiersky, Lana A

    2013-04-01

    The current Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic system for Axis II disorders continues to be characterized by considerable heterogeneity and poor discriminant validity. Such problems impede accurate personality disorder (PD) diagnosis. As a result, alternative assessment tools are often used in conjunction with the DSM. One popular framework is the object relational model developed by Kernberg and his colleagues (J. F. Clarkin, M. F. Lenzenweger, F. Yeomans, K. N. Levy, & O. F. Kernberg, 2007, An object relations model of borderline pathology, Journal of Personality Disorders, Vol. 21, pp. 474-499; O. F. Kernberg, 1984, Severe Personality Disorders, New Haven, CT: Yale University Press; O. F. Kernberg & E. Caligor, 2005, A psychoanalytic theory of personality disorders, in M. F. Lenzenweger & J. F. Clarkin, Eds., Major Theories of Personality Disorder, New York, NY: Guilford Press). Drawing on this model and empirical studies thereof, the current study attempted to clarify Kernberg's (1984) PD taxonomy and identify subtypes within a sample with varying levels of personality pathology using finite mixture modeling. Subjects (N = 141) were recruited to represent a wide range of pathology. The finite mixture modeling results indicated that 3 components were harbored within the variables analyzed. Group 1 was characterized by low levels of antisocial, paranoid, and aggressive features, and Group 2 was characterized by elevated paranoid features. Group 3 revealed the highest levels across the 3 variables. The validity of the obtained solution was then evaluated by reference to a variety of external measures that supported the validity of the identified grouping structure. Findings generally appear congruent with previous research, which argued that a PD taxonomy based on paranoid, aggressive, and antisocial features is a viable supplement to current diagnostic systems. 
Our study suggests that Kernberg's object relational model offers a

  4. A model for steady flows of magma-volatile mixtures

    CERN Document Server

    Belan, Marco

    2012-01-01

    A general one-dimensional model for the steady adiabatic motion of liquid-volatile mixtures in vertical ducts with varying cross-section is presented. The liquid contains a dissolved part of the volatile and is assumed to be incompressible and in thermomechanical equilibrium with a perfect gas phase, which is generated by the exsolution of the same volatile. An inverse problem approach is used -- the pressure along the duct is set as an input datum, and the other physical quantities are obtained as output. This fluid-dynamic model is intended as an approximate description of magma-volatile mixture flows of interest to geophysics and planetary sciences. It is implemented as a symbolic code, where each line stands for an analytic expression, whether algebraic or differential, which is managed by the software kernel independently of the numerical value of each variable. The code is versatile and user-friendly and makes it possible to check the consequences of different hypotheses even through its early steps. Only the las...

  5. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  6. Mixture of a seismicity model based on the rate-and-state friction and ETAS model

    Science.gov (United States)

    Iwata, T.

    2015-12-01

    Currently the ETAS model [Ogata, 1988, JASA] is considered to be a standard model of seismicity. However, because the ETAS model is a purely statistical one, the physics-based seismicity model derived from rate-and-state friction (hereafter referred to as the Dieterich model) [Dieterich, 1994, JGR] is frequently examined. The original version of the Dieterich model, however, has several problems in its application to real earthquake sequences, and modifications have therefore been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved by including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks). Still, the performance of the ETAS model is superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models, whose weights are given as k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan, 1965-1970, is also analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by means of the maximum likelihood method, and the model performances are assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that the mixture of the two models results in a significant improvement of the seismicity model.
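
    The model comparison above relies on AIC, which penalizes the mixture for its extra parameters (those of both sub-models plus the mixing weight k). A sketch with made-up log-likelihoods and parameter counts, purely to illustrate the trade-off:

```python
def aic(log_lik, n_params):
    """Akaike information criterion; lower is better."""
    return -2.0 * log_lik + 2.0 * n_params

# hypothetical fitted log-likelihoods and parameter counts
aic_etas = aic(-1200.0, 5)          # ETAS alone
aic_mix  = aic(-1193.0, 5 + 4 + 1)  # ETAS + Dieterich params + weight k
# the mixture wins only if its likelihood gain offsets the extra parameters
```

    In this made-up example the mixture's likelihood gain of 7 log-units outweighs its 5 extra parameters, mirroring the Matsushiro-swarm outcome; a smaller gain would leave plain ETAS preferred, as in the two aftershock sequences.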

  7. MODELLING AND PARAMETER ESTIMATION IN REACTIVE CONTINUOUS MIXTURES: THE CATALYTIC CRACKING OF ALKANES. PART I

    Directory of Open Access Journals (Sweden)

    PEIXOTO F. C.

    1999-01-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  8. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-01-01

    Using the Gaussian Mixture Model (GMM) and the Expectation Maximization algorithm, we perform an analysis of time duration (T90) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
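
Model selection by BIC as described here can be reproduced with standard tools. The synthetic log-durations below are illustrative stand-ins for the BATSE/Fermi T90 samples; the means and widths of the two populations are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic log10(T90) durations: a "short" and a "long" burst population
# (the means/widths/counts are illustrative, not fitted catalog values).
rng = np.random.default_rng(42)
log_t90 = np.concatenate([
    rng.normal(-0.5, 0.45, 150),   # short-duration component
    rng.normal(1.5, 0.40, 450),    # long-duration component
]).reshape(-1, 1)

# Fit GMMs with 1-4 components; the Bayesian Information Criterion
# penalizes extra parameters, so the lowest BIC selects the model.
bics = {}
for n in range(1, 5):
    gmm = GaussianMixture(n_components=n, random_state=0).fit(log_t90)
    bics[n] = gmm.bic(log_t90)

best_n = min(bics, key=bics.get)
print(best_n)
```

With two well-separated populations, the BIC minimum falls at two components, matching the bimodality reported for the CGRO/BATSE and Fermi/GBM samples.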

  9. Classifying gamma-ray bursts with Gaussian Mixture Model

    Science.gov (United States)

    Zhang, Zhi-Bin; Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-11-01

    Using Gaussian Mixture Model (GMM) and expectation-maximization algorithm, we perform an analysis of time duration (T90) for Compton Gamma Ray Observatory (CGRO)/BATSE, Swift/BAT and Fermi/GBM gamma-ray bursts (GRBs). The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. Bayesian information criterion has been used to compare between different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.

  10. Mixtures of Polya trees for flexible spatial frailty survival modelling.

    Science.gov (United States)

    Zhao, Luping; Hanson, Timothy E; Carlin, Bradley P

    2009-06-01

    Mixtures of Polya trees offer a very flexible nonparametric approach for modelling time-to-event data. Many such settings also feature spatial association that requires further sophistication, either at the point level or at the lattice level. In this paper, we combine these two aspects within three competing survival models, obtaining a data analytic approach that remains computationally feasible in a fully hierarchical Bayesian framework using Markov chain Monte Carlo methods. We illustrate our proposed methods with an analysis of spatially oriented breast cancer survival data from the Surveillance, Epidemiology and End Results program of the National Cancer Institute. Our results indicate appreciable advantages for our approach over competing methods that impose unrealistic parametric assumptions, ignore spatial association or both.

  11. Bayesian sensitivity analysis of incomplete data: bridging pattern-mixture and selection models.

    Science.gov (United States)

    Kaciroti, Niko A; Raghunathan, Trivellore

    2014-11-30

    Pattern-mixture models (PMM) and selection models (SM) are alternative approaches for statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations to identify PMM for different types of outcome with distribution in the exponential family; then we translate these to their equivalent SM approach. This provides a unified framework for performing sensitivity analysis under either setting. These new parameterizations are transparent, easy-to-use, and provide dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences using informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling.

  12. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  13. A Community of Practice Model for Introducing Mobile Tablets to University Faculty

    Science.gov (United States)

    Drouin, Michelle; Vartanian, Lesa Rae; Birk, Samantha

    2014-01-01

    We examined the effectiveness of a community of practice (CoP) model for introducing tablets to 139 faculty members at a higher education institution. Using a CoP within a systems model, we used large- and small-group mentorship to foster collaboration among faculty members. Most faculty members agreed that the project was well organized and…

  14. Introducing an Intervention Model for Fostering Affective Involvement with Persons Who Are Congenitally Deafblind

    Science.gov (United States)

    Martens, Marga A. W.; Janssen, Marleen J.; Ruijssenaars, Wied A. J. J. M.; Riksen-Walraven, J. Marianne

    2014-01-01

    The article presented here introduces the Intervention Model for Affective Involvement (IMAI), which was designed to train staff members (for example, teachers, caregivers, support workers) to foster affective involvement during interaction and communication with persons who have congenital deaf-blindness. The model is theoretically underpinned,…

  15. Advances in Behavioral Genetics Modeling Using Mplus: Applications of Factor Mixture Modeling to Twin Data

    National Research Council Canada - National Science Library

    Muthen, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-01-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder...

  16. Fourth-order strain-gradient phase mixture model for nanocrystalline fcc materials

    Science.gov (United States)

    Klusemann, Benjamin; Bargmann, Swantje; Estrin, Yuri

    2016-12-01

    The proposed modeling approach for nanocrystalline materials is an extension of the local phase mixture model introduced by Kim et al (2000 Acta Mater. 48 493-504). Local models cannot account for any non-uniformities or strain patterns, i.e. such models describe the behavior correctly only as long as it is homogeneous. In order to capture heterogeneities, the phase mixture model is augmented with gradient terms of higher order, namely second and fourth order. Different deformation mechanisms are assumed to operate in grain interior and grain boundaries concurrently. The deformation mechanism in grain boundaries is associated with diffusional mass transport along the boundaries, while in the grain interior dislocation glide as well as diffusion controlled mechanisms are considered. In particular, the mechanical response of nanostructured polycrystals is investigated. The model is capable of correctly predicting the transition of flow stress from Hall-Petch behavior in conventional grain size range to an inverse Hall-Petch relation in the nanocrystalline grain size range. The consideration of second- and fourth-order strain gradients allows non-uniformities within the strain field to represent strain patterns in combination with a regularization effect. Details of the numerical implementation are provided.

  17. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits of free T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are exposed daily, through the diet, drinking water, air and pharmaceuticals, to mixtures of chemicals disrupting thyroid gland function (TGFDs), which has raised serious concern about potential additive or synergistic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five of the six mixtures tested. The exception was the mixture of MMI (a TPO inhibitor) and KClO₄ (a NIS inhibitor) dosed at a fixed ratio of EC₁₀, which yielded similar CA and RA predictions, so no conclusive result could be obtained. These results support the phenomenological similarity criterion, which states that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergistic or additive effects of mixtures of chemicals on thyroid function. • Zebrafish as an alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of
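
The two prediction concepts compared in this record have standard textbook forms: concentration addition finds the effect E at which the summed toxic units c_i/EC_E,i equal one, while response addition combines the single-chemical effects as E = 1 − Π(1 − E_i). A minimal sketch for a binary mixture with Hill-type concentration-response curves; all EC50 and slope values are invented for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Hill concentration-response curve for a single chemical.
def effect(c, ec50, hill):
    return c**hill / (ec50**hill + c**hill)

def inverse_effect(e, ec50, hill):
    # Concentration of the chemical alone producing fractional effect e.
    return ec50 * (e / (1.0 - e)) ** (1.0 / hill)

def predict_ca(c1, c2, p1, p2):
    # Concentration addition: solve c1/EC_E,1 + c2/EC_E,2 = 1 for E.
    f = lambda e: c1 / inverse_effect(e, *p1) + c2 / inverse_effect(e, *p2) - 1.0
    return brentq(f, 1e-9, 1.0 - 1e-9)

def predict_ra(c1, c2, p1, p2):
    # Response addition (independent action): combine single effects.
    e1, e2 = effect(c1, *p1), effect(c2, *p2)
    return 1.0 - (1.0 - e1) * (1.0 - e2)

p_tpo = (10.0, 1.5)   # (EC50, Hill slope) for a hypothetical TPO inhibitor
p_nis = (50.0, 2.0)   # for a hypothetical NIS inhibitor
print(predict_ca(10.0, 50.0, p_tpo, p_nis),
      predict_ra(10.0, 50.0, p_tpo, p_nis))
```

At each chemical's own EC50 the RA prediction is 1 − 0.5 × 0.5 = 0.75, while the CA prediction exceeds 0.5; comparing such predictions with the measured IT4C response is the basis of the test described above.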

  18. Accuracy assessment of linear spectral mixture model due to terrain undulation

    Science.gov (United States)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixed spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. The LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportions of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of the LSMM is restricted by a variety of factors, but research on the LSMM has mostly focused on appraisal of the nonlinear effects related to the model itself and on techniques used to select endmembers; unfortunately, the environmental conditions of the study area that could sway the unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper probes the accuracy uncertainty of the LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. On this basis, fractional abundances for different cover types were extracted from both pre- and post-C terrain illumination corrected ASTER data using the LSMM. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of the LSMM in mountainous areas. Specifically, for vegetation abundances, improvements in unmixing accuracy (R2) of 17.6% (regression against NDVI) and 18.6% (regression against MVI) were achieved by removing terrain undulation. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying the LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be
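
The LSMM itself is a constrained linear inverse problem: a pixel spectrum is modeled as endmember spectra weighted by non-negative abundances that sum to one. A minimal sketch; the endmember spectra and the mixed pixel are invented, and the sum-to-one constraint is imposed by a common weighted-row trick rather than a dedicated solver:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra: rows are spectral bands, columns are
# endmembers (say vegetation, soil, water); values are invented.
E = np.array([
    [0.05, 0.20, 0.02],
    [0.08, 0.25, 0.03],
    [0.45, 0.30, 0.02],
    [0.50, 0.35, 0.01],
])

# "True" abundances for a synthetic mixed pixel (sum to one).
a_true = np.array([0.6, 0.3, 0.1])
pixel = E @ a_true

# Enforce sum-to-one by appending a heavily weighted row of ones to the
# system, then solve with non-negative least squares.
w = 1e3
E_aug = np.vstack([E, w * np.ones(E.shape[1])])
pixel_aug = np.append(pixel, w)
abundances, _ = nnls(E_aug, pixel_aug)
print(abundances)
```

With noise-free data the recovered abundances reproduce a_true; uncorrected terrain illumination perturbs the pixel vector and hence biases exactly this solution, which is the effect quantified in the study.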

  19. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females.

  20. Introducing Dark Matter to Little Higgs Models without T-Parity

    CERN Document Server

    Martin, Travis A W

    2013-01-01

    We present a novel method for incorporating dark matter into little Higgs models in a way that can be applied to many existing models without introducing T-parity, while simultaneously alleviating precision constraints arising from heavy gauge bosons. The low energy scalar potential of these dark little Higgs models is similar to, and can draw upon existing phenomenological studies of, inert doublet models. Furthermore, we apply this method to modify the littlest Higgs model to create the next to littlest Higgs model, and describe details of the dark matter candidate and its contribution to the relic density.

  1. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  2. Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models.

    Directory of Open Access Journals (Sweden)

    Anna Magdalena Vögele

    Full Text Available This paper introduces a new method for the analysis of animal muscle activation during locomotion. It is based on fitting Gaussian mixture models (GMMs) to surface EMG (sEMG) data. This approach enables researchers to isolate parts of the overall muscle activation within locomotion EMG data. Furthermore, it provides new opportunities for analysis and exploration of sEMG data by using the resulting Gaussian modes as atomic building blocks for a hierarchical clustering. In our experiments, composite peak models representing the general activation pattern per sensor location (one sensor on the long back muscle, three sensors on the gluteus muscle on each body side) were identified per individual for all 14 horses during walk and trot in the present study. We thereby show the applicability of the method for identifying composite peak models, which describe the activation of different muscles throughout cycles of locomotion.

  3. Mixture models versus free energy of hydration models for waste glass durability

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T.; Masuga, P.

    1996-03-01

    Two approaches for modeling high-level waste glass durability as a function of glass composition are compared. The mixture approach utilizes first-order mixture (FOM) or second-order mixture (SOM) polynomials in composition, whereas the free energy of hydration (FEH) approach assumes durability is linearly related to the FEH of glass. Both approaches fit their models to data using least squares regression. The mixture and FEH approaches are used to model glass durability as a function of glass composition for several simulated waste glass data sets. The resulting FEH and FOM model coefficients and goodness-of-fit statistics are compared, both within and across data sets. The goodness-of-fit statistics show that the FOM model fits/predicts durability in each data set better (sometimes much better) than the FEH model. Considerable differences also exist between some FEH and FOM model component coefficients for each of the data sets. These differences are due to the mixture approach having a greater flexibility to account for the effect of a glass component depending on the level and range of the component and on the levels of other glass components. The mixture approach can also account for higher-order (e.g., curvilinear or interactive) effects of components, whereas the FEH approach cannot. SOM models were developed for three of the data sets, and are shown to improve on the corresponding FOM models. Thus, the mixture approach has much more flexibility than the FEH approach for approximating the relationship between glass composition and durability for various glass composition regions.
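
A first-order mixture (Scheffé) polynomial has one coefficient per component and no separate intercept (the proportions sum to one), and is fit by ordinary least squares. A minimal sketch with invented glass compositions and durability responses:

```python
import numpy as np

# Synthetic glass compositions: 4 components, proportions summing to one.
rng = np.random.default_rng(1)
X = rng.dirichlet(alpha=[2, 2, 2, 2], size=40)

# Invented "true" component effects on durability plus measurement noise;
# a first-order Scheffe model is y = sum_i beta_i * x_i with no intercept.
beta_true = np.array([1.5, -0.8, 0.3, 2.1])
y = X @ beta_true + rng.normal(0.0, 0.05, size=40)

# Least-squares fit of the first-order mixture (FOM) model.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1.0 - np.sum((y - X @ beta_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta_hat, r2)
```

The FEH approach instead regresses durability on a single composite predictor (the free energy of hydration), so it cannot reassign weight among components the way the per-component coefficients above can; adding cross-product terms X[:, i] * X[:, j] to the design matrix gives the second-order (SOM) extension mentioned in the record.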

  4. Improved model for mixtures of polymers and hard spheres

    Science.gov (United States)

    D'Adamo, Giuseppe; Pelissetto, Andrea

    2016-12-01

    Extensive Monte Carlo simulations are used to investigate how model systems of mixtures of polymers and hard spheres approach the scaling limit. We represent polymers as lattice random walks of length L with an energy penalty w for each intersection (Domb-Joyce model), interacting with hard spheres of radius R_c via a hard-core pair potential of range R_mon + R_c, where R_mon is identified as the monomer radius. We show that the mixed polymer-colloid interaction gives rise to new confluent corrections. The leading ones scale as L^{-ν}, where ν ≈ 0.588 is the usual Flory exponent. Finally, we determine optimal values of the model parameters w and R_mon that guarantee the absence of the two leading confluent corrections. This improved model shows a significantly faster convergence to the asymptotic limit L → ∞ and is amenable to extensive and accurate numerical simulations at finite density, with only a limited computational effort.

  5. Compressive sensing by learning a Gaussian mixture model from measurements.

    Science.gov (United States)

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
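
The analytic fact underlying the MMLE is that a GMM prior remains a GMM after a linear measurement: if x ~ Σ_k π_k N(μ_k, Σ_k) and y = Ax + n with n ~ N(0, σ²I), then y ~ Σ_k π_k N(Aμ_k, AΣ_kAᵀ + σ²I). The sketch below evaluates this induced marginal likelihood for one compressive measurement; all dimensions and GMM parameters are invented:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, m, K = 8, 3, 2          # signal dim, measurement dim, mixture components

# A hypothetical GMM signal prior (parameters are illustrative only).
pis = np.array([0.4, 0.6])
mus = [np.zeros(d), np.full(d, 2.0)]
covs = [np.eye(d), 0.5 * np.eye(d)]

A = rng.standard_normal((m, d)) / np.sqrt(d)   # compressive sensing matrix
sigma2 = 0.01                                   # measurement-noise variance

def measurement_log_likelihood(y):
    # Marginalizing x analytically: y is itself GMM-distributed with
    # projected means A mu_k and covariances A Sigma_k A^T + sigma2 I.
    comps = [
        pis[k] * multivariate_normal.pdf(
            y, A @ mus[k], A @ covs[k] @ A.T + sigma2 * np.eye(m))
        for k in range(K)
    ]
    return np.log(np.sum(comps))

# Simulate one compressive measurement of a draw from component 1.
x = rng.multivariate_normal(mus[1], covs[1])
y = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)
print(measurement_log_likelihood(y))
```

The MMLE of the paper maximizes the sum of such log-likelihoods over the GMM parameters given many measurement vectors; this sketch only shows the likelihood evaluation, not the optimization.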

  6. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  7. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture proportion

  8. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...... (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary...

  9. Fully Bayesian mixture model for differential gene expression: simulations and model checks.

    Science.gov (United States)

    Lewin, Alex; Bochkina, Natalia; Richardson, Sylvia

    2007-01-01

    We present a Bayesian hierarchical model for detecting differentially expressed genes using a mixture prior on the parameters representing differential effects. We formulate an easily interpretable 3-component mixture to classify genes as over-expressed, under-expressed and non-differentially expressed, and model gene variances as exchangeable to allow for variability between genes. We show how the proportion of differentially expressed genes, and the mixture parameters, can be estimated in a fully Bayesian way, extending previous approaches where this proportion was fixed and empirically estimated. Good estimates of the false discovery rates are also obtained. Different parametric families for the mixture components can lead to quite different classifications of genes for a given data set. Using Affymetrix data from a knockout and wild-type mouse experiment, we show how predictive model checks can be used to guide the choice between possible mixture priors. These checks show that extending the mixture model to allow extra variability around zero instead of the usual point mass null fits the data better. A software package for R is available.

  10. Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations

    Science.gov (United States)

    Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad

    2016-01-01

    In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test and those who fail are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…

  11. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, environmental risk was assessed by addressing the relationship between the organism and xenobiotics. Five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained a risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results showed that several industrial wastes induced mortality, reproductive effects, micronucleus formation, and increases in the rates of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed classification of the environmental risk of the industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  12. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
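
The EM iteration for a finite mixture alternates between computing posterior component memberships (E-step) and weighted parameter updates (M-step). A minimal sketch for a two-component Poisson mixture, with invented rates, standing in for the full generalized linear framework of the paper:

```python
import numpy as np
from scipy.stats import poisson

# Synthetic counts from a two-component Poisson mixture
# (mixing weight 0.3 and rates 2.0 / 9.0 are invented).
rng = np.random.default_rng(7)
z = rng.random(500) < 0.3
y = np.where(z, rng.poisson(2.0, 500), rng.poisson(9.0, 500))

# EM: the E-step computes responsibilities (posterior membership
# probabilities); the M-step re-estimates the weight and both rates.
pi, lam1, lam2 = 0.5, 1.0, 5.0          # crude starting values
for _ in range(200):
    r1 = pi * poisson.pmf(y, lam1)
    r2 = (1 - pi) * poisson.pmf(y, lam2)
    resp = r1 / (r1 + r2)                # E-step
    pi = resp.mean()                     # M-step: mixing weight
    lam1 = np.sum(resp * y) / np.sum(resp)
    lam2 = np.sum((1 - resp) * y) / np.sum(1 - resp)

print(pi, lam1, lam2)
```

The estimates converge near the generating values (0.3, 2.0, 9.0). In the generalized linear setting each M-step becomes a weighted GLM fit with the responsibilities as case weights, which is why the algorithm can, as the record notes, be implemented readily in standard statistical software.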

  13. A model for calculating heat transfer coefficient concerning ethanol-water mixtures condensation

    Science.gov (United States)

    Wang, J. S.; Yan, J. J.; Hu, S. H.; Yang, Y. S.

    2010-03-01

    This study calculates a heat transfer coefficient (HTC) for ethanol-water mixture condensation by combining filmwise theory with the dropwise concept. A new model, including ethanol concentration, vapor pressure and velocity, is developed by introducing a characteristic coefficient that combines the two above-mentioned theories. The calculation is compared with experiment under different concentrations, pressures and velocities. The calculated values agree well with the experimental results; the maximum error is within ±30.1%. In addition, the model is applied to related experiments reported in other literature, and the values obtained agree well with the published results.

  14. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also shows consistency, meaning the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results showed a negative relationship between rubber prices and stock market prices for all selected countries.

  15. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and the need in informatics to update current models in computer science to support alternative methods of computation and communication. In [10] we proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we develop the HIM design in detail. In our design, we first introduce the Content/Form computing system, a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM model (a type of artificial neural network) as its basic self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  16. Non-electrostatic surface complexation models for protons and lead(II) sorption onto single minerals and their mixture.

    Science.gov (United States)

    Pagnanelli, Francesca; Bornoroni, Lorena; Moscardini, Emanuela; Toro, Luigi

    2006-05-01

    Potentiometric titrations and lead sorption tests were conducted using muscovite, clinochlore, hematite, goethite, quartz, and a mixture of these same minerals. Mechanistic models were developed to represent and interpret these data. The aim was to isolate the specific contribution of each mineral in proton and lead binding. Acid-base properties of each single mineral as well as their mixture were represented by discrete models, which consider the dissociation of n monoprotic sites (n-site/n-K(H) models). A one-site/one-K(H) model (logK(H1) = 10.69) was chosen for quartz (dissociation of SiOH edge hydroxyl groups). Goethite and hematite (FeOH groups) were represented by the same one-site/one-K(H) model (logK(H1) = 10.35). Three-site/three-K(H) models were used for muscovite (logK(H1) = 4.18; logK(H2) = 6.65; logK(H3) = 9.67) and clinochlore (logK(H1) = 3.84; logK(H2) = 6.57; logK(H3) = 9.71) assuming that SiOH and AlOH of the aluminosilicate matrix dissociate in the acid-neutral pH range while SiOH groups of quartz inclusions dissociate in the basic range. Similarly, the mixture of these minerals was represented by a three-site/three-K(H) model (logK(H1) = 3.39; logK(H2) = 6.72; logK(H3) = 10.82). According to crossed comparisons with single minerals, the first two sites of the mixture were associated with the aluminosilicate matrix (SiOH and AlOH respectively) and the third site with iron oxides (FeOH) and quartz groups. Additivity of proton binding in the mixture was demonstrated by simulating the mixture's titration curve. A unified model for the entire set of titration curves (single minerals and mixture) was also developed introducing a three-peak distribution function for proton affinity constants. Experimental data for lead sorption onto the mixture and individual minerals in the 3-5 pH range denoted the competition between protons and metallic ions. The entire set of lead isotherms (individual mineral and mixture data) was represented adequately by a unified

  17. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  18. A person-fit index for polytomous Rasch models, latent class models, and their mixture generalizations

    NARCIS (Netherlands)

    von Davier, M; Molenaar, IW

    2003-01-01

    A normally distributed person-fit index is proposed for detecting aberrant response patterns in latent class models and mixture distribution IRT models for dichotomous and polytomous data. This article extends previous work on the null distribution of person-fit indices for the dichotomous Rasch mod

  19. Strained and unconstrained multivariate normal finite mixture modeling of Piagetian data.

    NARCIS (Netherlands)

    Dolan, C.V.; Jansen, B.R.J.; van der Maas, H.L.J.

    2004-01-01

    We present the results of multivariate normal mixture modeling of Piagetian data. The sample consists of 101 children, who carried out a (pseudo-)conservation computer task on four occasions. We fitted both cross-sectional mixture models, and longitudinal models based on a Markovian transition

  20. Introducing the Core Probability Framework and Discrete-Element Core Probability Model for efficient stochastic macroscopic modelling

    NARCIS (Netherlands)

    Calvert, S.C.; Taale, H.; Hoogendoorn, S.P.

    2014-01-01

    In this contribution the Core Probability Framework (CPF) is introduced with the application of the Discrete-Element Core Probability Model (DE-CPM) as a new DNL for dynamic macroscopic modelling of stochastic traffic flow. The model is demonstrated for validation in a test case and for computationa

  1. Global cross-calibration of Landsat spectral mixture models

    CERN Document Server

    Sousa, Daniel

    2016-01-01

    Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from previous Landsat sensors. The objective of this analysis is to quantify the impact of these changes on linear spectral mixture models that use imagery collected by different Landsat sensors. The 2013 underflight of Landsat 7 and 8 provides an opportunity to cross calibrate the spectral mixing spaces of the ETM+ and OLI sensors using near-simultaneous acquisitions from a wide variety of land cover types worldwide. We use 80,910,343 pairs of OLI and ETM+ spectra to characterize the OLI spectral mixing space and perform a cross-...

  2. Fuzzy local Gaussian mixture model for brain MR image segmentation.

    Science.gov (United States)

    Ji, Zexuan; Xia, Yong; Sun, Quansen; Chen, Qiang; Xia, Deshen; Feng, David Dagan

    2012-05-01

    Accurate brain tissue segmentation from magnetic resonance (MR) images is an essential step in quantitative brain image analysis. However, due to the existence of noise and intensity inhomogeneity in brain MR images, many segmentation algorithms suffer from limited accuracy. In this paper, we assume that the local image data within each voxel's neighborhood satisfy the Gaussian mixture model (GMM), and thus propose the fuzzy local GMM (FLGMM) algorithm for automated brain MR image segmentation. This algorithm estimates the segmentation result that maximizes the posterior probability by minimizing an objective energy function, in which a truncated Gaussian kernel function is used to impose the spatial constraint and fuzzy memberships are employed to balance the contribution of each GMM. We compared our algorithm to state-of-the-art segmentation approaches in both synthetic and clinical data. Our results show that the proposed algorithm can largely overcome the difficulties raised by noise, low contrast, and bias field, and substantially improve the accuracy of brain MR image segmentation.
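    The global GMM baseline that FLGMM refines can be sketched as follows. This is an assumption-laden illustration in Python with scikit-learn, not the authors' FLGMM itself: a plain mixture is fitted to synthetic voxel intensities, and each voxel receives a hard label plus soft posterior memberships (the quantities that the fuzzy, locally weighted variant generalizes).

```python
# Illustrative baseline only: a plain (non-local, non-fuzzy) GMM
# segmentation of synthetic "tissue" intensities into 3 classes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 1-D "image": three tissue classes with additive noise.
intensities = np.concatenate([rng.normal(50, 5, 400),    # class A
                              rng.normal(100, 5, 400),   # class B
                              rng.normal(150, 5, 400)])  # class C
X = intensities.reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)            # hard segmentation
posteriors = gmm.predict_proba(X)  # soft memberships (cf. fuzzy weights)
```

    The paper's contribution is precisely what this sketch lacks: local neighborhood models, a truncated Gaussian spatial kernel, and fuzzy memberships robust to bias fields.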

  3. Introducing Mudbox

    CERN Document Server

    Kermanikian, Ara

    2010-01-01

    One of the first books on Autodesk's new Mudbox 3D modeling and sculpting tool!. Autodesk's Mudbox was used to create photorealistic creatures for The Dark Knight , The Mist , and others films. Now you can join the crowd interested in learning this exciting new digital modeling and sculpting tool with this complete guide. Get up to speed on all of Mudbox's features and functions, learn how sculpt and paint, and master the art of using effective workflows to make it all go easier.: Introduces Autodesk's Mudbox, an exciting 3D modeling and sculpting tool that enables you to create photorealistic

  4. Introducing the concept of anisotropy at different scales for modeling optical turbulence.

    Science.gov (United States)

    Toselli, Italo

    2014-08-01

    In this paper, the concept of anisotropy at different atmospheric turbulence scales is introduced. A power spectrum and its associated structure function with inner and outer scale effects and anisotropy are also shown. The power spectrum includes an effective anisotropic parameter ζ(eff) to describe anisotropy, which is useful for modeling optical turbulence when a non-Kolmogorov power law and anisotropy along the direction of propagation are present.

  5. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. Particularly, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation combined with saturation-weighted computation makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
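    The circular soft-clustering idea can be illustrated with a two-component von Mises mixture. This sketch uses SciPy and invented stain parameters (mean hue, concentration, weight); it is not the paper's full pipeline, only the posterior-membership step: each hue angle is softly assigned to each stain.

```python
# Sketch: soft-assign hue angles (radians) to two assumed stain modes
# using posterior probabilities under a von Mises mixture.
import numpy as np
from scipy.stats import vonmises

# Hypothetical stain parameters (mean hue mu, concentration kappa, weight w).
stains = [(-1.0, 4.0, 0.6), (1.5, 4.0, 0.4)]

def stain_memberships(hue):
    """Posterior membership of each hue angle in each stain component."""
    hue = np.asarray(hue, dtype=float)
    dens = np.stack([w * vonmises.pdf(hue, kappa, loc=mu)
                     for mu, kappa, w in stains], axis=-1)
    return dens / dens.sum(axis=-1, keepdims=True)

# Hues near each assumed mode, plus one ambiguous value in between.
m = stain_memberships([-1.0, 1.5, 0.2])
```

    Because the von Mises density is periodic, hues near ±π are handled correctly, which a Gaussian on raw hue values would get wrong.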

  6. Introducing a model for competitiveness of suppliers in supply chain through game theory approach

    Directory of Open Access Journals (Sweden)

    Hengameh Cighary Deljavan

    2012-10-01

    Full Text Available The purpose of the present study is to introduce a model for the competitiveness of suppliers in a supply chain through a game theory approach in one of the automobile companies of Iran. In this study, the game is based on price and non-price factors, and the company estimates the real profit obtained from collaboration with each of the supply chain members. This is done by considering the governing competitive conditions, based on game theory, before entering a bid for the purchase of the α piece as a spare part among the 8 companies supplying this piece as the supply chain members. According to experts in this industry, quality is the main non-price competitiveness factor after price. Among current research models, the model introduced by Lu and Tsao (2011) [Lu, J.C., Tsao, Y.C., & Charoensiriwath, C. (2011). Competition under manufacturer service and retail price. Economic Modelling, 28, 1256-1264.] with two manufacturers and one distributor, being appropriate for the research data, has been taken as the basis, implemented for the case study, and then extended to n manufacturers and one common retailer. Given the price elasticity of demand and the potential market size (maximum product demand), the retailer price, production price, wholesale price, demand, and manufacturer and retailer profits are estimated under three scenarios: Manufacturer Stackelberg, Retailer Stackelberg and Vertical Nash. By comparing them, price equilibrium points and optimum service levels are specified, and the better scenario can be determined. A sensitivity analysis is performed for the new model, and manufacturers are ranked based on manufacturer profit, retailer profit and customer satisfaction. Finally, in addition to introducing the n-person game model, this research analyzes customer satisfaction, which was a missing element in previous models.

  7. Advances in behavioral genetics modeling using Mplus: applications of factor mixture modeling to twin data.

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-06-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder. In this model, heritability is simultaneously studied with respect to latent class membership and within-class severity dimensions. Different latent classes of individuals are allowed to have different heritability for the severity dimensions. The factor mixture approach appears to have great potential for the genetic analyses of heterogeneous populations. Generalizations for longitudinal data are also outlined.

  8. Maier-Saupe model for a mixture of uniaxial and biaxial molecules

    Science.gov (United States)

    Nascimento, E. S.; Henriques, E. F.; Vieira, A. P.; Salinas, S. R.

    2015-12-01

    We introduce shape variations in a liquid-crystalline system by considering an elementary Maier-Saupe lattice model for a mixture of uniaxial and biaxial molecules. Shape variables are treated in the annealed (thermalized) limit. We analyze the thermodynamic properties of this system in terms of temperature T , concentration c of intrinsically biaxial molecules, and a parameter Δ associated with the degree of biaxiality of the molecules. At the mean-field level, we use standard techniques of statistical mechanics to draw global phase diagrams, which are shown to display a rich structure, including uniaxial and biaxial nematic phases, a reentrant ordered region, and many distinct multicritical points. Also, we use the formalism to write an expansion of the free energy in order to make contact with the Landau-de Gennes theory of nematic phase transitions.

  9. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin HUAI

    2013-01-01

    Full Text Available Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (when the Stokes number St is relatively large, the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, the particle dispersion along the radial direction is considerable, and the relative velocity is very low due to the low dynamic response time. When St = 3.08, the dispersion of particles along the radial direction is very little, and most of the particles have high relative velocities along the streamwise direction.
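    The self-similar Gaussian radial profile reported above can be illustrated with a simple fit. This is a sketch on synthetic profile data using NumPy (not the paper's CFD output): the normalized streamwise velocity is assumed to follow u/u_c = exp(-(r/b)²), and the spreading width b is recovered by linear least squares on log(u).

```python
# Sketch: check self-similarity by fitting u/u_c = exp(-(r/b)^2) to a
# synthetic radial profile of streamwise velocity in a developed jet.
import numpy as np

r = np.linspace(0.0, 3.0, 31)     # radial coordinate (arbitrary units)
b_true = 1.2                      # "true" spreading width of the profile
u = np.exp(-(r / b_true) ** 2)    # normalized velocity u / u_centerline

# Linearize: log(u) = -(1/b^2) * r^2, then least-squares for 1/b^2.
mask = u > 1e-12
slope = np.linalg.lstsq(-(r[mask] ** 2)[:, None], np.log(u[mask]),
                        rcond=None)[0][0]
b_fit = 1.0 / np.sqrt(slope)
```

    On measured profiles at different axial stations, collapsing onto one such curve after normalization is what "self-similar" means in the abstract above.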

  10. Phase equilibrium of liquid mixtures: Experimental and modeled data using statistical associating fluid theory for potential of variable range approach

    Science.gov (United States)

    Giner, Beatriz; Bandrés, Isabel; Carmen López, M.; Lafuente, Carlos; Galindo, Amparo

    2007-10-01

    A study of the phase equilibrium (experimental and modeled) of mixtures formed by a cyclic ether and haloalkanes has been carried out. Experimental data for the isothermal vapor liquid equilibrium of mixtures formed by tetrahydrofuran and tetrahydropyran and isomeric chlorobutanes at temperatures of 298.15, 313.15, and 328.15 K are presented. Experimental results have been discussed in terms of both molecular characteristics of pure compounds and potential intermolecular interaction between them using thermodynamic information of the mixtures obtained earlier. The statistical associating fluid theory for potential of variable range (SAFT-VR) approach together with standard combining rules without adjustable parameters has been used to model the phase equilibrium. Good agreement between experiment and the prediction is found with such a model. Mean absolute deviations for pressures are of the order of 1 kPa, and less than 0.013 mole fraction for vapor phase compositions. In order to improve the results obtained, a new modeling has been carried out by introducing a unique transferable parameter kij, which modifies the strength of the dispersion interaction between unlike components in the mixtures, and is valid for all the studied mixtures, being neither temperature nor pressure dependent. This parameter together with the SAFT-VR approach provides a description of the vapor-liquid equilibrium of the mixtures that is in excellent agreement with the experimental data for most cases. The absolute deviations are of the order of 0.005 mole fraction for vapor phase compositions and less than 0.3 kPa for pressure, except for mixtures containing 2-chloro-2-methylpropane, for which the pressure deviations are larger. Results obtained in this work in the modeling of the phase equilibrium with the SAFT-VR equation of state have been compared to the ones obtained in a previous study when the approach was used to model similar mixtures with clear differences in the thermodynamic behavior. We
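    The role of the transferable binary parameter kij can be shown numerically with the generic SAFT-style combining rule it modifies. The well-depth values below are invented for illustration; the paper's fitted parameters are not reproduced here.

```python
# Sketch of a modified Berthelot-type combining rule for the unlike
# dispersion interaction: eps_ij = (1 - kij) * sqrt(eps_i * eps_j).
import math

def unlike_dispersion(eps_i, eps_j, kij=0.0):
    """Cross dispersion energy with binary correction parameter kij."""
    return (1.0 - kij) * math.sqrt(eps_i * eps_j)

# Hypothetical pure-component well depths (in Kelvin, illustrative only).
eps_ether, eps_haloalkane = 250.0, 300.0
e0 = unlike_dispersion(eps_ether, eps_haloalkane)            # geometric mean
e1 = unlike_dispersion(eps_ether, eps_haloalkane, kij=0.05)  # weakened by 5%
```

    A single kij per family of mixtures, independent of temperature and pressure, is what makes the parameter "transferable" in the sense used above.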

  11. Introducing COZIGAM: An R Package for Unconstrained and Constrained Zero-Inflated Generalized Additive Model Analysis

    Directory of Open Access Journals (Sweden)

    Hai Liu

    2010-10-01

    Full Text Available The zero-inflation problem is very common in ecological studies as well as other areas. Nonparametric regression with zero-inflated data may be studied via the zero-inflated generalized additive model (ZIGAM), which assumes that the zero-inflated responses come from a probabilistic mixture of zero and a regular component whose distribution belongs to the 1-parameter exponential family. With the further assumption that the probability of non-zero-inflation is some monotonic function of the mean of the regular component, we propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data. When the hypothesized constraint obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We have developed an R package COZIGAM which contains functions that implement an iterative algorithm for fitting ZIGAMs and COZIGAMs to zero-inflated data based on the penalized likelihood approach. Other functions included in the package are useful for model prediction and model selection. We demonstrate the use of the COZIGAM package via some simulation studies and a real application.

  12. Modeling of Sunspot Numbers by a Modified Binary Mixture of Laplace Distribution Functions

    Science.gov (United States)

    Sabarinath, A.; Anilkumar, A. K.

    2008-07-01

    This paper presents a new approach for describing the shape of 11-year sunspot cycles by considering the monthly averaged values. This paper also brings out a prediction model based on the analysis of 22 sunspot cycles from the year 1749 onward. It is found that the shape of the sunspot cycles with monthly averaged values can be described by a functional form of modified binary mixture of Laplace density functions, modified suitably by introducing two additional parameters in the standard functional form. The six parameters, namely two locations, two scales, and two area parameters, characterize this model. The nature of the estimated parameters for the sunspot cycles from 1749 onward has been analyzed and finally we arrived at a sufficient set of the parameters for the proposed model. It is seen that this model picks up the sunspot peaks more closely than any other model without losing the match at other places at the same time. The goodness of fit for the proposed model is also computed with the Hathaway-Wilson-Reichmann χ̄ measure, which shows, on average, that the fitted model passes within 0.47 standard deviations of the actual averaged monthly sunspot numbers.
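    The underlying functional form, before the paper's two extra shape parameters are added, is a weighted sum of two Laplace densities. The sketch below uses invented location/scale/area values, not the fitted ones from the 22 analyzed cycles.

```python
# Sketch: binary mixture of Laplace density functions as a cycle-shape
# template, f(t) = a1/(2*s1)*exp(-|t-m1|/s1) + a2/(2*s2)*exp(-|t-m2|/s2),
# parameterized by two locations, two scales, and two areas.
import math

def laplace_mixture(t, m1, s1, a1, m2, s2, a2):
    """Two-location, two-scale, two-area Laplace mixture."""
    f1 = a1 / (2.0 * s1) * math.exp(-abs(t - m1) / s1)
    f2 = a2 / (2.0 * s2) * math.exp(-abs(t - m2) / s2)
    return f1 + f2

# Hypothetical parameters: a sharp component near month 40 of the cycle
# and a broader one near month 70 (months measured from cycle start).
params = dict(m1=40.0, s1=12.0, a1=5000.0, m2=70.0, s2=20.0, a2=3000.0)
peak = laplace_mixture(40.0, **params)   # near the cycle maximum
tail = laplace_mixture(130.0, **params)  # late declining phase
```

    The sharp Laplace peak is what lets this family track sunspot maxima more closely than smoother (e.g. Gaussian-based) templates.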

  13. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT models with stochastic (or uncertain constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT models (such as log-normal, log-Cauchy, and log-logistic FT models as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt prior, which elicits a priori uncertain constraint and develops Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  14. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians.

  15. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    Science.gov (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with the assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations, against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of the ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we have used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each of the experiments carried out. The second approach used was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure for the impact of either drug A on drug B or, vice versa, of B on A. The results of the studies justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action and only a few of the mixtures show synergic or dependent action. Copyright © 2016. Published by Elsevier B.V.
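    The concentration-addition prediction and the model deviation ratio can be sketched as follows. This is a generic CA formulation with invented EC50 values, not the paper's measured data; the MDR threshold used is a common rule of thumb, not necessarily the one applied in the study.

```python
# Sketch: concentration addition (CA) prediction for a binary mixture
# and the model deviation ratio MDR = predicted EC50 / observed EC50.
def ca_predicted_ec50(fractions, ec50s):
    """CA model: 1 / EC50_mix = sum_i p_i / EC50_i."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

# Hypothetical single-drug EC50s (mg/L) and a 50:50 mixture.
ec50_a, ec50_b = 2.0, 8.0
predicted = ca_predicted_ec50([0.5, 0.5], [ec50_a, ec50_b])

observed = 2.9                 # hypothetical measured mixture EC50
mdr = predicted / observed
# Common rule of thumb: 0.5 < MDR < 2 is consistent with additivity;
# values outside this band suggest synergism or antagonism.
additive = 0.5 < mdr < 2.0
```

    Computing one MDR per mixture experiment, as above, is what allows the authors to classify each drug pair as additive, independent, or interacting.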

  16. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2009-01-01

    textabstractWe present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  17. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  19. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Finite mixture models have been used to generate sub-pixel land cover classifications; traditionally, these make use of mixtures of normal distributions. However, such models fail to represent many land cover classes accurately, as these are usually...

  20. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
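    The quantity being bounded can be illustrated with a plain Monte Carlo estimate of the Amari α-divergence between two univariate Gaussian mixtures. This sketch (Python with NumPy, invented mixture parameters) is the naive stochastic baseline that the paper's deterministic combinatorial bounds avoid.

```python
# Sketch: Monte Carlo estimate of the Amari alpha-divergence
# D_a(p||q) = (1 - Integral p^a q^(1-a) dx) / (a*(1-a)), a not in {0,1},
# between two univariate Gaussian mixture models (no closed form in general).
import numpy as np

rng = np.random.default_rng(0)

def gmm_pdf(x, weights, means, stds):
    """Density of a univariate Gaussian mixture at points x."""
    x = np.asarray(x, dtype=float)[:, None]
    comp = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comp @ weights

def gmm_sample(n, weights, means, stds):
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[idx], np.asarray(stds)[idx])

def alpha_divergence(p, q, alpha, n=200_000):
    """Importance sampling from p: Integral p^a q^(1-a) = E_p[(q/p)^(1-a)]."""
    x = gmm_sample(n, *p)
    integral = np.mean((gmm_pdf(x, *q) / gmm_pdf(x, *p)) ** (1.0 - alpha))
    return (1.0 - integral) / (alpha * (1.0 - alpha))

# Two hypothetical 2-component mixtures (weights, means, stds).
p = (np.array([0.5, 0.5]), np.array([-1.0, 2.0]), np.array([1.0, 0.5]))
q = (np.array([0.3, 0.7]), np.array([0.0, 2.5]), np.array([1.0, 0.8]))
d_half = alpha_divergence(p, q, alpha=0.5)  # 4x the squared Hellinger distance
```

    At α = 1/2 the Hellinger-type integral is the Bhattacharyya coefficient, which is why the paper's bounds on such integrals translate directly into divergence bounds.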

  1. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    -alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... of the model of relevance to the petroleum and chemical industries: high pressure vapor-liquid and liquid-liquid equilibrium in alcohol-containing mixtures, mixtures with gas hydrate inhibitors and mixtures with polar and hydrogen bonding chemicals including organic acids. Some comparisons with conventional...

  2. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  3. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into dis

  4. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    Science.gov (United States)

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC] Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  5. Bayesian mixture modeling using a hybrid sampler with application to protein subfamily identification.

    Science.gov (United States)

    Fong, Youyi; Wakefield, Jon; Rice, Kenneth

    2010-01-01

    Predicting protein function is essential to advancing our knowledge of biological processes. This article is focused on discovering the functional diversification within a protein family. A Bayesian mixture approach is proposed to model a protein family as a mixture of profile hidden Markov models. For a given mixture size, a hybrid Markov chain Monte Carlo sampler comprising both Gibbs sampling steps and hierarchical clustering-based split/merge proposals is used to obtain posterior inference. Inference for mixture size concentrates on comparing the integrated likelihoods. The choice of priors is critical with respect to the performance of the procedure. Through simulation studies, we show that 2 priors that are based on independent data sets allow correct identification of the mixture size, both when the data are homogeneous and when the data are generated from a mixture. We illustrate our method using 2 sets of real protein sequences.

  6. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    Full Text Available We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.

  7. Introducing frailty models as a random effect model for a pooled trial analysis

    OpenAIRE

    Quinten, Chantal

    2011-01-01

    Cox proportional hazards models are the most popular way to analyze survival data. Heterogeneity in the survival outcomes of cancer patients in a dataset influences the shape of the observed mortality rate. Frailty models provide a way to investigate and describe this variation. The main aim of this study was to compare different extended Cox models that try to capture the heterogeneity in a pooled dataset, and to assess the robustness of the models by comparing their estimates, confidence intervals...

  8. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.

  9. Mathematical model of the component mixture distribution in the molten cast iron during centrifugation (sedimentation)

    Science.gov (United States)

    Bikulov, R. A.; Kotlyar, L. M.

    2014-12-01

    For the development and management of manufacturing processes for axisymmetric articles with a compositional structure by the centrifugal casting method [1,2,3,4], it is necessary to create a generalized mathematical model of the dynamics of the component mixture in the molten cast iron during centrifugation. In this article, based on the analysis of the dynamics of a two-component mixture during sedimentation, a method of successive approximations is developed to determine the distribution of a multicomponent mixture during centrifugation in a parabolic crucible.

  10. Introducing Modeling Transition Diagrams as a Tool to Connect Mathematical Modeling to Mathematical Thinking

    Science.gov (United States)

    Czocher, Jennifer A.

    2016-01-01

    This study contributes a methodological tool to reconstruct the cognitive processes and mathematical activities carried out by mathematical modelers. Represented as Modeling Transition Diagrams (MTDs), individual modeling routes were constructed for four engineering undergraduate students. Findings stress the importance and limitations of using…

  11. Adaptive Mixture Modelling Metropolis Methods for Bayesian Analysis of Non-linear State-Space Models.

    Science.gov (United States)

    Niemi, Jarad; West, Mike

    2010-06-01

    We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.

  12. A Binomial Mixture Model for Classification Performance: A Commentary on Waxman, Chambers, Yntema, and Gelman (1989).

    Science.gov (United States)

    Thomas, Hoben

    1989-01-01

    Individual differences in children's performance on a classification task are modeled by a two component binomial mixture distribution. The model accounts for data well, with variance accounted for ranging from 87 to 95 percent. (RJC)
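    A two-component binomial mixture of this kind can be fitted with a short EM routine. The data and starting values below are hypothetical illustrations, and the routine is a generic sketch rather than the commentary's own analysis.

```python
import math

def binom_pmf(x, n, p):
    """Binomial pmf: P(X = x) for n trials with success probability p."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def em_binomial_mixture(xs, n, iters=300):
    """EM for a two-component binomial mixture: xs[i] items correct out of n.
    Starting values are crude guesses; components might represent low- and
    high-performing groups of children on the classification task."""
    w, p1, p2 = 0.5, 0.3, 0.8
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in xs:
            a = w * binom_pmf(x, n, p1)
            b = (1 - w) * binom_pmf(x, n, p2)
            r.append(a / (a + b))
        # M-step: update mixing weight and the two success probabilities
        w = sum(r) / len(xs)
        p1 = sum(ri * x for ri, x in zip(r, xs)) / (n * sum(r))
        p2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (n * (len(xs) - sum(r)))
    return w, p1, p2
```

    With hypothetical scores out of 10 items such as [1, 2, 2, 1, 9, 10, 8, 9], the routine separates a low-accuracy and a high-accuracy component.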

  13. Modelling viscosity and mass fraction of bitumen - diluent mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Miadonye, A.; Latour, N.; Puttagunta, V.R. [Lakehead Univ., Thunder Bay, ON (Canada)

    1999-07-01

    In the recovery of bitumen from oil sands, reducing the viscosity is important both above and below ground. The addition of a liquid diluent breaks down or weakens the intermolecular forces that give bitumen its high viscosity; adding even 5% diluent can reduce the viscosity by more than 8%, facilitating the in situ recovery and pipeline transportation of bitumen. Knowledge of bitumen-diluent viscosity is essential: without it, upgrading processes, in situ recovery, well simulation, heat transfer, fluid flow and a variety of other engineering problems would be difficult or impossible to solve. The development of a simple correlation to predict the viscosity of binary bitumen-diluent mixtures in any proportion is described. The correlation estimated the viscosities and mass fractions of bitumen-diluent mixtures within acceptable limits of error. For the prediction of mixture viscosities, the developed correlation gave the best results, with an overall average absolute deviation of 12% compared to those of Chironis (17%) and Cragoe (23%). Predictions of diluent mass fractions yielded a much better result, with an overall average absolute deviation of 5%. The unique features of the correlation include its computational simplicity, its applicability to mixtures at temperatures other than 30 degrees C, and the fact that only the bitumen and diluent viscosities are needed to make predictions. It is the only correlation capable of predicting viscosities of mixtures, as well as the diluent mass fractions required to reduce bitumen viscosity to pumping viscosities. The prediction of viscosities at 25, 60.3, and 82.6 degrees C produced excellent results, particularly at high temperatures, with an average absolute deviation below 10%. 11 refs., 3 figs., 8 tabs.
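    The paper's own correlation is not reproduced in the abstract. As a hedged stand-in, the classic Arrhenius logarithmic blending rule illustrates the kind of calculation involved, including the inverse problem of finding the diluent mass fraction that reaches a target pumping viscosity; the numbers in the usage example are hypothetical.

```python
import math

def blend_viscosity(mu_bitumen, mu_diluent, w_diluent):
    """Arrhenius-type logarithmic mixing rule (an illustrative stand-in for
    the paper's correlation):
        ln(mu_mix) = w_d * ln(mu_d) + (1 - w_d) * ln(mu_b)
    where w_diluent is the diluent mass fraction and viscosities share units."""
    return math.exp(w_diluent * math.log(mu_diluent)
                    + (1 - w_diluent) * math.log(mu_bitumen))

def diluent_fraction_for_target(mu_bitumen, mu_diluent, mu_target):
    """Invert the rule: diluent mass fraction needed to reach mu_target."""
    return (math.log(mu_bitumen) - math.log(mu_target)) / \
           (math.log(mu_bitumen) - math.log(mu_diluent))
```

    For a hypothetical bitumen at 100 000 mPa·s and a diluent at 1 mPa·s, roughly half the mass must be diluent to reach a 350 mPa·s pumping target under this rule.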

  14. Unsupervised Segmentation of Spectral Images with a Spatialized Gaussian Mixture Model and Model Selection

    Directory of Open Access Journals (Sweden)

    Cohen S.X.

    2014-03-01

    Full Text Available In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modelled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation on a real dataset.

  15. Photometry and models of selected main belt asteroids: IX. Introducing interactive service for asteroid models (ISAM)

    DEFF Research Database (Denmark)

    Marciniak, A.; Bartczak, P.; Santana-Ros, T.

    2012-01-01

    ...occultations, or space probe imaging. Aims. During our ongoing work to increase the set of asteroids with known spin and shape parameters, there appeared a need for displaying the model plane-of-sky orientations for specific epochs to compare models from different techniques. It would also be instructive to be able to track how the complex lightcurves are produced by various asteroid shapes. Methods. Basing our analysis on an extensive photometric observational dataset, we obtained eight asteroid models with the convex lightcurve inversion method. To enable comparison of the photometric models with those from other observing/modelling techniques, we created an on-line service where we allow the inversion models to be orientated interactively. Results. Our sample of objects is quite representative, containing both relatively fast and slow rotators with highly and lowly inclined spin axes. With this work...

  16. Mixture Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2007-12-01

    A mixture experiment involves combining two or more components in various proportions or amounts and then measuring one or more responses for the resulting end products. Other factors that affect the response(s), such as process variables and/or the total amount of the mixture, may also be studied in the experiment. A mixture experiment design specifies the combinations of mixture components and other experimental factors (if any) to be studied and the response variable(s) to be measured. Mixture experiment data analyses are then used to achieve the desired goals, which may include (i) understanding the effects of components and other factors on the response(s), (ii) identifying components and other factors with significant and nonsignificant effects on the response(s), (iii) developing models for predicting the response(s) as functions of the mixture components and any other factors, and (iv) developing end-products with desired values and uncertainties of the response(s). Given a mixture experiment problem, a practitioner must consider the possible approaches for designing the experiment and analyzing the data, and then select the approach best suited to the problem. Eight possible approaches include 1) component proportions, 2) mathematically independent variables, 3) slack variable, 4) mixture amount, 5) component amounts, 6) mixture process variable, 7) mixture of mixtures, and 8) multi-factor mixture. The article provides an overview of the mixture experiment designs, models, and data analyses for these approaches.
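    For the component-proportions approach, the canonical Scheffé quadratic model for three components can be fitted exactly from a simplex-lattice {3,2} design (the three pure blends plus the three 50/50 binary blends). The sketch below uses the standard closed-form coefficients; the response values in the usage example are hypothetical.

```python
def fit_scheffe_quadratic(y_pure, y_binary):
    """Fit the three-component Scheffe quadratic mixture model
        y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
    exactly from a simplex-lattice {3,2} design.  There is no intercept
    because the proportions satisfy x1 + x2 + x3 = 1."""
    b1, b2, b3 = y_pure            # pure-blend responses give the linear terms
    y12, y13, y23 = y_binary       # 50/50 binary-blend responses
    b12 = 4 * y12 - 2 * (b1 + b2)  # from y12 = 0.5*b1 + 0.5*b2 + 0.25*b12
    b13 = 4 * y13 - 2 * (b1 + b3)
    b23 = 4 * y23 - 2 * (b2 + b3)
    return (b1, b2, b3, b12, b13, b23)

def predict(coef, x):
    """Predicted response at mixture proportions x = (x1, x2, x3)."""
    b1, b2, b3, b12, b13, b23 = coef
    x1, x2, x3 = x
    return b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
```

    A positive binary coefficient (e.g. b12 > 0) indicates synergistic blending of the two components; a negative one indicates antagonism.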

  17. Numerical Simulation of Water Jet Flow Using Diffusion Flux Mixture Model

    Directory of Open Access Journals (Sweden)

    Zhi Shang

    2014-01-01

    Full Text Available A multidimensional diffusion flux mixture model was developed to simulate water jet two-phase flows. By modifying the gravity term using the gradients of the mixture velocity, the centrifugal force on the water droplets could be taken into account. The slip velocities between the continuous phase (gas) and the dispersed phase (water droplets) were calculated through multidimensional diffusion flux velocities based on the modified multidimensional drift flux model. The model was validated through numerical simulations, by comparison with experiments and with simulations of the traditional algebraic slip mixture model on a water mist spray.

  18. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.
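    The shared-threshold observation can be illustrated with a simple threshold-unit sum (a back-of-the-envelope check, not the authors' process-based survival model): each component may sit below its individual threshold while the mixture as a whole exceeds the joint one. The concentrations and thresholds below are hypothetical.

```python
def mixture_exceeds_threshold(concentrations, thresholds):
    """For toxicants sharing a mode of action, effects can appear once the
    summed 'threshold units' c_i / c0_i exceed 1, even though every
    individual c_i is below its own threshold c0_i."""
    tu = sum(c / c0 for c, c0 in zip(concentrations, thresholds))
    return tu, tu > 1.0
```

    For example, four components each at 40% of their individual threshold jointly reach 1.6 threshold units, so effects on survival would be expected even though no single component shows an effect on its own.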

  19. Introducing a price variation limiter mechanism into a behavioral financial market model.

    Science.gov (United States)

    Naimzada, Ahmad; Pireddu, Marina

    2015-08-01

    In the present paper, we consider a nonlinear financial market model in which, in order to decrease the complexity of the dynamics and to achieve price stabilization, we introduce a price variation limiter mechanism, which in each period bounds the price variation so that the current price is forced to belong to a certain interval determined by the price realization in the previous period. More precisely, we introduce such mechanism into a financial market model in which the price dynamics are described by a sigmoidal price adjustment mechanism characterized by the presence of two asymptotes that bound the price variation and thus the dynamics. We show that the presence of our asymptotes prevents divergence and negativity issues. Moreover, we prove that the basins of attraction are complicated only under suitable conditions on the parameters and that chaos arises just when the price limiters are loose enough. On the other hand, for some suitable parameter configurations, we detect multistability phenomena characterized by the presence of up to three coexisting attractors.
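    The limiter mechanism itself is easy to state: whatever the underlying price map proposes, the realized price is clipped to an interval around the previous period's price. The sketch below uses a hypothetical linear adjustment map toward a fundamental value, not the paper's sigmoidal mechanism; `delta` is the maximum fractional variation per period.

```python
def limited_price_dynamics(p0, f, delta, steps):
    """Iterate a price map f under a variation limiter: each period the new
    price is clipped to [p*(1-delta), p*(1+delta)], so the per-period price
    variation can never exceed the fraction delta."""
    path = [p0]
    p = p0
    for _ in range(steps):
        q = f(p)
        q = max(p * (1 - delta), min(p * (1 + delta), q))  # the limiter
        path.append(q)
        p = q
    return path
```

    With an adjustment map pulling the price toward a fundamental of 10 and a 5% limiter, the price climbs at the capped rate until the map's own step falls inside the band, then converges; a loose limiter simply reproduces the unconstrained dynamics.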

  20. Introducing a more realistic model for opinion formation considering instability in social structure

    Science.gov (United States)

    Salehi, Sajjad; Taghiyareh, Fattaneh

    2016-06-01

    Opinion formation is a process through which the interactions of individuals, and the dynamics of their opinions under the influence of neighbors, are modeled. In this paper, in an effort to model opinion formation more realistically, we introduce a model that considers the role of network structure in opinion dynamics. In this model, each individual changes his opinion so as to decrease its difference from the opinions of trusted neighbors while intensifying his dissension with the untrusted ones. Considering trust/distrust relations as a signed network, we define a structural indicator that shows the degree of instability in the social structure and is calculated based on structural balance theory. It is applied as feedback to the opinion formation process, affecting its dynamics. Our simulation results show the formation of a set of clusters containing individuals holding opinions of similar values. Also, the opinion value of each individual is far from those of distrusted neighbors. Since this model considers distrust and the instability of relations in society, it offers a more realistic model of opinion formation.
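    The core update rule (move toward trusted neighbors, away from distrusted ones) can be sketched as follows. The structural-balance feedback indicator described in the paper is omitted, and the trust matrices and learning rate below are hypothetical.

```python
def update_opinions(opinions, trust, mu=0.1):
    """One synchronous update: agent i moves toward the opinions of trusted
    neighbours (trust[i][j] = +1) and away from distrusted ones (-1), with
    opinions clipped to [-1, 1].  A minimal sketch of the paper's rule."""
    n = len(opinions)
    new = []
    for i in range(n):
        shift = 0.0
        for j in range(n):
            if i != j and trust[i][j] != 0:
                shift += trust[i][j] * (opinions[j] - opinions[i])
        new.append(max(-1.0, min(1.0, opinions[i] + mu * shift / (n - 1))))
    return new
```

    Iterating this rule, mutually trusting agents converge toward a shared opinion, while mutually distrusting agents are pushed to opposite extremes, giving the clusters of similar opinions the paper reports.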

  1. Introducing multisensor satellite radiance-based evaluation for regional Earth System modeling

    Science.gov (United States)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; Aires, F.

    2014-07-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  2. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    Science.gov (United States)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; Aires, F.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  3. Introducing Decorated HODs: modeling assembly bias in the galaxy-halo connection

    CERN Document Server

    Hearin, Andrew P.; van den Bosch, Frank C.; Campbell, Duncan; Tollerud, Erik

    2015-01-01

    The connection between galaxies and dark matter halos is often inferred from data using probabilistic models, such as the Halo Occupation Distribution (HOD). Conventional HOD formulations assume that only halo mass governs the galaxy-halo connection. Violations of this assumption, known as galaxy assembly bias, threaten the HOD program. We introduce decorated HODs, a new, flexible class of models designed to account for assembly bias. Decorated HODs minimally expand the parameter space and maximize the independence between traditional and novel HOD parameters. We use decorated HODs to quantify the influence of assembly bias on clustering and lensing statistics. For SDSS-like samples, the impact of assembly bias on galaxy clustering can be as large as a factor of two on r ~ 200 kpc scales and ~15% in the linear regime. Assembly bias can either enhance or diminish clustering on large scales, but generally increases clustering on scales r <~ 1 Mpc. We performed our calculations with Halotools, an open-source,...

  4. Numerical Study of Wave Diffraction Effect Introduced in the SWAN Model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The new version of the SWAN model includes the wave diffraction effect, the main improvement over previous versions. Experimental data collected in the wave basin of the University of Delaware were used to test its performance. Wave heights were compared in four cases with different wave energies and directional spreading spectra. The results agreed well with the measurements, especially for the broad directional spectra cases. The effect of wave diffraction was analyzed by switching the corresponding term on and off. Introducing the diffraction term smoothed the distributions of wave height and wave direction, an effect especially obvious for the narrow spectrum cases. Compared with the calculations without diffraction, the model with the diffraction effect gave better results.

  5. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of an asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres of a specified radius, and was considered as a whole mixture with uniform contact properties for ...

  6. Noah-MP-Crop: Introducing dynamic crop growth in the Noah-MP land surface model

    Science.gov (United States)

    Liu, Xing; Chen, Fei; Barlage, Michael; Zhou, Guangsheng; Niyogi, Dev

    2016-12-01

    Croplands are important in land-atmosphere interactions and in the modification of local and regional weather and climate; however, they are poorly represented in the current version of the coupled Weather Research and Forecasting/Noah with multiparameterization (Noah-MP) land surface modeling system. This study introduced dynamic corn (Zea mays) and soybean (Glycine max) growth simulations and field management (e.g., planting date) into Noah-MP and evaluated the enhanced model (Noah-MP-Crop) at field scales using crop biomass data sets, surface heat fluxes, and soil moisture observations. Compared to the generic dynamic vegetation and prescribed-leaf area index (LAI)-driven methods in Noah-MP, Noah-MP-Crop showed improved performance in simulating LAI and crop biomass. The model is able to capture the seasonal and annual variability of LAI and to differentiate corn and soybean in the peak values of LAI as well as in the length of the growing seasons. Improved simulations of crop phenology in Noah-MP-Crop led to better surface heat flux simulations, especially in the early period of the growing season, where the current Noah-MP significantly overestimates LAI. The addition of crop yields as model outputs expands the application of Noah-MP-Crop to regional agriculture studies. There are limitations in the use of current growing degree days (GDD) criteria to predict growth stages, and it is necessary to develop a new method that combines GDD with other environmental factors to define crop growth stages more accurately. The capability introduced in Noah-MP allows further crop-related studies and development.
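    The growing degree days (GDD) criterion mentioned in the abstract is a standard accumulation of capped daily mean temperature above a crop base temperature. The base and cap below follow the common corn convention (10 °C and 30 °C) and are illustrative defaults, not Noah-MP-Crop's exact settings.

```python
def growing_degree_days(tmax_c, tmin_c, t_base=10.0, t_cap=30.0):
    """Daily GDD with the usual base/cap convention: temperatures are
    clamped to [t_base, t_cap] before averaging, so days below the base
    contribute nothing and very hot days are capped."""
    hi = min(tmax_c, t_cap)
    lo = max(tmin_c, t_base)
    lo = min(lo, hi)                       # keep lo <= hi after clamping
    return max(0.0, (hi + lo) / 2.0 - t_base)

def accumulate_gdd(daily):
    """Running GDD total over (tmax, tmin) pairs; crop models compare this
    accumulation against stage thresholds to trigger phenology transitions."""
    total, out = 0.0, []
    for tmax, tmin in daily:
        total += growing_degree_days(tmax, tmin)
        out.append(total)
    return out
```

    The abstract's point is that GDD alone is a blunt trigger for growth stages; combining the accumulated value with other environmental factors (e.g. moisture stress) is the suggested refinement.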

  7. Introducing improved structural properties and salt dependence into a coarse-grained model of DNA

    Energy Technology Data Exchange (ETDEWEB)

    Snodin, Benedict E. K., E-mail: benedict.snodin@chem.ox.ac.uk; Mosayebi, Majid; Schreck, John S.; Romano, Flavio; Doye, Jonathan P. K., E-mail: jonathan.doye@chem.ox.ac.uk [Physical and Theoretical Chemistry Laboratory, Department of Chemistry, University of Oxford, South Parks Road, Oxford OX1 3QZ (United Kingdom); Randisi, Ferdinando [Life Sciences Interface Doctoral Training Center, South Parks Road, Oxford OX1 3QU (United Kingdom); Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Šulc, Petr [Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10065 (United States); Ouldridge, Thomas E. [Department of Mathematics, Imperial College, 180 Queen’s Gate, London SW7 2AZ (United Kingdom); Tsukanov, Roman; Nir, Eyal [Department of Chemistry and the Ilse Katz Institute for Nanoscale Science and Technology, Ben-Gurion University of the Negev, Beer Sheva (Israel); Louis, Ard A. [Rudolf Peierls Centre for Theoretical Physics, 1 Keble Road, Oxford OX1 3NP (United Kingdom)

    2015-06-21

    We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na{sup +}] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.

  8. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  9. A Bayesian estimation on right censored survival data with mixture and non-mixture cured fraction model based on Beta-Weibull distribution

    Science.gov (United States)

    Yusuf, Madaki Umar; Bakar, Mohd. Rizam B. Abu

    2016-06-01

    Models for survival data that include a proportion of individuals who are not subject to the event under study are known as cure fraction models, or simply long-term survival models. The two most common models used to estimate the cure fraction are the mixture model and the non-mixture model. In this work, we present mixture and non-mixture cure fraction models for survival data based on the beta-Weibull distribution. This four-parameter distribution has been proposed as an alternative extension of the Weibull distribution for the analysis of lifetime data. The approach allows the inclusion of covariates in the models, with the parameters estimated under a Bayesian approach using Gibbs sampling methods.
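    The two formulations can be written in a few lines. For brevity the sketch uses a plain Weibull baseline in place of the paper's four-parameter beta-Weibull; `pi` is the cured (long-term survivor) fraction, and the parameter values in the tests are hypothetical.

```python
import math

def mixture_cure_survival(t, pi, shape, scale):
    """Population survival under the mixture cure model,
        S(t) = pi + (1 - pi) * S0(t),
    where S0 is the survival of the susceptible subpopulation
    (Weibull here; the paper uses a beta-Weibull baseline)."""
    s0 = math.exp(-(t / scale) ** shape)
    return pi + (1 - pi) * s0

def nonmixture_cure_survival(t, pi, shape, scale):
    """Non-mixture (promotion time) formulation: S(t) = pi ** F0(t),
    where F0 = 1 - S0 is the baseline distribution function."""
    f0 = 1.0 - math.exp(-(t / scale) ** shape)
    return pi ** f0
```

    Both curves start at 1 and plateau at the cured fraction `pi` instead of decaying to zero, which is the defining feature of long-term survival models.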

  10. Photometry and models of selected main belt asteroids. IX. Introducing interactive service for asteroid models (ISAM)

    NARCIS (Netherlands)

    Marciniak, A.; Bartczak, P.; Santana-Ros, T.; Michalowski, T.; Antonini, P.; Behrend, R.; Bembrick, C.; Bernasconi, L.; Borczyk, W.; Colas, F.; Coloma, J.; Crippa, R.; Esseiva, N.; Fagas, M.; Fauvaud, M.; Fauvaud, S.; Ferreira, D. D. M.; Hein - Bertelsen, R.P.; Higgins, D.; Hirsch, R.; Kajava, J. J. E.; Kaminski, K.; Kryszczynska, A.; Kwiatkowski, T.; Manzini, F.; Michalowski, J.; Michalowski, M. J.; Paschke, A.; Polinska, M.; Poncy, R.; Roy, R.; Santacana, G.; Sobkowiak, K.; Stasik, M.; Starczewski, S.; Velichko, F.; Wucher, H.; Zafar, T.

    Context. The shapes and spin states of asteroids observed with photometric techniques can be reconstructed using the lightcurve inversion method. The resultant models can then be confirmed or exploited further by other techniques, such as adaptive optics, radar, thermal infrared, stellar

  11. A Bayesian Mixture Model for PoS Induction Using Multiple Features

    OpenAIRE

    Christodoulopoulos, Christos; Goldwater, Sharon; Steedman, Mark

    2011-01-01

    In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our sy...

  12. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    Acronyms: MOF, Meta Object Facility; MOP, Measure of Performance; MVS, Multiple Virtual Storage; NASA, National Aeronautics and Space Administration. Unified Modeling Language™, UML®, OMG®, MDA®, MOF®, XMI®, SysML™ and BPML™ are registered trademarks.

  13. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.
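
    Empirical mixture models of the kind described (first-order Scheffé terms plus selected second-order blending terms) can be fit by ordinary least squares without an intercept, since the component proportions sum to one. The compositions and responses below are invented for illustration and are not the glass data from the study.

```python
import numpy as np

# Hypothetical 4-component glass compositions (proportions sum to 1)
# and measured sodium-release responses; illustrative values only.
X = np.array([
    [0.55, 0.25, 0.15, 0.05],
    [0.60, 0.20, 0.10, 0.10],
    [0.50, 0.30, 0.10, 0.10],
    [0.45, 0.25, 0.20, 0.10],
    [0.55, 0.20, 0.20, 0.05],
    [0.50, 0.20, 0.15, 0.15],
])
y = np.array([0.42, 0.55, 0.38, 0.30, 0.46, 0.40])

# First-order Scheffe mixture model: y = sum_i b_i * x_i (no intercept,
# because the proportions are constrained to sum to one).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# A second-order blending term x_1 * x_2 is simply an extra column:
X2 = np.column_stack([X, X[:, 0] * X[:, 1]])
coef2, *_ = np.linalg.lstsq(X2, y, rcond=None)

pred = X @ coef
print(coef.round(3), float(np.abs(pred - y).mean()))
```

    Dropping the intercept is the defining feature of Scheffé-type models: with proportions summing to one, a constant term would be collinear with the first-order terms.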

  14. Reverse Engineering Boolean Networks: From Bernoulli Mixture Models to Rule Based Systems

    Science.gov (United States)

    Saeed, Mehreen; Ijaz, Maliha; Javed, Kashif; Babri, Haroon Atique

    2012-01-01

    A Boolean network is a graphical model for representing and analyzing the behavior of gene regulatory networks (GRN). In this context, the accurate and efficient reconstruction of a Boolean network is essential for understanding the gene regulation mechanism and the complex relations that exist therein. In this paper we introduce an elegant and efficient algorithm for the reverse engineering of Boolean networks from a time series of multivariate binary data corresponding to gene expression data. We call our method ReBMM, i.e., reverse engineering based on Bernoulli mixture models. The time complexity of most of the existing reverse engineering techniques is quite high and depends upon the indegree of a node in the network. Due to the high complexity of these methods, they can only be applied to sparsely connected networks of small sizes. ReBMM has a time complexity factor, which is independent of the indegree of a node and is quadratic in the number of nodes in the network, a big improvement over other techniques and yet there is little or no compromise in accuracy. We have tested ReBMM on a number of artificial datasets along with simulated data derived from a plant signaling network. We also used this method to reconstruct a network from real experimental observations of microarray data of the yeast cell cycle. Our method provides a natural framework for generating rules from a probabilistic model. It is simple, intuitive and illustrates excellent empirical results. PMID:23284654
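
    The Bernoulli-mixture core of a method like ReBMM can be illustrated with a plain EM fit on synthetic binary data. This sketch covers only the mixture estimation, not the network-reconstruction or rule-generation steps, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated binary expression data from two Bernoulli components.
n, d, k = 200, 8, 2
true_p = np.array([[0.9] * 4 + [0.1] * 4,
                   [0.1] * 4 + [0.9] * 4])
z = rng.integers(0, k, n)
X = (rng.random((n, d)) < true_p[z]).astype(float)

pi = np.full(k, 1.0 / k)              # mixing weights
p = rng.uniform(0.3, 0.7, (k, d))     # Bernoulli success probabilities

for _ in range(50):
    # E-step: posterior responsibility of each component for each row.
    log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    r = np.exp(log_lik)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means become the new parameters.
    nk = r.sum(axis=0)
    pi = nk / n
    p = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)

print(p.round(2))   # rows should approach the two generating profiles
```

    Note that the per-iteration cost is quadratic in the data dimensions and independent of any node indegree, which is the complexity property the abstract highlights.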

  15. Layer modeling of zinc removal from metallic mixture of waste printed circuit boards by vacuum distillation.

    Science.gov (United States)

    Gao, Yujie; Li, Xingang; Ding, Hui

    2015-08-01

    A layer model was established to elucidate the mechanism of zinc removal from the metallic mixture of waste printed circuit boards by vacuum distillation. The removal process was optimized by response surface methodology, and the optimum operating conditions were a chamber pressure of 0.1 Pa, a heating temperature of 923 K, a heating time of 60.0 min, a particle size of 70 mesh (0.212 mm) and an initial mass of 5.25 g. The evaporation efficiency of zinc, the response variable, was 99.79%, which indicates that zinc can be removed efficiently. Based on the experimental results, a mathematical model, which accounts for layer structure, evaporation, mass transfer and condensation, interprets the mechanism of the variable effects. In particular, to reveal the blocking effect on zinc removal, the Blake-Kozeny-Burke-Plummer equation was introduced into the mass transfer process. The layer model can be applied to a wider range of metal removal by vacuum distillation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high-power ultrasound on the rheological and foaming properties of ice cream model mixtures. The model mixtures were prepared according to specific recipes and then subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe with a tip diameter of 12.7 mm was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. The results show that all model mixtures exhibit non-Newtonian, dilatant behavior. The highest viscosities were observed for the model mixtures homogenized by mechanical mixing, with significantly lower viscosities for the ultrasound-treated ones. Foaming properties were expressed as the percentage increase in foam volume, foam stability index and minimal viscosity. Mixtures treated only with ultrasound showed a minimal increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Ice cream mixtures with a higher protein content also showed higher foam stability. The optimal treatment time was determined to be 10 minutes.
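
    The rheological parameters named above (flow index n and consistency coefficient K) belong to the power-law (Ostwald-de Waele) model, under which dilatant behavior corresponds to n > 1. A minimal sketch with hypothetical parameter values, not values measured in the study:

```python
# Power-law (Ostwald-de Waele) apparent viscosity.
def apparent_viscosity(K, n, shear_rate):
    """eta = K * gamma_dot**(n - 1); n > 1 gives dilatant (shear-thickening) flow."""
    return K * shear_rate ** (n - 1)

K, n = 0.05, 1.3   # assumed consistency coefficient and flow index
for gamma in (10.0, 50.0, 100.0):
    # Apparent viscosity rises with shear rate because n > 1.
    print(gamma, round(apparent_viscosity(K, n, gamma), 4))
```
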

  17. Irreversible Processes in a Universe modelled as a mixture of a Chaplygin gas and radiation

    CERN Document Server

    Kremer, G M

    2003-01-01

    The evolution of a Universe modelled as a mixture of a Chaplygin gas and radiation is determined by taking into account irreversible processes. This mixture could interpolate periods of a radiation dominated, a matter dominated and a cosmological constant dominated Universe. The results of a Universe modelled by this mixture are compared with the results of a mixture whose constituents are radiation and quintessence. Among other results it is shown that: (a) for both models there exists a period of a past deceleration with a present acceleration; (b) the slope of the acceleration of the Universe modelled as a mixture of a Chaplygin gas with radiation is more pronounced than that modelled as a mixture of quintessence and radiation; (c) the energy density of the Chaplygin gas tends to a constant value at earlier times than the energy density of quintessence does; (d) the energy density of radiation for both mixtures coincide and decay more rapidly than the energy densities of the Chaplygin gas and of quintessen...

  18. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2009-01-01

    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended onto adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend...... on the distance from the solid surface (or position in the porous space). The theory describes the two kinds of interactions present in the adsorbed fluid, i.e. the fluid-fluid and fluid-solid interactions, by means of an equation of state and interaction potentials, respectively. The proposed extension...

  19. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Ca...... forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications....

  20. Probing the Role of Ceramide Headgroup Polarity in Short-Chain Model Skin Barrier Lipid Mixtures by ²H Solid-State NMR Spectroscopy.

    Science.gov (United States)

    Stahlberg, Sören; Lange, Stefan; Dobner, Bodo; Huster, Daniel

    2016-03-01

    The thermotropic phase behavior of two stratum corneum model lipid mixtures composed of equimolar contributions of either Cer[NS18] or Cer[NP18] with stearic acid and cholesterol was compared. Each component of the mixture was specifically deuterated such that the temperature-dependent ²H NMR spectra allowed disentanglement of the complicated phase polymorphism of these lipid mixtures. While Cer[NS] is based on the sphingosine backbone, Cer[NP] features a phytosphingosine, which introduces an additional hydroxyl group into the headgroup of the ceramide and abolishes the double bond. From the NMR spectra, the individual contributions of all lipids to the respective phases could be determined. The comparison of the two lipid mixtures reveals that Cer[NP]-containing mixtures have a tendency to form more fluid phases. It is concluded that the additional hydroxyl group of the phytosphingosine-containing ceramide Cer[NP18] in mixture with chain-matched stearic acid and cholesterol creates a packing defect that destabilizes the orthorhombic phase state of canonical SC mixtures. This steric clash favors the gel phase and promotes formation of fluid phases of Cer[NP]-containing lipid mixtures at lower temperature compared to those containing Cer[NS18].

  1. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...

  2. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model, a calculation method was obtained and used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline as temperature, pressure and hydraulic conditions change.

  3. Optimal Penalty Functions Based on MCMC for Testing Homogeneity of Mixture Models

    Directory of Open Access Journals (Sweden)

    Rahman Farnoosh

    2012-07-01

    Full Text Available This study is intended to provide an estimation of the penalty function for testing homogeneity of mixture models based on Markov chain Monte Carlo simulation. The penalty function is considered as a parametric function, and the shape parameter of the penalty function, in conjunction with the parameters of the mixture models, is estimated by a Bayesian approach. Different mixtures of uniform distributions are used as priors. Several simulation examples are performed to confirm the efficiency of the present work in comparison with previous approaches.

  4. Scattering for mixtures of hard spheres: comparison of total scattering intensities with model.

    Science.gov (United States)

    Anderson, B J; Gopalakrishnan, V; Ramakrishnan, S; Zukoski, C F

    2006-03-01

    The angular dependence of the intensity of x-rays scattered from binary and ternary hard sphere mixtures is investigated and compared to the predictions of two scattering models. Mixture ratio and total volume fraction dependent effects are investigated for size ratios equal to 0.51 and 0.22. Comparisons of model predictions with experimental results indicate the significant impact of the role of particle size distributions in interpreting the angular dependence of the scattering at wave vectors probing density fluctuations intermediate between the sizes of the particles in the mixture.

  5. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    Science.gov (United States)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2015-01-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~ 275 to ~ 400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~ 190 to ~ 440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. 
The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in
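
    The classical-thermodynamics route mentioned above — deriving the temperature dependence from changes in molar enthalpy and heat capacity — follows the Gibbs-Helmholtz relation. The general form below is a textbook sketch, not the specific AIOMFAC parameterisation:

```latex
% Gibbs-Helmholtz relates the activity coefficient to the partial molar
% excess enthalpy:
\frac{\partial \ln \gamma_i}{\partial (1/T)} = \frac{\bar{H}_i^{E}}{R}
% Integrating from a reference temperature T_0, with a temperature-
% independent excess heat capacity \bar{C}_{p,i}^{E}:
\ln \gamma_i(T) = \ln \gamma_i(T_0)
  + \frac{\bar{H}_i^{E}(T_0)}{R}\left(\frac{1}{T} - \frac{1}{T_0}\right)
  + \frac{\bar{C}_{p,i}^{E}}{R}\left(1 - \frac{T_0}{T} - \ln\frac{T}{T_0}\right)
```

    The heat-capacity term vanishes at T = T_0 and grows with the temperature span, which is why extending a parameterisation to low atmospheric temperatures (down to ~190 K) requires data far from the reference conditions.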

  6. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    OpenAIRE

    Lei Wang; Satoshi Uchida

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classificat...

  7. MULTIPLE REFLECTION EFFECTS IN NONLINEAR MIXTURE MODEL FOR HYPERSPECTRAL IMAGE ANALYSIS

    OpenAIRE

    Liu, C. Y.; Ren, H.

    2016-01-01

    Hyperspectral spectrometers can record electromagnetic energy with hundreds or thousands of spectral channels. With such high spectral resolution, the spectral information has better capability for material identification. Because of the spatial resolution, one pixel in hyperspectral images usually covers several meters, and it may contain more than one material. Therefore, the mixture model must be considered. Linear mixture model (LMM) has been widely used for remote sensing target...

  8. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only...... components and to only one parameter for mixtures consisting of non-polar components. A possibility of complete prediction of the parameters is discussed....

  9. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...... categorizing only the most extreme SCS observations as mastitic, and such cases of subclinical infections may be the most closely related to clinical (treated) mastitis...

  10. Introducing decorated HODs: modelling assembly bias in the galaxy-halo connection

    Science.gov (United States)

    Hearin, Andrew P.; Zentner, Andrew R.; van den Bosch, Frank C.; Campbell, Duncan; Tollerud, Erik

    2016-08-01

    The connection between galaxies and dark matter haloes is often inferred from data using probabilistic models, such as the halo occupation distribution (HOD). Conventional HOD formulations assume that only halo mass governs the galaxy-halo connection. Violations of this assumption, known as galaxy assembly bias, threaten the HOD programme. We introduce decorated HODs, a new, flexible class of models designed to account for assembly bias. Decorated HODs minimally expand the parameter space and maximize the independence between traditional and novel HOD parameters. We use decorated HODs to quantify the influence of assembly bias on clustering and lensing statistics. For SDSS-like samples, the impact of assembly bias on galaxy clustering can be as large as a factor of 2 on r ~ 200 kpc scales and ~15 per cent in the linear regime. Assembly bias can either enhance or diminish clustering on large scales, but generally increases clustering on scales r ≲ 1 Mpc. We performed our calculations with HALOTOOLS, an open-source, community-driven PYTHON package for studying the galaxy-halo connection (http://halotools.readthedocs.org). We conclude by describing the use of decorated HODs to treat assembly bias in otherwise conventional likelihood analyses.

  11. Analytic Couple Modeling Introducing Device Design Factor, Fin Factor, Thermal Diffusivity Factor, and Inductance Factor

    Science.gov (United States)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    A set of convenient thermoelectric device solutions has been derived in order to capture a number of factors which are otherwise only resolved with numerical techniques. The concise conversion-efficiency equations derived from the governing equations provide intuitive and straightforward design guidelines. These guidelines allow for better device design without requiring detailed numerical modeling. The analytical modeling accounts for factors such as i) variable temperature boundary conditions, ii) lateral heat transfer, iii) temperature-variable material properties, and iv) transient operation. New dimensionless parameters, similar to the figure of merit, are introduced, including the device design factor, fin factor, thermal diffusivity factor, and inductance factor. These new device factors allow for a straightforward description of phenomena generally captured only with numerical work. As an example, a device design factor of 0.38, which accounts for the thermal resistance of the hot and cold shoes, can be used to calculate a conversion efficiency of 2.28 while the ideal conversion efficiency based on the figure of merit alone would be 6.15. Likewise, an ideal couple with an efficiency of 6.15 will be reduced to 5.33 when lateral heat is accounted for with a fin factor of 1.0.
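
    The "ideal conversion efficiency based on the figure of merit alone" that the new device factors are compared against is the standard textbook expression for a thermoelectric couple (T_h and T_c are the hot- and cold-side temperatures, and ZT̄ the figure of merit at the mean temperature):

```latex
\eta_{\mathrm{ideal}} = \frac{T_h - T_c}{T_h}\cdot
  \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h}
```

    The first factor is the Carnot limit; the second is the reduction due to finite ZT̄. The device design factor, fin factor, and related parameters introduced in the paper further reduce this ideal value.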

  12. Does introduced fauna influence soil erosion? A field and modelling assessment.

    Science.gov (United States)

    Hancock, G R; Lowry, J B C; Dever, C; Braggins, M

    2015-06-15

    Pigs (Sus scrofa) are recognised as having significant ecological impacts in many areas of the world, including northern Australia. The full consequences of the introduction of pigs are difficult to quantify, as the impacts may only be detected over the long term and there is a lack of quantitative information on the impacts of feral pigs globally. In this study, the effect of feral pigs is quantified in an undisturbed catchment in the monsoonal tropics of northern Australia. Over a three-year period, field data showed that the areal extent of pig disturbance ranged from 0.3% to 3.3% of the survey area. The mass of material exhumed through these activities ranged from 4.3 t ha⁻¹ yr⁻¹ to 36.0 t ha⁻¹ yr⁻¹. The findings demonstrate that large introduced species such as feral pigs are disturbing large areas as well as exhuming considerable volumes of soil. A numerical landscape evolution and soil erosion model was used to assess the effect of this disturbance on catchment-scale erosion rates. The modelling demonstrated that simulated pig disturbance in previously undisturbed areas produced lower erosion rates compared to those areas which had not been impacted by pigs. This is attributed to the pig disturbance increasing surface roughness and trapping sediment. This suggests that in this specific environment, disturbance by pigs does not enhance erosion. However, this conclusion is prefaced by two important caveats. First, the long-term impact of soil disturbance is still very uncertain. Secondly, modelling results show a clear differentiation between those from an undisturbed environment and those from a post-mining landscape, in which pig disturbance may enhance erosion.

  13. A Gaussian Mixture MRF for Model-Based Iterative Reconstruction with Applications to Low-Dose X-ray CT

    CERN Document Server

    Zhang, Ruoqiao; Pal, Debashish; Thibault, Jean-Baptiste; Sauer, Ken D; Bouman, Charles A

    2016-01-01

    Markov random fields (MRFs) have been widely used as prior models in various inverse problems such as tomographic reconstruction. While MRFs provide a simple and often effective way to model the spatial dependencies in images, they suffer from the fact that parameter estimation is difficult. In practice, this means that MRFs typically have very simple structure that cannot completely capture the subtle characteristics of complex images. In this paper, we present a novel Gaussian mixture Markov random field model (GM-MRF) that can be used as a very expressive prior model for inverse problems such as denoising and reconstruction. The GM-MRF forms a global image model by merging together individual Gaussian-mixture models (GMMs) for image patches. In addition, we present a novel analytical framework for computing MAP estimates using the GM-MRF prior model through the construction of surrogate functions that result in a sequence of quadratic optimizations. We also introduce a simple but effective method to adjust...

  14. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol mixtures, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of artificial neural networks. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these gives a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  15. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of State, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... surface tensions of 34 binary mixtures with an overall average absolute deviation of 3.46%. The results show good agreement between the predicted and experimental surface tensions. Next, the SGT model was applied to correlate surface tensions of binary mixtures containing alcohols, water and/or glycerol...

  16. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.;

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even...... studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved....

  17. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most closely resembling one in a search window of the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.
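
    Statistical block matching of this kind can be sketched as follows. The paper fits two- and three-component GMMs per block via EM and uses an extended Mahalanobis distance; this sketch condenses each block to a single Gaussian summary of per-pixel features — a simplification for brevity, not the authors' exact method — and brute-forces the best offset in a search window.

```python
import numpy as np

rng = np.random.default_rng(1)

def mahalanobis(mu1, mu2, cov):
    # Distance between two cluster means under a shared covariance.
    d = mu1 - mu2
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Toy frames: the "current" frame is the reference shifted by (2, 3),
# so the block's source lies at offset (-2, -3) in the reference frame.
ref = rng.random((32, 32))
cur = np.roll(ref, (2, 3), axis=(0, 1))

B = 8                          # block size
block = cur[8:8 + B, 8:8 + B]

def stats(b):
    # Single-Gaussian summary of a block: mean and covariance of
    # (intensity, vertical gradient) features per pixel.
    v = np.stack([b.ravel(), np.gradient(b)[0].ravel()], axis=1)
    return v.mean(axis=0), np.cov(v.T) + 1e-6 * np.eye(2)

mu_b, cov_b = stats(block)
offsets = [(dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)]
best = min(offsets, key=lambda o: mahalanobis(
    mu_b, stats(ref[8 + o[0]:8 + o[0] + B, 8 + o[1]:8 + o[1] + B])[0], cov_b))
print(best)   # the matching offset, i.e. the negative of the shift
```

    Matching cluster statistics rather than raw pixel differences is what distinguishes this family of methods from classical SAD/SSD block matching.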

  18. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.

  20. Modeling of the effect of intentionally introduced traps on hole transport in single-crystal rubrene

    KAUST Repository

    Dacuña, Javier

    2014-06-05

    Defects have been intentionally introduced in a rubrene single crystal by means of two different mechanisms: ultraviolet ozone (UVO) exposure and x-ray irradiation. A complete drift-diffusion model based on the mobility edge (ME) concept, which takes into account asymmetries and nonuniformities in the semiconductor, is used to estimate the energetic and spatial distribution of trap states. The trap distribution for pristine devices can be decomposed into two well defined regions: a shallow region ascribed to structural disorder and a deeper region ascribed to defects. UVO and x ray increase the hole trap concentration in the semiconductor with different energetic and spatial signatures. The former creates traps near the top surface in the 0.3-0.4 eV region, while the latter induces a wider distribution of traps extending from the band edge with a spatial distribution that peaks near the top and bottom interfaces. In addition to inducing hole trap states in the transport gap, both processes are shown to reduce the mobility with respect to a pristine crystal. © 2014 American Physical Society.

  1. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies.

    Science.gov (United States)

    Lin, Lin; Chan, Cliburn; West, Mike

    2016-01-01

    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets.

  2. Volumetric Properties of Chloroalkanes + Amines Mixtures: Theoretical Analysis Using the ERAS-Model

    Science.gov (United States)

    Tôrres, R. B.; Hoga, H. E.; Magalhães, J. G.; Volpe, P. L. O.

    2009-08-01

    In this study, experimental data of excess molar volumes of {dichloromethane (DCM), or trichloromethane (TCM) + n-butylamine (n-BA), or +s-butylamine (s-BA), or +t-butylamine (t-BA), or +diethylamine (DEA), or +triethylamine (TEA)} mixtures as a function of composition have been used to test the applicability of the extended real associated solution model (ERAS-Model). The values of the excess molar volume were negative for (DCM + t-BA, or +DEA, or +TEA and TCM + n-BA, or +s-BA, or +DEA, or +TEA) mixtures and present sigmoid curves for (DCM + n-BA, or +s-BA) mixtures over the complete mole-fraction range. The agreement between theoretical and experimental results is discussed in terms of cross-association between the components present in the mixtures.

  3. Kinetic Modeling of Gasoline Surrogate Components and Mixtures under Engine Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Pitz, W J; Westbrook, C K; Curran, H J

    2010-01-11

    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, an improved version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multicomponent gasoline surrogates are compared with recent experimental information collected in rapid compression machines, shock tubes and jet-stirred reactors covering a wide range of conditions pertinent to internal combustion engines (3-50 atm, 650-1200 K, stoichiometric fuel/air mixtures). Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  4. Simulating asymmetric colloidal mixture with adhesive hard sphere model.

    Science.gov (United States)

    Jamnik, A

    2008-06-21

    Monte Carlo simulation and Percus-Yevick (PY) theory are used to investigate the structural properties of a two-component system of Baxter adhesive fluids with size asymmetry between the particles of the two components, mimicking an asymmetric binary colloidal mixture. The radial distribution functions for all possible species pairs, g₁₁(r), g₂₂(r), and g₁₂(r), exhibit discontinuities at the interparticle distances corresponding to certain combinations of n and m values (n and m being integers) in the sum nσ₁ + mσ₂ (σ₁ and σ₂ being the hard-core diameters of the individual components), as a consequence of the impulse character of the 1-1, 2-2, and 1-2 attractive interactions. In contrast to the PY theory, which predicts delta function peaks in the shape of gᵢⱼ(r) only at distances that are multiples of the molecular sizes, corresponding to different linear structures of successively connected particles, the simulation results reveal additional peaks at intermediate distances originating from the formation of rigid clusters of various geometries.

  5. Mixture Models for the Analysis of Repeated Count Data.

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Böckenholt, U

    1995-01-01

    Repeated count data showing overdispersion are commonly analysed by using a Poisson model with varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An

  6. Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity

    Science.gov (United States)

    2011-01-01

    Report: Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity. Keywords: phase equilibria; modified Sanchez-Lacombe equation of state.

  7. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF) is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.

  8. A mixture model for the joint analysis of latent developmental trajectories and survival

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Hout, A. van den

    2011-01-01

    A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous time survival data, and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in

  10. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  11. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means of combining the available computer models in a flexible and principled manner and of performing reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values, because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique, suited to the mixture model framework, that mitigates the computational overhead due to the consideration of multiple computer models. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
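
A minimal sketch of the core idea, input-dependent mixture weights over two computer models: a toy "real system" and two hypothetical models are assumed, and a deterministic least-squares fit of a logistic weight function stands in for the fully Bayesian calibration described in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def truth(x):                      # the "real system" (a toy stand-in)
    return np.sin(x)

def model_a(x):                    # hypothetical computer model, accurate near x = 0
    return x - x**3 / 6

def model_b(x):                    # hypothetical computer model, accurate near x = pi/2
    return 1.0 - (x - np.pi / 2) ** 2 / 2

x_obs = np.linspace(0.0, np.pi / 2, 40)
y_obs = truth(x_obs) + rng.normal(0.0, 0.01, x_obs.size)   # noisy "experimental" data

def mixture(theta, x):
    # input-dependent weight: the contribution of each model varies with x
    w = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * x)))
    return (1.0 - w) * model_a(x) + w * model_b(x)

# Least-squares fit of the weight function (stand-in for Bayesian calibration)
res = minimize(lambda th: np.sum((mixture(th, x_obs) - y_obs) ** 2), x0=[0.0, 1.0])

def rmse(pred):
    return np.sqrt(np.mean((pred - truth(x_obs)) ** 2))

print(rmse(mixture(res.x, x_obs)), rmse(model_a(x_obs)), rmse(model_b(x_obs)))
```

Because the weight is a function of the input, the fitted mixture can track whichever model is locally more accurate, so its error falls below that of either model used alone.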

  12. Structure-reactivity modeling using mixture-based representation of chemical reactions

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-07-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results either from concatenating the product and reactant descriptors or from taking the difference between the descriptors of products and reactants. This reaction representation does not need an explicit labeling of the reaction center. A rigorous "product-out" cross-validation (CV) strategy is suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model the rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) labeling of the reaction center.
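
The mixture-based representation can be sketched in a few lines: each side of the reaction (reactants, products) is encoded as the sum of its components' descriptor vectors, and the reaction feature vector is either the concatenation of the two or their difference. The fragment counts below are made-up placeholders, not real SiRMS descriptors.

```python
import numpy as np

# Hypothetical fragment-count descriptors for each molecule; in practice these
# would come from a descriptor generator such as SiRMS (values here are made up)
reactants = {"substrate": np.array([1, 2, 0, 1]),
             "base":      np.array([0, 1, 1, 0])}
products  = {"alkene":    np.array([0, 0, 2, 1]),
             "conjugate": np.array([1, 1, 1, 0]),
             "leaving":   np.array([0, 0, 0, 1])}

def mixture_descriptor(mols):
    # a mixture is encoded as the elementwise sum of its components' descriptors
    return np.sum(list(mols.values()), axis=0)

r_vec = mixture_descriptor(reactants)
p_vec = mixture_descriptor(products)

concat_repr = np.concatenate([r_vec, p_vec])   # "concatenated" representation
diff_repr = p_vec - r_vec                      # "difference" representation
print(concat_repr, diff_repr)
```

Note that neither representation requires identifying which atoms belong to the reaction center; the difference vector simply records the net change in fragment counts across the reaction.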

  13. Microstructure modeling and virtual test of asphalt mixture based on three-dimensional discrete element method

    Institute of Scientific and Technical Information of China (English)

    马涛; 张德育; 张垚; 赵永利; 黄晓明

    2016-01-01

    The objective of this work is to model the microstructure of asphalt mixture and to build virtual tests for asphalt mixture by using Particle Flow Code in three dimensions (PFC3D), based on the three-dimensional discrete element method. A random generation algorithm was proposed to capture the three-dimensional irregular shape of coarse aggregate, and modeling algorithms and methods for graded aggregates were then built. Based on the combination of the models of coarse aggregates, asphalt mastic and air voids, a three-dimensional virtual sample of asphalt mixture was modeled using PFC3D. Virtual tests for the penetration test of aggregate and the uniaxial creep test of asphalt mixture were built and conducted using PFC3D. Comparison of the results between the virtual tests and actual laboratory tests verified the validity of the microstructure modeling and virtual tests built in this study. Additionally, compared with the laboratory tests, the virtual tests are easier to conduct and show less variability. This demonstrates that microstructure modeling and virtual testing based on the three-dimensional discrete element method are a promising way to conduct research on asphalt mixtures.

  14. Gaussian-mixture umbrella sampling

    OpenAIRE

    Maragakis, Paul; van der Vaart, Arjan; Karplus, Martin

    2009-01-01

    We introduce the Gaussian-mixture umbrella sampling method (GAMUS), a biased molecular dynamics technique based on adaptive umbrella sampling that efficiently escapes free energy minima in multi-dimensional problems. The prior simulation data are reweighted with a maximum likelihood formulation, and the new approximate probability density is fit to a Gaussian-mixture model, augmented by information about the unsampled areas. The method can be used to identify free energy minima in multi-dimen...

  15. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties in the commercial software PFC3D, including the slip model, linear stiffness-contact model, and contact bond model. A macro-scale Burger's model was first established and the input parameters of Burger's contact model were calibrated by adjusting them so that the model fitted the experimental data for the complex modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted...

  16. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    This paper introduces the 'Interactive Model' of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases: 'Multimodal Critical Analysis of Scripts' (MuCrAS) and 'Multimodal Re-Textualization of Scripts' (MuReTS). Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004). By resorting to Critical Discourse Analysis (Fairclough 1995, 2001), to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006), the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application is discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  17. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  18. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  19. Treatment of nonignorable missing data when modeling unobserved heterogeneity with finite mixture models.

    Science.gov (United States)

    Lehmann, Thomas; Schlattmann, Peter

    2017-01-01

    Multiple imputation has become a widely accepted technique to deal with the problem of incomplete data. Typically, imputation of missing values and the statistical analysis are performed separately. Therefore, the imputation model has to be consistent with the analysis model. If the data are analyzed with a mixture model, the parameter estimates are usually obtained iteratively. Thus, if the data are missing not at random, parameter estimation and treatment of missingness should be combined. We solve both problems by simultaneously imputing values using the data augmentation method and estimating parameters using the EM algorithm. This iterative procedure ensures that the missing values are properly imputed given the current parameter estimates. Properties of the parameter estimates were investigated in a simulation study. The results are illustrated using data from the National Health and Nutrition Examination Survey.

  20. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng

    2004-12-01

    We propose a self-splitting Gaussian mixture learning (SGML) algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM) based on a self-splitting validity measure, the Bayesian information criterion (BIC). It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning a GMM with a given component number. In our experiments on clustering of a synthetic data set and on the text-independent speaker identification task, we have observed the ability of SGML to perform model-based clustering and to automatically determine the model complexity of the speaker GMMs for speaker identification.
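
The BIC-guided choice of component number can be illustrated without the splitting mechanics: the sketch below simply fits GMMs with scikit-learn over a range of component counts and keeps the BIC-minimizing one. This is an exhaustive search, not SGML's adaptive splitting, and the synthetic data are an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic 1-D data drawn from two well-separated normal components
X = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)]).reshape(-1, 1)

# Exhaustive BIC search over candidate component counts
bics = []
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
    bics.append(gm.bic(X))
best_k = int(np.argmin(bics)) + 1
print("BIC-selected number of components:", best_k)
```

SGML's advantage over this brute-force loop is that it reaches the BIC-optimal complexity incrementally, by splitting components only when the criterion justifies it.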

  1. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  2. Isothermal (vapour + liquid) equilibrium of (cyclic ethers + chlorohexane) mixtures: Experimental results and SAFT modelling

    Energy Technology Data Exchange (ETDEWEB)

    Bandres, I.; Giner, B.; Lopez, M.C.; Artigas, H. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain); Lafuente, C. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)], E-mail: celadi@unizar.es

    2008-08-15

    Experimental data for the isothermal (vapour + liquid) equilibrium of mixtures formed by several cyclic ethers (tetrahydrofuran, tetrahydropyran, 1,3-dioxolane, and 1,4-dioxane) and chlorohexane at temperatures of (298.15 and 328.15) K are presented. Experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. Furthermore, the influence of temperature on the (vapour + liquid) equilibrium of these mixtures has been explored and discussed. Transferable parameters of the SAFT-VR approach together with standard combining rules have been used to model the phase equilibrium of the mixtures, providing a description of the (vapour + liquid) equilibrium that is in excellent agreement with the experimental data.

  3. Modeling dependence based on mixture copulas and its application in risk management

    Institute of Scientific and Technical Information of China (English)

    OUYANG Zi-sheng; LIAO Hui; YANG Xiang-qun

    2009-01-01

    This paper is concerned with the statistical modeling of the dependence structure of multivariate financial data using copulas, and with the application of copula functions in VaR valuation. After introducing the pure copula method and the maximum and minimum mixture copula method, the authors present a new algorithm based on more generalized mixture copula functions and the dependence measure, and apply the method to a portfolio of the Shanghai stock composite index and the Shenzhen stock component index. Comparing the results from the various methods, one finds that the mixture copula method is better than the pure Gaussian copula method and the maximum and minimum mixture copula method at different VaR levels.
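
A hedged sketch of the mixture-copula idea applied to VaR: draw dependent uniforms from a two-component mixture of Gaussian copulas, map them to hypothetical standardized index returns, and read off an empirical VaR. The mixture weights, correlations, and normal marginals are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 50_000

def gaussian_copula_sample(rho, size):
    # draw uniforms whose dependence follows a bivariate Gaussian copula
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size)
    return norm.cdf(z)

# Mixture copula: weight 0.7 on mild dependence, 0.3 on strong (crisis-like) dependence
pick_mild = rng.random(n) < 0.7
u = np.where(pick_mild[:, None],
             gaussian_copula_sample(0.3, n),
             gaussian_copula_sample(0.9, n))

# Map to standardized index returns (normal marginals, purely illustrative)
returns = norm.ppf(u)
loss = -0.5 * (returns[:, 0] + returns[:, 1])   # equally weighted two-index portfolio
var_99 = np.quantile(loss, 0.99)
print("99% portfolio VaR (standardized units):", round(var_99, 3))
```

The mixture makes the joint tail heavier than either component alone, which is exactly the effect that drives the VaR differences reported between the pure and mixture copula methods.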

  4. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb, and 50 ppb v/v) of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and only for a limited number of basic mixture components.

  5. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
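
The E- and M-step alternation is easiest to see in a simplified analogue: the sketch below runs EM for a two-component univariate Gaussian mixture, where the M-step is available in closed form. There are no random effects or sampling-based E-step as in the paper, and the two-subpopulation data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "two-subpopulation" data (e.g. two pharmacokinetic phenotypes)
x = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(5.0, 1.0, 600)])

def normal_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Initial guesses for the mixing weight, means, and standard deviations
pi_, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of component 2 for each observation
    p1 = (1.0 - pi_) * normal_pdf(x, mu[0], sigma[0])
    p2 = pi_ * normal_pdf(x, mu[1], sigma[1])
    r = p2 / (p1 + p2)
    # M-step: closed-form weighted updates (analytically tractable here)
    pi_ = r.mean()
    mu = np.array([np.average(x, weights=1.0 - r), np.average(x, weights=r)])
    sigma = np.array([
        np.sqrt(np.average((x - mu[0]) ** 2, weights=1.0 - r)),
        np.sqrt(np.average((x - mu[1]) ** 2, weights=r)),
    ])

print(sorted(np.round(mu, 2)), round(float(pi_), 2))
```

In the nonlinear random effects setting of the paper, the responsibilities in the E-step are no longer available in closed form, which is where the sampling-based approximation comes in.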

  6. Motif Yggdrasil: sampling sequence motifs from a tree mixture model.

    Science.gov (United States)

    Andersson, Samuel A; Lagergren, Jens

    2007-06-01

    In phylogenetic foot-printing, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but does not require aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in nucleotide level correlation coefficient both for synthetic data and 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that even with random trees the MY sampler still has a performance similar to that of the original version.

  7. Solvable model of a trapped mixture of Bose-Einstein condensates

    Science.gov (United States)

    Klaiman, Shachar; Streltsov, Alexej I.; Alon, Ofir E.

    2017-01-01

    A mixture of two kinds of identical bosons held in a harmonic potential and interacting by harmonic particle-particle interactions is discussed. This is an exactly-solvable model of a mixture of two trapped Bose-Einstein condensates which allows us to examine analytically various properties. Generalizing the treatments in Cohen and Lee (1985) and Osadchii and Muraktanov (1991), closed form expressions for the mixture's frequencies and ground-state energy and wave-function, and the lowest-order densities are obtained and analyzed for attractive and repulsive intra-species and inter-species particle-particle interactions. A particular mean-field solution of the corresponding Gross-Pitaevskii theory is also found analytically. This allows us to compare properties of the mixture at the exact, many-body and mean-field levels, both for finite systems and at the limit of an infinite number of particles. We discuss the renormalization of the mixture's frequencies at the mean-field level. Mainly, we hereby prove that the exact ground-state energy per particle and lowest-order intra-species and inter-species densities per particle converge at the infinite-particle limit (when the products of the number of particles times the intra-species and inter-species interaction strengths are held fixed) to the results of the Gross-Pitaevskii theory for the mixture. Finally and on the other end, we use the mixture's and each species' center-of-mass operators to show that the Gross-Pitaevskii theory for mixtures is unable to describe the variance of many-particle operators in the mixture, even in the infinite-particle limit. The variances are computed both in position and momentum space and the respective uncertainty products compared and discussed. The role of the center-of-mass separability and, for generically trapped mixtures, inseparability is elucidated when contrasting the variance at the many-body and mean-field levels in a mixture. Our analytical results show that many

  8. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for describing the water-air free surface is coupled with a topography reaction model. The bed load transport rate and the suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicate that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computation results also show that

  9. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2015-03-01

    Full Text Available Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between OI and the logarithm of the odor activity value (OAV), the ratio between an odorant's chemical concentration and its odor threshold, the OI of each unmixed component was replaced with the corresponding logarithm of its OAV. The interaction coefficient (cosα), which represents the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture can be predicted directly with the modified Vector Model after routine quantitative analysis. Moreover, the modified Vector Model was found to be applicable to odor mixtures consisting of odorants with the same chemical functional groups and similar molecular structures.
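
The modified Vector Model described above reduces to a few lines: a component's OI is taken as a linear function of ln(OAV), and the mixture OI is the vector sum with the interaction coefficient cosα. A minimal sketch follows; the slope and intercept are placeholders, not the paper's fitted coefficients.

```python
import math

def odor_intensity(oav, a=1.0, b=0.0):
    """OI of a single odorant from its odor activity value (OAV).
    The linear OI ~ ln(OAV) relation is from the paper; the slope and
    intercept (a, b) here are illustrative placeholders."""
    return a * math.log(oav) + b

def mixture_oi(oi1, oi2, cos_alpha):
    """Modified Vector Model: vector sum of two component intensities,
    with cos(alpha) quantifying the degree of interaction."""
    return math.sqrt(oi1 ** 2 + oi2 ** 2 + 2.0 * oi1 * oi2 * cos_alpha)

oi_a = odor_intensity(1000.0)  # ln(1000) ≈ 6.9
oi_b = odor_intensity(100.0)   # ln(100)  ≈ 4.6
print(mixture_oi(oi_a, oi_b, cos_alpha=0.5))
```

With cosα = 1 the components simply add; with cosα = 0 they combine in quadrature, which is how the coefficient encodes partial suppression between constituents.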

  10. A homogenized constrained mixture (and mechanical analog) model for growth and remodeling of soft tissue.

    Science.gov (United States)

    Cyron, C J; Aydin, R C; Humphrey, J D

    2016-12-01

    Most mathematical models of the growth and remodeling of load-bearing soft tissues are based on one of two major approaches: a kinematic theory that specifies an evolution equation for the stress-free configuration of the tissue as a whole or a constrained mixture theory that specifies rates of mass production and removal of individual constituents within stressed configurations. The former is popular because of its conceptual simplicity, but relies largely on heuristic definitions of growth; the latter is based on biologically motivated micromechanical models, but suffers from higher computational costs due to the need to track all past configurations. In this paper, we present a temporally homogenized constrained mixture model that combines advantages of both classical approaches, namely a biologically motivated micromechanical foundation, a simple computational implementation, and low computational cost. As illustrative examples, we show that this approach describes well both cell-mediated remodeling of tissue equivalents in vitro and the growth and remodeling of aneurysms in vivo. We also show that this homogenized constrained mixture model suggests an intimate relationship between models of growth and remodeling and viscoelasticity. That is, important aspects of tissue adaptation can be understood in terms of a simple mechanical analog model, a Maxwell fluid (i.e., spring and dashpot in series) in parallel with a "motor element" that represents cell-mediated mechanoregulation of extracellular matrix. This analogy allows a simple implementation of homogenized constrained mixture models within commercially available simulation codes by exploiting available models of viscoelasticity.
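
The mechanical analog mentioned above can be illustrated with a toy time integration: a Maxwell element (spring and dashpot in series) in parallel with a constant "motor element" stress, held at fixed strain. All parameter values are invented for illustration; this is not the paper's homogenized constrained mixture implementation.

```python
def stress_relaxation(sigma0, sigma_motor, tau, t_end, dt=1e-3):
    """Total stress over time for a Maxwell branch (relaxation time tau)
    in parallel with a constant motor stress, at fixed strain. The viscous
    branch relaxes (d(sigma)/dt = -sigma/tau), so the total stress tends
    to sigma_motor, a toy homeostatic target. Explicit Euler integration."""
    sigma_v = sigma0 - sigma_motor  # stress carried by the Maxwell branch
    t = 0.0
    while t < t_end:
        sigma_v += -sigma_v / tau * dt
        t += dt
    return sigma_v + sigma_motor

# After many relaxation times the tissue stress sits near the motor stress.
print(stress_relaxation(sigma0=2.0, sigma_motor=1.0, tau=0.5, t_end=5.0))  # ≈ 1.0
```

The relaxation time of the dashpot plays the role of the mass-turnover time scale, which is the intuition behind the viscoelasticity connection drawn in the abstract.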

  11. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  12. Solvatochromic and Kinetic Response Models in (Ethyl Acetate + Chloroform or Methanol Solvent Mixtures

    Directory of Open Access Journals (Sweden)

    L. R. Vottero

    2000-03-01

    Full Text Available The present work analyzes the solvent effects upon the solvatochromic response models for a set of chemical probes and the kinetic response models for an aromatic nucleophilic substitution reaction, in binary mixtures in which both pure components are able to form intersolvent complexes by hydrogen bonding.

  13. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    Study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  14. Gaussian Mixture Models and Model Selection for [18F] Fluorodeoxyglucose Positron Emission Tomography Classification in Alzheimer's Disease.

    Directory of Open Access Journals (Sweden)

    Rui Li

    Full Text Available We present a method to discover discriminative brain metabolism patterns in [18F] fluorodeoxyglucose positron emission tomography (PET scans, facilitating the clinical diagnosis of Alzheimer's disease. In the work, the term "pattern" stands for a certain brain region that characterizes a target group of patients and can be used for a classification as well as interpretation purposes. Thus, it can be understood as a so-called "region of interest (ROI". In the literature, an ROI is often found by a given brain atlas that defines a number of brain regions, which corresponds to an anatomical approach. The present work introduces a semi-data-driven approach that is based on learning the characteristics of the given data, given some prior anatomical knowledge. A Gaussian Mixture Model (GMM and model selection are combined to return a clustering of voxels that may serve for the definition of ROIs. Experiments on both an in-house dataset and data of the Alzheimer's Disease Neuroimaging Initiative (ADNI suggest that the proposed approach arrives at a better diagnosis than a merely anatomical approach or conventional statistical hypothesis testing.
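
The GMM-plus-model-selection combination can be sketched on one-dimensional toy data: fit mixtures with a varying number of components by EM and keep the component count with the lowest BIC. This is a generic illustration with invented data and settings, not the paper's voxel-clustering pipeline.

```python
import math
import random

def gmm_em(xs, k, iters=200, seed=0):
    """Tiny 1-D Gaussian mixture fitted by EM; returns the log-likelihood."""
    rng = random.Random(seed)
    mu = rng.sample(xs, k)
    var = [1.0] * k
    w = [1.0 / k] * k
    n = len(xs)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in xs:
            ps = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                  / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: update weights, means, variances (variance floored)
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-3)
    return sum(math.log(sum(w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                            / math.sqrt(2 * math.pi * var[j]) for j in range(k)))
               for x in xs)

def bic(ll, k, n):
    # A 1-D GMM has 3k - 1 free parameters (the weights sum to one)
    return (3 * k - 1) * math.log(n) - 2 * ll

rng = random.Random(1)
data = ([rng.gauss(0.0, 1.0) for _ in range(150)]
        + [rng.gauss(6.0, 1.0) for _ in range(150)])
scores = {k: bic(gmm_em(data, k), k, len(data)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
print(best_k)
```

With the two well-separated clusters above, BIC penalizes the extra parameters of the three-component fit, so the two-component model is typically selected.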

  15. Introducing Patent Box Regime in UK – a possible model to be followed in Romania

    OpenAIRE

    Ristea Luminita; Trandafir Adina

    2013-01-01

    In the last decades the concept of intellectual property (IP) has become very important, and therefore EU jurisdictions have introduced R&D tax credits or patent box regimes as incentives to attract foreign investment. Using a literature review, this article approaches the R&D and patent box regime applied in the UK starting with 1 April 2013. Our main conclusion is that the Patent Box needs to be seen in the context of the efforts European countries have made to introduce specific reliefs for revenues...

  16. Detecting Gustatory–Olfactory Flavor Mixtures: Models of Probability Summation

    Science.gov (United States)

    Veldhuizen, Maria G.; Shepard, Timothy G.; Shavit, Adam Y.

    2012-01-01

    Odorants and flavorants typically contain many components. It is generally easier to detect multicomponent stimuli than to detect a single component, through either neural integration or probability summation (PS) (or both). PS assumes that the sensory effects of 2 (or more) stimulus components (e.g., gustatory and olfactory components of a flavorant) are detected in statistically independent channels, that each channel makes a separate decision whether a component is detected, and that the behavioral response depends solely on the separate decisions. Models of PS traditionally assume high thresholds for detecting each component, noise being irrelevant. The core assumptions may be adapted, however, to signal-detection theory, where noise limits detection. The present article derives predictions of high-threshold and signal-detection models of independent-decision PS in detecting gustatory–olfactory flavorants, comparing predictions in yes/no and 2-alternative forced-choice tasks using blocked and intermixed stimulus designs. The models also extend to measures of response times to suprathreshold flavorants. Predictions derived from high-threshold and signal-detection models differ markedly. Available empirical evidence on gustatory–olfactory flavor detection suggests that neither the high-threshold nor the signal-detection versions of PS can readily account for the results, which likely reflect neural integration in the flavor system. PMID:22075720
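
The independent-decision, high-threshold form of probability summation described above reduces to a one-line formula: the mixture is detected whenever at least one channel detects its component. For contrast, a common signal-detection benchmark adds sensitivities in quadrature. Both snippets are generic textbook forms, not the article's full yes/no and 2AFC derivations.

```python
import math

def ps_high_threshold(p_g, p_o):
    """High-threshold probability summation: a gustatory-olfactory mixture
    is detected if either independent channel detects its component."""
    return 1.0 - (1.0 - p_g) * (1.0 - p_o)

def quadrature_dprime(d_g, d_o):
    """A simple signal-detection benchmark: channel sensitivities (d')
    combine in quadrature when detection is limited by independent noise."""
    return math.sqrt(d_g ** 2 + d_o ** 2)

# Two weakly detectable components are easier to detect together
print(ps_high_threshold(0.3, 0.3))   # ≈ 0.51
print(quadrature_dprime(1.0, 1.0))   # ≈ 1.41
```

The two models diverge most near threshold, which is why the article's comparison of their predictions against flavor-detection data is informative.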

  17. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  18. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx.

    Science.gov (United States)

    Grimm, Kevin J; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use.

  19. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    Science.gov (United States)

    2014-06-27

    ...for unsupervised modeling of structured data like text documents, time series, and images. They are especially promising for large datasets, as... non-convex unsupervised learning problems, frequently yielding poor solutions (see Fig. 2). While taking the best of multiple runs is possible, this is...

  20. TOFIR: A Tool of Facilitating Information Retrieval - Introduce a Visual Retrieval Model.

    Science.gov (United States)

    Zhang, Jin

    2001-01-01

    Introduces a new method for the visualization of information retrieval called TOFIR (Tool of Facilitating Information Retrieval). Discusses the use of angle attributes of a document to construct the angle-based visual space; two-dimensional and three-dimensional visual tools; ambiguity; and future research directions. (Author/LRW)

  1. Introducing the PCMC Model: An Investigative Framework for Young People's Processing of Commercialized Media Content

    NARCIS (Netherlands)

    Buijzen, M.A.; Reijmersdal, E.A. van; Owen, L.H.

    2010-01-01

    There is a vital need for an updated evaluation of children's and adolescents' changing commercial media environment. In this article, we introduce an investigative framework for young people's processing of commercial media content (PCMC) that can deal with current and future developments in the media environment.

  2. A generalized physiologically-based toxicokinetic modeling system for chemical mixtures containing metals

    Directory of Open Access Journals (Sweden)

    Isukapalli Sastry S

    2010-06-01

    Full Text Available Abstract Background Humans are routinely and concurrently exposed to multiple toxic chemicals, including various metals and organics, often at levels that can cause adverse and potentially synergistic effects. However, toxicokinetic modeling studies of exposures to these chemicals are typically performed on a single-chemical basis. Furthermore, the attributes of available models for individual chemicals are commonly estimated specifically for the compound studied. As a result, the available models usually have parameters and even structures that are not consistent or compatible across the range of chemicals of concern. This fact precludes the systematic consideration of synergistic effects, and may also lead to inconsistencies in calculations of co-occurring exposures and corresponding risks. There is a need, therefore, for a consistent modeling framework that would allow the systematic study of cumulative risks from complex mixtures of contaminants. Methods A Generalized Toxicokinetic Modeling system for Mixtures (GTMM) was developed and evaluated with case studies. The GTMM is physiologically based and uses a consistent, chemical-independent physiological description for integrating widely varying toxicokinetic models. It is modular and can be directly "mapped" to individual toxicokinetic models, while maintaining physiological consistency across different chemicals. Interaction effects of complex mixtures can be directly incorporated into the GTMM. Conclusions The application of the GTMM to different individual metals and metal compounds showed that it explains available observational data and replicates the results from models that have been optimized for individual chemicals. The GTMM also made it feasible to model the toxicokinetics of complex, interacting mixtures of multiple metals and nonmetals in humans, based on available literature information. The GTMM provides a central component in the development of a "source
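
A minimal building block of any such toxicokinetic framework is the one-compartment bolus model; the sketch below shows the kind of per-chemical module a generalized system would "map" onto a shared physiological description. Parameter names and values are illustrative, not from the GTMM.

```python
import math

def one_compartment_conc(dose, v_d, k_el, t):
    """Concentration after a bolus dose in a one-compartment model:
    C(t) = (dose / Vd) * exp(-kel * t). Illustrative parameters only."""
    return dose / v_d * math.exp(-k_el * t)

# Concentration halves every ln(2)/k_el time units
half_life = math.log(2) / 0.1
c0 = one_compartment_conc(100.0, 50.0, 0.1, 0.0)
c_half = one_compartment_conc(100.0, 50.0, 0.1, half_life)
print(c0, c_half)  # concentration halves after one half-life
```

A physiologically based framework replaces the single Vd and kel with organ volumes, blood flows and metabolic clearances, which is what lets parameters stay consistent across chemicals.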

  3. Theory of phase equilibria for model mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants

    Science.gov (United States)

    Dos Ramos, María Carolina; Blas, Felipe J.

    2007-05-01

    An extension of the SAFT-VR equation of state, the so-called hetero-SAFT approach [Y. Peng, H. Zhao, and C. McCabe, Molec. Phys. 104, 571 (2006)], is used to examine the phase equilibria exhibited by a number of model binary mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants. Despite the increasing recent interest in semifluorinated alkanes (or perfluoroalkylalkane diblock molecules), the phase behaviour of mixtures involving these molecules with n-alkanes or perfluoroalkanes is practically unknown from the experimental point of view. In this work, we use simple molecular models for n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock molecules to predict, from a molecular perspective, the phase behaviour of selected model mixtures of perfluoroalkylalkanes with n-alkanes and perfluoroalkanes. In particular, we focus our interest on understanding the microscopic conditions that control the liquid-liquid separation and the stabilization of these mixtures. n-Alkanes and perfluoroalkanes are modelled as tangentially bonded monomer segments with molecular parameters taken from the literature. The perfluoroalkylalkane diblock molecules are modelled as heterosegmented diblock chains, with parameters for the alkyl and perfluoroalkyl segments developed in earlier work. This simple approach, which was proposed in previous work [P. Morgado, H. Zhao, F. J. Blas, C. McCabe, L. P. N. Rebelo, and E. J. M. Filipe, J. Phys. Chem. B, 111, 2856], is now extended to describe model n-alkane (or perfluoroalkane) + perfluoroalkylalkane binary mixtures. We have obtained the phase behaviour of different mixtures and studied the effect of the molecular weight of n-alkanes and perfluoroalkanes on the type of phase behaviour observed in these mixtures. We have also analysed the effect of the number of alkyl and perfluoroalkyl chemical groups in the surfactant molecule on the phase behaviour. In addition to the usual vapour-liquid phase

  4. EXISTENCE AND REGULARITY OF SOLUTIONS TO MODEL FOR LIQUID MIXTURE OF 3HE-4HE

    Institute of Scientific and Technical Information of China (English)

    Luo Hong; Pu Zhilin

    2012-01-01

    Existence and regularity of solutions to a model for the liquid mixture of 3He-4He are considered in this paper. First, it is proved that the system possesses a unique global weak solution in H1(Ω, C × R) by the Galerkin method. Second, using an iteration procedure and regularity estimates for the linear semigroups, it is proved that the model for the liquid mixture of 3He-4He has a unique solution in Hk(Ω, C × R) for all k ≥ 1.

  5. A mathematical framework for estimating pathogen transmission fitness and inoculum size using data from a competitive mixtures animal model.

    Directory of Open Access Journals (Sweden)

    James M McCaw

    2011-04-01

    Full Text Available We present a method to measure the relative transmissibility ("transmission fitness") of one strain of a pathogen compared to another. The model is applied to data from "competitive mixtures" experiments in which animals are co-infected with a mixture of two strains. We observe the mixture in each animal over time and over multiple generations of transmission. We use data from influenza experiments in ferrets to demonstrate the approach. Assessment of the relative transmissibility between two strains of influenza is important in at least three contexts: (1) Within the human population, antigenically novel strains of influenza arise and compete for susceptible hosts. (2) During a pandemic event, a novel sub-type of influenza competes with the existing seasonal strain(s); the unfolding epidemiological dynamics depend upon both the population's susceptibility profile and the inherent transmissibility of the novel strain compared to the existing strain(s). (3) Neuraminidase inhibitors (NAIs), while providing significant potential to reduce transmission of influenza, exert selective pressure on the virus and so promote the emergence of drug-resistant strains. Any adverse outcome due to selection and subsequent spread of an NAI-resistant strain is exquisitely dependent upon the transmission fitness of that strain. Measurement of the transmission fitness of two competing strains of influenza is thus of critical importance in determining the likely time-course and epidemiology of an influenza outbreak, or the potential impact of an intervention measure such as NAI distribution. The mathematical framework introduced here also provides an estimate for the size of the transmitted inoculum. We demonstrate the framework's behaviour using data from ferret transmission studies, and through simulation suggest how to optimise experimental design for assessment of transmissibility. The method introduced here for assessment of mixed transmission events has
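
The deterministic core of a competitive-mixtures experiment can be sketched in a few lines: each transmission generation multiplies the odds of strain A over strain B by its relative transmission fitness. This is a toy stand-in for the paper's statistical framework, with no inoculum-size or stochastic effects.

```python
def strain_proportion(p0, fitness, generations):
    """Proportion of strain A after a number of transmission generations,
    starting from proportion p0, when each generation multiplies the
    odds of A over B by `fitness`. Illustrative deterministic sketch."""
    odds = p0 / (1.0 - p0)
    for _ in range(generations):
        odds *= fitness
    return odds / (1.0 + odds)

print(strain_proportion(0.5, 2.0, 3))  # odds 1 -> 8, proportion 8/9
```

Even a modest fitness advantage compounds quickly across generations, which is why serial-passage designs are sensitive measures of relative transmissibility.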

  6. Non-racemic mixture model: a computational approach.

    Science.gov (United States)

    Polanco, Carlos; Buhse, Thomas

    2017-01-01

    The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed-chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified pair of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small ascending but not conclusive tendency in the l-amino acid over the d-amino acid profile for the resulting mixed-chiral hexamers in computer simulations of 100 peptide generations. The simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations, to observe any alteration that could mean a drastic change in behavior. So far, our simulations lead to the assumption that under very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely to have been generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.
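
The slight-bias scenario can be illustrated with a Monte Carlo sketch: assemble mixed-chiral hexamers where each residue is L with probability 0.5 plus a small bias, and measure the resulting enantiomeric excess. This is purely illustrative of the starting conditions; the paper's autocatalytic self-replication dynamics are not modelled here.

```python
import random

def hexamer_ee(bias, n_hexamers=20000, seed=42):
    """Enantiomeric excess of L over D residues in a pool of randomly
    assembled hexamers, with per-residue L probability 0.5 + bias
    (bias=0.01 corresponds to a 1% excess). Illustrative only."""
    rng = random.Random(seed)
    total = 6 * n_hexamers
    l_count = sum(rng.random() < 0.5 + bias
                  for _ in range(total))
    return (2 * l_count - total) / total

print(hexamer_ee(0.01))  # close to 0.02 up to sampling noise
```

Without an amplification mechanism the excess in the peptide pool simply mirrors the input bias, which is consistent with the abstract's conclusion that self-replication alone did not drive a large asymmetry.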

  7. Introducing Vectors.

    Science.gov (United States)

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…

  8. A multiscale transport model for binary Lennard Jones mixtures in slit nanopores

    Science.gov (United States)

    Bhadauria, Ravi; Aluru, N. R.

    2016-11-01

    We present a quasi-continuum multiscale hydrodynamic transport model for a one-dimensional isothermal, non-reacting binary mixture confined in slit-shaped nanochannels. We focus on the species transport equation, which includes the viscous dissipation and an interspecies diffusion term of the Maxwell-Stefan form. Partial viscosity variation is modeled by the van der Waals one-fluid approximation and the Local Average Density Method. We use friction boundary conditions in which the wall-species friction parameter is computed using a novel species-specific Generalized Langevin Equation model. The accuracy of the transport model is tested by predicting the velocity profiles of Lennard-Jones (LJ) methane-hydrogen and LJ methane-argon mixtures in graphene slit channels of different widths. The resultant slip length from the continuum model is found to be invariant with channel width for a fixed mixture molar concentration. The mixtures considered are observed to behave as a single-species pseudo fluid, with the friction parameter displaying a linear dependence on the molar composition. The proposed model yields atomistic-level accuracy with continuum-scale efficiency.
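
The role a slip length plays in such friction boundary conditions can be seen in the classical Poiseuille-with-Navier-slip profile below. This is a generic continuum formula with made-up parameters, not the paper's quasi-continuum model.

```python
def poiseuille_slip(y, h, G, mu, b):
    """Velocity in a slit channel of width h under pressure gradient G,
    viscosity mu, and Navier slip length b at both walls (y measured
    from the centerline). Generic continuum result for illustration."""
    return G / (2.0 * mu) * (h ** 2 / 4.0 - y ** 2) + G * h * b / (2.0 * mu)

# The wall velocity equals the slip length times the wall shear rate G*h/(2*mu)
print(poiseuille_slip(0.5, 1.0, 1.0, 1.0, 0.1))  # 0.05
```

A width-invariant slip length, as reported in the abstract, means the same b reproduces the wall velocity across channels once the bulk viscosity is known.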

  9. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data.

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz

    2016-09-01

    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers the finite mixture of NRCMs, which combines features of NRCMs with the idea of finite mixture (or latent class) models. The strength of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. The effectiveness of this model under realistic data-analytic conditions was examined by executing a Monte Carlo simulation study. The simulation study was carried out using an R routine specifically developed for the purpose of this study. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.

  10. EEG Signal Classification With Super-Dirichlet Mixture Model

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Tan, Zheng-Hua; Prasad, Swati

    2012-01-01

    Classification of the Electroencephalogram (EEG) signal is a challenging task in brain-computer interface systems. The marginalized discrete wavelet transform (mDWT) coefficients extracted from the EEG signals have been frequently used in research since they reveal features related to the… The mDWT coefficients are modelled by the Dirichlet distribution, and the distribution of the mDWT coefficients from more than one channel is described by a super-Dirichlet mixture model (SDMM). The Fisher ratio and the generalization error estimation are applied to select relevant channels, respectively. Compared to the state-of-the-art support vector machine (SVM) based classifier, the SDMM based classifier performs more stably and shows a promising improvement, with both channel selection strategies.

  11. Land Cover Classification for Polarimetric SAR Images Based on Mixture Models

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2014-04-01

    Full Text Available In this paper, two mixture models are proposed for modeling heterogeneous regions in single-look and multi-look polarimetric SAR images, along with their corresponding maximum likelihood classifiers for land cover classification. The classical Gaussian and Wishart models are suitable for modeling scattering vectors and covariance matrices from homogeneous regions, while their performance deteriorates for regions that are heterogeneous. By comparison, the proposed mixture models reduce the modeling error by expressing the data distribution as a weighted sum of multiple component distributions. For single-look and multi-look polarimetric SAR data, complex Gaussian and complex Wishart components are adopted, respectively. Model parameters are determined by employing the expectation-maximization (EM) algorithm. Two maximum likelihood classifiers are then constructed based on the proposed mixture models. These classifiers are assessed using polarimetric SAR images from the RADARSAT-2 sensor of the Canadian Space Agency (CSA), the AIRSAR sensor of the Jet Propulsion Laboratory (JPL) and the EMISAR sensor of the Technical University of Denmark (DTU). Experimental results demonstrate that the new models fit heterogeneous regions better than the classical models and are especially appropriate for extremely heterogeneous regions, such as urban areas. The overall accuracy of land cover classification is also improved due to the more refined modeling.

  12. Kinetic model for astaxanthin aggregation in water-methanol mixtures

    Science.gov (United States)

    Giovannetti, Rita; Alibabaei, Leila; Pucciarelli, Filippo

    2009-07-01

    The aggregation of astaxanthin in hydrated methanol was kinetically studied in the temperature range from 10 °C to 50 °C, at different astaxanthin concentrations and solvent compositions. A kinetic model for the formation and transformation of astaxanthin aggregates has been proposed. Spectrophotometric studies showed that monomeric astaxanthin decayed to H-aggregates that afterwards formed J-aggregates when the water content was 50% and the temperature lower than 20 °C; at higher temperatures, very stable J-aggregates were formed directly. The monomer formed very stable H-aggregates when the water content was greater than 60%; in these conditions H-aggregates decayed into J-aggregates only when the temperature was at least 50 °C. Through these findings it was possible to establish that the aggregation proceeded through a two-step consecutive reaction with first-order kinetic constants, whose values depended on the solvent composition and temperature.
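
A two-step consecutive first-order scheme (monomer → H-aggregate → J-aggregate) has the standard closed-form solution sketched below; the rate constants are illustrative, not the study's fitted values.

```python
import math

def consecutive_first_order(a0, k1, k2, t):
    """Concentrations for A -> B -> C with first-order constants k1, k2
    (k1 != k2), e.g. monomer -> H-aggregate -> J-aggregate."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    c = a0 - a - b  # mass balance: a + b + c == a0 at all times
    return a, b, c

a, b, c = consecutive_first_order(1.0, 1.0, 0.5, 2.0)
print(a, b, c)
```

The transient rise and fall of the intermediate B is exactly the spectrophotometric signature of H-aggregates forming and then converting to J-aggregates.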

  13. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
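
The core pattern mixture idea, entering the missingness pattern as a fixed effect and examining its interaction with time, can be sketched outside SPSS as well. The toy below uses ordinary least squares on simulated data (in practice a linear mixed model with random effects would be used); all variable names and effect sizes are invented.

```python
import numpy as np

# Toy longitudinal data: outcome y over `time`, with a dropout indicator.
# Dropouts improve more slowly than completers, so ignoring the pattern
# biases the estimated overall slope.
rng = np.random.default_rng(0)
n = 200
time = rng.integers(0, 4, n).astype(float)
dropout = rng.integers(0, 2, n)              # 1 = early-withdrawal pattern
y = 1.0 + 0.5 * time - 0.3 * dropout * time + rng.normal(0, 0.1, n)

# Pattern mixture sketch: the missingness pattern and its interaction
# with time enter the model as fixed effects; a non-zero interaction
# indicates informative missingness.
X = np.column_stack([np.ones(n), time, dropout, dropout * time])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[3] should be ≈ -0.3, the pattern-by-time effect
```

If the pattern terms are negligible, the missingness is plausibly non-informative and a standard LMM on all available data is defensible; if not, pattern-specific estimates need to be reported or combined.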

  14. A Mechanistic Modeling Framework for Predicting Metabolic Interactions in Complex Mixtures

    Science.gov (United States)

    Cheng, Shu

    2011-01-01

    Background: Computational modeling of the absorption, distribution, metabolism, and excretion of chemicals is now theoretically able to describe metabolic interactions in realistic mixtures of tens to hundreds of substances. That framework awaits validation. Objectives: Our objectives were to (a) evaluate the conditions of application of such a framework, (b) confront the predictions of a physiologically integrated model of benzene, toluene, ethylbenzene, and m-xylene (BTEX) interactions with observed kinetic data on these substances in mixtures, and (c) assess whether improving the mechanistic description has the potential to lead to better predictions of interactions. Methods: We developed three joint models of BTEX toxicokinetics and metabolism and calibrated them using Markov chain Monte Carlo simulations and single-substance exposure data. We then checked their predictive capabilities for metabolic interactions by comparison with mixture kinetic data. Results: The simplest joint model (BTEX interacting competitively for cytochrome P450 2E1 access) gives qualitatively correct and quantitatively acceptable predictions (with at most 50% deviations from the data). More complex models with two pathways or back-competition with metabolites have the potential to further improve predictions for BTEX mixtures. Conclusions: A systems biology approach to large-scale prediction of metabolic interactions is advantageous on several counts and technically feasible. However, ways to obtain the required parameters need to be further explored. PMID:21835728
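
Competition of BTEX components for the same CYP2E1 enzyme is classically written as competitive inhibition, with each substrate's apparent Km scaled by the co-substrates. The constants below are invented for illustration, not calibrated BTEX values.

```python
def mm_rate_competitive(c, vmax, km, inhibitors):
    """Michaelis-Menten metabolism rate of one substrate when co-substrates
    compete for the same enzyme: competitive inhibition scales the
    apparent Km by (1 + sum(ci / ki)) over inhibitor pairs (ci, ki)."""
    km_app = km * (1.0 + sum(ci / ki for ci, ki in inhibitors))
    return vmax * c / (km_app + c)

alone = mm_rate_competitive(1.0, vmax=10.0, km=1.0, inhibitors=[])
with_tol = mm_rate_competitive(1.0, 10.0, 1.0, inhibitors=[(2.0, 1.0)])
print(alone, with_tol)  # co-exposure slows metabolism: 5.0 vs 2.5
```

This single shared-enzyme term is what makes a joint model scale to mixtures: each chemical contributes one (ci, ki) pair rather than a pairwise interaction parameter.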

  15. Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.

    Science.gov (United States)

    Zhao, Fuman; Rogers, William J; Mannan, M Sam

    2010-02-15

    Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when the apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the lower flammability limit of a hydrocarbon mixture is quantitatively correlated to its final flame temperature under non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicate that CFT modeling represents experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for Le Chatelier's law.
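
Le Chatelier's law, which the CFT results are benchmarked against, is a one-line mixing rule for lower flammability limits. The component LFLs below are rounded literature values for methane and propane, used purely as an example.

```python
def le_chatelier_lfl(components):
    """Lower flammability limit (vol%) of a fuel mixture by Le Chatelier's
    law: LFL_mix = 1 / sum(y_i / LFL_i), where y_i are mole fractions of
    the fuel portion (summing to 1) and LFL_i the pure-component limits."""
    assert abs(sum(y for y, _ in components) - 1.0) < 1e-9
    return 1.0 / sum(y / lfl for y, lfl in components)

# 50/50 methane (LFL ~5.0 vol%) and propane (LFL ~2.1 vol%)
print(le_chatelier_lfl([(0.5, 5.0), (0.5, 2.1)]))  # ≈ 2.96 vol%
```

Because the rule is a harmonic mean, the mixture limit is pulled toward the more flammable component, consistent with the energy-balance reasoning behind CFT modeling.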

  16. A joint finite mixture model for clustering genes from independent Gaussian and beta distributed data

    Directory of Open Access Journals (Sweden)

    Yli-Harja Olli

    2009-05-01

    Full Text Available Abstract Background Cluster analysis has become a standard computational method for gene function discovery as well as for more general exploratory data analysis. A number of different approaches have been proposed for that purpose, out of which different mixture models provide a principled probabilistic framework. Nowadays cluster analysis is increasingly supplemented with multiple data sources, and these heterogeneous information sources should be used as efficiently as possible. Results This paper presents a novel Beta-Gaussian mixture model (BGMM) for clustering genes based on Gaussian distributed and beta distributed data. The proposed BGMM can be viewed as a natural extension of the beta mixture model (BMM) and the Gaussian mixture model (GMM). The proposed BGMM method differs from other mixture model based methods in its integration of two different data types into a single and unified probabilistic modeling framework, which provides a more efficient use of multiple data sources than methods that analyze different data sources separately. Moreover, BGMM provides an exceedingly flexible modeling framework, since many data sources can be modeled as Gaussian or beta distributed random variables, and it can also be extended to integrate data that have other parametric distributions, which adds even more flexibility to this model-based clustering framework. We developed three types of estimation algorithms for BGMM, the standard expectation maximization (EM) algorithm, an approximated EM and a hybrid EM, and propose to tackle the model selection problem by well-known model selection criteria, for which we test the Akaike information criterion (AIC), a modified AIC (AIC3), the Bayesian information criterion (BIC), and the integrated classification likelihood-BIC (ICL-BIC). Conclusion Performance tests with simulated data show that combining two different data sources into a single mixture joint model greatly improves the clustering
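    The core idea of such a joint mixture, cluster responsibilities computed from the product of a Gaussian likelihood and a beta likelihood, can be sketched with a small EM loop (a simplified illustration on synthetic data, not the paper's BGMM estimators; the moment-based beta updates below stand in for full maximum-likelihood M-steps):

```python
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(0)
# synthetic genes: each cluster couples a Gaussian feature x with a Beta feature y
n = 300
z = rng.integers(0, 2, n)                              # true cluster labels
x = np.where(z == 0, rng.normal(-2.0, 0.5, n), rng.normal(2.0, 0.5, n))
y = np.where(z == 0, rng.beta(2, 8, n), rng.beta(8, 2, n))

K = 2
pi = np.full(K, 1.0 / K)
mu, sigma = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
a, b = np.ones(K), np.ones(K)                          # Beta(1,1) = uniform start

for _ in range(50):
    # E-step: responsibilities combine both likelihoods (conditional independence)
    logr = (np.log(pi)[:, None]
            + norm.logpdf(x, mu[:, None], sigma[:, None])
            + beta.logpdf(y, a[:, None], b[:, None]))
    logr -= logr.max(axis=0)
    r = np.exp(logr)
    r /= r.sum(axis=0)
    # M-step: Gaussian updates plus method-of-moments Beta updates
    nk = r.sum(axis=1)
    pi = nk / n
    mu = (r * x).sum(axis=1) / nk
    sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    m = (r * y).sum(axis=1) / nk
    v = (r * (y - m[:, None]) ** 2).sum(axis=1) / nk
    c = m * (1 - m) / v - 1                            # Beta moment matching
    a, b = m * c, (1 - m) * c

labels = r.argmax(axis=0)
```

    Because both features enter the same responsibility, a gene ambiguous in one data source can still be assigned confidently by the other, which is the practical payoff of the joint model.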

  17. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    Science.gov (United States)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model

  18. Community Collaboration to Improve Schools: Introducing a New Model from Ohio

    Science.gov (United States)

    Anderson-Butcher, Dawn; Lawson, Hal A.; Bean, Jerry; Flaspohler, Paul; Boone, Barbara; Kwiatkowski, Amber

    2008-01-01

    Conventional school improvement models traditionally involve "walled-in" approaches. These models focus primarily on academic learning strategies in response to standards-based accountabilities. Although positive outcomes have been documented, expanded school improvement models such as the Ohio Community Collaboration Model for School…

  19. A hybrid finite mixture model for exploring heterogeneous ordering patterns of driver injury severity.

    Science.gov (United States)

    Ma, Lu; Wang, Guan; Yan, Xuedong; Weng, Jinxian

    2016-04-01

    Debates on the ordering patterns of crash injury severity are ongoing in the literature. Models without proper econometrical structures for accommodating the complex ordering patterns of injury severity could result in biased estimations and misinterpretations of factors. This study proposes a hybrid finite mixture (HFM) model aiming to capture heterogeneous ordering patterns of driver injury severity while enhancing modeling flexibility. It attempts to probabilistically partition samples into two groups in which one group represents an unordered/nominal data-generating process while the other represents an ordered data-generating process. Conceptually, the newly developed model offers flexible coefficient settings for mining additional information from crash data, and more importantly it allows the coexistence of multiple ordering patterns for the dependent variable. A thorough modeling performance comparison is conducted between the HFM model, and the multinomial logit (MNL), ordered logit (OL), finite mixture multinomial logit (FMMNL) and finite mixture ordered logit (FMOL) models. According to the empirical results, the HFM model presents a strong ability to extract information from the data, and more importantly to uncover heterogeneous ordering relationships between factors and driver injury severity. In addition, the estimated weight parameter associated with the MNL component in the HFM model is greater than the one associated with the OL component, which indicates a larger likelihood of the unordered pattern than the ordered pattern for driver injury severity.

  20. A computer graphical user interface for survival mixture modelling of recurrent infections.

    Science.gov (United States)

    Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K

    2009-03-01

    Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded using Microsoft Visual C++. Application to model recurrent urinary tract infections for elderly women is illustrated, where significant individual variations are evident at both acute and stable phases. The survival mixture methodology developed enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions inferred from these correlated and heterogeneous survival data.

  1. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments

    Science.gov (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd

    2015-01-01

    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  2. The Impact of Misspecifying Class-Specific Residual Variances in Growth Mixture Models

    Science.gov (United States)

    Enders, Craig K.; Tofighi, Davood

    2008-01-01

    The purpose of this study was to examine the impact of misspecifying a growth mixture model (GMM) by assuming that Level-1 residual variances are constant across classes, when they do, in fact, vary in each subpopulation. Misspecification produced bias in the within-class growth trajectories and variance components, and estimates were…

  3. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  4. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    2002-01-01

    The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite mix

  5. Comparison of criteria for choosing the number of classes in Bayesian finite mixture models

    NARCIS (Netherlands)

    K. Nasserinejad (Kazem); J.M. van Rosmalen (Joost); W. de Kort (Wim); E.M.E.H. Lesaffre (Emmanuel)

    2017-01-01

    textabstractIdentifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown th

  6. Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data

    Science.gov (United States)

    Lu, Zhenqiu Laura; Zhang, Zhiyong; Lubke, Gitta

    2011-01-01

    "Growth mixture models" (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class…

  7. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
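    The basic N-mixture likelihood underlying such estimates, a Poisson prior over latent site abundance marginalised against binomial detection of repeated counts, can be sketched as follows (a textbook single-species version with constant λ and p on synthetic data, not the authors' hierarchical call-in model):

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(1)
lam_true, p_true = 5.0, 0.4              # illustrative abundance and detection rates
sites, visits = 100, 5
N = rng.poisson(lam_true, sites)         # latent abundance per site (never observed)
y = rng.binomial(N[:, None], p_true, (sites, visits))  # repeated counts per site

Nmax = 50                                # truncation of the infinite sum over N

def nll(params):
    """Negative log-likelihood with log/logit transforms keeping lam > 0, 0 < p < 1."""
    lam, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))
    Ns = np.arange(Nmax + 1)
    prior = poisson.pmf(Ns, lam)         # P(N = n) for each candidate abundance
    like = np.ones((sites, Nmax + 1))
    for j in range(visits):              # binomial likelihood of each visit's count
        like *= binom.pmf(y[:, [j]], Ns, p)
    return -np.log((like * prior).sum(axis=1)).sum()

res = minimize(nll, [0.0, 0.0], method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
```

    Repeated visits are what separate abundance from detectability here: the mean count pins down the product λp, while the visit-to-visit variation identifies p itself.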

  8. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  9. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models.

    Science.gov (United States)

    Wu, Jiao; Liu, Fang; Jiao, L C; Wang, Xiaodong; Hou, Biao

    2011-12-01

    Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption of the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Many multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to image estimation problems. In this paper, the statistical structures of the wavelet coefficients are considered for CS reconstruction of images that are sparse or compressible in the wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by means of modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms.

  10. Growth of Saccharomyces cerevisiae CBS 426 on mixtures of glucose and succinic acid: a model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, J.A.B.A.F.; Koellmann, C.J.W.; Dekkers-de Kok, H.E.; Roels, J.A.

    1984-03-01

    Saccharomyces cerevisiae CBS 426 was grown in continuous culture in a defined medium with a mixture of glucose and succinic acid as the carbon source. Growth on succinic acid was possible after long adaptation periods. The flows of glucose, succinic acid, oxygen, carbon dioxide, and biomass to and from the system were measured. It proved necessary to expand our previous model to accommodate the active transport of succinic acid by the cell. The values found for the efficiency of the oxidative phosphorylation (PIO) and the amount of ATP needed for production of biomass from monomers gave the same values as found for substrate mixtures taken up passively. (Refs. 13).

  11. Numerical Investigation of Nanofluid Thermocapillary Convection Based on Two-Phase Mixture Model

    Science.gov (United States)

    Jiang, Yanni; Xu, Zelin

    2017-08-01

    Numerical investigation of nanofluid thermocapillary convection in a two-dimensional rectangular cavity was carried out, in which the two-phase mixture model was used to simulate the nanoparticles-fluid mixture flow, and the influences of volume fraction of nanoparticles on the flow characteristics and heat transfer performance were discussed. The results show that, with the increase of nanoparticle volume fraction, thermocapillary convection intensity weakens gradually, and the heat conduction effect strengthens; meanwhile, the temperature gradient at free surface increases but the free surface velocity decreases gradually. The average Nusselt number of hot wall and the total entropy generation decrease with nanoparticle volume fraction increasing.

  12. Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling

    Science.gov (United States)

    Yeom, Seokwon

    2017-05-01

    Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with the expectation-maximization (EM) method, assuming the image histogram follows a Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of the k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to the IR image capturing several humans at night.
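    A single level of such a pipeline, a k-means-style split followed by EM on the intensity histogram and a hot/cold decision, can be sketched on a synthetic image (illustrative only; the blob sizes and temperatures below are arbitrary, not drawn from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "IR image": cool background plus two warm foreground blobs
img = rng.normal(40.0, 8.0, (120, 160))
img[30:60, 40:70] = rng.normal(160.0, 12.0, (30, 30))
img[70:100, 100:130] = rng.normal(160.0, 12.0, (30, 30))
x = img.ravel()

# k-means-style initialisation: split intensities at the global mean
hot = x > x.mean()
mu = np.array([x[~hot].mean(), x[hot].mean()])
sd = np.array([x[~hot].std(), x[hot].std()])
pi = np.array([(~hot).mean(), hot.mean()])

for _ in range(30):
    # E-step on the histogram (constant factors cancel in the responsibilities)
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted means, spreads and mixing proportions
    nk = r.sum(axis=0)
    pi = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
labels = dens.argmax(axis=1).reshape(img.shape)
roi = labels == mu.argmax()          # decision: hotter component = region of interest
```

    The decision step is deliberately simple here; a multi-level version would recurse on the extracted ROI windows to separate individual foreground objects.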

  13. GIS disconnector model performance with SF{sub 6}/N{sub 2} mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Gaillac, C. [Schneider Electric (France)

    1999-07-01

    The lightning impulse breakdown voltage of a model, 145 kV GIS disconnector was studied using SF{sub 6}/N{sub 2} mixtures. Mixtures with between 0% and 15% SF{sub 6} were used. Sphere-sphere, point-plane and sphere-rod geometries were studied. In most cases, breakdown strength increased with both SF{sub 6} content and pressure. In the case of surface flashover, a pressure of about 8 bar with 15% SF{sub 6} gave roughly equivalent results to those of pure SF{sub 6} at 4 bar. (author)

  14. Introducing economic parameters in industrial flotation dimensionless models used for intra-factory technology transfer

    Science.gov (United States)

    Batzias, Dimitris; Ifanti, Konstantina

    2012-12-01

    In this work, intra-factory technology transfer is realized by means of scale-up procedures, including the formation of a representative original set of dimensionless groups, as know-how obtained in the laboratory is transferred progressively (in successive steps) to industrial scale. To save resources (highly skilled manpower, time, materials, energy), a Knowledge Base (KB) is designed/developed to maintain experience in flotation and to select relevant information from other Data/Information/Knowledge Bases. Of particular importance is the introduction of economic parameters referring to the investment and operation of the industrial unit, thus taking into account the capital and operating cost of output, respectively. We show that this introduction causes several problems, since new technological dimensions must also be introduced (so that the economic parameters become meaningful), leading by dimensional analysis to a new solution set that is incompatible with the original one. We solved this problem by keeping the original set and incorporating into it only the new dimensionless groups (eliminating all additional technological dimensions introduced ad hoc).

  15. Development and application of a multimetal multibiotic ligand model for assessing aquatic toxicity of metal mixtures.

    Science.gov (United States)

    Santore, Robert C; Ryan, Adam C

    2015-04-01

    A multimetal, multiple binding site version of the biotic ligand model (mBLM) has been developed for predicting and explaining the bioavailability and toxicity of mixtures of metals to aquatic organisms. The mBLM was constructed by combining information from single-metal BLMs to preserve compatibility between the single-metal and multiple-metal approaches. The toxicities from individual metals were predicted by assuming additivity of the individual responses. Mixture toxicity was predicted based on both dissolved metal and mBLM-normalized bioavailable metal. Comparison of the 2 prediction methods indicates that metal mixtures frequently appear to have greater toxicity than an additive estimation of individual effects on a dissolved metal basis. However, on an mBLM-normalized basis, mixtures of metals appear to be additive or less than additive. This difference results from interactions between metals and ligands in solutions including natural organic matter, processes that are accounted for in the mBLM. As part of the mBLM approach, a technique for considering variability was developed to calculate confidence bounds (called response envelopes) around the central concentration-response relationship. Predictions using the mBLM and response envelope were compared with observed toxicity for a number of invertebrate and fish species. The results show that the mBLM is a useful tool for considering bioavailability when assessing the toxicity of metal mixtures.
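    The two additivity conventions being compared, response addition of individual effects versus concentration addition via toxic units, can be illustrated with a toy two-metal example (the Hill dose-response curves, EC50 values and concentrations below are hypothetical, not mBLM parameters):

```python
def hill(c, ec50, slope=2.0):
    """Fraction of organisms affected at concentration c for a Hill dose-response."""
    return c ** slope / (c ** slope + ec50 ** slope)

# hypothetical dissolved concentrations and single-metal EC50s (ug/L)
cu, zn = 5.0, 40.0
ec50_cu, ec50_zn = 10.0, 100.0

# response addition: individual effects combined as independent probabilities
resp_add = 1 - (1 - hill(cu, ec50_cu)) * (1 - hill(zn, ec50_zn))

# concentration addition: toxic units sum; TU >= 1 predicts >= 50% effect
tu = cu / ec50_cu + zn / ec50_zn
```

    The mBLM refines the inputs to either convention: it replaces dissolved concentrations with bioavailable, ligand-bound fractions before any additivity rule is applied, which is why mixtures can look more-than-additive on a dissolved basis yet additive or less on an mBLM-normalized basis.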

  16. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  17. A polynomial hyperelastic model for the mixture of fat and glandular tissue in female breast.

    Science.gov (United States)

    Calvo-Gallego, Jose L; Martínez-Reina, Javier; Domínguez, Jaime

    2015-09-01

    In the breast of adult women, glandular and fat tissues are intermingled and cannot be clearly distinguished. This work studies whether this mixture can be treated as a homogenized tissue. A mechanical model is proposed for the mixture of tissues as a function of the fat content. Different distributions of individual tissues and geometries have been tried to verify the validity of the mixture model. A multiscale modelling approach was applied in a finite element model of a representative volume element (RVE) of tissue, formed by randomly assigning fat or glandular elements to the mesh. Both types of tissues have been assumed to be isotropic, quasi-incompressible hyperelastic materials, modelled with a polynomial strain energy function, like the homogenized model. The RVE was subjected to several load cases from which the constants of the polynomial function of the homogenized tissue were fitted in the least squares sense. The results confirm that the fat volume ratio is a key factor in determining the properties of the homogenized tissue, but the spatial distribution of fat is not so important. Finally, a simplified model of a breast was developed to check the validity of the homogenized model in a geometry similar to the actual one.

  18. Sleep-promoting effects of the GABA/5-HTP mixture in vertebrate models.

    Science.gov (United States)

    Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo

    2016-09-01

    The aim of this study was to investigate the sleep-promoting effect of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) on sleep quality and quantity in vertebrate models. The pentobarbital-induced sleep test and electroencephalogram (EEG) analysis were applied to investigate the sleep latency, duration, total sleeping time and sleep quality of the two amino acids and the GABA/5-HTP mixture. In addition, real-time PCR and HPLC analysis were applied to analyze the signaling pathway. The GABA/5-HTP mixture significantly regulated sleep latency and duration, and modulates both GABAergic and serotonergic signaling. Moreover, the sleep architecture can be controlled by the regulation of the GABAA receptor and GABA content with 5-HTP.

  19. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    ...though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats where data about in vivo endocrine disrupting effects and information about human exposures were available, including phthalates, pesticides, UV‐filters, bisphenol A, parabens and the drug paracetamol. The mixture ratio was chosen to reflect high end human intakes. To make decisions about the dose levels for studies in the rat, we employed the point of departure index (PODI) approach, which sums up ratios between estimated exposure levels and no‐observed‐adverse‐effect‐level (NOAEL) values of individual substances. For high end human exposures to the 13 selected chemicals, we calculated a PODI of 0.016. As only a PODI...

  20. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties in the production of edible oils and biodiesel are observed, such as the availability of experimental data and reliable prediction. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in the open literature was observed, which makes it necessary to develop reliable predictive models from limited data. One of the first steps of this project was the creation of a database containing properties of mixtures involved in tasks related to process design, simulation and optimization, as well as the design of chemicals-based products. This database was combined with the existing lipids database of pure component properties. To contribute to the missing data, measurements of isobaric vapour-liquid equilibrium (VLE) data of two binary mixtures at two different pressures were performed using Differential Scanning...

  1. Introducing Aviary

    CERN Document Server

    Peutz, Mike

    2010-01-01

    The world is changing. Where before you needed to purchase and install big and expensive programs on your computer in order to create stunning images, you can now do it all online for free using Aviary. Aviary is an online collection of applications that enable you to upload and modify your own photographs and images, and create new imagery from scratch. It includes a powerful photo-manipulation tool called Phoenix, a vector-drawing application called Raven, an effects suite for creating eye-watering image effects called Peacock, and much more. Introducing Aviary takes you through all of these

  2. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    Science.gov (United States)

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  3. Re-introducing the Trivium as a Model for Teaching in the Humanities.

    Science.gov (United States)

    Siebach, James L.

    1998-01-01

    Argues that the Trivium, a model for a basic education from classical times, is useful in providing students with a humanities education because students within this model learn skills in rhetoric, grammar, and logic. Defines a humanities education, describes the Trivium model in detail, and applies the Trivium to contemporary education. (CMK)

  4. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and dependent on the judgement of the analyst; isotopic compositions may thus be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by the number of different ratios, the number of points belonging to each ratio group, and the ratios (i.e., slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
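    The slope-mixture idea can be sketched as an EM fit of several regression lines through the origin to synthetic transient-signal data (illustrative values; the published method additionally drops small groups via the control parameter to choose the number of ratios automatically):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic transient signals: 238U intensity (x) vs 235U intensity (y) from two
# particle populations with different isotope ratios (slopes); values are illustrative
true_ratios = np.array([0.0072, 0.30])
n = 200
g = rng.integers(0, 2, n)
x = rng.uniform(1.0, 10.0, n)
y = true_ratios[g] * x + rng.normal(0.0, 0.05, n)

K = 2
w = np.full(K, 1.0 / K)          # mixing weights
b = np.array([0.01, 0.5])        # initial slope (ratio) guesses
s = np.full(K, 0.1)              # residual scales

for _ in range(100):             # EM: K regression lines through the origin
    resid = y[:, None] - x[:, None] * b
    dens = w * np.exp(-0.5 * (resid / s) ** 2) / s
    r = dens / dens.sum(axis=1, keepdims=True)        # responsibilities
    nk = r.sum(axis=0)
    w = nk / n
    # weighted least-squares slope through the origin for each component
    b = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
    s = np.maximum(np.sqrt((r * (y[:, None] - x[:, None] * b) ** 2).sum(axis=0) / nk), 1e-6)

ratios = np.sort(b)              # estimated isotope ratios
```

    Each component's slope plays the role of one isotope ratio, so the fit recovers several ratios from one pooled signal without the analyst pre-assigning points to particles.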

  5. M3B: A coarse grain model for the simulation of oligosaccharides and their water mixtures.

    Science.gov (United States)

    Goddard, William A.; Cagin, Tahir; Molinero, Valeria

    2003-03-01

    Water and sugar dynamics in concentrated carbohydrate solutions is of utmost importance in food and pharmaceutical technology. Water diffusion in concentrated sugar mixtures can be slowed down many orders of magnitude with respect to bulk water [1], making the simulation of these systems with atomistic detail extremely expensive for the required time-scales. We present a coarse grain model (M3B) for malto-oligosaccharides and their water mixtures. M3B speeds up molecular dynamics simulations about 500-1000 times with respect to the atomistic model while retaining enough detail to be mapped back to the atomistic structures with low uncertainty in the positions. The former characteristic allows the study of water and carbohydrate dynamics in supercooled and polydisperse mixtures with characteristic time scales above the nanosecond. The latter makes M3B well suited for combined atomistic-mesoscale simulations. We present the parameterization of the M3B force field for water and a family of technologically relevant glucose oligosaccharides, the alpha-(1->4) glucans. The coarse grain force field is completely parameterized from atomistic simulations to reproduce the density, cohesive energy and structural parameters of amorphous sugars. We will show that M3B is capable of describing the helical character of the higher oligosaccharides, and that the water structure in low moisture mixtures shows the same features obtained with the atomistic and M3B models. [1] R Parker, SG Ring: Carbohydr. Res. 273 (1995) 147-55.

  6. Proteochemometric modeling of the bioactivity spectra of HIV-1 protease inhibitors by introducing protein-ligand interaction fingerprint.

    Directory of Open Access Journals (Sweden)

    Qi Huang

    Full Text Available HIV-1 protease is one of the main therapeutic targets in HIV. However, a major problem in the treatment of HIV is the rapid emergence of drug-resistant strains. It would be particularly helpful for the clinical therapy of AIDS if a single method could predict the antiviral capability of compounds against different variants. In our study, proteochemometric (PCM) models were created to study the bioactivity spectra of 92 chemical compounds with 47 unique HIV-1 protease variants. In contrast to other PCM models, which used Multiplication of Ligands and Proteins Descriptors (MLPD) as the cross-term, a new cross-term, the Protein-Ligand Interaction Fingerprint (PLIF), was introduced in our modeling. With different combinations of ligand descriptors, protein descriptors and cross-terms, nine PCM models were obtained, six of which achieved good predictive abilities (Q²(test) > 0.7). These results showed that the performance of PCM models could be improved when ligand and protein descriptors were complemented by the newly introduced cross-term PLIF. Compared with the conventional cross-term MLPD, the newly introduced PLIF had better predictive ability. Furthermore, our best model (GD & P & PLIF: Q²(test) = 0.8271) could select those inhibitors that have broad antiviral activity. In conclusion, our study indicates that proteochemometric modeling with PLIF as the cross-term is a potentially useful way to address the HIV-1 drug-resistance problem.

  7. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    OpenAIRE

    Andrea Dorila Cárcamo; Joan Vicenç Gómez; Josep María Fortuny

    2016-01-01

    The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which are not limited to calculus abilities. Considering this, an instructional design was elaborated based on mathematical modelling and emerging heuristic models for the construction of specific linear algebra concepts: span and spanning set. This was applied to first year e...

  10. Introducing formalism in economics: The growth model of John von Neumann

    Directory of Open Access Journals (Sweden)

    Gloria-Palermo Sandye

    2010-01-01

    Full Text Available The objective is to interpret John von Neumann's growth model as a decisive step toward the formalist revolution of the 1950s in economics. This model gave rise to an impressive variety of comments about its classical or neoclassical underpinnings. We go beyond this traditional criterion and instead interpret the model as a manifestation of von Neumann's involvement in the formalist programme of the mathematician David Hilbert. We discuss the impact of Kurt Gödel's discoveries on this programme and show that the growth model reflects the pragmatic turn of the formalist programme after Gödel, proposing the extension of modern axiomatisation to economics.

  11. Using a factor mixture modeling approach in alcohol dependence in a general population sample.

    Science.gov (United States)

    Kuo, Po-Hsiu; Aggen, Steven H; Prescott, Carol A; Kendler, Kenneth S; Neale, Michael C

    2008-11-01

    Alcohol dependence (AD) is a complex and heterogeneous disorder. The identification of more homogeneous subgroups of individuals with drinking problems and the refinement of the diagnostic criteria are inter-related research goals. They have the potential to improve our knowledge of etiology and treatment effects, and to assist in the identification of risk factors or specific genetic factors. Mixture modeling has advantages over traditional approaches that focus on either a dimensional or a categorical latent structure: factor mixture modeling combines latent class and latent trait models, but has not been widely applied in substance use research. The goal of the present study is to assess whether the AD criteria in the population are better characterized by a continuous dimension, a few discrete subgroups, or a combination of the two. More than seven thousand participants were recruited from the population-based Virginia Twin Registry and interviewed to obtain DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) symptoms and diagnoses of AD. We applied factor analysis, latent class analysis, and factor mixture models to symptom items based on the DSM-IV criteria. Our results showed that a mixture model with 1 factor and 3 classes fit well for both genders. The 3 classes were a non-problem drinking group and severe and moderate drinking-problem groups. By contrast, models constrained to conform to the DSM-IV diagnostic criteria were rejected by model fit indices, providing empirical evidence for heterogeneity in the AD diagnosis. Classification analysis showed different characteristics across subgroups, including alcohol-caused behavioral problems, comorbid disorders, age at onset for alcohol-related milestones, and personality. Clinically, the expanded classification of AD may aid in identifying suitable treatments, interventions and additional sources of comorbidity based on these more homogenous subgroups of alcohol use

  12. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    OpenAIRE

    Hiroki Yoshioka; Kenta Obata

    2011-01-01

    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...
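The unmixing step in such LMM-based algorithms reduces, for a two-endmember model, to a least-squares estimate of FVC from the pixel spectrum. The following sketch uses invented endmember spectra purely for illustration:

```python
# Two-endmember linear mixture model: the observed reflectance is
# r = fvc * veg + (1 - fvc) * soil (+ noise); fvc is the least-squares
# solution along the direction (veg - soil).
def estimate_fvc(r, veg, soil):
    diff = [v - s for v, s in zip(veg, soil)]
    resid = [ri - si for ri, si in zip(r, soil)]
    return sum(d * e for d, e in zip(diff, resid)) / sum(d * d for d in diff)

# Invented endmember reflectance spectra (4 bands) for illustration.
veg = [0.05, 0.08, 0.04, 0.45]
soil = [0.20, 0.25, 0.30, 0.35]

# A noise-free pixel that is 60% vegetation, 40% soil.
pixel = [0.6 * v + 0.4 * s for v, s in zip(veg, soil)]
print(round(estimate_fvc(pixel, veg, soil), 3))  # 0.6
```

With noise-free data the true fraction is recovered exactly; the robustness question studied in the paper is how this estimate degrades when noise is added to `pixel`.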

  13. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Belmonte-Beitia, Juan [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Perez-Garcia, Victor M. [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Vekslerchik, Vadym [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain)

    2007-05-15

    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves.

  14. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM) (Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  15. Introducing the Clean-Tech Adoption Model: A California Case Study

    NARCIS (Netherlands)

    Bijlveld, P.C. (Paul); Riezebos, P. (Peter); Wierstra, E. (Erik)

    2012-01-01

    Abstract. The Clean-Tech Adoption Model (C-TAM) explains the adoption process of clean technology. Based on the Unified Theory of Acceptance and Usage of Technology (UTAUT) combined with qualitative research and empirical data gathering, the model predicts adoption based on the perceived quality, ef

  17. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  18. Introducing labour productivity changes into models used for economic impact analysis in tourism

    NARCIS (Netherlands)

    Klijs, Jeroen; Peerlings, Jack; Heijman, Wim

    2017-01-01

    In tourism management, traditional input-output models are often applied to calculate economic impacts, including employment impacts. These models imply that increases in output are translated into proportional increases in labour, indicating constant labour productivity. In non-linear input- out

  19. Mixture models of geometric distributions in genomic analysis of inter-nucleotide distances

    Directory of Open Access Journals (Sweden)

    Adelaide Valente Freitas

    2013-11-01

    Full Text Available The mapping defined by inter-nucleotide distances (InD) provides a reversible numerical representation of the primary structure of DNA. If nucleotides were independently placed along the genome, a finite mixture model of four geometric distributions could be fitted to the InD where the four marginal distributions would be the expected distributions of the four nucleotide types. We analyze a finite mixture model of geometric distributions (f_2), with marginals not explicitly addressed to the nucleotide types, as an approximation to the InD. We use BIC in the composite likelihood framework for choosing the number of components of the mixture and the EM algorithm for estimating the model parameters. Based on divergence profiles, an experimental study was carried out on the complete genomes of 45 species to evaluate f_2. Although the proposed model is not suited to the InD, our analysis shows that divergence profiles involving the empirical distribution of the InD are also exhibited by profiles involving f_2. It suggests that statistical regularities of the InD can be described by the model f_2. Some characteristics of the DNA sequences captured by the model f_2 are illustrated. In particular, clusterings of subgroups of eukaryotes (primates, mammalians, animals and plants) are detected.
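A two-component version of such a geometric mixture can be fitted with EM in a few lines. This is a toy sketch under invented parameters (the components are not tied to nucleotide types, and real InD data would replace the synthetic sample):

```python
import math
import random

random.seed(1)

def geometric(p):
    """Sample from a geometric distribution on {1, 2, ...} by inversion."""
    return math.ceil(math.log(random.random()) / math.log(1.0 - p))

# Synthetic "distance" data: a 50/50 mixture of two geometric components
# with invented success probabilities.
true_p = [0.5, 0.05]
data = [geometric(p) for p in true_p for _ in range(2000)]

# EM for a two-component mixture of geometric distributions.
p, w = [0.3, 0.1], [0.5, 0.5]
for _ in range(200):
    # E-step: responsibilities under the pmf  p * (1 - p)**(x - 1).
    resp = []
    for x in data:
        dens = [wk * pk * (1.0 - pk) ** (x - 1) for wk, pk in zip(w, p)]
        total = sum(dens)
        resp.append([d / total for d in dens])
    # M-step: weighted maximum-likelihood updates (p_k = sum w / sum w*x).
    for k in range(2):
        rk = sum(r[k] for r in resp)
        p[k] = rk / sum(r[k] * x for r, x in zip(resp, data))
        w[k] = rk / len(data)

print(sorted(p))  # close to the true values 0.05 and 0.5
```

Model selection over the number of components, as done with BIC in the paper, would repeat this fit for k = 1, 2, 3, ... and compare penalized likelihoods.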

  20. A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL

    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang

    2012-01-01

    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of this model directly influences the performance of the video coder. In this paper, a mixture correlation noise model in the Discrete Cosine Transform (DCT) domain is established for WZ video coding. Different correlation noise estimation methods are used for the direct-current and alternating-current coefficients. A parameter estimation method based on the expectation-maximization algorithm is used to estimate the center of the Laplace distribution for the direct-current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating-current coefficients. Experimental results suggest that the proposed mixture correlation noise model can accurately describe the heavy tail and sudden changes of the noise at high rates, and yields a significant improvement in coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).
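For a single DCT frequency band, fitting a Laplace(mu, b) noise model has closed-form maximum-likelihood estimates: the location is the sample median and the scale is the mean absolute deviation from it. A minimal sketch, with invented residual values standing in for WZ-frame-minus-side-information differences:

```python
import statistics

# ML estimates for a Laplace(mu, b) model of DCT-domain correlation noise:
# mu is the sample median, b the mean absolute deviation from it.
def fit_laplace(residuals):
    mu = statistics.median(residuals)
    b = sum(abs(r - mu) for r in residuals) / len(residuals)
    return mu, b

# Invented residuals between a WZ band and its side information.
residuals = [0.1, -0.3, 0.2, 0.0, -0.1, 0.4, -0.2, 0.1]
mu, b = fit_laplace(residuals)
print(mu, b)  # location 0.05, scale 0.175
```

The mixture model of the paper extends this by adding a uniform component for the alternating-current bands and estimating the mixture weights with EM.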

  1. Applicability of linearized Dusty Gas Model for multicomponent diffusion of gas mixtures in porous solids

    Directory of Open Access Journals (Sweden)

    Marković Jelena

    2007-01-01

    Full Text Available The transport of gaseous components through porous media can be described by the well-known Fick model and its modifications. It is also known that Fick’s law is not suitable for predicting the fluxes in multicomponent gas mixtures, other than binary mixtures. This model is still frequently used in chemical engineering because of its simplicity. Unfortunately, besides Fick’s model there is no generally accepted model for mass transport through porous media (membranes, catalysts, etc.). Numerous studies on transport through porous media reveal that the Dusty Gas Model (DGM) is superior in its ability to predict fluxes in multicomponent mixtures. Its wider application is limited by more complicated calculation procedures compared with Fick’s model. It should be noted that there have been efforts to simplify the DGM in order to obtain satisfactorily accurate results. In this paper the linearized DGM, as the simplest form of the DGM, is tested under conditions of zero system pressure drop, small pressure drop, and different temperatures. Published experimental data are used to test the accuracy of the linearized procedure. It is shown that this simplified procedure is sufficiently accurate compared with the standard, more complicated calculations.

  2. Comparison of activity coefficient models for atmospheric aerosols containing mixtures of electrolytes, organics, and water

    Science.gov (United States)

    Tong, Chinghang; Clegg, Simon L.; Seinfeld, John H.

    Atmospheric aerosols generally comprise a mixture of electrolytes, organic compounds, and water. Determining the gas-particle distribution of volatile compounds, including water, requires equilibrium or mass transfer calculations, at the heart of which are models for the activity coefficients of the particle-phase components. We evaluate here the performance of four recent activity coefficient models developed for electrolyte/organic/water mixtures typical of atmospheric aerosols. Two of the models, the CSB model [Clegg, S.L., Seinfeld, J.H., Brimblecombe, P., 2001. Thermodynamic modelling of aqueous aerosols containing electrolytes and dissolved organic compounds. Journal of Aerosol Science 32, 713-738] and the aerosol diameter dependent equilibrium model (ADDEM) [Topping, D.O., McFiggans, G.B., Coe, H., 2005. A curved multi-component aerosol hygroscopicity model framework: part 2—including organic compounds. Atmospheric Chemistry and Physics 5, 1223-1242] treat ion-water and organic-water interactions but do not include ion-organic interactions; these can be referred to as "decoupled" models. The other two models, reparameterized Ming and Russell model 2005 [Raatikainen, T., Laaksonen, A., 2005. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest. Atmospheric Chemistry and Physics 5, 2475-2495] and X-UNIFAC.3 [Erdakos, G.B., Change, E.I., Pandow, J.F., Seinfeld, J.H., 2006. Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water—Part 3: Organic compounds, water, and ionic constituents by consideration of short-, mid-, and long-range effects using X-UNIFAC.3. Atmospheric Environment 40, 6437-6452], include ion-organic interactions; these are referred to as "coupled" models. We address the question—Does the inclusion of a treatment of ion-organic interactions substantially improve the performance of the coupled models over

  3. A Note Comparing Component-Slope, Scheffé, and Cox Parameterizations of the Linear Mixture Experiment Model

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2006-05-01

    A mixture experiment involves combining two or more components in various proportions and collecting data on one or more responses. A linear mixture model may adequately represent the relationship between a response and mixture component proportions and be useful in screening the mixture components. The Scheffé and Cox parameterizations of the linear mixture model are commonly used for analyzing mixture experiment data. With the Scheffé parameterization, the fitted coefficient for a component is the predicted response at that pure component (i.e., single-component mixture). With the Cox parameterization, the fitted coefficient for a mixture component is the predicted difference in response at that pure component and at a pre-specified reference composition. This paper presents a new component-slope parameterization, in which the fitted coefficient for a mixture component is the predicted slope of the linear response surface along the direction determined by that pure component and at a pre-specified reference composition. The component-slope, Scheffé, and Cox parameterizations of the linear mixture model are compared and their advantages and disadvantages are discussed.
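The relationship between the Scheffé and Cox parameterizations can be illustrated numerically. In this sketch the three-component blend data and coefficients are invented, the Scheffé model is fitted by ordinary least squares on the proportions (no intercept), and each Cox coefficient is computed as the difference between the pure-component prediction and the prediction at a reference composition (here the centroid):

```python
# Scheffé linear mixture model for q = 3 components: y = b1*x1 + b2*x2 + b3*x3
# with x1 + x2 + x3 = 1. With noise-free invented data the Scheffé
# coefficients are recovered exactly; the Cox coefficient of component i
# relative to a reference blend r is b_i minus the predicted response at r.

b_true = [2.0, 5.0, 3.0]          # invented pure-component responses
blends = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5), (1/3, 1/3, 1/3)]
y = [sum(b * x for b, x in zip(b_true, xs)) for xs in blends]

def solve3(A, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            M[k] = [a - f * b for a, b in zip(M[k], M[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

# Least squares via the normal equations (X'X) b = X'y.
XtX = [[sum(xs[i] * xs[j] for xs in blends) for j in range(3)] for i in range(3)]
Xty = [sum(xs[i] * yi for xs, yi in zip(blends, y)) for i in range(3)]
b_scheffe = solve3(XtX, Xty)

# Cox parameterization relative to the centroid reference (1/3, 1/3, 1/3).
y_ref = sum(b / 3 for b in b_scheffe)
b_cox = [b - y_ref for b in b_scheffe]
print(b_scheffe, b_cox)
```

Note that the Cox coefficients sum to zero at the centroid reference, which is what makes them interpretable as deviations from the reference blend.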

  4. Introducing DeBRa: a detailed breast model for radiological studies

    Science.gov (United States)

    Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.

    2009-07-01

    Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.

  5. Introducing heterogeneous users and vehicles into models and algorithms for the dial-a-ride problem.

    Science.gov (United States)

    Parragh, Sophie N

    2011-08-01

    Dial-a-ride problems deal with the transportation of people between pickup and delivery locations. Given the fact that people are subject to transportation, constraints related to quality of service are usually present, such as time windows and maximum user ride time limits. In many real world applications, different types of users exist. In the field of patient and disabled people transportation, up to four different transportation modes can be distinguished. In this article we consider staff seats, patient seats, stretchers and wheelchair places. Furthermore, most companies involved in the transportation of the disabled or ill dispose of different types of vehicles. We introduce both aspects into state-of-the-art formulations and branch-and-cut algorithms for the standard dial-a-ride problem. Also a recent metaheuristic method is adapted to this new problem. In addition, a further service quality related issue is analyzed: vehicle waiting time with passengers aboard. Instances with up to 40 requests are solved to optimality. High quality solutions are obtained with the heuristic method.

  6. Introducing WISDEM: An Integrated System Modeling for Wind Turbines and Plant (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Dykes, K.; Graf, P.; Scott, G.; Ning, A.; King, R.; Guo, Y.; Parsons, T.; Damiani, R.; Felker, F.; Veers, P.

    2015-01-01

    The National Wind Technology Center wind energy systems engineering initiative has developed an analysis platform to leverage its research capabilities toward integrating wind energy engineering and cost models across wind plants. This Wind-Plant Integrated System Design & Engineering Model (WISDEM) platform captures the important interactions between various subsystems to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. This work illustrates a few case studies with WISDEM that focus on the design and analysis of wind turbines and plants at different system levels.

  7. Introducing the Tripartite Digitization Model for Engaging with the Intangible Cultural Heritage of the City

    DEFF Research Database (Denmark)

    Rehm, Matthias; Rodil, Kasper

    2017-01-01

    In this paper we investigate the notion of intangible cultural heritage as a driver for smart city learning applications. To this end, we shortly explore the notion of intangible heritage before presenting the tripartite digitization model, which was originally developed for indigenous cultural heritage but can equally be applied to the smart city context. We then discuss parts of the model making use of a specific case study aiming at re-creating places in the city.

  8. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;

    2013-01-01

    , antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...

  9. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    Science.gov (United States)

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  10. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)

    1994-12-31

    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.
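The core fitting step, EM on a two-component finite mixture, can be illustrated with a deliberately simplified model in which whole sequences are drawn from one of two base compositions. This is a stand-in for the paper's motif model (the real algorithm, MEME, models motif positions within sequences); all sequences and parameters below are invented:

```python
import math
import random

random.seed(3)

BASES = "ACGT"

def sample_seq(comp, n=60):
    return "".join(random.choices(BASES, weights=comp, k=n))

# Invented dataset: half the sequences GC-rich, half AT-rich.
seqs = ([sample_seq([0.1, 0.4, 0.4, 0.1]) for _ in range(20)] +
        [sample_seq([0.4, 0.1, 0.1, 0.4]) for _ in range(20)])
counts = [[s.count(b) for b in BASES] for s in seqs]

# EM for a two-component mixture of base-composition models.
theta = [[0.3, 0.2, 0.2, 0.3], [0.2, 0.3, 0.3, 0.2]]  # invented starting point
w = [0.5, 0.5]
for _ in range(50):
    # E-step: responsibilities from per-sequence log-likelihoods.
    resp = []
    for c in counts:
        logs = [math.log(w[k]) + sum(n * math.log(theta[k][j])
                                     for j, n in enumerate(c))
                for k in range(2)]
        m = max(logs)
        dens = [math.exp(l - m) for l in logs]
        t = sum(dens)
        resp.append([d / t for d in dens])
    # M-step: weighted base frequencies per component.
    for k in range(2):
        tot = [sum(r[k] * c[j] for r, c in zip(resp, counts)) for j in range(4)]
        s = sum(tot)
        theta[k] = [t_j / s for t_j in tot]
        w[k] = sum(r[k] for r in resp) / len(seqs)

gc = sorted(th[1] + th[2] for th in theta)
print(gc)  # one AT-rich and one GC-rich component
```

MEME replaces the whole-sequence composition model with a motif/background model over fixed-width windows, but the E-step/M-step alternation is the same.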

  11. Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples

    CERN Document Server

    Melchior, Peter

    2016-01-01

    We extend the common mixtures-of-Gaussians density estimation approach to account for known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing expectation-maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis

  12. Modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C

    Energy Technology Data Exchange (ETDEWEB)

    Necula, H.; Badea, A. [Universite Politecnica de Bucarest (Romania). Faculte d' Energetique; Lallemand, M. [INSA, Villeurbanne (France). Centre de Thermique de Lyon; Marvillet, C. [CEA-Grenoble (France)

    2001-11-01

    This study concerns the steady state modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C. In this local type model, the control volumes are a function of the geometric configuration of the evaporator in which baffles are fitted. The validation of the model has been made by comparison between theoretical and experimental results obtained from an experimental investigation with a refrigerating machine. For test conditions, the flow pattern has been identified from a flow pattern map as being stratified. Theoretical results show the effect of different parameters such as the saturation pressure, the inlet quality, etc. on the local variables (temperature, slip ratio). The effect of leakage on the mixture composition has also been investigated. (author)

  13. A lattice traffic model with consideration of preceding mixture traffic information

    Institute of Scientific and Technical Information of China (English)

    Li Zhi-Peng; Liu Fu-Qiang; Sun Jian

    2011-01-01

    In this paper, the lattice model is presented, incorporating not only site information about preceding cars but also the relative currents in front. We derive the stability condition of the extended model by considering a small perturbation around the homogeneous flow solution and find that the improvement in the stability of traffic flow is obtained by taking into account preceding mixture traffic information. Direct simulations also confirm that the traffic jam can be suppressed efficiently by considering the relative currents ahead, just like incorporating site information in front. Moreover, from the nonlinear analysis of the extended models, the preceding-mixture-traffic-information dependence of the propagating kink solutions for traffic jams is obtained by deriving the modified KdV equation near the critical point using the reductive perturbation method.

  14. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  15. Comparison of Criteria for Choosing the Number of Classes in Bayesian Finite Mixture Models.

    Science.gov (United States)

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Lesaffre, Emmanuel

    2017-01-01

    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that in overfitted mixture models, the overfitted latent classes will asymptotically become empty under specific conditions for the prior of the class proportions. This result may be used to construct a criterion for finding the true number of latent classes, based on the removal of latent classes that have negligible proportions. Unlike some alternative criteria, this criterion can easily be implemented in complex statistical models such as latent class mixed-effects models and multivariate mixture models using standard Bayesian software. We performed an extensive simulation study to develop practical guidelines to determine the appropriate number of latent classes based on the posterior distribution of the class proportions, and to compare this criterion with alternative criteria. The performance of the proposed criterion is illustrated using a data set of repeatedly measured hemoglobin values of blood donors.
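    The class-counting criterion described above can be sketched numerically. The following is a minimal stand-alone illustration, not the authors' implementation: it deliberately overfits a one-dimensional two-cluster data set with four classes using plain EM (a rough stand-in for the Bayesian posterior summaries discussed in the abstract) and then counts the classes whose estimated proportion is non-negligible. All parameter values and thresholds are invented for illustration.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_mixture(data, k, iters=200, seed=1):
    """Plain EM for a one-dimensional Gaussian mixture (illustrative only)."""
    rng = random.Random(seed)
    mus = rng.sample(data, k)
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities of each class for each observation
        resp = []
        for x in data:
            probs = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(probs) or 1e-300
            resp.append([p / tot for p in probs])
        # M-step: update proportions, means, and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            if nj < 1e-9:
                continue  # leave an emptied class untouched
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 0.05)
    return weights, mus, sigmas

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(150)] + [rng.gauss(6, 1) for _ in range(150)]
weights, mus, sigmas = em_mixture(data, k=4)   # deliberately overfitted
# Criterion sketch: retain only classes with non-negligible estimated proportion.
n_effective = sum(w > 0.05 for w in weights)
```

In the Bayesian setting of the abstract the emptying of surplus classes follows from a sparse prior on the class proportions; here the proportion threshold merely mimics that post-processing step.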

  16. A Bayesian threshold-normal mixture model for analysis of a continuous mastitis-related trait.

    Science.gov (United States)

    Ødegård, J; Madsen, P; Gianola, D; Klemetsdal, G; Jensen, J; Heringstad, B; Korsgaard, I R

    2005-07-01

    Mastitis is associated with elevated somatic cell count in milk, inducing a positive correlation between milk somatic cell score (SCS) and the absence or presence of the disease. In most countries, selection against mastitis has focused on selecting parents with genetic evaluations that have low SCS. Univariate or multivariate mixed linear models have been used for statistical description of SCS. However, an observation of SCS can be regarded as drawn from a 2- (or more) component mixture defined by the (usually) unknown health status of a cow at the test-day on which SCS is recorded. A hierarchical 2-component mixture model was developed, assuming that the health status affecting the recorded test-day SCS is completely specified by an underlying liability variable. Based on the observed SCS, inferences can be drawn about disease status and parameters of both SCS and liability to mastitis. The prior probability of putative mastitis was allowed to vary between subgroups (e.g., herds, families), by specifying fixed and random effects affecting both SCS and liability. Using simulation, it was found that a Bayesian model fitted to the data yielded parameter estimates close to their true values. The model provides selection criteria that are more appealing than selection for lower SCS. The proposed model can be extended to handle a wide range of problems related to genetic analyses of mixture traits.
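    The generative side of the threshold-liability idea can be sketched as follows: a standard-normal liability determines the latent health status, which in turn selects the SCS mixture component. All numeric values (threshold, component means, standard deviation) are hypothetical, chosen only to illustrate the two-component structure.

```python
import random
from statistics import NormalDist

random.seed(42)
THRESHOLD = 1.0                               # hypothetical liability threshold for mastitis
MU_HEALTHY, MU_DISEASED, SD = 3.0, 5.0, 0.8   # hypothetical SCS component parameters

records = []
for _ in range(10000):
    liability = random.gauss(0.0, 1.0)        # underlying liability variable
    diseased = liability > THRESHOLD          # latent health status on the test-day
    scs = random.gauss(MU_DISEASED if diseased else MU_HEALTHY, SD)
    records.append((diseased, scs))

prevalence = sum(d for d, _ in records) / len(records)
expected = 1.0 - NormalDist().cdf(THRESHOLD)  # prior probability of putative mastitis
```

Inference in the paper runs the other way: only SCS is observed, and the posterior over the latent status and the liability parameters is sampled.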

  17. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers that consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  18. Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data

    Directory of Open Access Journals (Sweden)

    Rachel Carroll

    2017-05-01

    Full Text Available Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.

  19. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of state, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed a simplified gradient theory (SGT) model for computing surface tensions. With this model, it is not required to solve the time-consuming density profile equations of the gradient theory model. The SRK EOS was applied to calculate the properties of the homogeneous fluid. First, the SGT model was used to predict...

  20. Analysis of Two-sample Censored Data Using a Semiparametric Mixture Model

    Institute of Scientific and Technical Information of China (English)

    Gang Li; Chien-tai Lin

    2009-01-01

    In this article we study a semiparametric mixture model for the two-sample problem with right censored data. The model implies that the densities for the continuous outcomes are related by a parametric tilt but otherwise unspecified. It provides a useful alternative to the Cox (1972) proportional hazards model for the comparison of treatments based on right censored survival data. We propose an iterative algorithm for the semiparametric maximum likelihood estimates of the parametric and nonparametric components of the model. The performance of the proposed method is studied using simulation. We illustrate our method in an application to melanoma.
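    The "parametric tilt" relating the two densities can be made concrete with an exponential tilt, g(x) = exp(alpha + beta*x) * f(x). The sketch below is illustrative only (the paper treats the baseline density nonparametrically and handles right censoring): with a standard-normal baseline, the tilt with alpha = -beta^2/2 yields another proper density, shifted by beta.

```python
import math

def f(x):
    """Baseline density: standard normal (stands in for the unspecified density)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

beta = 1.0
alpha = -beta ** 2 / 2.0   # normalising constant when the baseline is N(0, 1)

def g(x):
    """Tilted density exp(alpha + beta*x) * f(x); here this is exactly N(1, 1)."""
    return math.exp(alpha + beta * x) * f(x)

# Riemann-sum check that the tilt produced a proper density with shifted mean.
step = 0.01
xs = [-10.0 + i * step for i in range(2001)]
mass = sum(g(x) * step for x in xs)
mean = sum(x * g(x) * step for x in xs)
```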

  1. Mathematical modelling in engineering: A proposal to introduce linear algebra concepts

    Directory of Open Access Journals (Sweden)

    Andrea Dorila Cárcamo

    2016-03-01

    Full Text Available The modern dynamic world requires that basic science courses for engineering, including linear algebra, emphasize the development of mathematical abilities primarily associated with modelling and interpreting, which are not limited to calculus abilities. Considering this, an instructional design was elaborated based on mathematical modelling and emerging heuristic models for the construction of specific linear algebra concepts: span and spanning set. This was applied to first year engineering students. Results suggest that this type of instructional design contributes to the construction of these mathematical concepts and can also favour first year engineering students' understanding of key linear algebra concepts and potentiate the development of higher order skills.
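    The target concepts can themselves be expressed computationally: deciding whether a vector lies in the span of a given set reduces to checking an augmented linear system for consistency. The helper below is a stdlib-only sketch (not from the article) using Gaussian elimination.

```python
def in_span(vectors, target, tol=1e-9):
    """Return True if `target` is a linear combination of `vectors`.

    Builds the augmented matrix whose columns are the spanning vectors plus
    the target, row-reduces it, and reports False iff an inconsistent row
    (zero coefficients, nonzero right-hand side) remains.
    """
    n, k = len(target), len(vectors)
    A = [[vectors[j][i] for j in range(k)] + [target[i]] for i in range(n)]
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, n) if abs(A[r][col]) > tol), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        for r in range(n):
            if r != row and abs(A[r][col]) > tol:
                factor = A[r][col] / A[row][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[row])]
        row += 1
    return all(any(abs(a) > tol for a in r[:-1]) or abs(r[-1]) <= tol for r in A)
```

For example, (2, 3, 0) lies in the span of {(1, 0, 0), (0, 1, 0)} while (0, 0, 1) does not.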

  2. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  3. Exploring Use of New Media in Environmental Education Contexts: Introducing Visitors' Technology Use in Zoos Model

    Science.gov (United States)

    Yocco, Victor; Danter, Elizabeth H.; Heimlich, Joseph E.; Dunckel, Betty A.; Myers, Chris

    2011-01-01

    Modern zoological gardens have invested substantial resources in technology to deliver environmental education concepts to visitors. Investment in these media reflects a currently unsubstantiated belief that visitors will both use and learn from these media alongside more traditional and less costly displays. This paper proposes a model that…

  4. Exploring Use of New Media in Environmental Education Contexts: Introducing Visitors' Technology Use in Zoos Model

    Science.gov (United States)

    Yocco, Victor; Danter, Elizabeth H.; Heimlich, Joseph E.; Dunckel, Betty A.; Myers, Chris

    2011-01-01

    Modern zoological gardens have invested substantial resources in technology to deliver environmental education concepts to visitors. Investment in these media reflects a currently unsubstantiated belief that visitors will both use and learn from these media alongside more traditional and less costly displays. This paper proposes a model that…

  5. Using a Genetic mixture model to study Phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    Science.gov (United States)

    Bromaghin, J.F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length-adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large-mesh gill-net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size-selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock-specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error.The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length-adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. 
The new mixture model extends the utility of genetic markers to new applications and can be easily adapted

  6. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
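    The single-solute Szyszkowski-Langmuir equation, and one way to weight it across a mixture, can be sketched as follows. The coefficients are invented for illustration, and the weighting shown is a simplified reading of the "weighted S-L" idea rather than Henning et al.'s exact formulation.

```python
import math

SIGMA_WATER = 72.0  # mN/m, surface tension of pure water near room temperature

def sl_single(C, a, b):
    """Szyszkowski-Langmuir isotherm: sigma = sigma_w - a * ln(1 + b*C)."""
    return SIGMA_WATER - a * math.log(1.0 + b * C)

def sl_weighted(concs, a_params, b_params):
    """Weight each solute's depression term by its share of the total
    organic concentration (illustrative weighting only)."""
    total = sum(concs)
    if total == 0.0:
        return SIGMA_WATER
    return SIGMA_WATER - sum((c / total) * a * math.log(1.0 + b * total)
                             for c, a, b in zip(concs, a_params, b_params))
```

With a single solute the weighted form reduces to the plain isotherm, which is a useful sanity check on any mixture rule of this type.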

  7. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Directory of Open Access Journals (Sweden)

    A. N. Schwier

    2013-01-01

    Full Text Available Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2–6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well-described by a weighted Szyszkowski–Langmuir (S–L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling fits and goodness of fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.

  8. A dirichlet process covarion mixture model and its assessments using posterior predictive discrepancy tests.

    Science.gov (United States)

    Zhou, Yan; Brinkmann, Henner; Rodrigue, Nicolas; Lartillot, Nicolas; Philippe, Hervé

    2010-02-01

    Heterotachy, the variation of substitution rate at a site across time, is a prevalent phenomenon in nucleotide and amino acid alignments, which may mislead probabilistic-based phylogenetic inferences. The covarion model is a special case of heterotachy, in which sites change between the "ON" state (allowing substitutions according to any particular model of sequence evolution) and the "OFF" state (prohibiting substitutions). In current implementations, the switch rates between ON and OFF states are homogeneous across sites, a hypothesis that has never been tested. In this study, we developed an infinite mixture model, called the covarion mixture (CM) model, which allows the covarion parameters to vary across sites, controlled by a Dirichlet process prior. Moreover, we combine the CM model with other approaches. We use a second independent Dirichlet process that models the heterogeneities of amino acid equilibrium frequencies across sites, known as the CAT model, and general rate-across-site heterogeneity is modeled by a gamma distribution. The application of the CM model to several large alignments demonstrates that the covarion parameters are significantly heterogeneous across sites. We describe posterior predictive discrepancy tests and use these to demonstrate the importance of these different elements of the models.

  9. Cure fraction estimation from the mixture cure models for grouped survival data.

    Science.gov (United States)

    Yu, Binbing; Tiwari, Ram C; Cronin, Kathleen A; Feuer, Eric J

    2004-06-15

    Mixture cure models are usually used to model failure time data with long-term survivors. These models have been applied to grouped survival data. The models provide simultaneous estimates of the proportion of the patients cured from disease and the distribution of the survival times for uncured patients (latency distribution). However, a crucial issue with mixture cure models is the identifiability of the cure fraction and parameters of the kernel distribution. Cure fraction estimates can be quite sensitive to the choice of latency distributions and length of follow-up time. In this paper, sensitivity of parameter estimates under the semi-parametric model and several of the most commonly used parametric models, namely lognormal, loglogistic, Weibull and generalized Gamma distributions, is explored. The cure fraction estimates from the model with the generalized Gamma distribution are found to be quite robust. A simulation study was carried out to examine the effect of follow-up time and latency distribution specification on cure fraction estimation. The cure models with generalized Gamma latency distribution are applied to the population-based survival data for several cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program. Several cautions on the general use of the cure model are advised.
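    The mixture cure survival function itself is simple: S(t) = pi + (1 - pi) * S_u(t), where pi is the cure fraction and S_u the latency survival. A sketch with a Weibull latency, one of the parametric choices compared in the abstract (parameter values hypothetical):

```python
import math

def mixture_cure_survival(t, cure_frac, shape, scale):
    """S(t) = pi + (1 - pi) * S_u(t), with Weibull latency
    S_u(t) = exp(-(t/scale)**shape)."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return cure_frac + (1.0 - cure_frac) * s_uncured

# The long-term plateau of the survival curve identifies the cure fraction,
# which is why short follow-up makes the fraction hard to estimate.
plateau = mixture_cure_survival(1000.0, cure_frac=0.3, shape=1.5, scale=5.0)
```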

  10. Symmetrization of excess Gibbs free energy: A simple model for binary liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos-Suarez, Aly J., E-mail: acastell@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of); Garcia-Sucre, Maximo, E-mail: mgs@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of)

    2011-03-15

    A symmetric expression for the excess Gibbs free energy of liquid binary mixtures is obtained using an appropriate definition for the effective contact fraction. We have identified a mechanism of local segregation as the main cause of the contact fraction variation with the concentration. Starting from this mechanism we develop a simple model for describing binary liquid mixtures. In this model two parameters appear: one adjustable, and the other parameter depending on the first one. Following this procedure we reproduce the experimental data of (liquid + vapor) equilibrium with a degree of accuracy comparable to well-known more elaborated models. The way in which we take into account the effective contacts between molecules allows identifying the compound which may be considered to induce one of the following processes: segregation, anti-segregation and dispersion of the components in the liquid mixture. Finally, the simplicity of the model allows one to obtain only one resulting interaction energy parameter, which makes easier the physical interpretation of the results.
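    For context, the simplest symmetric excess-Gibbs-energy expression with one adjustable parameter is the one-parameter Margules form, G^E/RT = A*x1*x2. It is shown here only as a generic illustration of a symmetric G^E model, not as the model developed in the paper:

```python
import math

def activity_coeffs(x1, A):
    """One-parameter Margules model: G^E/RT = A*x1*x2 gives
    ln(gamma1) = A*x2**2 and ln(gamma2) = A*x1**2."""
    x2 = 1.0 - x1
    return math.exp(A * x2 ** 2), math.exp(A * x1 ** 2)
```

A quick thermodynamic consistency check: x1*ln(gamma1) + x2*ln(gamma2) must recover G^E/RT at every composition, and the infinite-dilution coefficient equals exp(A).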

  11. Reconstruction of coronary artery centrelines from x-ray rotational angiography using a probabilistic mixture model

    Science.gov (United States)

    Ćimen, Serkan; Gooya, Ali; Frangi, Alejandro F.

    2016-03-01

    Three-dimensional reconstructions of coronary arterial trees from X-ray rotational angiography (RA) images have the potential to compensate the limitations of RA due to projective imaging. Most of the existing model based reconstruction algorithms are either based on forward-projection of a 3D deformable model onto X-ray angiography images or back-projection of 2D information extracted from X-ray angiography images to 3D space for further processing. All of these methods have their shortcomings such as dependency on accurate 2D centerline segmentations. In this paper, the reconstruction is approached from a novel perspective, and is formulated as a probabilistic reconstruction method based on mixture model (MM) representation of point sets describing the coronary arteries. Specifically, it is assumed that the coronary arteries could be represented by a set of 3D points, whose spatial locations denote the Gaussian components in the MM. Additionally, an extra uniform distribution is incorporated in the mixture model to accommodate outliers (noise, over-segmentation etc.) in the 2D centerline segmentations. Treating the given 2D centreline segmentations as data points generated from MM, the 3D means, isotropic variance, and mixture weights of the Gaussian components are estimated by maximizing a likelihood function. Initial results from a phantom study show that the proposed method is able to handle outliers in 2D centreline segmentations, which indicates the potential of our formulation. Preliminary reconstruction results in the clinical data are also presented.
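    The core of the formulation (Gaussian components for candidate centreline points plus a uniform component absorbing outliers) can be sketched as an E-step-style responsibility computation. Everything below (2-D points, isotropic variance, parameter values, the domain size of the uniform component) is a simplified illustration of the idea, not the authors' algorithm:

```python
import math

def responsibilities(x, means, var, weights, outlier_weight, domain_size):
    """Posterior probability that point x was generated by each Gaussian
    component, or by the uniform outlier component of the mixture."""
    def gauss(pt, m):
        d2 = sum((a - b) ** 2 for a, b in zip(pt, m))
        return math.exp(-d2 / (2.0 * var)) / (2.0 * math.pi * var)
    terms = [w * gauss(x, m) for w, m in zip(weights, means)]
    terms.append(outlier_weight / domain_size)  # uniform density over the image domain
    total = sum(terms)
    return [t / total for t in terms]

means = [(0.0, 0.0), (5.0, 5.0)]
r_near = responsibilities((0.1, 0.0), means, 1.0, [0.45, 0.45], 0.1, 100.0)
r_far = responsibilities((50.0, 50.0), means, 1.0, [0.45, 0.45], 0.1, 100.0)
```

Points near a component are claimed by it, while points far from every component are attributed almost entirely to the uniform outlier term, which is how the model tolerates over-segmentation noise in the 2D centrelines.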

  12. Improved partial least squares models for stability-indicating analysis of mebeverine and sulpiride mixtures in pharmaceutical preparation: a comparative study.

    Science.gov (United States)

    Darwish, Hany W; Naguib, Ibrahim A

    2013-05-01

    Performance of partial least squares regression (PLSR) is enhanced in the presented work by three multivariate models, including weighted regression PLSR (Weighted-PLSR), genetic algorithm PLSR (GA-PLSR), and wavelet transform PLSR (WT-PLSR). The proposed models were applied for the stability-indicating analysis of mixtures of mebeverine hydrochloride (meb) and sulpiride (sul) in the presence of their reported impurities and degradation products. The work introduced in this paper aims to compare these different chemometric methods, showing the underlying algorithm for each and making a comparison of analysis results. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. A test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. Leave one out (LOO) and bootstrap were applied to predict the number of PLS components. The proposed GA-PLSR method was successfully applied for the analysis of raw material (test set 101.03% ± 1.068, 101.47% ± 2.721 for meb and sul, respectively) and pharmaceutical tablets containing meb and sul mixtures (10.10% ± 0.566, 98.16% ± 1.081 for meb and sul).

  13. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
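    The binomial mixture (N-mixture) likelihood that such models build on marginalises the latent site abundance N out of repeated counts. Below is a stdlib-only sketch of the basic, non-extended likelihood for a single site, without the temporary-emigration terms the paper adds; the truncation point `n_max` and all parameter values are arbitrary.

```python
import math

def site_likelihood(counts, lam, p, n_max=200):
    """Basic N-mixture likelihood for one site:
    sum over latent abundance N of Poisson(N; lam) * prod_j Binomial(count_j; N, p)."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        # log-space Poisson pmf avoids overflow from large factorials
        pois = math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))
        binoms = 1.0
        for y in counts:
            binoms *= math.comb(n, y) * p ** y * (1.0 - p) ** (n - y)
        total += pois * binoms
    return total

counts = [3, 4, 2]                       # repeated counts at one site
good = site_likelihood(counts, lam=6.0, p=0.5)
bad = site_likelihood(counts, lam=6.0, p=0.05)
```

Maximising this quantity (summed in log over sites) over lam and p is what separates abundance from detection; the paper's extension further splits p into availability and conditional detection.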

  14. Modelling the spectral energy distribution of galaxies: introducing the artificial neural network

    CERN Document Server

    Silva, L; Granato, G L; Almeida, C; Baugh, C M; Frenk, C S; Lacey, C G; Paoletti, L; Petrella, A; Selvestrel, D

    2010-01-01

    The spectral energy distribution of galaxies is a complex function of the star formation history and geometrical arrangement of stars and gas in galaxies. The computation of the radiative transfer of stellar radiation through the dust distribution is time-consuming. This aspect becomes unacceptable in particular when dealing with the predictions by semi-analytical galaxy formation models populating cosmological volumes, to be then compared with multi-wavelength surveys. Mainly for this aim, we have implemented an artificial neural network algorithm into the spectro-photometric and radiative transfer code GRASIL in order to compute the spectral energy distribution of galaxies in a short computing time. This avoids the adoption of empirical templates that may have nothing to do with the mock galaxies output by models. The ANN has been implemented to compute the dust emission spectrum (the bottleneck of the computation), and separately for the star-forming molecular clouds and the diffuse dust (due to t...

  15. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-04-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates, for the first time, the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and also constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mole N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 molN molP-1 (as occurred from mid-1990s), APA allows a 3 to 32% increase of annual gross primary production (GPP). In result of the higher GPP, the annual sedimentation increases as well as the bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. The effect of non-constitutive mixotrophy depends on light and affects the ecosystem functioning in terms of annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that different forms of mixotrophy have different impacts on system dynamics and it is thus important to

  16. Temperature response functions introduce high uncertainty in modelled carbon stocks in cold temperature regimes

    Directory of Open Access Journals (Sweden)

    H. Portner

    2009-08-01

    Full Text Available Models of carbon cycling in terrestrial ecosystems contain formulations for the dependence of respiration on temperature, but the sensitivity of predicted carbon pools and fluxes to these formulations and their parameterization is not well understood. We therefore performed an uncertainty analysis of soil organic matter decomposition with respect to its temperature dependency using the ecosystem model LPJ-GUESS.

    We used five temperature response functions (Exponential, Arrhenius, Lloyd-Taylor, Gaussian, Van't Hoff). We determined the parameter uncertainty ranges of the functions by nonlinear regression analysis based on eight experimental datasets from northern hemisphere ecosystems. We sampled over the uncertainty bounds of the parameters and ran simulations for each pair of temperature response function and calibration site. The uncertainty in both long-term and short-term soil carbon dynamics was analyzed over an elevation gradient in southern Switzerland.

    The Lloyd-Taylor function turned out to be adequate for modelling the temperature dependency of soil organic matter decomposition, whereas the other functions either resulted in poor fits (Exponential, Arrhenius) or were not applicable to all datasets (Gaussian, Van't Hoff). There were two main sources of uncertainty for model simulations: (1) the uncertainty in the parameter estimates of the response functions, which increased with increasing temperature, and (2) the uncertainty in the simulated size of carbon pools, which increased with elevation, as slower turn-over times lead to higher carbon stocks and higher associated uncertainties. The higher uncertainty in carbon pools with slow turn-over rates has important implications for the uncertainty in the projection of the change of soil carbon stocks driven by climate change, which turned out to be more uncertain for higher elevations and hence higher latitudes, which are of key importance for the global terrestrial carbon
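Two of the response functions compared in this record can be sketched as follows. This is a minimal illustration, not the study's calibrated parameterization: the constants below are the commonly quoted Lloyd & Taylor (1994) values and an illustrative activation energy, whereas the study refits parameters to each calibration dataset.

```python
import math

def lloyd_taylor(t_kelvin, r10=1.0, e0=308.56, t0=227.13):
    """Lloyd-Taylor response: respiration relative to the rate r10 at 10 degC.

    e0 and t0 (both in K) are the commonly quoted Lloyd & Taylor constants;
    in an uncertainty analysis they would be refitted per dataset.
    """
    return r10 * math.exp(e0 * (1.0 / (283.15 - t0) - 1.0 / (t_kelvin - t0)))

def arrhenius(t_kelvin, r10=1.0, e_a=53000.0, r_gas=8.314):
    """Arrhenius response normalised to 10 degC (283.15 K); e_a in J/mol."""
    return r10 * math.exp((e_a / r_gas) * (1.0 / 283.15 - 1.0 / t_kelvin))
```

Both functions equal r10 at 10 degC by construction; they diverge away from that reference temperature, which is one source of the parameter uncertainty the study propagates into carbon-pool projections.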

  17. Introducing a rainfall compound distribution model based on weather patterns sub-sampling

    Directory of Open Access Journals (Sweden)

    F. Garavaglia

    2010-06-01

    Full Text Available This paper presents a probabilistic model for daily rainfall, using sub-sampling based on meteorological circulation. We classified eight typical but contrasting synoptic situations (weather patterns) for France and surrounding areas, using a "bottom-up" approach, i.e. from the shape of the rain field to the synoptic situations described by geopotential fields. These weather patterns (WP) provide a discriminating variable that is consistent with French climatology and allows seasonal rainfall records to be split into more homogeneous sub-samples in terms of meteorological genesis.

    First results show how the combination of seasonal and WP sub-sampling strongly influences the identification of the asymptotic behaviour of rainfall probabilistic models. Furthermore, with this level of stratification, an asymptotic exponential behaviour of each sub-sample appears to be a reasonable hypothesis. This first part is illustrated with two daily rainfall records from the SE of France.

    The distribution of the multi-exponential weather patterns (MEWP) is then defined as the composition, for a given season, of all WP sub-sample marginal distributions, weighted by the relative frequency of occurrence of each WP. This model is finally compared to Exponential and Generalized Pareto distributions, showing good features in terms of robustness and accuracy. These final statistical results are computed from a wide dataset of 478 rainfall series spread over the southern half of France. All these data cover the 1953–2005 period.
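The MEWP composition described above — exponential marginal distributions, one per weather pattern, weighted by WP occurrence frequencies — can be sketched as follows; the rates and weights in the usage example are hypothetical, not values fitted in the paper.

```python
import math

def mewp_cdf(x, rates, weights):
    """CDF of a multi-exponential weather pattern (MEWP) distribution:
    a mixture of exponential marginals, one per weather pattern,
    weighted by the relative frequency of occurrence of each WP."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("WP frequencies must sum to 1")
    return sum(w * (1.0 - math.exp(-r * x)) for r, w in zip(rates, weights))
```

For heavy daily rainfall the mixture's tail is dominated by the WP with the smallest exponential rate, which is why the stratification affects the identified asymptotic behaviour.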

  18. Introducing a model of organizational envy management among university faculty members: A mixed research approach

    Directory of Open Access Journals (Sweden)

    Maris Zarin Daneshvar

    2016-01-01

    Full Text Available The present study aimed to offer a model of organizational envy management among faculty members of the Islamic Azad Universities of East Azerbaijan Province. A mixed-method design was used, collecting qualitative and then quantitative data, with the emphasis on quantitative analysis. The population comprised all faculty members with an associate or higher degree in the 2014-2015 academic year. In the qualitative stage, 20 experts were selected to design the primary model and questionnaire; 316 faculty members were then selected to fit the model. The qualitative phase identified healthy organizational climate, spiritual leadership, effective communication, job satisfaction, and professional development of professors as the variables influencing envy management among faculty members. The quantitative findings confirmed significant relationships among these variables: in the indirect analysis of the effect of organizational climate on envy management, spiritual leadership acting through effective communication had a smaller effect on envy management than professional development and job satisfaction. It is concluded that university managers should provide the conditions for envy management in the universities and enable professors to play more effective roles, free of envy, in the scientific climate of the university, so as to achieve educational, research, and service efficiency.

  19. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2008-06-01

    Full Text Available MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen to be the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting at the regional scale.
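A sum-to-one constrained linear spectral unmixing step of the kind used here can be sketched with an augmented least-squares solve. The endmember spectra, the weighting trick, and the non-negativity clipping below are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Estimate sub-pixel land-cover fractions for one MODIS pixel.

    endmembers: (k, b) array, one pure-cover spectrum per row (b bands).
    A heavily weighted row of ones softly enforces fractions summing to 1;
    negative fractions are clipped as a crude abundance constraint.
    """
    A = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    y = np.append(np.asarray(pixel, float), weight)
    f, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.clip(f, 0.0, None)
```

With a pixel mixed from two known spectra, the recovered fractions match the mixing proportions; real MODIS pixels add noise and endmember variability on top of this idealized picture.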

  20. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power...... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome...... of variance explained by genotyped SNPs, CD and SZ have a broadly dissimilar genetic architecture, due to differing mean effect size and proportion of non-null loci....
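The proposed effect-size prior — a scale mixture of two zero-mean normals, a "null" component with small variance and a "non-null" component with larger variance — has a density that can be sketched as follows; the default parameter values are illustrative, not estimates from the CD or SZ data.

```python
import math

def effect_size_density(beta, pi1=0.01, sigma0=1e-3, sigma1=0.1):
    """Scale mixture of two normals for SNP effect sizes:
    (1 - pi1) * N(0, sigma0^2)   null / near-null component
      + pi1   * N(0, sigma1^2)   non-null component."""
    def npdf(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return (1.0 - pi1) * npdf(beta, sigma0) + pi1 * npdf(beta, sigma1)
```

The mixture weight pi1 plays the role of the proportion of non-null loci, and sigma1**2 the average variance explained per non-null effect, which is how the model connects to polygenicity and discovery power.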

  1. Automated sleep spindle detection using IIR filters and a Gaussian Mixture Model.

    Science.gov (United States)

    Patti, Chanakya Reddy; Penzel, Thomas; Cvetkovic, Dean

    2015-08-01

    Sleep spindle detection using modern signal processing techniques such as the Short-Time Fourier Transform and wavelet analysis is a common research approach. These methods are computationally intensive, especially when analysing data from overnight sleep recordings. The authors of this paper propose an alternative using pre-designed IIR filters and a multivariate Gaussian Mixture Model. Features extracted with IIR filters are clustered using a Gaussian Mixture Model without the use of any subject-independent thresholds. The algorithm was tested on a database consisting of overnight sleep PSG from 5 subjects and a public online spindle database consisting of six 30-minute sleep excerpts. An overall sensitivity of 57% and a specificity of 98.24% were achieved in the overnight database group, and a sensitivity of 65.19% at a 16.9% false-positive proportion for the 6 sleep excerpts.
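The threshold-free clustering step described here — fitting a Gaussian mixture to band-limited features so that a "spindle" class emerges from the data rather than from a fixed cutoff — can be sketched in one dimension with a minimal EM routine. This is an assumption-laden sketch, not the authors' multivariate implementation.

```python
import numpy as np

def gmm2_em(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture, e.g. for
    clustering spindle band-power features into low/high classes.
    Returns (weights, means, variances)."""
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75]).astype(float)   # rough two-class init
    var = np.full(2, x.var() + 1e-9)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-point responsibilities of each component
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var
```

In the paper's setting the features are multivariate (one per IIR filter band), but the same EM update structure applies.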

  2. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen to be the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with an unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, and possibly for rice growth monitoring and yield forecasting at the regional scale.

  3. A cross-association model for CO2-methanol and CO2-ethanol mixtures

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A cross-association model was proposed for CO2-alcohol mixtures based on the statistical associating fluid theory (SAFT). CO2 was treated as a pseudo-associating molecule, and both the self-association between alcohol hydroxyls and the cross-association between CO2 and alcohol hydroxyls were considered. The equilibrium properties from low to high temperature and pressure were investigated using this model. The calculated p-x and p-p diagrams of CO2-methanol and CO2-ethanol mixtures agreed with the experimental data. The results showed that when the cross-association was taken into account in the Helmholtz free energy, the calculated equilibrium properties were significantly improved, and the erroneous prediction of three-phase equilibria and triple points in the low-temperature region could be avoided.

  4. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because resources at this stage are limited, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric prior distribution, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  5. A generalized longitudinal mixture IRT model for measuring differential growth in learning environments.

    Science.gov (United States)

    Kadengye, Damazo T; Ceulemans, Eva; Van den Noortgate, Wim

    2014-09-01

    This article describes a generalized longitudinal mixture item response theory (IRT) model that allows for detecting latent group differences in item response data obtained from electronic learning (e-learning) environments or other learning environments that result in large numbers of items. The described model can be viewed as a combination of a longitudinal Rasch model, a mixture Rasch model, and a random-item IRT model, and it includes some features of the explanatory IRT modeling framework. The model assumes the possible presence of latent classes in item response patterns, due to initial person-level differences before learning takes place, to latent class-specific learning trajectories, or to a combination of both. Moreover, it allows for differential item functioning over the classes. A Bayesian model estimation procedure is described, and the results of a simulation study are presented that indicate that the parameters are recovered well, particularly for conditions with large item sample sizes. The model is also illustrated with an empirical sample data set from a Web-based e-learning environment.
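The building block shared by the component models named above (longitudinal Rasch, mixture Rasch, random-item IRT) is the Rasch item response function, sketched here; in the generalized model, ability becomes class- and time-specific and item difficulties are treated as random, which this one-liner does not attempt to show.

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for a person with
    ability theta on an item with difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

A learning trajectory in this framework is simply theta increasing over measurement occasions, with latent classes allowed to differ in their starting points and growth rates.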

  6. Phase Equilibria of Water/CO2 and Water/n-Alkane Mixtures from Polarizable Models.

    Science.gov (United States)

    Jiang, Hao; Economou, Ioannis G; Panagiotopoulos, Athanassios Z

    2017-02-16

    Phase equilibria of water/CO2 and water/n-alkane mixtures over a range of temperatures and pressures were obtained from Monte Carlo simulations in the Gibbs ensemble. Three sets of Drude-type polarizable models for water, namely the BK3, GCP, and HBP models, were combined with a polarizable Gaussian charge CO2 (PGC) model to represent the water/CO2 mixture. The HBP water model describes hydrogen bonds between water and CO2 explicitly. All models underestimate CO2 solubility in water if standard combining rules are used for the dispersion interactions between water and CO2. With the dispersion parameters optimized to phase compositions, the BK3 and GCP models were able to represent the CO2 solubility in water; however, the water composition in the CO2-rich phase is systematically underestimated. Accurate representation of compositions for both water- and CO2-rich phases cannot be achieved even after optimizing the cross interaction parameters. By contrast, accurate compositions for both water- and CO2-rich phases were obtained with hydrogen bonding parameters determined from the second virial coefficient for water/CO2. Phase equilibria of water/n-alkane mixtures were also studied using the HBP water and an exponential-6 united-atom n-alkane model. The dispersion interactions between water and n-alkanes were optimized to Henry's constants of methane and ethane in water. The HBP water and united-atom n-alkane models underestimate the water content in the n-alkane-rich phase; this underestimation is likely due to the neglect of electrostatic and induction energies in the united-atom model.
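The "standard combining rules" referred to above are typically the Lorentz-Berthelot rules, optionally corrected by an adjustable binary interaction parameter; a sketch follows. The parameter values in the test are placeholders, not the paper's fitted cross-interaction parameters.

```python
import math

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j, k_ij=0.0):
    """Cross size and dispersion-energy parameters between unlike molecules.

    k_ij = 0 recovers the standard Lorentz-Berthelot rules; a nonzero k_ij
    is the kind of adjustable correction fitted to phase compositions when,
    as here, the standard rules underestimate CO2 solubility in water.
    """
    sigma_ij = 0.5 * (sigma_i + sigma_j)                 # arithmetic mean
    eps_ij = (1.0 - k_ij) * math.sqrt(eps_i * eps_j)     # corrected geometric mean
    return sigma_ij, eps_ij
```

The paper's finding is that tuning k_ij alone cannot reproduce both phases simultaneously, which motivates the explicit hydrogen-bonding treatment of the HBP model.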

  7. The Precise Measurement of Vapor-Liquid Equilibrium Properties of the CO2/Isopentane Binary Mixture, and Fitted Parameters for a Helmholtz Energy Mixture Model

    Science.gov (United States)

    Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.

    2017-10-01

    Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which were published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.

  8. Expected epidemiological impacts of introducing an HIV vaccine in Thailand: a model-based analysis.

    Science.gov (United States)

    Schneider, Karen; Kerr, Cliff C; Hoare, Alexander; Wilson, David P

    2011-08-18

    The RV144 trial conducted in Thailand was the first to demonstrate modest protective efficacy of an HIV vaccine. Its estimated initial efficacy was ∼74%, but this waned considerably over time. We developed a mathematical model to reflect historical and current HIV trends across different at-risk populations in Thailand. The model was used to estimate the expected number of infections that would be averted if a vaccine with outcome characteristics similar to the RV144 vaccine was implemented in Thailand at varying levels of coverage. In the absence of a vaccine, we projected roughly 65,000 new HIV infections among adults during the period between 2011 and 2021. Due to the waning efficacy of the vaccine, vaccination campaigns were found to have modest long-term public health benefit unless re-vaccination occurred. We forecast that an RV144-like vaccine with coverage of 30% of the population would lead to a 3% reduction in HIV incidence during the next 10 years. In comparison, 30% coverage of annual or biennial re-vaccination with the vaccine was found to result in 23% and 14% reductions in incidence, respectively. Coverage of 60% without re-vaccination resulted in a 7% reduction. Epidemiological outcomes were found to depend primarily on three factors: vaccination coverage, vaccine efficacy, and the duration of protection the vaccine provided. Due to the short duration of protection the vaccine provides without re-vaccination, our model predicts modest benefit from a vaccination campaign with an RV144-like HIV vaccine in Thailand. Annual or biennial re-vaccination is predicted to greatly increase the long-term public health benefits of a vaccination campaign. The feasibility of vaccine implementation, as well as its economic viability, remains to be determined. Copyright © 2011 Elsevier Ltd. All rights reserved.
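The central mechanism in these projections — an initially high efficacy that wanes unless re-vaccination resets it — can be sketched as below. The exponential form and the one-year half-life are illustrative assumptions; only the ~74% initial efficacy comes from the record.

```python
def efficacy(t_years, eff0=0.74, half_life=1.0, revacc_every=None):
    """Protective efficacy at time t for a waning vaccine.

    eff0 matches RV144's reported initial efficacy (~74%); the exponential
    decay and half_life are hypothetical. With re-vaccination every
    revacc_every years, the waning clock resets at each booster.
    """
    if revacc_every is not None:
        t_years = t_years % revacc_every  # time since the last booster
    return eff0 * 0.5 ** (t_years / half_life)
```

Under such a curve, average protection over a decade is far higher with annual boosters than with a single dose, which is the qualitative driver of the 23% vs 3% incidence-reduction contrast reported above.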

  9. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    Science.gov (United States)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain the effects of emotions on intelligent behaviour. This improvement allows ReCau to display emotion, supporting the development of believable agents.

  10. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Failing to represent these multiple mechanisms in biogeochemical/ecological models of eutrophied coastal ecosystems can potentially lead to different conclusions about ecosystem functioning, especially regarding the success of harmful algae, which are often reported to be mixotrophic. This modelling study investigates the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 molN molP-1 (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different

  11. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U.; Benz, P.; Mantharas, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion and, in particular, on the mechanisms of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  12. Condition monitoring of oil-impregnated paper bushings using extension neural network, Gaussian mixture and hidden Markov models

    CSIR Research Space (South Africa)

    Miya, WS

    2008-10-01

    Full Text Available In this paper, a comparison between Extension Neural Network (ENN), Gaussian Mixture Model (GMM) and Hidden Markov model (HMM) is conducted for bushing condition monitoring. The monitoring process is a two-stage implementation of a classification...

  13. Modeling of columnar and equiaxed solidification of binary mixtures; Modelisation de la solidification colonnaire et equiaxe de melanges binaires

    Energy Technology Data Exchange (ETDEWEB)

    Roux, P

    2005-12-15

    This work deals with the modelling of dendritic solidification in binary mixtures. Large-scale phenomena are represented by volume averaging of the local conservation equations. This method allows the partial differential equations for the averaged fields, and the closure problems associated with the deviations, to be derived rigorously. Such problems can be resolved numerically on periodic cells representative of dendritic structures, in order to give a precise evaluation of macroscopic transfer coefficients (drag coefficients, exchange coefficients, diffusion-dispersion tensors...). The method had already been applied to a model of a columnar dendritic mushy zone, and it is extended here to the case of equiaxed dendritic solidification, where solid grains can move. The two-phase flow is modelled with an Eulerian-Eulerian approach, and the novelty is to account for the dispersion of solid velocity through the kinetic agitation of the particles. A coupling of the two models is proposed thanks to an original adaptation of the columnar model that allows undercooling to be calculated: a solid-liquid interfacial area density is introduced and calculated. Finally, direct numerical simulations of crystal growth are proposed with a diffuse-interface method for the representation of local phenomena. (author)

  14. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However, most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  15. Novel pseudo-divergence of Gaussian mixture models based speaker clustering method

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Xu Yiqiong; Li Bicheng

    2006-01-01

    Serial structure is applied to speaker recognition to reduce algorithm delay and computational complexity. The speech is first assigned to a speaker class, and the most likely speaker is then searched for within that class. Differences between Gaussian Mixture Models (GMMs) are widely used in speaker classification. This paper proposes a novel pseudo-divergence measure, the ratio of inter-model dispersion to intra-model dispersion, to represent the difference between GMMs and so perform speaker clustering. The GMM components (weight, mean and variance) are all involved in the dispersion. Experiments indicate that the measure represents the difference between GMMs well and improves the performance of speaker clustering.

  16. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Full Text Available Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng’s modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and predictions using the ZSR model or Peng’s modified UNIFAC model are higher than those for the single-component cases. Measurements of the overall water vapor pressure for 50:50 molar mixtures of malonic/glutaric acids closely followed that for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.
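The ZSR model used as a benchmark above assumes that, at a given water activity, each solute in a mixture takes up the water it would hold in a single-solute solution; a sketch follows. The single-solute molality functions in the test are hypothetical stand-ins for fitted experimental curves.

```python
def zsr_water_kg(solute_moles, single_solute_molality, a_w):
    """ZSR mixing rule: total water mass (kg) held by a mixed solution.

    solute_moles: moles n_i of each solute.
    single_solute_molality: callables m_i0(a_w) giving the molality (mol/kg)
    of a single-solute solution at the same water activity a_w.
    Total water = sum of n_i / m_i0(a_w): solute-solute interactions are
    neglected, which is exactly where ZSR deviates for these acid mixtures.
    """
    return sum(n / m0(a_w) for n, m0 in zip(solute_moles, single_solute_molality))
```

The larger disagreements reported for binary acid mixtures indicate that this additivity assumption breaks down when the acids interact in solution.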

  17. Performance of growth mixture models in the presence of time-varying covariates.

    Science.gov (United States)

    Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong

    2016-10-31

    Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.
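The class-enumeration indices compared in this study are simple functions of a fitted model's maximized log-likelihood, its number of free parameters k, and the sample size n. A sketch of the main closed-form ones (entropy-based criteria such as ICL-BIC, NEC, and CLC additionally need the posterior class probabilities, so they are omitted here):

```python
import math

def class_enumeration_indices(loglik, k, n):
    """AIC, BIC, CAIC and the sample-size-adjusted BIC (SBIC) for one fitted
    growth mixture model; lower values favour that class solution.
    SBIC replaces n by (n + 2) / 24 in the BIC penalty (Sclove, 1987)."""
    return {
        "AIC": -2.0 * loglik + 2.0 * k,
        "BIC": -2.0 * loglik + k * math.log(n),
        "CAIC": -2.0 * loglik + k * (math.log(n) + 1.0),
        "SBIC": -2.0 * loglik + k * math.log((n + 2.0) / 24.0),
    }
```

In practice these are computed for models with 1, 2, 3, ... latent classes and the class count minimizing the chosen criterion is retained; the simulation above asks which criterion picks the true count most reliably.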

  18. A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation.

    Science.gov (United States)

    Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui

    2017-01-01

    Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothed segmentations, and a lack of flexibility to fit the data. In order to address these issues, in this paper we propose a rough set bounded asymmetric Gaussian mixture model with spatial constraint for image segmentation. First, based on our previous work, where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and posterior probability of a pixel belonging to each sub-region are estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothing in segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels; they are constructed from the prior and posterior probabilities of the within- and between-clusters and take the spatial direction into account. We compare our algorithm to state-of-the-art segmentation approaches on both synthetic and real images to demonstrate its superior performance.

  19. Assessment of the Conditions for Introducing the All-Day Model in Basic Schools

    Directory of Open Access Journals (Sweden)

    Andreja Tinta

    2015-06-01

    Full Text Available The results of PISA 2000 prompted the German school authorities to extend the network of all-day schools, and the results of PISA 2012 already showed good student outcomes, justifying these steps toward designing a quality educational system in Germany. We therefore asked whether Slovenian basic schools meet the conditions for introducing the all-day model. We carried out a non-experimental survey on a sample of headteachers, who were asked to assess the spatial and personnel conditions for implementing the all-day basic school. In doing so, we paid attention to differences in terms of stratum and number of students. The survey showed that, regardless of stratum and number of pupils, most of the schools met the relevant conditions. The schools that do not yet meet them could do so with minimal financial investment.

  20. Introducing a Model for Suspicious Behaviors Detection in Electronic Banking by Using Decision Tree Algorithms

    Directory of Open Access Journals (Sweden)

    Rohulla Kosari Langari

    2014-02-01

    Full Text Available The transformation of the world through information technology and the development of the Internet have created a competitive, knowledge-driven environment in electronic commerce and increased the competitive potential of organizations. Under these conditions, the growing volume of commercial transactions, which must be completed quickly and reliably, depends on dynamic electronic banking systems that use modern technology to facilitate electronic business processes. Internet banking, a fundamental pillar and determinant of e-banking, is a substantial opportunity, but in cyberspace it faces various obstacles and threats. One challenge is that the security of financial transactions cannot be fully guaranteed, and suspicious and unusual behavior, such as mail fraud aimed at financial abuse, does occur. Various systems based on machine intelligence and data mining techniques have therefore been designed to detect fraud in user behavior and have been applied in industries such as insurance, medicine, and banking. The main aim of this article is to recognize unusual user behavior in e-banking systems: to detect user behavior and categorize the emerging patterns so as to prepare the ground for predicting unauthorized penetration and detecting suspicious behavior. Since user behavior in Internet systems is uncertain, records of transactions can help in understanding these movements, and among machine learning methods the decision tree is a common tool for classification and prediction. In this research we first determined the relevant banking variables and the weight of each in shaping Internet behavior, then combined the various behavior patterns to derive inductive rules that make it possible to recognize different behaviors. Finally, four algorithms, CHAID, exhaustive CHAID, C4.5, and C5.0, were compared and evaluated for the classification and detection of suspicious behaviors.

  1. Modeling the long-term effects of introduced herbivores on the spread of an invasive tree

    Science.gov (United States)

    Zhang, Bo; DeAngelis, Don; Rayamajhi, Min B.; Botkin, Daniel B.

    2017-01-01

    Context: Melaleuca quinquenervia (Cav.) Blake (hereafter melaleuca) is an invasive tree from Australia that has spread over the freshwater ecosystems of southern Florida, displacing native vegetation and thus threatening native biodiversity. Suppression of melaleuca appears to be progressing through the introduction of two insect species, the weevil Oxyops vitiosa and the psyllid Boreioglycaspis melaleucae. Objective: To improve understanding of the possible effects of herbivory on the landscape dynamics of melaleuca in native southern Florida plant communities. Methods: We projected likely future changes in plant communities using the individual-based modeling platform JABOWA-II, simulating successional processes in two types of southern Florida habitat, cypress swamp and bay swamp, occupied by native species and melaleuca, under the impact of insect herbivores. Results: Computer simulations show that melaleuca invasion leads to decreases in density and basal area of native species, but that herbivory would effectively control melaleuca to low levels, resulting in a recovery of native species. When herbivory was modeled on pure melaleuca stands, it was more effective in stands with initially larger-sized melaleuca. Although the simulated herbivory did not eliminate melaleuca, it decreased its presence dramatically in all cases, supporting the long-term effectiveness of herbivory in controlling melaleuca invasion. Conclusions: The results provide three conclusions relevant to management: (1) the insect herbivory that has been applied to melaleuca appears sufficient to suppress it over the long term, (2) dominant native species may recover in about 50 years, and (3) regrowth of native species will further suppress melaleuca through competition.

  2. Psychophysical model of chromatic perceptual transparency based on subtractive color mixture.

    Science.gov (United States)

    Faul, Franz; Ekroll, Vebjørn

    2002-06-01

    Variants of Metelli's episcotister model, which are based on additive color mixture, have been found to describe the luminance conditions for perceptual transparency very accurately. However, the findings in the chromatic domain are not that clear-cut, since there exist chromatic stimuli that conform to the additive model but do not appear transparent. We present evidence that such failures are of a systematic nature, and we propose an alternative psychophysical model based on subtractive color mixture. Results of a computer simulation revealed that this model approximately describes color changes that occur when a surface is covered by a filter. We present the results of two psychophysical experiments with chromatic stimuli, in which we directly compared the predictions of the additive model and the predictions of the new model. These results show that the color relations leading to the perception of a homogeneous transparent layer conform very closely to the predictions of the new model and deviate systematically from the predictions of the additive model.
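
    The two competing models can be stated in a few lines each. A toy per-channel sketch (RGB triples in [0, 1]; this is the generic form of the two mixture rules, not the authors' exact parameterization):

```python
def additive_layer(background, layer_color, alpha):
    """Metelli-style additive (episcotister) model: the region seen through
    the layer is a convex combination of the background and the layer color,
    weighted by the layer's transmittance alpha."""
    return tuple(alpha * b + (1 - alpha) * c
                 for b, c in zip(background, layer_color))

def subtractive_layer(background, transmittance):
    """Subtractive (filter) model: each channel of the background is
    multiplied by the filter's spectral transmittance."""
    return tuple(b * t for b, t in zip(background, transmittance))

# The same cyan-ish filter over a white vs. a red background:
white_seen = subtractive_layer((1.0, 1.0, 1.0), (0.2, 0.9, 0.9))
red_seen = subtractive_layer((1.0, 0.0, 0.0), (0.2, 0.9, 0.9))
```

    The subtractive model changes colors multiplicatively, which is why the same filter darkens a red background far more than a white one, a relation the additive (convex-combination) model cannot express.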

  3. Modelling plant interspecific interactions from experiments of perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-07-28

    The contribution of plant species richness to productivity and ecosystem functioning is a long-standing issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modelling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modelled communities of perennial crop mixtures using a generalized Lotka-Volterra model, i.e. a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating combinations of more than two species and comparing them with the polyculture experimental data. Overall, the theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown, and from all possible mixtures (sown and not sown) we identified the most productive species combinations. Our results demonstrate that a combination of experiments and modelling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. This article is protected by copyright. All rights reserved.
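
    A generalized Lotka-Volterra model of the kind described can be simulated in a few lines. A sketch with an explicit Euler step and invented parameter values (the fitted carrying capacities and interaction coefficients of the study are not reproduced here):

```python
def glv_step(x, r, K, A, dt):
    """One Euler step of a generalized Lotka-Volterra model:
        dx_i/dt = r_i * x_i * (1 - (x_i + sum_{j!=i} A[i][j] * x_j) / K_i)
    A[i][j] may take any sign, so species pairs can interact as competitors,
    mutualists, or parasitically (one positive, one negative coefficient)."""
    n = len(x)
    return [
        x[i] + dt * r[i] * x[i]
        * (1 - (x[i] + sum(A[i][j] * x[j] for j in range(n) if j != i)) / K[i])
        for i in range(n)
    ]

# Hypothetical biculture with mild asymmetric competition.
r, K = [0.8, 1.0], [100.0, 80.0]
A = [[0.0, 0.4], [0.6, 0.0]]
x = [5.0, 5.0]
for _ in range(5000):          # integrate to t = 50
    x = glv_step(x, r, K, A, 0.01)
```

    At equilibrium the bracketed terms vanish, i.e. x1 + 0.4*x2 = 100 and x2 + 0.6*x1 = 80, giving coexistence near (89.5, 26.3); ranking such equilibrium biomasses over all candidate mixtures is one way to identify the most productive combinations.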

  4. A Neural Network Based Hybrid Mixture Model to Extract Information from Non-linear Mixed Pixels

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2012-09-01

    Full Text Available Signals acquired by sensors in the real world are non-linear combinations, requiring non-linear mixture models to describe the resultant mixture spectra for the endmembers’ (pure pixels’) distributions. This communication discusses inferring class fractions through a novel hybrid mixture model (HMM). The HMM is a three-step process: first, the endmembers are derived from the images themselves using the N-FINDR algorithm. These endmembers are used by the linear mixture model (LMM) in the second step, which provides an abundance estimate in a linear fashion. Finally, the abundance values, along with training samples representing the actual ground proportions, are fed as input into a neural-network-based multi-layer perceptron (MLP) architecture to train the neurons. The neural output further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. The HMM is first implemented and validated on simulated hyperspectral data of 200 bands and subsequently on real MODIS data with a spatial resolution of 250 m. The results on computer-simulated data show that the method gives acceptable results for unmixing pixels, with an overall RMSE of 0.0089 ± 0.0022 for the LMM and 0.0030 ± 0.0001 for the HMM when compared to actual class proportions. The unmixed MODIS images showed an overall RMSE of 0.0191 ± 0.022 with the HMM, compared to an overall RMSE of 0.2005 ± 0.41 for the LMM output considered alone, indicating that the individual class abundances obtained from the HMM are very close to the real observations.
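
    The LMM stage of the pipeline has a closed form in the two-endmember case. A sketch with the sum-to-one constraint built in (the three-band spectra are invented toy values, not N-FINDR output):

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Linear mixture model with abundances constrained to sum to one:
    pixel ~= a1*e1 + (1 - a1)*e2. The least-squares solution projects
    (pixel - e2) onto the direction (e1 - e2), then clips to [0, 1]."""
    d = [u - v for u, v in zip(e1, e2)]
    a1 = (sum(di * (p - v) for di, p, v in zip(d, pixel, e2))
          / sum(di * di for di in d))
    a1 = max(0.0, min(1.0, a1))   # keep abundances physically meaningful
    return a1, 1.0 - a1

e1 = [0.10, 0.80, 0.30]           # e.g. a vegetation-like endmember
e2 = [0.70, 0.20, 0.50]           # e.g. a soil-like endmember
pixel = [0.3 * a + 0.7 * b for a, b in zip(e1, e2)]
a1, a2 = unmix_two_endmembers(pixel, e1, e2)
```

    In the HMM these linear abundances, which cannot capture multiple scattering between classes, become the inputs that the MLP corrects toward the non-linear ground proportions.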

  5. A proposed experimental platform for measuring the properties of warm dense mixtures: Testing the applicability of the linear mixing model

    Science.gov (United States)

    Hawreliak, James

    2017-06-01

    This paper presents a proposed experimental technique for investigating the impact of chemical interactions in warm dense liquid mixtures. It uses experimental equation of state (EOS) measurements of warm dense liquid mixtures with different compositions to determine the deviation from the linear mixing model. Statistical mechanics is used to derive the EOS of a mixture with a constant-pressure linear mixing term (Amagat's rule) and an interspecies interaction term. A ratio between the particle densities of two different mixture compositions, K(P, T)_{i:ii}, is defined. By comparing this ratio for a range of mixtures, the impact of interspecies interactions can be studied. Hydrodynamic simulations of mixtures with different carbon/hydrogen ratios are used to demonstrate the application of this proposed technique to multiple-shock and ramp compression experiments. The limit on the pressure correction due to interspecies interactions that can be measured with this methodology is determined by the uncertainty in the density measurement.
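
    The linear mixing baseline itself is one line. A sketch of Amagat's additive-volume rule and the composition ratio it predicts, using mass density as a stand-in for the paper's particle-density ratio (the species densities and compositions are invented numbers):

```python
def amagat_density(mass_fractions, densities):
    """Amagat (additive-volume) linear mixing rule at fixed P and T:
    specific volumes add, so 1/rho_mix = sum_i w_i / rho_i."""
    return 1.0 / sum(w / rho for w, rho in zip(mass_fractions, densities))

# Two hypothetical C/H compositions compared at the same (P, T):
rho_i = amagat_density([0.7, 0.3], [2.2, 0.6])    # carbon-rich mixture
rho_ii = amagat_density([0.4, 0.6], [2.2, 0.6])   # hydrogen-rich mixture
K_ideal = rho_i / rho_ii                          # ideal-mixing prediction
```

    A measured K(P, T)_{i:ii} that deviates from this ideal-mixing ratio is the experiment's signature of interspecies interactions, with the detectable deviation bounded by the density-measurement uncertainty.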

  6. Probabilistic Modeling of Landfill Subsidence Introduced by Buried Structure Collapse - 13229

    Energy Technology Data Exchange (ETDEWEB)

    Foye, Kevin; Soong, Te-Yang [CTI and Associates, Inc., 51331 W. Pontiac Trail, Wixom, MI 48393 (United States)

    2013-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass and buried structure placement. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties, especially discontinuous inclusions, which control differential settlement. An alternative is to use a probabilistic model to capture the non-uniform collapse of cover soils and buried structures and the subsequent effect of that collapse on the final cover system. Both techniques are applied to the problem of two side-by-side waste trenches with collapsible voids. The results show how this analytical technique can be used to connect a metric of final cover performance (inundation area) to the susceptibility of the sub-grade to collapse and the effective thickness of the cover soils. This approach allows designers to specify cover thickness, reinforcement, and slope to meet the demands imposed by the settlement of the underlying waste trenches. (authors)

  7. Introducing a Semi-Coated Model to Investigate Antibacterial Effects of Biocompatible Polymers on Titanium Surfaces

    Science.gov (United States)

    Winkel, Andreas; Dempwolf, Wibke; Gellermann, Eva; Sluszniak, Magdalena; Grade, Sebastian; Heuer, Wieland; Eisenburger, Michael; Menzel, Henning; Stiesch, Meike

    2015-01-01

    Peri-implant infections from bacterial biofilms on artificial surfaces are a common threat to all medical implants. They are a handicap for the patient and can lead to implant failure or even life-threatening complications. New implant surfaces have to be developed to reduce biofilm formation and to improve the long-term prognosis of medical implants. The aim of this study was (1) to develop a new method to test the antibacterial efficacy of implant surfaces by direct surface contact and (2) to elucidate whether an innovative antimicrobial copolymer coating of 4-vinyl-N-hexylpyridinium bromide and dimethyl(2-methacryloyloxyethyl) phosphonate (VP:DMMEP 30:70) on titanium is able to reduce the attachment of bacteria prevalent in peri-implant infections. With a new in vitro model with semi-coated titanium discs, we were able to show a dramatic reduction in the adhesion of various pathogenic bacteria (Streptococcus sanguinis, Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis), completely independently of effects caused by soluble materials. In contrast, soft tissue cells (human gingival or dermis fibroblasts) were less affected by the same coating, despite a moderate reduction in initial adhesion of gingival fibroblasts. These data confirm the hypothesis that VP:DMMEP 30:70 is a promising antibacterial copolymer that may be of use in several clinical applications. PMID:25690041

  8. A framework to prevent and control tobacco among adolescents and children: introducing the IMPACT model.

    Science.gov (United States)

    Arora, Monika; Mathur, Manu Raj; Singh, Neha

    2013-03-01

    The objective of this paper is to provide a comprehensive evidence-based model for addressing the multi-level risk factors influencing tobacco use among children and adolescents with multi-level policy and programmatic approaches in India. Evidence on the effectiveness of policy and program interventions from developed and developing countries was reviewed using the Pubmed, Scopus, Google Scholar, and Ovid databases. This evidence was then categorized under three broad approaches: policy-level approaches (increased taxation on tobacco products, smoke-free laws in public places and workplaces, effective health warnings, prohibiting tobacco advertising, promotions, and sponsorships, and restricting access by minors); community-level approaches (school health programs, mass media campaigns, community-based interventions, promoting tobacco-free norms); and individual-level approaches (promoting cessation in various settings). This review of the literature on determinants and interventions was organized into the proposed IMPACT framework. The paper further presents a comparative analysis of tobacco control interventions in India vis-à-vis the proposed approaches. Mixed results were found for prevention and control efforts targeting youth. However, this article suggests a number of intervention strategies that have been shown to be effective; implementing them in a coordinated way will provide potential synergies across interventions. Pediatricians have a prominent role in advocating and implementing the IMPACT framework in countries aiming to prevent and control tobacco use among adolescents and children.

  9. Introducing a Semi-Coated Model to Investigate Antibacterial Effects of Biocompatible Polymers on Titanium Surfaces

    Directory of Open Access Journals (Sweden)

    Andreas Winkel

    2015-02-01

    Full Text Available Peri-implant infections from bacterial biofilms on artificial surfaces are a common threat to all medical implants. They are a handicap for the patient and can lead to implant failure or even life-threatening complications. New implant surfaces have to be developed to reduce biofilm formation and to improve the long-term prognosis of medical implants. The aim of this study was (1) to develop a new method to test the antibacterial efficacy of implant surfaces by direct surface contact and (2) to elucidate whether an innovative antimicrobial copolymer coating of 4-vinyl-N-hexylpyridinium bromide and dimethyl(2-methacryloyloxyethyl) phosphonate (VP:DMMEP 30:70) on titanium is able to reduce the attachment of bacteria prevalent in peri-implant infections. With a new in vitro model with semi-coated titanium discs, we were able to show a dramatic reduction in the adhesion of various pathogenic bacteria (Streptococcus sanguinis, Escherichia coli, Staphylococcus aureus, Staphylococcus epidermidis), completely independently of effects caused by soluble materials. In contrast, soft tissue cells (human gingival or dermis fibroblasts) were less affected by the same coating, despite a moderate reduction in initial adhesion of gingival fibroblasts. These data confirm the hypothesis that VP:DMMEP 30:70 is a promising antibacterial copolymer that may be of use in several clinical applications.

  10. Introducing the Literary Critic: The CARS Model in the Introductions of Academic Papers in Literary Criticism

    Directory of Open Access Journals (Sweden)

    Balázs Sánta

    2015-05-01

    Full Text Available Genre analysis as a “meta-study” is a topic that has been deeply investigated in the field of applied linguistics, one of its more specific areas of interest being research article introductions (RAIs). However, there are still certain kinds of scholarly activity that have received relatively little attention in this regard, such as literary criticism. The paper presents and discusses the results of a small-scale study of the introductions of ten academic essays in this field. The paper’s aim is to see how Swales’ (1990) CARS model can be applied to the rhetorical structure of these RAIs. It is found that, at the cost of certain modifications necessitated by the structure of the essays in the corpus, it is not impossible to analyze critical texts produced by scholars belonging to this area. The demonstration of this has significance in fulfilling a perceived need for literary criticism to be considered among those disciplines that are worthy of the attention of applied linguistic research.

  11. Granular mixtures modeled as elastic hard spheres subject to a drag force.

    Science.gov (United States)

    Vega Reyes, Francisco; Garzó, Vicente; Santos, Andrés

    2007-06-01

    Granular gaseous mixtures under rapid flow conditions are usually modeled as a multicomponent system of smooth inelastic hard disks (two dimensions) or spheres (three dimensions) with constant coefficients of normal restitution α_ij. In the low-density regime an adequate framework is provided by the set of coupled inelastic Boltzmann equations. Due to the intricacy of the inelastic Boltzmann collision operator, in this paper we propose a simpler model of elastic hard disks or spheres subject to the action of an effective drag force, which mimics the effect of dissipation present in the original granular gas. For each collision term ij, the model has two parameters: a dimensionless factor β_ij modifying the collision rate of the elastic hard spheres, and the drag coefficient ζ_ij. Both parameters are determined by requiring that the model reproduce the collisional transfers of momentum and energy of the true inelastic Boltzmann operator, yielding β_ij = (1 + α_ij)/2 and ζ_ij ∝ (1 − α_ij²)/2, where the proportionality constant is a function of the partial densities, velocities, and temperatures of species i and j. The Navier-Stokes transport coefficients for a binary mixture are obtained from the model by application of the Chapman-Enskog method. The three coefficients associated with the mass flux are the same as those obtained from the inelastic Boltzmann equation, while the remaining four transport coefficients show generally good agreement, especially in the case of the thermal conductivity. The discrepancies between the two descriptions are seen to be similar to those found for monocomponent gases. Finally, the approximate decomposition of the inelastic Boltzmann collision operator is exploited to construct a model kinetic equation for granular mixtures as a direct extension of a known kinetic model for elastic collisions.
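
    The mapping from the inelastic gas to the elastic-plus-drag model is explicit. A sketch of the matching conditions (the density/velocity/temperature-dependent prefactor of the drag coefficient is collapsed into a single illustrative constant, which is an assumption of this sketch):

```python
def matched_parameters(alpha, zeta_prefactor=1.0):
    """Elastic hard spheres plus drag, matched to an inelastic pair with
    normal restitution coefficient alpha (0 < alpha <= 1):
      beta = (1 + alpha) / 2        -- collision-rate factor
      zeta ~ (1 - alpha**2) / 2     -- drag coefficient, up to a prefactor
    In the elastic limit alpha = 1 the drag vanishes and beta = 1."""
    beta = (1.0 + alpha) / 2.0
    zeta = zeta_prefactor * (1.0 - alpha ** 2) / 2.0
    return beta, zeta

beta, zeta = matched_parameters(0.8)
```

    The drag term scales with 1 − α², the same factor that controls the collisional energy loss of the inelastic gas, which is what makes the elastic model reproduce the energy transfer.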

  12. Binding of Solvent Molecules to a Protein Surface in Binary Mixtures Follows a Competitive Langmuir Model.

    Science.gov (United States)

    Kulschewski, Tobias; Pleiss, Jürgen

    2016-09-06

    The binding of solvent molecules to a protein surface was modeled by molecular dynamics simulations of Candida antarctica (C. antarctica) lipase B in binary mixtures of water, methanol, and toluene. Two models were analyzed: a competitive Langmuir model, which assumes identical solvent binding sites with different affinities toward water (KWat), methanol (KMet), and toluene (KTol), and a competitive Langmuir model with an additional interaction between free water and already bound water (KWatWat). The numbers of protein-bound molecules of both components of a binary mixture were determined for different compositions as a function of their thermodynamic activities in the bulk phase, and the binding constants were simultaneously fitted to the six binding curves (two components of three different mixtures). For both Langmuir models, the values of KWat, KMet, and KTol were highly correlated. The highest binding affinity was found for methanol, which was almost 4-fold higher than the binding affinities of water and toluene (KMet ≫ KWat ≈ KTol). Binding of water was dominated by the water-water interaction (KWatWat). Even for the three protein surface patches of highest water affinity, the binding affinity of methanol was 2-fold higher than that of water and 8-fold higher than that of toluene (KMet > KWat > KTol). The Langmuir model provides insights into the protein-destabilizing mechanism of methanol, which has a high binding affinity toward the protein surface. Thus, destabilizing solvents compete with intraprotein interactions and disrupt the tertiary structure. In contrast, benign solvents such as water or toluene have a low affinity toward the protein surface. Water is a special solvent: only a few water molecules bind directly to the protein; most water molecules bind to already bound water molecules, thus forming water patches. A quantitative mechanistic model of protein-solvent interactions that includes competition and miscibility of the components contributes a robust basis
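
    The competitive Langmuir model itself reduces to one expression. A sketch for identical sites without the water-water extension (the binding constants and activities are illustrative, not the fitted values of the study):

```python
def site_occupancy(activities, binding_constants):
    """Competitive Langmuir adsorption on identical surface sites:
        theta_i = K_i * a_i / (1 + sum_j K_j * a_j)
    The remaining fraction 1 / (1 + sum_j K_j * a_j) of sites stays free."""
    denom = 1.0 + sum(K * a for K, a in zip(binding_constants, activities))
    return [K * a / denom for K, a in zip(binding_constants, activities)]

# Water/methanol at equal bulk activity, with methanol binding ~4x stronger
# (the qualitative ordering reported above: KMet >> KWat).
theta_wat, theta_met = site_occupancy([0.5, 0.5], [1.0, 4.0])
```

    The study's second model adds a KWatWat term for water binding to already-bound water; that term breaks the identical-site assumption of this sketch and is what produces the water patches described above.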

  13. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.;

    2005-01-01

    […].15 and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting for studying the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbons). These data (575 data points) have been […] discussed in the framework of recent representative models (hard-sphere scheme, friction theory, and free-volume model) and with mixing laws and two empirical models (particularly the LBC model, which is commonly used in petroleum engineering, and the self-referencing model). This comparative study shows […]

  14. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainty propagate to design responses which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore basic nuclear data needs to be improved and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment

  15. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainty propagate to design responses which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore basic nuclear data needs to be improved and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment

  16. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM package. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model, and very good agreement was observed between the experimental data and the simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model, and coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
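
    The Burger's (Burgers) element used as the DEM contact law has a closed-form complex modulus, which is the quantity a DSR measures. A sketch (the parameter values are invented, not the calibrated pitch values):

```python
import math

def burgers_complex_modulus(omega, E_m, eta_m, E_k, eta_k):
    """Complex modulus of a Burgers element: a Maxwell unit (E_m, eta_m)
    in series with a Kelvin-Voigt unit (E_k, eta_k). Compliances of units
    in series add:
        J*(w) = 1/E_m + 1/(i*w*eta_m) + 1/(E_k + i*w*eta_k),   G* = 1/J*."""
    i = complex(0.0, 1.0)
    J = 1.0 / E_m + 1.0 / (i * omega * eta_m) + 1.0 / (E_k + i * omega * eta_k)
    return 1.0 / J

G = burgers_complex_modulus(2 * math.pi * 60,           # 60 Hz test frequency
                            E_m=1e4, eta_m=50.0, E_k=5e3, eta_k=20.0)
storage, loss = G.real, G.imag
phase_deg = math.degrees(math.atan2(loss, storage))
```

    Fitting measured G* curves at 150 °C to this expression is one way the four Burger's parameters of a DEM contact model can be calibrated; stiff coke inclusions then raise |G*| in the simulation, consistent with the increase reported above.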

  17. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  18. Highlighting pitfalls in the Maxwell-Stefan modeling of water-alcohol mixture permeation across pervaporation membranes

    NARCIS (Netherlands)

    Krishna, R.; van Baten, J.M.

    2010-01-01

    The Maxwell-Stefan (M-S) equations are widely used for modeling permeation of water-alcohol mixtures across microporous membranes in pervaporation and dehydration process applications. For binary mixtures, for example, the following set of assumptions is commonly invoked, either explicitly or

  19. Microstructural Analysis and Rheological Modeling of Asphalt Mixtures Containing Recycled Asphalt Materials

    Directory of Open Access Journals (Sweden)

    Augusto Cannone Falchetto

    2014-09-01

    Full Text Available The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP) and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits were proposed on the amount of recycled materials which can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired based on digital images of small beam specimens and numerical estimations of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of aggregates, asphalt mastic and air voids phases, while an opposite trend is observed when RAS is included. Analogical and semi-empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between back-calculated results and experimental data suggest limited or partial blending between new and aged binder.

  20. Computational modeling of photoacoustic signals from mixtures of melanoma and red blood cells.

    Science.gov (United States)

    Saha, Ratan K

    2014-10-01

    A theoretical approach to model photoacoustic (PA) signals from mixtures of melanoma cells (MCs) and red blood cells (RBCs) is discussed. The PA signal from a cell approximated as a fluid sphere was evaluated using a frequency domain method. The tiny signals from individual cells were summed up to obtain the resultant PA signal. The local signal to noise ratio for a MC was about 5.32 and 5.40 for 639 and 822 nm illuminations, respectively. The PA amplitude exhibited a monotonic rise with increasing number of MCs for each incident radiation. The power spectral lines also demonstrated similar variations over a large frequency range (5-200 MHz). For instance, spectral intensity was observed to be 5.5 and 4.0 dB greater at 7.5 MHz for a diseased sample containing 1 MC and 22,952 RBCs than a normal sample composed of 22,958 RBCs at those irradiations, respectively. The envelope histograms generated from PA signals for mixtures of small numbers of MCs and large numbers of RBCs seemed to obey pre-Rayleigh statistics. The generalized gamma distribution was found to provide better fits to the histograms than the Rayleigh and Nakagami distributions. The model provides a means to study PAs from mixtures of different populations of absorbers.

  1. Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands.

    Science.gov (United States)

    Lenoir-Wijnkoop, I; van Aalderen, W M C; Boehm, G; Klaassen, D; Sprikkelman, A B; Nuijten, M J C

    2012-02-01

    The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision analytical techniques. The model was developed to estimate the health economic impact of prebiotic preventive disease management of atopic dermatitis. Data sources used include published literature, clinical trials, official price/tariff lists and national population statistics. The comparator was no supplementation with prebiotics. The primary perspective for conducting the economic evaluation was based on the situation in The Netherlands in 2009. The results show that the use of prebiotic infant formula (IMMUNOFORTIS®) leads to an additional cost of € 51 and an increase in Quality Adjusted Life Years (QALY) of 0.108, when compared with no prebiotics. Consequently, the use of infant formula with a specific mixture of prebiotics results in an incremental cost-effectiveness ratio (ICER) of € 472. The sensitivity analyses show that the ICER remains far below the threshold of € 20,000/QALY in all analyses. This study shows that the favourable health benefit of the use of a specific mixture of prebiotics results in positive short- and long-term health economic benefits. In addition, this study demonstrates that the use of infant formula with a specific mixture of prebiotics is a highly cost-effective way of preventing atopic dermatitis in The Netherlands.

  2. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    Energy Technology Data Exchange (ETDEWEB)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L. [NeuroSpin/CEA, F-91191 Gif-sur-Yvette (France); Fouque, A.L.; Ciuciu, Ph.; Risser, L. [IFR 49, Institut d' Imagerie Neurofonctionnelle, Paris (France)

    2009-07-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing works done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  3. Hyperspectral Small Target Detection by Combining Kernel PCA with Linear Mixture Model

    Institute of Scientific and Technical Information of China (English)

    GU Yanfeng; ZHANG Ye

    2005-01-01

    In this paper, a kernel-based invariant detection method is proposed for small target detection in hyperspectral images. The method combines kernel principal component analysis (KPCA) with the linear mixture model (LMM). The LMM is used to describe each pixel in the hyperspectral images as a mixture of target, background and noise. The KPCA is used to build the background subspace. Finally, a generalized likelihood ratio test is used to detect whether each pixel in the hyperspectral image includes a target. The numerical experiments are performed on hyperspectral data with 126 bands collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The experimental results show the effectiveness of the proposed method and prove that it can effectively overcome spectral variability and target sparsity in hyperspectral target detection, and that it has a great ability to separate targets from background.

  4. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop which is widely adopted for food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has grown rapidly over the years and supplies are being depleted. This phenomenon causes illegal logging of trees and destroys the natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
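The two-component normal mixture in this record is fitted by maximizing the likelihood. A hedged sketch of the same model, using the standard EM algorithm rather than the paper's Newton-Raphson iteration, and synthetic data in place of the palm oil and exchange rate series:

```python
import math, random

def em_two_normal(x, iters=200):
    """Fit a two-component univariate normal mixture by EM (a common
    alternative to the Newton-Raphson MLE used in the paper)."""
    xs = sorted(x)
    half = len(xs) // 2
    # Crude initialisation: split the data around the median.
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted means, variances, mixing weights.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            w[k] = nk / len(x)
    return w, mu, var

# Synthetic stand-in data (the paper fits real price and exchange-rate series).
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 0.5) for _ in range(200)]
w, mu, var = em_two_normal(data)
print(w, mu, var)
```

Each component's mean, variance and mixing weight are recovered from the soft assignments, which is also how a mixture captures asymmetry and non-normal kurtosis in the marginal distribution.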

  5. Introducing a Method for Modeling Knowledge Bases in Expert Systems Using the Example of Large Software Development Projects

    Directory of Open Access Journals (Sweden)

    Franz Felix Füssl

    2015-12-01

    Full Text Available The goal of this paper is to develop a meta-model which provides the basis for developing highly scalable artificial intelligence systems that are able to make autonomous decisions based on different dynamic and specific influences. An artificial neural network forms the entry point for developing a multi-layered, human-readable model that serves as a knowledge base and can be used for further investigations in deductive and inductive reasoning. A graph-theoretical consideration gives a detailed view into the model structure. In addition, the model is introduced using the example of large software development projects. The integration of Constraints and Deductive Reasoning Element Pruning is illustrated, both of which are required for executing deductive reasoning efficiently.

  6. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  7. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Science.gov (United States)

    Theis, Lucas; Chagas, Andrè Maia; Arnstein, Daniel; Schwarz, Cornelius; Bethge, Matthias

    2013-01-01

    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.

  8. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
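The generative idea in this record, predicting spikes via Bayes' rule from separately modeled spike-triggered and non-spike-triggered mixture densities, can be sketched in one dimension. All densities and numbers below are hypothetical toy values, not taken from the paper:

```python
import math

def gauss_pdf(x, mu, var):
    """Univariate normal density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_spike(x, prior_spike, spike_comps, nospike_comps):
    """Bayes-rule spike probability from generative densities.
    Each *_comps list holds (weight, mean, variance) Gaussian mixture
    components; a toy 1-D stimulus stands in for the paper's
    high-dimensional stimuli."""
    def mix(xv, comps):
        return sum(w * gauss_pdf(xv, m, v) for w, m, v in comps)
    ps = prior_spike * mix(x, spike_comps)
    pn = (1 - prior_spike) * mix(x, nospike_comps)
    return ps / (ps + pn)

# Hypothetical densities: spikes triggered by large-magnitude stimuli.
spike = [(0.5, -2.0, 0.5), (0.5, 2.0, 0.5)]   # bimodal spike-triggered mixture
nospike = [(1.0, 0.0, 1.0)]                   # unimodal non-spike distribution
print(p_spike(2.0, 0.1, spike, nospike))      # strong stimulus
print(p_spike(0.0, 0.1, spike, nospike))      # weak stimulus
```

With single-Gaussian components this construction reduces to the quadratic model mentioned in the abstract; the mixture components add the extra flexibility.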

  9. Inhalation pressure distributions for medical gas mixtures calculated in an infant airway morphology model.

    Science.gov (United States)

    Gouinaud, Laure; Katz, Ira; Martin, Andrew; Hazebroucq, Jean; Texereau, Joëlle; Caillibotte, Georges

    2015-01-01

    A numerical pressure loss model previously used for adult human airways has been modified to simulate the inhalation pressure distribution in a healthy 9-month-old infant lung morphology model. Pressure distributions are calculated for air as well as helium and xenon mixtures with oxygen to investigate the effects of gas density and viscosity variations for this age group. The results indicate that there are significant pressure losses in infant extrathoracic airways due to inertial effects, leading to much higher pressures being required to drive nominal flows in the infant airway model than in an adult airway model. For example, the pressure drop through the nasopharynx model of the infant is much greater than that for the nasopharynx model of the adult; that is, for the adult versus the child the pressure differences are 0.08 cm H2O versus 0.4 cm H2O, 0.16 cm H2O versus 1.9 cm H2O and 0.4 cm H2O versus 7.7 cm H2O, breathing helium-oxygen (78/22%), nitrogen-oxygen (78/22%) and xenon-oxygen (60/40%), respectively. Within the healthy lung, viscous losses are of the same order for the three gas mixtures, so the differences in pressure distribution are relatively small.

  10. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    Science.gov (United States)

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions for an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach

  11. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    Energy Technology Data Exchange (ETDEWEB)

    Finne, E.F. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway) and University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway)]. E-mail: eivind.finne@niva.no; Cooper, G.A. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Koop, B.F. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Hylland, K. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway); University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway); Tollefsen, K.E. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway)

    2007-03-10

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as

  12. A poromechanical model for coal seams saturated with binary mixtures of CH4 and CO2

    Science.gov (United States)

    Nikoosokhan, Saeid; Vandamme, Matthieu; Dangla, Patrick

    2014-11-01

    Underground coal bed reservoirs naturally contain methane which can be produced. In parallel of the production of this methane, carbon dioxide can be injected, either to enhance the production of methane, or to have this carbon dioxide stored over geological periods of time. As a prerequisite to any simulation of an Enhanced Coal Bed Methane recovery process (ECBM), we need state equations to model the behavior of the seam when cleats are saturated with a miscible mixture of CH4 and CO2. This paper presents a poromechanical model of coal seams exposed to such binary mixtures filling both the cleats in the seam and the porosity of the coal matrix. This model is an extension of a previous work which dealt with pure fluid. Special care is dedicated to keep the model consistent thermodynamically. The model is fully calibrated with a mix of experimental data and numerical data from molecular simulations. Predicting variations of porosity or permeability requires only calibration based on swelling data. With the calibrated state equations, we predict numerically how porosity, permeability, and adsorbed amounts of fluid vary in a representative volume element of coal seam in isochoric or oedometric conditions, as a function of the pressure and of the composition of the fluid in the cleats.

  13. Simulation and reference interaction site model theory of methanol and carbon tetrachloride mixtures.

    Science.gov (United States)

    Munaò, G; Costa, D; Saija, F; Caccamo, C

    2010-02-28

    We report molecular dynamics and reference interaction site model (RISM) theory of methanol and carbon tetrachloride mixtures. Our study encompasses the whole concentration range, by including the pure component limits. We focus mainly on an analysis of partial, total, and concentration-concentration structure factors, and examine in detail the k → 0 limits of these functions. Simulation results confirm the tendency of methanol to self-associate with the formation of ring structures in the high dilution regime of this species, in agreement with experimental studies and with previous simulations by other authors. This behavior emerges as strongly related to the high nonideality of the mixture, a quantitative estimate of which is provided in terms of concentration fluctuation correlations, through the structure factors examined. The interaggregate correlation distance is also thereby estimated. Finally, the compressibility of the mixture is found in good agreement with experimental data. The RISM predictions are throughout assessed against simulation; the theory describes better the apolar solvent than the alcohol properties. Self-association of methanol is qualitatively reproduced, though this trend is much less marked in comparison with simulation results.

  14. Self-assembly in a model colloidal mixture of dimers and spherical particles

    Science.gov (United States)

    Prestipino, Santi; Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo

    2017-02-01

    We investigate the structure of a dilute mixture of amphiphilic dimers and spherical particles, a model relevant to the problem of encapsulating globular "guest" molecules in a dispersion. Dimers and spheres are taken to be hard particles, with an additional attraction between spheres and the smaller monomers in a dimer. Using Monte Carlo simulation, we document the low-temperature formation of aggregates of guests (clusters) held together by dimers, whose typical size and shape depend on the guest concentration χ. For low χ (less than 10%), most guests are isolated and coated with a layer of dimers. As χ progressively increases, clusters grow in size becoming more and more elongated and polydisperse; after reaching a shallow maximum for χ ≈ 50%, the size of clusters again reduces upon increasing χ further. In one case only (χ = 50% and moderately low temperature) the mixture relaxed to a fluid of lamellae, suggesting that in this case clusters are metastable with respect to crystal-vapor separation. On heating, clusters shrink until eventually the system becomes homogeneous on all scales. On the other hand, as the mixture is made denser and denser at low temperature, clusters get increasingly larger until a percolating network is formed.

  15. Modelling shallow debris flows of the Coulomb-mixture type over temporally varying topography

    Directory of Open Access Journals (Sweden)

    Y. C. Tai

    2012-02-01

    Full Text Available We propose a saturated binary mixture model for debris flows of the Coulomb-mixture type over temporally varying topography, where the effects of erosion and deposition are considered. Due to the deposition or erosion processes, the interface between the moving material and the stagnant base is a non-material singular surface. The motion of this singular surface is determined by the mass exchange between the flowing layer and the ground. The ratio of the relative velocity between the two constituents to the velocity of the solid phase is assumed to be small, so that the governing equations can be reduced to a system of the quasi-single-phase type. A shock-capturing numerical scheme is implemented to solve the derived equation system. The deposition shapes of a finite mass sliding down an inclined planar chute are investigated for a range of mixture ratios. The geometric evolution of the deposition is presented, which allows the possibility of mimicking the development of levee deposition.

  16. Irruptive dynamics of introduced caribou on Adak Island, Alaska: an evaluation of Riney-Caughley model predictions

    Science.gov (United States)

    Ricca, Mark A.; Van Vuren, Dirk H.; Weckerly, Floyd W.; Williams, Jeffrey C.; Miles, A. Keith

    2014-01-01

    Large mammalian herbivores introduced to islands without predators are predicted to undergo irruptive population and spatial dynamics, but only a few well-documented case studies support this paradigm. We used the Riney-Caughley model as a framework to test predictions of irruptive population growth and spatial expansion of caribou (Rangifer tarandus granti) introduced to Adak Island in the Aleutian archipelago of Alaska in 1958 and 1959. We utilized a time series of spatially explicit counts conducted on this population intermittently over a 54-year period. Population size increased from 23 released animals to approximately 2900 animals in 2012. Population dynamics were characterized by two distinct periods of irruptive growth separated by a long time period of relative stability, and the catalyst for the initial irruption was more likely related to annual variation in hunting pressure than weather conditions. An unexpected pattern resembling logistic population growth occurred between the peak of the second irruption in 2005 and the next survey conducted seven years later in 2012. Model simulations indicated that an increase in reported harvest alone could not explain the deceleration in population growth, yet high levels of unreported harvest combined with increasing density-dependent feedbacks on fecundity and survival were the most plausible explanation for the observed population trend. No studies of introduced island Rangifer have measured a time series of spatial use to the extent described in this study. Spatial use patterns during the post-calving season strongly supported Riney-Caughley model predictions, whereby high-density core areas expanded outwardly as population size increased. During the calving season, caribou displayed marked site fidelity across the full range of population densities despite availability of other suitable habitats for calving. 
Finally, dispersal and reproduction on neighboring Kagalaska Island represented a new dispersal front

  17. Separation of deviatoric stress tensors from heterogeneous calcite twin data using a statistical mixture model

    Science.gov (United States)

    Yamaji, Atsushi

    2016-04-01

    It is essential for the techniques of paleostress analysis to separate stresses from heterogeneous data (e.g., Tikoff et al., 2013). A statistical mixture model is shown in this paper to be effective for calcite twinning paleopiezometry: Given the orientations of twinned e-planes and their gliding directions, the present inverse method based on the mixture model determines not only deviatoric stress tensors, but also estimates the number of tensors that should be read from a data set using the Bayesian information criterion. The present method is based on the fact that mechanical twinning occurs on an e-plane if the resolved shear stress along its gliding direction, τ, is greater than a critical value, τc (e.g., Lacombe, 2010). The orientation data from e-planes correspond to points on a 5-dimensional unit sphere, a spherical cap on which indicates a deviatoric stress tensor. The twinning condition, τ > τc, is identical with the condition that the points corresponding to the orientation data are distributed upon the spherical cap (Yamaji, 2015a). This means that the paleostress analysis of calcite twins comes down to the problem of fitting a spherical cap to data points on the sphere (Yamaji, 2015b). Given a heterogeneous data set, two or more spherical caps should be fitted to the data points on the sphere. A statistical mixture model is employed for this fitting in the present work. Such a statistical model enables us to evaluate the number of stresses recorded in the data set. The present method was tested with artificial data sets and a natural data set obtained from a Miocene graben in central Japan. From the former type of data sets, the method determined the deviatoric stress tensors that were assumed to generate the data sets. The natural data were inverted to give two stresses that appeared appropriate for the tectonic setting of the area where the data were obtained.
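The Bayesian information criterion used in this record to pick the number of stress tensors trades goodness of fit against parameter count. A minimal sketch with made-up log-likelihoods (a deviatoric stress tensor has five independent components, and a k-component mixture adds k−1 mixing weights; the numbers are illustrative only):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of mixtures with 1, 2, 3 stress tensors to n = 120
# twin-lamella orientation data (log-likelihoods are made-up numbers).
n = 120
fits = {1: -310.0, 2: -255.0, 3: -252.0}  # k -> maximized log-likelihood
scores = {k: bic(ll, 5 * k + (k - 1), n) for k, ll in fits.items()}
best = min(scores, key=scores.get)
print(scores, "-> choose", best, "stress tensor(s)")
```

In this toy setup the jump from one to two components improves the likelihood enough to pay the parameter penalty, while the third component does not, so BIC selects two stresses, mirroring how the method reads the number of tensors from a heterogeneous data set.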

  18. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    Directory of Open Access Journals (Sweden)

    G. Ganbavale

    2014-06-01

    Full Text Available This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic–Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic–water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~275 to ~400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures with the introduction of a new temperature dependence parameterisation. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multicomponent system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~190 to ~440 K for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of

  19. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers. Next, we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value, is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation. We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
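
A deliberately simplified sketch of the inlier/outlier mixture above (my illustration, not the paper's implementation): residuals of putative matches are modeled as zero-mean Gaussian noise for inliers plus a uniform outlier component over an area `a`, and EM estimates the inlier fraction and noise variance. The non-parametric RKHS transformation is omitted.

```python
import numpy as np

def em_inlier_outlier(residuals, a=1.0, iters=50):
    # residuals: (n, d) displacement of each putative correspondence
    r2 = np.sum(residuals**2, axis=1)          # squared residual norms
    d = residuals.shape[1]
    gamma, sigma2 = 0.9, float(np.mean(r2))    # initial inlier ratio, variance
    for _ in range(iters):
        # E-step: posterior probability that each match is an inlier
        g = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (d / 2)
        u = (1.0 - gamma) / a                  # uniform outlier density
        p = g / (g + u)
        # M-step: update mixture weight and noise variance
        gamma = float(np.mean(p))
        sigma2 = float(np.sum(p * r2) / (d * np.sum(p))) + 1e-12
    return p, gamma, sigma2

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 0.01, size=(80, 2))   # small Gaussian residuals
outliers = rng.uniform(-0.5, 0.5, size=(20, 2)) # scattered false matches
p, gamma, sigma2 = em_inlier_outlier(np.vstack([inliers, outliers]))
```

The posterior `p` separates the two populations: true matches end up with probabilities near one, false matches near zero.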

  20. Growth mixture modeling as an exploratory analysis tool in longitudinal quantitative trait loci analysis.

    Science.gov (United States)

    Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J

    2009-12-15

    We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.
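
The third test above, classifying subjects by the trajectory model's posterior probability, can be illustrated with a toy sketch (hypothetical counts for a 2-class, 3-genotype table): each subject is assigned to the trajectory class with the highest posterior probability, and class membership is tested for association with genotype via a chi-square statistic.

```python
def chi_square_stat(table):
    # Pearson chi-square statistic for a contingency table (list of rows)
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i in range(len(table)) for j in range(len(table[0])))

# invented counts of subjects per (trajectory class, genotype AA/Aa/aa)
table = [[50, 40, 10],
         [20, 40, 40]]
stat = chi_square_stat(table)   # df = (2-1)*(3-1) = 2
```

For these counts the statistic far exceeds the df = 2 critical value of 5.99, so class membership and genotype would be judged associated.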

  1. Multigrid Nonlocal Gaussian Mixture Model for Segmentation of Brain Tissues in Magnetic Resonance Images.

    Science.gov (United States)

    Chen, Yunjie; Zhan, Tianming; Zhang, Ji; Wang, Hongyuan

    2016-01-01

    We propose a novel segmentation method based on regional and nonlocal information to overcome the impact of image intensity inhomogeneities and noise in human brain magnetic resonance images. With the consideration of the spatial distribution of different tissues in brain images, our method does not need preestimation or precorrection procedures for intensity inhomogeneities and noise. A nonlocal information based Gaussian mixture model (NGMM) is proposed to reduce the effect of noise. To reduce the effect of intensity inhomogeneity, the multigrid nonlocal Gaussian mixture model (MNGMM) is proposed to segment brain MR images in each nonoverlapping multigrid generated by using a new multigrid generation method. Therefore, the proposed model can simultaneously overcome the impact of noise and intensity inhomogeneity and automatically classify 2D and 3D MR data into tissues of white matter, gray matter, and cerebral spinal fluid. To maintain the statistical reliability and spatial continuity of the segmentation, a fusion strategy is adopted to integrate the clustering results from different grids. The experiments on synthetic and clinical brain MR images demonstrate the superior performance of the proposed model compared with several state-of-the-art algorithms.
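
As a baseline for the approach above (a plain intensity-based GMM, without the paper's nonlocal or multigrid extensions), tissue classification can be sketched as EM on a three-class 1-D Gaussian mixture over voxel intensities; the class means and labels are all that is needed for a hard segmentation.

```python
import numpy as np

def fit_gmm_1d(x, k=3, iters=100):
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each class for each voxel
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, variances
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var, r.argmax(axis=1)

rng = np.random.default_rng(1)
# synthetic "CSF / gray matter / white matter" intensity clusters
x = np.concatenate([rng.normal(30, 5, 300), rng.normal(80, 6, 400), rng.normal(130, 5, 300)])
w, mu, var, labels = fit_gmm_1d(x)
```

On this synthetic data the recovered means land close to the three generating intensities.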

  2. Online estimation of B-spline mixture models from TOF-PET list-mode data

    Energy Technology Data Exchange (ETDEWEB)

    Schretter, Colas; Kobbelt, Leif [RWTH Aachen Univ. (Germany). Computer Graphics Group; Sun, Jianyong [Nottingham Univ. (United Kingdom). Intelligent Modelling and Analysis Research Group

    2011-07-01

    In emission tomography, images are usually represented by regular grids of voxels or overlapping smooth image elements (blobs). A few other image models have been proposed, such as tetrahedral meshes or point clouds adapted to an anatomical image. This work proposes a practical sparse and continuous image model inspired by the field of parametric density estimation for Gaussian mixture models. The position, size, aspect ratio and orientation of each image element are optimized, as well as its weight, with a very fast online estimation method. Furthermore, the number of mixture components, hence the image resolution, is locally adapted according to the available data. The system model is represented in the same basis as the image elements and captures time of flight and positron range effects in an exact way. Computations use apodized B-spline approximations of Gaussians and simple closed-form analytical expressions without any sampling or interpolation. Consequently, the reconstructed image never suffers from spurious aliasing artifacts. Noiseless images of the XCAT brain phantom were reconstructed from simulated data. (orig.)

  3. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, some generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of each pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results for simulated and real SAR images.
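
The dissimilarity measure described above can be sketched directly (assumed form: a two-parameter Gamma density per cluster; the cluster parameters below are invented): the pixel-to-cluster dissimilarity is the negative log Gamma likelihood, and a sub-region's dissimilarity is the sum over its pixels.

```python
import math

def gamma_neg_log_pdf(x, shape, scale):
    # -log p(x | shape, scale) for the Gamma distribution
    return -((shape - 1) * math.log(x) - x / scale
             - shape * math.log(scale) - math.lgamma(shape))

def region_dissimilarity(pixels, shape, scale):
    # regional measure: sum of pixel-level dissimilarities
    return sum(gamma_neg_log_pdf(p, shape, scale) for p in pixels)

# a bright homogeneous region fits a high-mean cluster far better
region = [52.0, 48.5, 50.2, 49.7]
d_bright = region_dissimilarity(region, shape=25.0, scale=2.0)   # cluster mean 50
d_dark = region_dissimilarity(region, shape=25.0, scale=0.4)     # cluster mean 10
```

The region would therefore be assigned (softly, via the fuzzy memberships) to the bright cluster.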

  4. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences in math IRT scores. When holding socio-economic…

  5. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang

    2014-02-01

    Full Text Available X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high-resolution continuous cross-section images and the meso-structure. According to the theory of three-dimensional (3D) reconstruction, the 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of the voxel positions and the relationship between the pixel element and node. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distributions of the asphalt mixture and verify the rationality of the 3D model.

  6. Decomposition driven interface evolution for layers of binary mixtures: I. Model derivation and stratified base states

    CERN Document Server

    Thiele, Uwe; Frastia, Lubor

    2007-01-01

    A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.

  7. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available Urban arterials in China exhibit mixed traffic flow due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
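
The speed-truncated Gaussian component used above can be sketched as follows (assumed truncation bounds: physically admissible speeds between `v_min` and `v_max`; the numbers are illustrative): the density is the normal density renormalized over the admissible interval.

```python
import math

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_normal_pdf(v, mu, sigma, v_min, v_max):
    # normal density renormalized to integrate to 1 on [v_min, v_max]
    if not (v_min <= v <= v_max):
        return 0.0
    z = (v - mu) / sigma
    denom = Phi((v_max - mu) / sigma) - Phi((v_min - mu) / sigma)
    return phi(z) / (sigma * denom)

# e.g. a platoon with mean speed 40 km/h, admissible range 20-60 km/h
density_at_45 = truncated_normal_pdf(45.0, 40.0, 8.0, 20.0, 60.0)
```

A mixture for cars and buses would use one such component per vehicle class, each with its own mean and variance.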

  8. Generation of a mixture model ground-motion prediction equation for Northern Chile

    Science.gov (United States)

    Haendel, A.; Kuehn, N. M.; Scherbaum, F.

    2012-12-01

    In probabilistic seismic hazard analysis (PSHA) empirically derived ground motion prediction equations (GMPEs) are usually applied to estimate the ground motion at a site of interest as a function of source, path and site related predictor variables. Because GMPEs are derived from limited datasets, they are not expected to give entirely accurate estimates or to reflect the whole range of possible future ground motion, thus giving rise to epistemic uncertainty in the hazard estimates. This is especially true for regions without an indigenous GMPE, where foreign models have to be applied. The choice of appropriate GMPEs can then dominate the overall uncertainty in hazard assessments. In order to quantify this uncertainty, the set of ground motion models used in a modern PSHA has to capture (in SSHAC language) the center, body, and range of the possible ground motion at the site of interest. This was traditionally done within a logic tree framework in which existing (or only slightly modified) GMPEs occupy the branches of the tree and the branch weights describe the degree-of-belief of the analyst in their applicability. This approach invites the problem of combining GMPEs of very different quality, and hence of potentially overestimating epistemic uncertainty. Some recent hazard analyses have therefore resorted to using a small number of high-quality GMPEs as backbone models, from which the full distribution of GMPEs for the logic tree (to capture the full range of possible ground motion uncertainty) was subsequently generated by scaling (in a general sense). In the present study, a new approach is proposed to determine an optimized backbone model as weighted components of a mixture model. In doing so, each GMPE is assumed to reflect the generation mechanism (e.g., in terms of stress drop, propagation properties, etc.) for at least a fraction of possible ground motions in the area of interest. 
The combination of different models into a mixture model (which is learned from
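
The backbone-mixture idea can be illustrated with a toy sketch (entirely synthetic: each candidate GMPE is reduced to a Gaussian predictive density for log ground motion, and EM over the fixed component densities learns only the mixture weights from observations).

```python
import numpy as np

def fit_mixture_weights(dens, iters=200):
    # dens[i, k] = density of observation i under candidate model k
    k = dens.shape[1]
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        r = w * dens
        r = r / r.sum(axis=1, keepdims=True)   # responsibilities
        w = r.mean(axis=0)                     # weight update
    return w

rng = np.random.default_rng(2)
obs = rng.normal(0.3, 0.5, size=500)           # synthetic log-PGA observations
means = np.array([0.0, 0.3, 1.0])              # three candidate model medians
dens = np.exp(-(obs[:, None] - means) ** 2 / (2 * 0.25)) / np.sqrt(2 * np.pi * 0.25)
w = fit_mixture_weights(dens)
```

Since the second candidate matches the data-generating distribution, EM concentrates the weight on it; this is the sense in which the mixture is "learned from" the data.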

  9. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. would have to be provided with much more labeled data.
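
A minimal 1-D sketch of the idea (one Gaussian per class here, rather than a full GMM per class as in the record, and synthetic data): labeled points keep fixed class responsibilities, unlabeled points get soft assignments, and both contribute to the parameter updates.

```python
import numpy as np

def semi_supervised_fit(xl, yl, xu, k=2, iters=50):
    # initialize each class from its few labeled samples
    mu = np.array([xl[yl == c].mean() for c in range(k)])
    var = np.array([xl[yl == c].var() + 1e-6 for c in range(k)])
    w = np.full(k, 1.0 / k)
    onehot = np.eye(k)[yl]                      # fixed labeled responsibilities
    for _ in range(iters):
        dens = w * np.exp(-(xu[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        ru = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        r = np.vstack([onehot, ru])             # labeled + unlabeled together
        x = np.concatenate([xl, xu])
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return mu, var, w

rng = np.random.default_rng(3)
xl = np.array([0.1, -0.2, 4.9, 5.2]); yl = np.array([0, 0, 1, 1])  # 4 labels
xu = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
mu, var, w = semi_supervised_fit(xl, yl, xu)
```

With only four labeled points, the 400 unlabeled points are enough to recover both class distributions.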

  10. A solid-fluid mixture model allowing for solid dilatation under external pressure

    CERN Document Server

    Sciarra, Giulio; Hutter, Kolumban

    2010-01-01

    A sponge subjected to an increase of the outside fluid pressure expands its volume but nearly maintains its true density and thus gives way to an increase of the interstitial volume. This behaviour, not yet properly described by solid-fluid mixture theories, is studied here by using the Principle of Virtual Power with the simplest dependence of the free energy as a function of the partial apparent densities of the solid and the fluid. The model is capable of accounting for the above mentioned dilatational behaviour, but in order to isolate its essential features more clearly we compromise on the other aspects of deformation.

  11. Heteroscedastic nonlinear regression models based on scale mixtures of skew-normal distributions.

    Science.gov (United States)

    Lachos, Victor H; Bandyopadhyay, Dipankar; Garay, Aldo M

    2011-08-01

    An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. We derive a simple EM-type algorithm for iteratively computing maximum likelihood (ML) estimates, and the observed information matrix is derived analytically. Simulation studies demonstrate the robustness of this flexible class against outlying and influential observations, as well as the nice asymptotic properties of the proposed EM-type ML estimates. Finally, the methodology is illustrated using ultrasonic calibration data.

  12. Linear Mixture Models and Partial Unmixing in Multi- and Hyperspectral Image Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    1998-01-01

    As a supplement or an alternative to classification of hyperspectral image data, the linear mixture model is considered in order to obtain estimates of abundance of each class or end-member in pixels with mixed membership. Full unmixing and the partial unmixing methods orthogonal subspace projection (OSP), constrained energy minimization (CEM) and an eigenvalue formulation alternative are dealt with. The solution to the eigenvalue formulation alternative proves to be identical to the CEM solution. The matrix inversion involved in CEM can be avoided by working on (a subset of) orthogonally…
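
Full unmixing as described above can be sketched in a few lines (simple unconstrained least squares; OSP and CEM are more elaborate, and the spectra below are invented): each pixel spectrum is modeled as a weighted sum of end-member spectra, and abundances are recovered by solving the linear system.

```python
import numpy as np

endmembers = np.array([[0.1, 0.5, 0.9, 0.4],    # rows: end-member spectra
                       [0.8, 0.3, 0.2, 0.6]])
true_abund = np.array([0.7, 0.3])
pixel = true_abund @ endmembers                  # noise-free mixed pixel

# abundance estimate: least-squares solution of E^T a = pixel
a, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
```

In the noise-free case the abundances are recovered exactly; with real data one would additionally enforce non-negativity and sum-to-one constraints.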

  13. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    Science.gov (United States)

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
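
A toy sketch of the ICC logic described above (all plant names, costs, and capacities are invented): producers are stacked in order of marginal cost until demand is met, and a cheaper new technology entering the curve displaces the highest-cost producer still needed.

```python
def supplying_plants(plants, demand):
    # plants: list of (name, marginal_cost, capacity); returns who produces
    supplying, served = [], 0.0
    for name, cost, cap in sorted(plants, key=lambda p: p[1]):
        if served >= demand:
            break                      # demand already covered
        supplying.append(name)
        served += cap
    return supplying

incumbents = [("A", 10, 40), ("B", 14, 40), ("C", 20, 40)]
before = supplying_plants(incumbents, demand=100)
after = supplying_plants(incumbents + [("new", 12, 40)], demand=100)
```

Here the new technology pushes the highest-cost plant C out of the supplying set; the environmental profile of the product would then be computed from the remaining producers.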

  14. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    Science.gov (United States)

    Stroev, N. E.; Iosilevskiy, I. L.

    2016-11-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and a comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the equation of state of electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions within the BIM(∼) model. A similar distillation was obtained in the variant of the NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  15. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    CERN Document Server

    Stroev, N E

    2016-01-01

    Non-congruent gas-liquid phase transition (NCPT) has been studied in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of non-ideal electronic Fermi-gas, and a comparison with the previous calculations, are the subject of the present study. Analytical fits for Coulomb corrections to the EoS of electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line (CPL) were calculated over the entire range of proportions of mixed ions within the BIM(∼) model. A similar distillation was obtained in the variant of the NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼) in contrast to explicit e...

  16. One-dimensional modelling of DBDs in Ne-Xe mixtures for excimer lamps

    Science.gov (United States)

    Belasri, A.; Khodja, K.; Bendella, S.; Harrache, Z.

    2010-11-01

    Dielectric barrier discharges (DBDs) are a promising technology for high-intensity sources of specific ultraviolet (UV) and vacuum ultraviolet (VUV) radiation. In this work, the microdischarge dynamics in DBDs for Ne-Xe mixtures under conditions close to excimer lamp operation has been investigated. The computer model, including the cathode fall, the positive column and the dielectric, is composed of two coupled sub-models. The first submodel describes the electrical properties of the discharge and is based on a fluid, two-moments description of electron and ion transport coupled with Poisson's equation during the discharge pulse. The second submodel, based on three main modules (a plasma chemistry module, a circuit module and a Boltzmann equation module) with source terms deduced from the electric model, describes the time variations of charged and excited species concentrations and the UV photon emission. The use of the present description allows a good resolution near the sheath at high pressure, and it correctly predicts the waveform of the discharge behaviour. The effects of operation voltage, dielectric capacitance, gas mixture composition, gas pressure, as well as secondary electron emission by ions at the cathode on the discharge characteristics and the 173 nm photon generation have been investigated and discussed.

  17. Modeling Intensive Longitudinal Data With Mixtures of Nonparametric Trajectories and Time-Varying Effects

    Science.gov (United States)

    Dziak, John J.; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P.

    2015-01-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, inter-individual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semi-parametric regression modeling, in order to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. PMID:26390169

  18. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo

    2016-02-03

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method used to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes’ rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a Particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) a standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
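
Strategies (i) and (ii) above can be contrasted with a simplified 1-D illustration (bandwidth, cluster count, and particle distribution are arbitrary choices of mine): a kernel-density GM puts one small-covariance component on every particle, while a clustered GM summarizes the particles with a few wider components.

```python
import numpy as np

def kde_gm(particles, bandwidth):
    # strategy (i): one component per particle, fixed bandwidth
    n = len(particles)
    return np.full(n, 1.0 / n), particles.copy(), np.full(n, bandwidth ** 2)

def kmeans_gm(particles, k, iters=25, seed=0):
    # strategy (ii): k components from a simple 1-D k-means clustering
    rng = np.random.default_rng(seed)
    centers = rng.choice(particles, size=k, replace=False)
    for _ in range(iters):
        lab = np.argmin(np.abs(particles[:, None] - centers), axis=1)
        centers = np.array([particles[lab == j].mean() for j in range(k)])
    w = np.array([(lab == j).mean() for j in range(k)])
    var = np.array([particles[lab == j].var() for j in range(k)])
    return w, centers, var

rng = np.random.default_rng(4)
particles = np.concatenate([rng.normal(-3, 0.3, 100), rng.normal(3, 0.3, 100)])
w_kde, mu_kde, var_kde = kde_gm(particles, bandwidth=0.5)
w_cl, mu_cl, var_cl = kmeans_gm(particles, k=2)
```

The clustered GM captures the bimodal forecast ensemble with two components instead of two hundred, which is what makes the subsequent Kalman-like update per component tractable.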

  19. Interactive production planning and ergonomic assessment with Digital Human Models--introducing the Editor for Manual Work Activities (ema).

    Science.gov (United States)

    Fritzsche, Lars; Leidholdt, Wolfgang; Bauer, Sebastian; Jäckel, Thomas; Moreno, Adrian

    2012-01-01

    The aging workforce is a risk factor for manufacturing industries that contain many jobs with high physical workloads. Thus, ergonomic risk factors have to be avoided in early phases of production planning. This paper introduces a new tool for simulating manual work activities with 3D human models, the so-called ema. For the most part, the ema software is based on a unique modular approach including a number of complex operations that were theoretically developed and empirically validated by means of motion capturing technologies. Using these modules for defining the digital work process enables the production planner to compile human simulations more accurately and much quicker compared to any of the existing modeling tools. Features of the ema software implementation, such as ergonomic evaluation and MTM-time analyses, and the workflow for practical application are presented.

  20. Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave-Gaussian mixture model

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang

    2016-05-01

    For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, one of the test conditions most similar to flight testing. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with the time-varying influence, can be tracked by the GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the GW-GMM migration degree and thereby reveal the crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.
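
A hedged sketch of a matching-based KL divergence between two GMMs (a Goldberger-style approximation, used here as a stand-in for the paper's best-match measure; 1-D components and invented parameters for brevity): each baseline component is matched to its best-fitting component in the current mixture.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # closed-form KL divergence between two 1-D Gaussians
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def kl_gmm_match(w1, mu1, var1, w2, mu2, var2):
    # matching-based approximation: each component picks its best match
    total = 0.0
    for wa, ma, va in zip(w1, mu1, var1):
        best = min(kl_gauss(ma, va, mb, vb) - math.log(wb)
                   for wb, mb, vb in zip(w2, mu2, var2))
        total += wa * (best + math.log(wa))
    return total

baseline = ([0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
drifted = ([0.5, 0.5], [0.6, 5.6], [1.0, 1.0])   # features have migrated
same = kl_gmm_match(*baseline, *baseline)
moved = kl_gmm_match(*baseline, *drifted)
```

The divergence is zero for an unchanged mixture and grows as the feature distribution migrates, which is the quantity tracked during crack-propagation monitoring.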

  1. On-line updating Gaussian mixture model for aircraft wing spar damage evaluation under time-varying boundary condition

    Science.gov (United States)

    Qiu, Lei; Yuan, Shenfang; Chang, Fu-Kuo; Bao, Qiao; Mei, Hanfei

    2014-12-01

    Structural health monitoring technology for aerospace structures has gradually turned from fundamental research to practical implementations. However, real aerospace structures work under time-varying conditions that introduce uncertainties to signal features that are extracted from sensor signals, giving rise to difficulty in reliably evaluating the damage. This paper proposes an online updating Gaussian Mixture Model (GMM)-based damage evaluation method to improve damage evaluation reliability under time-varying conditions. In this method, Lamb-wave-signal variation indexes and principal component analysis (PCA) are adopted to obtain the signal features. A baseline GMM is constructed on the signal features acquired under time-varying conditions when the structure is in a healthy state. By adopting the online updating mechanism based on a moving feature sample set and inner probability structural reconstruction, the probability structures of the GMM can be updated over time with new monitoring signal features to track the damage progress online continuously under time-varying conditions. This method can be implemented without any physical model of damage or structure. A real aircraft wing spar, which is an important load-bearing structure of an aircraft, is adopted to validate the proposed method. The validation results show that the method is effective for edge crack growth monitoring of the wing spar bolt holes under time-varying changes in the tightness degree of the bolts.
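
The moving-feature-sample mechanism above can be sketched simply (refitting in full rather than the paper's inner probability structure reconstruction, and with scalar mean/variance standing in for a full GMM refit): the oldest feature sample is dropped as each new one arrives, and the model statistics are recomputed on the current window.

```python
from collections import deque
import numpy as np

class MovingWindowModel:
    def __init__(self, size):
        self.window = deque(maxlen=size)   # fixed-size feature sample set

    def update(self, feature):
        # append drops the oldest sample automatically at capacity
        self.window.append(feature)
        x = np.array(self.window)
        return x.mean(), x.var()           # stand-in for a GMM refit

m = MovingWindowModel(size=100)
for t in range(300):
    # slow drift in the monitored feature, e.g. from changing bolt tightness
    mean, var = m.update(0.01 * t + np.random.default_rng(t).normal(0, 0.1))
```

Because old samples age out, the updated model tracks the drifting condition instead of averaging over the whole history.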

  2. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  3. Modeling competitive adsorption of mixtures of volatile organic compounds in a fixed-bed of beaded activated carbon.

    Science.gov (United States)

    Tefera, Dereje Tamiru; Hashisho, Zaher; Philips, John H; Anderson, James E; Nichols, Mark

    2014-05-06

    A two-dimensional mathematical model was developed to study competitive adsorption of n-component mixtures in a fixed-bed adsorber. The model consists of an isotherm equation to predict adsorption equilibria of n-component volatile organic compounds (VOCs) mixture from single component isotherm data, and a dynamic adsorption model, the macroscopic mass, energy and momentum conservation equations, to simulate the competitive adsorption of the n-components onto a fixed-bed of adsorbent. The model was validated with experimentally measured data of competitive adsorption of binary and eight-component VOCs mixtures onto beaded activated carbon (BAC). The mean relative absolute error (MRAE) was used to compare the modeled and measured breakthrough profiles as well as the amounts of adsorbates adsorbed. For the binary and eight-component mixtures, the MRAE of the breakthrough profiles was 13 and 12%, respectively, whereas, the MRAE of the adsorbed amounts was 1 and 2%, respectively. These data show that the model provides accurate prediction of competitive adsorption of multicomponent VOCs mixtures and the competitive adsorption isotherm equation is able to accurately predict equilibrium adsorption of VOCs mixtures.
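
    A plausible form of the mean relative absolute error (MRAE) used to compare modeled and measured profiles can be sketched as below; the paper's exact normalization may differ, and the sample values are invented.

```python
# Mean relative absolute error between measured and modeled profiles,
# expressed as a percentage. Definition and data are illustrative.
import numpy as np

def mrae(measured, modeled):
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return float(np.mean(np.abs(modeled - measured) / np.abs(measured))) * 100.0

# Toy breakthrough points (outlet/inlet concentration ratio over time).
measured = [0.2, 0.5, 1.0]
modeled = [0.25, 0.45, 1.0]
err = mrae(measured, modeled)
print(err)
```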

  4. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Science.gov (United States)

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
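
    A two-component Weibull mixture of this kind can be fit by direct maximum likelihood; a minimal sketch with scipy follows. The synthetic two-cohort DBH data, starting values, and optimizer choice are assumptions, not the authors' procedure.

```python
# Two-component Weibull mixture for a two-cohort DBH distribution,
# fit by minimizing the negative log-likelihood. Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Synthetic stand: a young cohort and an older, larger-DBH cohort.
dbh = np.concatenate([
    weibull_min.rvs(2.0, scale=10.0, size=300, random_state=1),
    weibull_min.rvs(3.0, scale=30.0, size=200, random_state=2),
])

def nll(params):
    w, c1, s1, c2, s2 = params
    pdf = (w * weibull_min.pdf(dbh, c1, scale=s1)
           + (1.0 - w) * weibull_min.pdf(dbh, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))  # guard against log(0)

res = minimize(nll, x0=[0.5, 2.0, 8.0, 3.0, 25.0],
               bounds=[(0.01, 0.99), (0.5, 10), (1, 50), (0.5, 10), (1, 80)],
               method="L-BFGS-B")
w_hat = res.x[0]          # estimated proportion of the first cohort
print(w_hat)
```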

  5. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed
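
    The N-mixture likelihood underlying such models can be sketched for a single site: repeat counts are Binomial(N, p) given a latent abundance N ~ Poisson(lambda), and N is marginalized out up to a truncation bound. This is the textbook single-method form, not the paper's multi-method model; the counts and parameter values are invented.

```python
# Single-site N-mixture log-likelihood, marginalizing the latent abundance N.
import numpy as np
from scipy.stats import binom, poisson

def n_mixture_loglik(counts, lam, p, n_max=150):
    """log P(counts | lam, p) = log sum_N Poisson(N|lam) * prod_t Binom(y_t|N,p)."""
    ns = np.arange(max(counts), n_max + 1)
    prior = poisson.pmf(ns, lam)                            # P(N = n)
    detect = np.prod([binom.pmf(y, ns, p) for y in counts], axis=0)
    return float(np.log(np.sum(prior * detect)))

counts = [12, 15, 11]                                       # repeat counts at one site
better = n_mixture_loglik(counts, lam=30.0, p=0.45)         # expected count ~13.5
worse = n_mixture_loglik(counts, lam=5.0, p=0.45)           # abundance too low
print(better > worse)
```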

  6. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  7. A two-fluid model for reactive dilute solid-liquid mixtures with phase changes

    Science.gov (United States)

    Reis, Martina Costa; Wang, Yongqi

    2016-12-01

    Based on the Eulerian spatial averaging theory and the Müller-Liu entropy principle, a two-fluid model for reactive dilute solid-liquid mixtures is presented. Initially, some averaging theorems and properties of average quantities are discussed and, then, averaged balance equations including interfacial source terms are postulated. Moreover, constitutive equations are proposed for a reactive dilute solid-liquid mixture, where the formation of the solid phase is due to a precipitation chemical reaction that involves ions dissolved in the liquid phase. To this end, principles of constitutive theory are used to propose linearized constitutive equations that account for diffusion, heat conduction, viscous and drag effects, and interfacial deformations. A particularity of the model is that the mass interfacial source term is regarded as an independent constitutive variable. The obtained results show that including the mass interfacial source term in the set of independent constitutive variables makes it straightforward to describe the phase changes associated with precipitation chemical reactions.

  8. DPNuc: Identifying Nucleosome Positions Based on the Dirichlet Process Mixture Model.

    Science.gov (United States)

    Chen, Huidong; Guan, Jihong; Zhou, Shuigeng

    2015-01-01

    Nucleosomes and the free linker DNA between them make up chromatin. Nucleosome positioning plays an important role in gene transcription regulation, DNA replication and repair, alternative splicing, and so on. With the rapid development of ChIP-seq, it is possible to computationally detect the positions of nucleosomes on chromosomes. However, existing methods cannot provide accurate and detailed information about the detected nucleosomes, especially for nucleosomes with complex configurations where overlaps and noise exist. Moreover, they usually require some prior knowledge of nucleosomes as input, such as the size or the number of the unknown nucleosomes, which may significantly influence the detection results. In this paper, we propose DPNuc, a novel approach for identifying nucleosome positions based on the Dirichlet process mixture model. In our method, Markov chain Monte Carlo (MCMC) simulations are employed to determine the mixture model with no need for prior knowledge about nucleosomes. Compared with three existing methods, our approach provides more detailed information about the detected nucleosomes and more reasonably reveals the real configurations of the chromosomes; in particular, it performs better in complex overlapping situations. By mapping the detected nucleosomes to a synthetic benchmark nucleosome map and two existing benchmark nucleosome maps, we show that our approach achieves better performance in identifying nucleosome positions and attains a higher F-score. Finally, we show that our approach can more reliably detect the size distribution of nucleosomes.
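
    The appeal of a Dirichlet process mixture, that the number of components (here, nucleosomes) need not be fixed in advance, can be sketched with scikit-learn's truncated DP approximation. Note this uses variational inference rather than the MCMC of DPNuc, and the simulated read positions are invented.

```python
# Dirichlet-process mixture sketch: the number of occupied components is
# inferred from the data, not supplied as input. Illustrative stand-in only.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Simulated read-center positions around three nucleosome dyads.
reads = np.concatenate([rng.normal(200, 15, 400),
                        rng.normal(480, 15, 300),
                        rng.normal(760, 15, 300)]).reshape(-1, 1)

dp = BayesianGaussianMixture(
    n_components=10,                                     # truncation level
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0).fit(reads)

# Components that retain non-negligible weight ~ detected nucleosomes.
n_detected = int(np.sum(dp.weights_ > 0.05))
print(n_detected)
```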

  9. Modelling of a biofiltration process of volatile organic compound mixtures in a biofilter

    Directory of Open Access Journals (Sweden)

    Rasa Vaiškūnaitė

    2016-11-01

    Full Text Available Currently, different methods for air clean-up from chemical pollutants are applied worldwide: adsorption, absorption, and thermal and catalytic oxidation. One of the most promising methods is biological air cleaning. The aim of this study was to test the performance of a developed biofilter with packing material of activated pine bark for biological air cleaning and to mathematically model the biofiltration processes. Comparative analysis of the modelling results for individual pollutants (butyl acetate, butanol and xylene) showed that the efficiency of xylene removal from the air depended most strongly on the amount and ratio of other substances (from 20% to 70%). The removal of pollutants that are easier to decompose biologically (butanol and butyl acetate) was found to be influenced to a lesser extent by the amount and ratio (%) of other components. The results also showed that the efficiency of butyl acetate removal depended mostly on the ratio of other substances in the mixture (from 15% to 100%), whereas the efficiency of butyl acetate and xylene removal depended mostly on the amount of other substances in the mixture (from 20% to 100%). With the parameters of the biofilter (height of packing material, incoming air flow velocity) and the pollutants to be removed known, a mathematical expression for the filter efficiency was found, which allows theoretical calculation and selection of the most appropriate parameters of the device so as to achieve maximum air-cleaning efficiency.

  10. Modeling and analysis of time-dependent processes in a chemically reactive mixture

    Science.gov (United States)

    Ramos, M. P.; Ribeiro, C.; Soares, A. J.

    2017-08-01

    In this paper, we study the propagation of sound waves and the dynamics of local wave disturbances induced by spontaneous internal fluctuations in a reactive mixture. We consider a non-diffusive, non-heat conducting and non-viscous mixture described by an Eulerian set of evolution equations. The model is derived from the kinetic theory in a hydrodynamic regime of a fast chemical reaction. The reactive source terms are explicitly computed from the kinetic theory and are built in the model in a proper way. For both time-dependent problems, we first derive the appropriate dispersion relation, which retains the main effects of the chemical process, and then investigate the influence of the chemical reaction on the properties of interest in the problems studied here. We complete our study by developing a rather detailed analysis using the Hydrogen-Chlorine system as reference. Several numerical computations are included illustrating the behavior of the phase velocity and attenuation coefficient in a low-frequency regime and describing the spectrum of the eigenmodes in the small wavenumber limit.

  11. A two-fluid model for reactive dilute solid-liquid mixtures with phase changes

    Science.gov (United States)

    Reis, Martina Costa; Wang, Yongqi

    2017-03-01

    Based on the Eulerian spatial averaging theory and the Müller-Liu entropy principle, a two-fluid model for reactive dilute solid-liquid mixtures is presented. Initially, some averaging theorems and properties of average quantities are discussed and, then, averaged balance equations including interfacial source terms are postulated. Moreover, constitutive equations are proposed for a reactive dilute solid-liquid mixture, where the formation of the solid phase is due to a precipitation chemical reaction that involves ions dissolved in the liquid phase. To this end, principles of constitutive theory are used to propose linearized constitutive equations that account for diffusion, heat conduction, viscous and drag effects, and interfacial deformations. A particularity of the model is that the mass interfacial source term is regarded as an independent constitutive variable. The obtained results show that including the mass interfacial source term in the set of independent constitutive variables makes it straightforward to describe the phase changes associated with precipitation chemical reactions.

  12. Modelling the dusty universe I: Introducing the artificial neural network and first applications to luminosity and colour distributions

    CERN Document Server

    Almeida, C; Lacey, C G; Frenk, C S; Granato, G L; Silva, L; Bressan, A

    2009-01-01

    We introduce a new technique based on artificial neural networks which allows us to make accurate predictions for the spectral energy distributions (SEDs) of large samples of galaxies, at wavelengths ranging from the far-ultraviolet to the sub-millimetre and radio. The neural net is trained to reproduce the SEDs predicted by a hybrid code composed of the GALFORM semi-analytical model of galaxy formation, which predicts the full star formation and galaxy merger histories, and the GRASIL spectro-photometric code, which carries out a self-consistent calculation of the SED, including absorption and emission of radiation by dust. Using a small number of galaxy properties predicted by GALFORM, the method reproduces the luminosities of galaxies in the majority of cases to within 10% of those computed directly using GRASIL. The method performs best in the sub-mm and reasonably well in the mid-infrared and the far-ultraviolet. The luminosity error introduced by the method has negligible impact on predicted statisti...
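
    The emulation idea can be shown in miniature: train a small neural network to reproduce the outputs of an expensive model from a few input properties, then evaluate the cheap emulator instead. The "expensive" function below is a toy stand-in, not GRASIL, and the network size is arbitrary.

```python
# Neural-network emulation sketch: fit a small MLP to a costly function's
# outputs and check its held-out relative error. Entirely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))        # "galaxy properties" (toy)
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]       # stand-in for a costly SED code

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X[:1500], y[:1500])

# Mean absolute error on held-out inputs, relative to the signal scale.
rel_err = (np.abs(net.predict(X[1500:]) - y[1500:]).mean()
           / np.abs(y[1500:]).mean())
print(rel_err)
```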

  13. Europa's surface composition from near-infrared observations: A comparison of results from linear mixture modeling and radiative transfer modeling

    Science.gov (United States)

    Shirley, James H.; Jamieson, Corey S.; Dalton, J. Bradley

    2016-08-01

    Quantitative estimates of the abundance of surface materials and of water ice particle grain sizes at five widely separated locations on the surface of Europa have been obtained by two independent methods in order to search for possible discrepancies that may be attributed to differences in the methods employed. Results of radiative transfer (RT) compositional modeling (also known as intimate mixture modeling) from two prior studies are here employed without modification. Areal (or "checkerboard") mixture modeling, also known as linear mixture (LM) modeling, was performed to allow direct comparisons. The failure to model scattering processes (whose effects may be strongly nonlinear) in the LM approach is recognized as a potential source of errors. RT modeling accounts for nonlinear spectral responses due to scattering but is subject to other uncertainties. By comparing abundance estimates for H2SO4 · nH2O and water ice, obtained through both methods as applied to identical spectra, we may gain some insight into the importance of "volume scattering" effects for investigations of Europa's surface composition. We find that both methods return similar abundances for each location analyzed; linear correlation coefficients of ≥ 0.98 are found between the derived H2SO4 · nH2O and water ice abundances returned by both methods. We thus find no evidence of a significant influence of volume scattering on the compositional solutions obtained by LM modeling for these locations. Some differences in the results obtained for water ice grain sizes are attributed to the limited selection of candidate materials allowed in the RT investigations.
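
    The essence of areal (linear) mixture modeling is that an observed spectrum is a non-negative combination of endmember spectra, with coefficients interpreted as fractional abundances. A least-squares sketch follows; the Gaussian "endmembers" are invented, not the study's spectral library.

```python
# Linear spectral unmixing via non-negative least squares. Toy endmembers.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(1.0, 2.5, 50)
ice = np.exp(-((wavelengths - 1.5) ** 2) / 0.05)      # toy water-ice spectrum
hydrate = np.exp(-((wavelengths - 2.0) ** 2) / 0.1)   # toy hydrate spectrum
E = np.column_stack([ice, hydrate])                   # endmember matrix

true_abund = np.array([0.7, 0.3])
observed = E @ true_abund                             # noiseless mixed spectrum

coeffs, _ = nnls(E, observed)
abund = coeffs / coeffs.sum()                         # fractional abundances
print(abund)
```

    On noiseless data with independent endmembers the true fractions are recovered exactly; real spectra add noise and the nonlinear scattering effects discussed above.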

  14. How to introduce medical ethics at the bedside - Factors influencing the implementation of an ethical decision-making model.

    Science.gov (United States)

    Meyer-Zehnder, Barbara; Albisser Schleger, Heidi; Tanner, Sabine; Schnurrer, Valentin; Vogt, Deborah R; Reiter-Theil, Stella; Pargger, Hans

    2017-02-23

    As the implementation of new approaches and procedures of medical ethics is as complex and resource-consuming as in other fields, strategies and activities must be carefully planned to use the available means and funds responsibly. Which facilitators and barriers influence the implementation of a medical ethics decision-making model in daily routine? Up to now, there has been little examination of these factors in this field. A medical ethics decision-making model called METAP was introduced on three intensive care units and two geriatric wards. An evaluation study was performed from 7 months to two and a half years after deployment of the project. Quantitative and qualitative methods including a questionnaire and semi-structured face-to-face and group interviews were used. Sixty-three participants from different professional groups took part in 33 face-to-face and 9 group interviews, and 122 questionnaires could be analysed. The facilitating factors most frequently mentioned were: acceptance and presence of the model, support given by the medical and nursing management, an existing or developing (explicit) ethics culture, perception of a need for a medical ethics decision-making model, and engaged staff members. Lack of presence and acceptance, insufficient time resources and staff, poor inter-professional collaboration, absence of ethical competence, and not recognizing ethical problems were identified as inhibiting the implementation of the METAP model. However, the results of the questionnaire as well as of explicit inquiry showed that respondents reported having had enough time and staff available to use METAP when necessary. Facilitators and barriers of the implementation of a medical ethics decision-making model are quite similar to those of medical guidelines. The planning for implementing an ethics model or guideline can, therefore, benefit from the extensive literature and experience concerning the implementation of medical guidelines. Lack of time and

  15. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  16. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    Science.gov (United States)

    Lensink, Marc F.; Petta, Andrea; Serra, Luigi; Scarano, Vittorio; Cavallo, Luigi; Oliva, Romina

    2016-01-01

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers’ performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked

  17. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
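
    The contact-based clustering step can be sketched as follows: models are compared by their fraction of shared inter-residue contacts, the similarity is turned into a distance matrix, and that matrix feeds an agglomerative hierarchical clustering. The contact sets, similarity definition, and cut threshold here are invented for illustration.

```python
# Agglomerative clustering of docking models by shared inter-residue contacts.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

models = [  # each docking model represented as a set of inter-residue contacts
    {(1, 9), (2, 8), (3, 7)},
    {(1, 9), (2, 8), (4, 6)},
    {(10, 20), (11, 19), (12, 18)},
    {(10, 20), (11, 19), (13, 17)},
]

n = len(models)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        shared = len(models[i] & models[j]) / min(len(models[i]), len(models[j]))
        dist[i, j] = dist[j, i] = 1.0 - shared          # distance = 1 - similarity

# Average-linkage hierarchical clustering, cut at a distance threshold.
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=0.5, criterion="distance")
print(labels)
```

    With the toy contacts above, the first two models (which share two of three contacts) fall in one cluster and the last two in another.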

  18. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and with above-average poverty status had a higher probability of transitioning into the higher ability group.

  19. Cosmological models described by a mixture of van der Waals fluid and dark energy

    CERN Document Server

    Kremer, G M

    2003-01-01

    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as the quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate: (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton; (b) an inflationary period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays; (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure; and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid.

  20. An efficient approach for shadow detection based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    韩延祥; 张志胜; 陈芳; 陈恺

    2014-01-01

    An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, the initial classification of moving shadow pixels and foreground object pixels was performed using color invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the chromaticity-difference or brightness-difference values can be modeled as a Gaussian mixture consisting of two density functions. Gaussian parameter estimation was performed using the EM algorithm, and the estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (maximum values of 85.79% and 97.56%, respectively) show that the proposed approach achieves a satisfying result with a post-processing step.
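
    The two-density assumption can be sketched directly: fit a two-component one-dimensional Gaussian mixture (estimated by EM, as in scikit-learn's GaussianMixture) to difference values and label pixels by the lower-mean component. The simulated difference values and the shadow/object means are illustrative, not from the paper.

```python
# Two-component GMM on chromaticity differences, classifying shadow pixels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated differences: shadow pixels near -0.3, object pixels near 0.2.
diffs = np.concatenate([rng.normal(-0.3, 0.05, 500),
                        rng.normal(0.2, 0.05, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(diffs)  # EM under the hood
shadow_comp = int(np.argmin(gmm.means_.ravel()))   # lower-mean component = shadow
is_shadow = gmm.predict(diffs) == shadow_comp

shadow_rate = float(is_shadow[:500].mean())        # fraction of true shadows caught
print(shadow_rate)
```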