WorldWideScience

Sample records for two-component mixture models

  1. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics and household products. It is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years while supplies are being depleted, a situation that encourages illegal logging and destroys natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, the paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
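
    For orientation, the following minimal sketch fits a two-component normal mixture to a univariate series by maximum likelihood using the EM algorithm (rather than the Newton-Raphson scheme used in the paper); the simulated data, function name and starting values are placeholders, not the palm oil/exchange rate series.

      import numpy as np

      def fit_two_component_normal(x, n_iter=500, tol=1e-8):
          """EM fit of a two-component univariate normal mixture (illustrative sketch)."""
          x = np.asarray(x, dtype=float)
          w = 0.5                                              # mixing weight of component 1
          mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
          sigma = np.array([x.std(), x.std()])
          loglik_old = -np.inf
          for _ in range(n_iter):
              # E-step: posterior probability that each observation belongs to component 1
              pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
              p1, p2 = w * pdf(mu[0], sigma[0]), (1 - w) * pdf(mu[1], sigma[1])
              resp = p1 / (p1 + p2)
              loglik = np.sum(np.log(p1 + p2))
              # M-step: update weight, means and standard deviations
              w = resp.mean()
              mu[0] = np.sum(resp * x) / np.sum(resp)
              mu[1] = np.sum((1 - resp) * x) / np.sum(1 - resp)
              sigma[0] = np.sqrt(np.sum(resp * (x - mu[0]) ** 2) / np.sum(resp))
              sigma[1] = np.sqrt(np.sum((1 - resp) * (x - mu[1]) ** 2) / np.sum(1 - resp))
              if loglik - loglik_old < tol:
                  break
              loglik_old = loglik
          return w, mu, sigma, loglik

      # toy usage with simulated data standing in for the real series
      rng = np.random.default_rng(0)
      sample = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
      print(fit_two_component_normal(sample))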

  2. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Science.gov (United States)

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...

  3. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Science.gov (United States)

    Podlaski, Rafał; Roesch, Francis A

    2014-03-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components, age cohorts and dominant species, and to assess the significance of differences between the mixture distributions and the kernel density estimates. The data consisted of plots from the Świętokrzyski National Park (Central Poland) and areas close to and including the North Carolina section of the Great Smoky Mountains National Park (USA; southern Appalachians). The fit of the mixture Weibull model to empirical DBH distributions had a precision similar to that of the mixture gamma model; a slightly less accurate estimate was obtained with the kernel density estimator. Generally, in the two-cohort, two-storied, multi-species stands in the southern Appalachians, the two-component DBH structure was associated with age cohort and dominant species. The 1st DBH component of the mixture model was associated with the 1st dominant species sp1, which occurred in the young age cohort (e.g., sweetgum, eastern hemlock); to a lesser degree, the 2nd DBH component was associated with the 2nd dominant species sp2, which occurred in the old age cohort (e.g., loblolly pine, red maple). In the two-cohort, partly multilayered stands in the Świętokrzyski National Park, the DBH structure was usually associated with age cohorts only (two dominant species often occurred in both the young and old age cohorts). When empirical DBH distributions representing stands of complex structure are approximated using mixture models, the convergence of the estimation process often depends significantly on the starting strategies. Depending on the number of DBHs measured, three methods for choosing the initial values are recommended: min.k/max.k, 0.5/1.5/mean
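
    As a rough illustration of the approach, the sketch below fits a two-component Weibull mixture to a diameter sample by direct likelihood maximisation with SciPy; the simulated diameters and starting values are hypothetical and only loosely echo the min/max-style starting strategies mentioned above.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      def neg_loglik(params, dbh):
          """Negative log-likelihood of a two-component Weibull mixture."""
          w, c1, s1, c2, s2 = params
          if not 0.0 < w < 1.0 or min(c1, s1, c2, s2) <= 0.0:
              return np.inf
          f = (w * weibull_min.pdf(dbh, c1, scale=s1)
               + (1.0 - w) * weibull_min.pdf(dbh, c2, scale=s2))
          return -np.sum(np.log(f + 1e-300))

      # hypothetical DBH sample in cm; real analyses would use the measured stand data
      rng = np.random.default_rng(1)
      dbh = np.concatenate([weibull_min.rvs(2.0, scale=15.0, size=150, random_state=rng),
                            weibull_min.rvs(3.5, scale=45.0, size=100, random_state=rng)])

      # simple starting values based on the sample range
      start = [0.5, 1.5, dbh.min() + 1.0, 3.0, 0.5 * dbh.max()]
      fit = minimize(neg_loglik, start, args=(dbh,), method="Nelder-Mead")
      print(fit.x)   # [weight, shape1, scale1, shape2, scale2]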

  4. Two-component Abelian sandpile models.

    Science.gov (United States)

    Alcaraz, F C; Pyatov, P; Rittenberg, V

    2009-04-01

    In one-component Abelian sandpile models, the toppling probabilities are independent quantities. This is not the case in multicomponent models. The condition of associativity of the underlying Abelian algebras imposes nonlinear relations among the toppling probabilities. These relations are derived for the case of two-component quadratic Abelian algebras. We show that Abelian sandpile models with two conservation laws have only trivial avalanches.

  5. Two-component model of solar plages

    Institute of Scientific and Technical Information of China (English)

    LI Jianping (李建平); DING Mingde (丁明德); FANG Cheng (方成)

    2002-01-01

    By use of the 2-m McMath-Pierce telescope at Kitt Peak, high-quality spectra of a plage with moderate brightness near the center of the solar disk were obtained. The data include seven spectral lines, namely Hα, Hβ, the CaII H and K lines and the infrared triplet. Taking into account the fine structures of solar plages, a two-component atmospheric model is constructed by keeping the cool component as the quiet atmosphere. Three cases of the hot component are given for different filling factors, where the temperature and density distributions are adjusted in order to reproduce the seven observed spectral profiles. We also briefly discuss the influence of the column density at the base of the corona, m0, and the macro-turbulent velocity on the required filling factor and the computed profiles. The two-component model is compared with previous one-component semi-empirical models. The limitation of the model is pointed out and further improvement is indicated.

  6. Transport of a two-component mixture in one-dimensional channels

    NARCIS (Netherlands)

    Borman, VD; Tronin, VN; Tronin; Troyan

    2004-01-01

    The transport of a two-component gas mixture in subnanometer channels is investigated theoretically for an arbitrary filling of channels. Special attention is paid to consistent inclusion of density effects, which are associated both with the interaction and with a finite size of particles. The anal

  8. Comparing numerical and analytical approaches to strongly interacting two-component mixtures in one dimensional traps

    DEFF Research Database (Denmark)

    Bellotti, Filipe Furlan; Salami Dehkharghani, Amin; Zinner, Nikolaj Thomas

    2017-01-01

    We investigate one-dimensional harmonically trapped two-component systems for repulsive interaction strengths ranging from the non-interacting to the strongly interacting regime for Fermi-Fermi mixtures. A new and powerful mapping between the interaction strength parameters from a continuous Hamiltonian and a discrete lattice Hamiltonian is derived. Energies, density profiles and correlation functions are obtained both numerically (density matrix renormalization group (DMRG) and exact diagonalization) and analytically. Since DMRG results do not converge as the interaction strength is increased, analytical solutions are used as a benchmark to identify the point where these calculations become unstable. We use the proposed mapping to set a quantitative limit on the interaction...

  9. A polaritonic two-component Bose-Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, M J; Brandao, F G S L; Plenio, M B [Institute for Mathematical Sciences, Imperial College London, 53 Exhibition Road, SW7 2PE (United Kingdom)], E-mail: m.hartmann@imperial.ac.uk

    2008-03-15

    We demonstrate that polaritons in an array of interacting micro-cavities with strong atom-photon coupling can form a two-component Bose-Hubbard model in which both polariton species are protected against spontaneous emission as their atomic part is stored in two ground states of the atoms. The parameters of the effective model can be tuned via the driving strength of external lasers and include attractive and repulsive polariton interactions. We also describe a method to measure the number statistics in one cavity for each polariton species independently.
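
    For reference, a generic two-component Bose-Hubbard Hamiltonian (written here in a standard textbook form; the paper derives its own effective parameters from the driven cavity array, so the symbols below are not its exact notation) reads

      $H = -\sum_{\langle i,j\rangle}\sum_{s=b,c} J_s\, a_{s,i}^\dagger a_{s,j} + \sum_{i}\sum_{s=b,c} \frac{U_{ss}}{2}\, n_{s,i}(n_{s,i}-1) + U_{bc}\sum_i n_{b,i}\, n_{c,i},$

    where $s = b, c$ labels the two polariton species, $J_s$ are hopping rates, and $U_{ss}$, $U_{bc}$ are the intra- and inter-species interaction strengths.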

  10. Comparing numerical and analytical approaches to strongly interacting two-component mixtures in one dimensional traps

    Science.gov (United States)

    Bellotti, Filipe F.; Dehkharghani, Amin S.; Zinner, Nikolaj T.

    2017-02-01

    We investigate one-dimensional harmonically trapped two-component systems for repulsive interaction strengths ranging from the non-interacting to the strongly interacting regime for Fermi-Fermi mixtures. A new and powerful mapping between the interaction strength parameters from a continuous Hamiltonian and a discrete lattice Hamiltonian is derived. As an example, we show that this mapping depends neither on the state of the system nor on the number of particles. Energies, density profiles and correlation functions are obtained both numerically (density matrix renormalization group (DMRG) and exact diagonalization) and analytically. Since DMRG results do not converge as the interaction strength is increased, analytical solutions are used as a benchmark to identify the point where these calculations become unstable. We use the proposed mapping to set a quantitative limit on the interaction parameter of a discrete lattice Hamiltonian above which DMRG gives unrealistic results.

  11. Phase equilibria in DOPC/DPPC: Conversion from gel to subgel in two component mixtures.

    Science.gov (United States)

    Schmidt, Miranda L; Ziani, Latifa; Boudreau, Michelle; Davis, James H

    2009-11-07

    Biological membranes contain a mixture of phospholipids with varying degrees of hydrocarbon chain unsaturation. Mixtures of long chain saturated and unsaturated lipids with cholesterol have attracted a lot of attention because of the formation of two coexisting fluid bilayer phases in such systems over a broad range of temperature and composition. Interpretation of the phase behavior of such ternary mixtures must be based on a thorough understanding of the phase behavior of the binary mixtures formed with the same components. This article describes the phase behavior of mixtures of 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) with 1,2-di-d(31)-palmitoyl-sn-glycero-3-phosphocholine (DPPC) between -20 and 50 degrees C. Particular attention has been paid to the phase coexistence below about 16 degrees C where the subgel phase appears. The changes in the shape of the spectrum (and its spectral moments) during the slow transformation process lead to the conclusion that below 16 degrees C the gel phase is metastable and the gel component of the two-phase mixture slowly transforms to the subgel phase with a slightly different composition. This results in a line of three-phase coexistence near 16 degrees C. Analysis of the transformation of the metastable gel domains into the subgel phase using the nucleation and growth model shows that the subgel domain growth is a two-dimensional process.

  12. A minimal model for two-component dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E. [Institut fuer theoretische Physik, Universitaet Muenster, Wilhelm-Klemm-Strasse 9,D-48149 Muenster (Germany)

    2015-07-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  13. A minimal model for two-component dark matter

    Science.gov (United States)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-09-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  14. A minimal model for two-component dark matter

    CERN Document Server

    Esch, Sonja; Yaguna, Carlos E

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a $Z_2$ symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatibl...

  15. Error Propagation in Equations for Geochemical Modeling of Radiogenic Isotopes in Two-Component Mixing

    Indian Academy of Sciences (India)

    Surendra P Verma

    2000-03-01

    This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors alone or of the total errors arising from geological field sampling combined with analytical errors. Two typical cases ("Small errors" and "Large errors") are illustrated for mixing of Sr isotopes. Similar examples can be formulated for the other radiogenic isotopic ratios. Actual isotopic data for sediment and basalt samples from the Cocos plate are also included to further illustrate the use of these equations. The isotopic compositions of the predicted mixtures can be used to constrain the origin of magmas in the central part of the Mexican Volcanic Belt. These examples show the need for high-quality experimental data in geochemical modeling of magmatic processes.
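
    For orientation, in the common approximation that the reference isotope abundance is proportional to the elemental concentration, the two-component mixing relation for an isotope ratio and its first-order error propagation can be written as

      $R_m = \dfrac{R_A C_A f + R_B C_B (1-f)}{C_A f + C_B (1-f)}, \qquad \sigma_{R_m}^2 \simeq \sum_{p\in\{R_A,R_B,C_A,C_B,f\}} \left(\dfrac{\partial R_m}{\partial p}\right)^2 \sigma_p^2,$

    where, for the Sr example above, $R$ is the 87Sr/86Sr ratio and $C$ the Sr concentration of each end-member, $f$ is the mass fraction of end-member A in the mixture, and $\sigma_p$ are the analytical (or total) uncertainties; the symbols are generic and need not match the paper's exact notation.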

  16. Continuous fractionation of a two-component mixture by zone electrophoresis.

    Science.gov (United States)

    Zalewski, Dawid R; Gardeniers, Han J G E

    2009-12-01

    Synchronized continuous-flow zone electrophoresis is a recently demonstrated tool for performing electrophoretic fractionation of a complex sample. The method resembles free-flow electrophoresis, but unlike that technique, no mechanical fluid pumping is required. Instead, fast electrokinetic flow switching is used to produce complex stream patterns, which results in lateral separation of components in a separation chamber. Here a solution is presented that allows for simultaneous collection of two fractions in synchronized continuous-flow zone electrophoresis. The method is demonstrated on a model mixture, with subsequent evaluation of the purity of the collected fractions by MCE. The necessary theoretical background is provided, including both steering schemes and calculations of optimum operating points.

  17. The graphic representations for the one-dimensional solutions of problem from elastic mechanic deformations of two-component mixture

    Directory of Open Access Journals (Sweden)

    Ghenadie Bulgac

    2006-12-01

    In this paper we find the analytical solution of a simple one-dimensional unsteady elastic problem for a two-component mixture using the Laplace integral transformation. The integral transformation simplifies the initial system of equations of motion, making it possible to find analytical solutions. The solutions are presented graphically as functions of time at a fixed point of the medium, and as functions of the horizontal coordinate at a fixed time.

  18. Transport of Solar Wind Fluctuations: A Two-Component Model

    Science.gov (United States)

    Oughton, S.; Matthaeus, W. H.; Smith, C. W.; Breech, B.; Isenberg, P. A.

    2011-01-01

    We present a new model for the transport of solar wind fluctuations which treats them as two interacting incompressible components: quasi-two-dimensional turbulence and a wave-like piece. Quantities solved for include the energy, cross helicity, and characteristic transverse length scale of each component, plus the proton temperature. The development of the model is outlined and numerical solutions are compared with spacecraft observations. Compared to previous single-component models, this new model incorporates a more physically realistic treatment of fluctuations induced by pickup ions and yields improved agreement with observed values of the correlation length, while maintaining good observational accord with the energy, cross helicity, and temperature.

  19. Travelling wave solutions for some two-component shallow water models

    Science.gov (United States)

    Dutykh, Denys; Ionescu-Kruse, Delia

    2016-07-01

    In the present study we perform a unified analysis of travelling wave solutions to three different two-component systems which appear in shallow water theory. Namely, we analyze the celebrated Green-Naghdi equations, the integrable two-component Camassa-Holm equations and a new two-component system of Green-Naghdi type. In particular, we are interested in solitary and cnoidal-type solutions, as the two most important classes of travelling waves that we encounter in applications. We provide a complete phase-plane analysis of all possible travelling wave solutions which may arise in these models. In particular, we show the existence of a new type of solution.
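
    For reference, the integrable two-component Camassa-Holm system mentioned above is commonly written (sign and scaling conventions vary between authors) as

      $m_t + u\,m_x + 2 u_x m + \sigma\,\rho\,\rho_x = 0, \qquad \rho_t + (\rho u)_x = 0, \qquad m = u - u_{xx}, \quad \sigma = \pm 1,$

    where, in the shallow-water interpretation, $u$ is the horizontal velocity and $\rho$ is related to the free-surface elevation.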

  20. Modeling Thermal Dust Emission with Two Components: Application to the Planck High Frequency Instrument Maps

    Science.gov (United States)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.1' resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1' FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales.
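
    For orientation, a generic two-component modified-blackbody parameterization of the thermal dust spectrum (shown here schematically; the Finkbeiner et al. model fixes the relative normalizations and emissivity indices from global fits, so this is not its exact form) is

      $I_\nu = \tau \left[ f_1\,(\nu/\nu_0)^{\beta_1} B_\nu(T_1) + (1-f_1)\,(\nu/\nu_0)^{\beta_2} B_\nu(T_2) \right],$

    where $B_\nu(T)$ is the Planck function, $\tau$ an overall optical-depth normalization, and the two terms represent the colder and warmer dust components along each line of sight.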

  1. Instabilities on crystal surfaces: The two-component body-centered solid-on-solid model

    NARCIS (Netherlands)

    Carlon, E.; van Beijeren, H.; Mazzeo, G.

    1996-01-01

    The free energy of crystal surfaces that can be described by the two-component body-centered solid-on-solid model has been calculated in a mean-field approximation. The system may model ionic crystals with a bcc lattice structure (for instance CsCl). Crossings between steps are energetically favored

  2. Domain Walls and Textured Vortices in a Two-Component Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Madsen, Søren Peder; Gaididei, Yu. B.; Christiansen, Peter Leth

    2005-01-01

    We look for domain wall and textured vortex solutions in a two-component Ginzburg-Landau model inspired by two-band superconductivity. The two-dimensional two-component model, with equal coherence lengths and no magnetic field, shows some interesting properties. In the absence of a Josephson-type coupling between the two order parameters a "textured vortex" is found by analytical and numerical solution of the Ginzburg-Landau equations. With a Josephson-type coupling between the two order parameters we find the system to split up in two domains separated by a domain wall, where the order parameter...

  4. Modeling Thermal Dust Emission with Two Components: Application to the Planck HFI Maps

    CERN Document Server

    Meisner, Aaron

    2014-01-01

    We apply the Finkbeiner et al. (1999) two-component thermal dust emission model to the Planck HFI maps. This parametrization of the far-infrared dust spectrum as the sum of two modified blackbodies serves as an important alternative to the commonly adopted single modified blackbody (MBB) dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. (1999) based on FIRAS and DIRBE. We also derive full-sky 6.1' resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 micron data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1' FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to...

  5. Two-component model of the interaction of an interstellar cloud with surrounding hot plasma

    OpenAIRE

    Provornikova, E. A.; Izmodenov, V. V.; Lallement, R.

    2011-01-01

    We present a two-component gasdynamic model of an interstellar cloud embedded in a hot plasma. It is assumed that the cloud consists of atomic hydrogen gas and that the interstellar plasma is quasineutral. Hydrogen atoms and plasma protons interact through a charge exchange process. Magnetic fields and radiative processes are ignored in the model. The influence of heat conduction within the plasma on the interaction between a cloud and plasma is studied. We consider the extreme case and assume that hot plasma...

  6. Modelling elliptical galaxies: phase-space constraints on two-component (gamma1, gamma2) models

    CERN Document Server

    Ciotti, L

    1999-01-01

    In the context of the study of the properties of the mutual mass distribution of the bright and dark matter in elliptical galaxies, I present a family of two-component, spherical, self-consistent galaxy models, where one density distribution follows a gamma_1 profile and the other a gamma_2 profile [(gamma_1, gamma_2) models], with different total masses and "core" radii. A variable amount of Osipkov-Merritt (radial) orbital anisotropy is allowed in both components. For these models, I derive analytically the necessary and sufficient conditions that the model parameters must satisfy in order to correspond to a physical system. Moreover, the possibility of adding a black hole at the center of radially anisotropic gamma models is discussed, determining analytically a lower limit of the anisotropy radius as a function of gamma. The analytical phase-space distribution function for (1,0) models is presented, together with the solution of the Jeans equations and the quantities entering the scalar virial theorem. It...

  7. A two-component Frenkel-Kontorowa model for surface alloy formation

    CERN Document Server

    Daruka, I

    2003-01-01

    It has been shown by recent experiments that bulk immiscible metals (e.g. Ag/Cu, Ag/Co and Au/Ni) can form binary alloys on certain surfaces where the substrate mediates the elastic misfits between the two components, thus relieving the elastic strain in the overlayer. These novel surface alloys exhibit a rich phase structure. We formulate a two-component Frenkel-Kontorova model in one dimension to study surface alloy formation. This model can naturally incorporate dislocation formation that plays a crucial role in determining the actual structure of the system. Using energy minimization calculations we provide a phase diagram in terms of average alloy composition and the energy of mixing. Monte Carlo simulations were also performed to study the structure and interaction of the emerging dislocations.

  8. Mapping the Two-Component Atomic Fermi Gas to the Nuclear Shell-Model

    DEFF Research Database (Denmark)

    Özen, C.; Zinner, Nikolaj Thomas

    2014-01-01

    ... of the external potential becomes important. A system of two-species fermionic cold atoms with an attractive zero-range interaction is analogous to a simple model of the nucleus in which neutrons and protons interact only through a residual pairing interaction. In this article, we discuss how the problem of a two-component atomic Fermi gas in a tight external trap can be mapped to the nuclear shell model so that readily available many-body techniques in nuclear physics, such as the Shell Model Monte Carlo (SMMC) method, can be directly applied to the study of these systems. We demonstrate an application of the SMMC method...

  9. Two-component model of strong Langmuir turbulence - Scalings, spectra, and statistics of Langmuir waves

    Science.gov (United States)

    Robinson, P. A.; Newman, D. L.

    1990-01-01

    A simple two-component model of strong turbulence that makes clear predictions for the scalings, spectra, and statistics of Langmuir waves is developed. Scalings of quantities such as energy density, power input, dissipation power, wave collapse, and number density of collapsing objects are investigated in detail and found to agree well with model predictions. The nucleation model of wave-packet formation is strongly supported by the results. Nucleation proceeds with energy flowing from background to localized states even in the absence of a driver. Modulational instabilities play little or no role in maintaining the turbulent state when significant density nonuniformities are present.

  10. Mapping the Two-Component Atomic Fermi Gas to the Nuclear Shell-Model

    DEFF Research Database (Denmark)

    Özen, C.; Zinner, Nikolaj Thomas

    2014-01-01

    The physics of a two-component cold Fermi gas is now frequently addressed in laboratories. Usually this is done for large samples of tens to hundreds of thousands of particles. However, it is now possible to produce few-body systems (1-100 particles) in very tight traps where the shell structure of the external potential becomes important. A system of two-species fermionic cold atoms with an attractive zero-range interaction is analogous to a simple model of the nucleus in which neutrons and protons interact only through a residual pairing interaction. In this article, we discuss how the problem of a two...

  11. Two-component model of the interaction of an interstellar cloud with surrounding hot plasma

    CERN Document Server

    Provornikova, E A; Lallement, R

    2011-01-01

    We present a two-component gasdynamic model of an interstellar cloud embedded in a hot plasma. It is assumed that the cloud consists of atomic hydrogen gas and that the interstellar plasma is quasineutral. Hydrogen atoms and plasma protons interact through a charge exchange process. Magnetic fields and radiative processes are ignored in the model. The influence of heat conduction within the plasma on the interaction between a cloud and plasma is studied. We consider the extreme case and assume that hot plasma electrons instantly heat the plasma in the interaction region and that the plasma flow can be described as isothermal. Using the two-component model of the interaction of a cold neutral cloud and hot plasma, we estimate the lifetime of interstellar clouds. We focus on the clouds typical of the cluster of local interstellar clouds embedded in the hot Local Bubble and give an estimate of the lifetime of the Local interstellar cloud through which the Sun currently travels. The charge transfer between highly charged plasma ions and neutr...

  12. Level shift two-component autoregressive conditional heteroscedasticity modelling for WTI crude oil market

    Science.gov (United States)

    Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow

    2017-04-01

    This study aims to investigate crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with the inclusion of an abrupt-jump feature. The model is able to capture abrupt jumps, news impacts, volatility clustering, long-persistence volatility and heavy-tailed errors, which are commonly observed in crude oil time series. For the empirical study, we have selected the WTI crude oil index from year 2000 to 2016. The results show that including multiple abrupt jumps in the ARCH model yields significant improvements in the estimation evaluations as compared with the standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
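
    For context, a widely used two-component volatility specification (the Engle-Lee type component GARCH, shown only as a point of reference; the level-shift ARCH variant studied in the paper differs in its treatment of jumps) decomposes the conditional variance into a long-run trend q_t and a short-run transitory part:

      $\sigma_t^2 - q_t = \alpha\,(\varepsilon_{t-1}^2 - q_{t-1}) + \beta\,(\sigma_{t-1}^2 - q_{t-1}), \qquad q_t = \omega + \rho\, q_{t-1} + \phi\,(\varepsilon_{t-1}^2 - \sigma_{t-1}^2),$

    where $\sigma_t^2$ is the conditional variance, $\varepsilon_t$ the return innovation, and $\alpha, \beta, \omega, \rho, \phi$ are parameters to be estimated.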

  13. Modeling and Simulation of Two-Phase Two-Component Flow with Disappearing Nonwetting Phase

    CERN Document Server

    Neumann, Rebecca; Ippisch, Olaf

    2012-01-01

    Carbon Capture and Storage (CCS) is a recently discussed new technology, aimed at allowing the ongoing use of fossil fuels while preventing the produced CO2 from being released to the atmosphere. CCS can be modeled with two components (water and CO2) in two phases (liquid and CO2). To simulate the process, a multiphase flow equation with equilibrium phase exchange is used. One of the big problems arising in two-phase two-component flow simulations is the disappearance of the nonwetting phase, which leads to a degeneration of the equations satisfied by the saturation. A standard choice of primary variables, which is the pressure of one phase and the saturation of the other phase, cannot be applied here. We developed a new approach using the pressure of the nonwetting phase and the capillary pressure as primary variables. One important advantage of this approach is the fact that we have only one set of primary variables that can be used for the biphasic as well as the monophasic case. We implemented this new choice o...

  14. Miscibility of Two Components in a Binary Mixture of 9-Phenyl Anthracene Mixed with Stearic Acid or Polymethyl Methacrylate at Air-Water Interface

    Institute of Scientific and Technical Information of China (English)

    P. K. Paul; Md. N. Islam; D. Bhattacharjee; S. A. Hussain

    2007-01-01

    We report the miscibility characteristics of the two components in a binary mixture of 9-phenyl anthracene (PA) mixed with stearic acid (SA) or polymethyl methacrylate (PMMA). The surface pressure versus area per molecule isotherms reveal that the area per molecule decreases systematically with increasing mole fraction of PA. The characteristics of area per molecule versus mole fraction and collapse pressure versus mole fraction indicate that various interactions are involved among the sample and matrix molecules. The interaction scheme is found to change with the change in surface pressure and the mole fraction of mixing. A scanning electron microscopic study confirms the aggregation of PA molecules in the mixed films.

  15. Freshwater DOM quantity and quality from a two-component model of UV absorbance

    Science.gov (United States)

    Carter, Heather T.; Tipping, Edward; Koprivnjak, Jean-Francois; Miller, Matthew P.; Cookson, Brenda; Hamilton-Taylor, John

    2012-01-01

    We present a model that considers UV-absorbing dissolved organic matter (DOM) to consist of two components (A and B), each with a distinct and constant spectrum. Component A absorbs UV light strongly, and is therefore presumed to possess aromatic chromophores and hydrophobic character, whereas B absorbs weakly and can be assumed hydrophilic. We parameterised the model with dissolved organic carbon concentrations [DOC] and corresponding UV spectra for c. 1700 filtered surface water samples from North America and the United Kingdom, by optimising extinction coefficients for A and B, together with a small constant concentration of non-absorbing DOM (0.80 mg DOC L⁻¹). Good unbiased predictions of [DOC] from absorbance data at 270 and 350 nm were obtained (r² = 0.98), the sum of squared residuals in [DOC] being reduced by 66% compared to a regression model fitted to absorbance at 270 nm alone. The parameterised model can use measured optical absorbance values at any pair of suitable wavelengths to calculate both [DOC] and the relative amounts of A and B in a water sample, i.e. measures of quantity and quality. Blind prediction of [DOC] was satisfactory for 9 of 11 independent data sets (181 of 213 individual samples).
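
    A minimal sketch of how such a two-component absorbance model can be inverted for a single sample is shown below; the extinction coefficients are hypothetical placeholders (the fitted values are reported in the paper), and only the 0.80 mg DOC/L non-absorbing concentration is taken from the abstract.

      import numpy as np

      # hypothetical extinction coefficients (L per mg C per cm) of components A and B
      # at 270 nm (first row) and 350 nm (second row)
      EPS = np.array([[0.050, 0.006],
                      [0.020, 0.001]])
      NON_ABSORBING_DOC = 0.80   # mg C / L, constant non-absorbing DOM (from the abstract)

      def doc_from_absorbance(a270, a350, path_cm=1.0):
          """Solve the 2x2 linear system for [A] and [B], then return total [DOC]."""
          absorbance = np.array([a270, a350]) / path_cm
          conc_a, conc_b = np.linalg.solve(EPS, absorbance)
          return conc_a, conc_b, conc_a + conc_b + NON_ABSORBING_DOC

      print(doc_from_absorbance(0.25, 0.08))   # ([A], [B], [DOC]) in mg C / L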

  16. The two-component model of memory development, and its potential implications for educational settings.

    Science.gov (United States)

    Sander, Myriam C; Werkle-Bergner, Markus; Gerjets, Peter; Shing, Yee Lee; Lindenberger, Ulman

    2012-02-15

    We recently introduced a two-component model of the mechanisms underlying age differences in memory functioning across the lifespan. According to this model, memory performance is based on associative and strategic components. The associative component is relatively mature by middle childhood, whereas the strategic component shows a maturational lag and continues to develop until young adulthood. Focusing on work from our own lab, we review studies from the domains of episodic and working memory informed by this model, and discuss their potential implications for educational settings. The episodic memory studies uncover the latent potential of the associative component in childhood by documenting children's ability to greatly improve their memory performance following mnemonic instruction and training. The studies on working memory also point to an immature strategic component in children whose operation is enhanced under supportive conditions. Educational settings may aim at fostering the interplay between associative and strategic components. We explore possible routes towards this goal by linking our findings to recent trends in research on instructional design. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Numerical modeling of sintering of two-component metal powders with laser beams

    Science.gov (United States)

    Niziev, V. G.; Koldoba, A. V.; Mirzade, F. Kh.; Panchenko, V. Ya.; Poveschenko, Yu. A.; Popov, M. V.

    2011-02-01

    Direct laser sintering of a mixture of two metal powders with significantly different melting points is investigated by numerical simulation. The model is based on self-consistent non-linear continuity equations for the volume fractions of the components and on energy transfer equations for the powder mixture. It includes the movement of the solid particles due to shrinkage caused by the density change of the powder mixture, and the liquid flow driven by capillary and gravity forces. The liquid flow is determined by Darcy's law. The effect of surface settlement of the powder is obtained. The rate at which the melting zone widens depends both on the parameters of the laser radiation (the beam power) and on the physical parameters of the particle material, and it increases with increasing penetrability or increasing phase-transition heat. Increasing the laser power, other factors being equal, accelerates the propagation of the melting front.
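
    For reference, Darcy's law as invoked above relates the liquid flux to the pressure gradient and gravity,

      $\mathbf{q} = -\dfrac{k}{\mu}\left(\nabla p - \rho_l\,\mathbf{g}\right),$

    where $k$ is the permeability of the partially molten powder bed, $\mu$ the melt viscosity, $p$ the liquid pressure and $\rho_l$ the melt density; these symbols are generic and may differ from the paper's notation.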

  18. Bioorthogonal two-component drug delivery in HER2(+) breast cancer mouse models

    Science.gov (United States)

    Hapuarachchige, Sudath; Kato, Yoshinori; Artemov, Dmitri

    2016-04-01

    The HER2 receptor is overexpressed in approximately 20% of breast cancers and is associated with tumorigenesis, metastasis, and a poor prognosis. Trastuzumab is a first-line targeted drug used against HER2(+) breast cancers; however, at least 50% of HER2(+) tumors develop resistance to trastuzumab. To treat these patients, trastuzumab-based antibody-drug conjugates (ADCs) have been developed and are currently used in the clinic. Despite their high efficacy, the long circulation half-life and non-specific binding of cytotoxic ADCs can result in systemic toxicity. In addition, standard ADCs do not provide an image-guided mode of administration. Here, we have developed a two-component, two-step, pre-targeting drug delivery system integrated with image guidance to circumvent these issues. In this strategy, HER2 receptors are pre-labeled with a functionalized trastuzumab antibody followed by the delivery of drug-loaded nanocarriers. Both components are cross-linked by multiple bioorthogonal click reactions in situ on the surface of the target cell and internalized as nanoclusters. We have explored the efficacy of this delivery strategy in HER2(+) human breast cancer models. Our therapeutic study confirms the high therapeutic efficacy of the new delivery system, with no significant toxicity.

  19. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    Directory of Open Access Journals (Sweden)

    Robert Jarosch

    2008-12-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e. latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc. are explained and interpreted with the help of the model experiments.

  20. Pointer Sentinel Mixture Models

    OpenAIRE

    Merity, Stephen; Xiong, Caiming; Bradbury, James; Socher, Richard

    2016-01-01

    Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. O...

  1. Discrete kink dynamics in hydrogen-bonded chains: The two-component model

    DEFF Research Database (Denmark)

    Karpan, V.M.; Zolotaryuk, Yaroslav; Christiansen, Peter Leth

    2004-01-01

    We study discrete topological solitary waves (kinks and antikinks) in two nonlinear diatomic chain models that describe the collective dynamics of proton transfers in one-dimensional hydrogen-bonded networks. The essential ingredients of the models are (i) a realistic (anharmonic) ion-proton interaction ... principal differences, like a significant difference in the stability switchings behavior for the kinks and the antikinks. Water-filled carbon nanotubes are briefly discussed as possible realistic systems, where topological discrete (anti)kink states might exist.

  2. Two Component Dark Matters in S_4 x Z_2 Flavor Symmetric Extra U(1) Model

    CERN Document Server

    Daikoku, Yasuhiro; Toma, Takashi

    2011-01-01

    We study the cosmic-ray anomaly observed by PAMELA based on an E_6 inspired extra U(1) model with S_4 x Z_2 flavor symmetry. In our model, the lightest flavon has a very long lifetime of O(10^{18}) seconds, which is longer than the age of the universe, but not long enough to explain the PAMELA result, ~ O(10^{26}) sec. Such a situation can be avoided by considering that the flavon is not the dominant component of dark matter and that the dominant one is the lightest neutralino. With an appropriate parameter set, the dark matter density parameter and the over-abundance of positron flux in cosmic rays are realized at the same time. There is an interesting correlation between the spectrum of the positron flux and V_{MNS}. The absence of an anti-proton excess in cosmic rays suggests that sfermions are heavier than 4 TeV and that the masses of the light Higgs bosons are degenerate.

  3. Two-Component Jet Models of Gamma-Ray Burst Sources

    CERN Document Server

    Peng, Fang; Konigl, Arieh; Granot, Jonathan

    2004-01-01

    Recent observational and theoretical studies have raised the possibility that the collimated outflows in gamma-ray burst (GRB) sources have two distinct components: a narrow (opening half-angle $\theta_{\rm n}$), highly relativistic (initial Lorentz factor $\eta_{\rm n} \gtrsim 10^2$) outflow, from which the $\gamma$-ray emission originates, and a wider ($\theta_{\rm w} \lesssim 3 \theta_{\rm n}$), moderately relativistic ($\eta_{\rm w} \sim 10$) surrounding flow. Using a simple synchrotron emission model, we calculate the R-band afterglow lightcurves expected in this scenario and derive algebraic expressions for the flux ratios of the emission from the two jet components at the main transition times in the lightcurve. We apply this model to GRB sources, for explaining the structure of afterglows and source energetics, as well as to X-ray flash sources, which we interpret as GRB jets viewed at an angle $\theta_{\rm obs} > \theta_{\rm n}$. Finally, we argue that a neutron-rich hydromagnetic outflow may naturally g...

  4. A two component model for thermal emission from organic grains in Comet Halley

    Science.gov (United States)

    Chyba, Christopher; Sagan, Carl

    1988-01-01

    Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometer, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residue of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate) and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 alkanes will emit at 3.4 micrometer under suitable conditions. Therefore tentative identifications must rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum, but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low occupancy methane ice clathrate, it is argued that the lab synthesis of the organic residue well simulates the radiation processing experienced by Comet Halley.

  5. Three-body recombination of two-component cold atomic gases into deep dimers in an optical model

    DEFF Research Database (Denmark)

    Mikkelsen, Mathias; Jensen, A. S.; Fedorov, D. V.

    2015-01-01

    We consider three-body recombination into deep dimers in a mass-imbalanced two-component atomic gas. We use an optical model where a phenomenological imaginary potential is added to the lowest adiabatic hyper-spherical potential. The consequent imaginary part of the energy eigenvalue corresponds to the decay rate or recombination probability of the three-body system. The method is formulated in detail and the relevant qualitative features are discussed as functions of scattering lengths and masses. We use a zero-range model in analyses of recent recombination data. The dominating scattering length...

  6. Stochastic kinetic model of two component system signalling reveals all-or-none, graded and mixed mode stochastic switching responses.

    Science.gov (United States)

    Kierzek, Andrzej M; Zhou, Lu; Wanner, Barry L

    2010-03-01

    Two-component systems (TCSs) are prevalent signal transduction systems in bacteria that control innumerable adaptive responses to environmental cues and host-pathogen interactions. We constructed a detailed stochastic kinetic model of two-component signalling based on published data. Our model has been validated with flow cytometry data and used to examine reporter gene expression in response to extracellular signal strength. The model shows that, depending on the actual kinetic parameters, TCSs exhibit all-or-none, graded or mixed mode responses. In accordance with other studies, positively autoregulated TCSs exhibit all-or-none responses. Unexpectedly, our model revealed that TCSs lacking a positive feedback loop exhibit not only graded but also mixed mode responses, in which variation of the signal strength alters the level of gene expression in induced cells while the regulated gene continues to be expressed at the basal level in a substantial fraction of cells. The graded response of the TCS changes to a mixed mode response when the translation initiation rate of the histidine kinase is increased. Thus, a TCS is an evolvable design pattern capable of implementing deterministic regulation and stochastic switches associated with both graded and threshold responses. This has implications for understanding the emergence of population diversity in pathogenic bacteria and the design of genetic circuits in synthetic biology applications. The model is available in systems biology markup language (SBML) and systems biology graphical notation (SBGN) formats and can be used as a component of large-scale biochemical reaction network models.
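
    As a toy illustration of stochastic kinetic modelling of a TCS (far simpler than the published model, with hypothetical species counts and rate constants), a Gillespie-type simulation of histidine-kinase autophosphorylation, phosphotransfer and dephosphorylation might look like this:

      import numpy as np

      rng = np.random.default_rng(42)

      def simulate(t_end=500.0, hk=50, rr=200, k_auto=0.1, k_transfer=0.005, k_dephos=0.05):
          """Stochastic simulation algorithm for a minimal three-reaction TCS toy model."""
          hk_p, rr_p = 0, 0
          t, trajectory = 0.0, [(0.0, 0)]
          while t < t_end:
              rates = np.array([
                  k_auto * hk,              # HK -> HK~P (autophosphorylation)
                  k_transfer * hk_p * rr,   # HK~P + RR -> HK + RR~P (phosphotransfer)
                  k_dephos * rr_p,          # RR~P -> RR (dephosphorylation)
              ])
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)
              reaction = rng.choice(3, p=rates / total)
              if reaction == 0:
                  hk, hk_p = hk - 1, hk_p + 1
              elif reaction == 1:
                  hk_p, hk, rr, rr_p = hk_p - 1, hk + 1, rr - 1, rr_p + 1
              else:
                  rr_p, rr = rr_p - 1, rr + 1
              trajectory.append((t, rr_p))
          return trajectory

      print(simulate()[-1])   # (time, phosphorylated response-regulator count) at the end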

  7. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Butlitsky, M. A.; Zelener, B. V. [Joint Institute for High Temperature of Russian Academy of Science, 125412, Russia, Moscow, Izhorskaya str. 13/2 (Russian Federation); Zelener, B. B. [Joint Institute for High Temperature of Russian Academy of Science, 125412, Russia, Moscow, Izhorskaya str. 13/2 (Russian Federation); Moscow Engineering Physics Institute, 115409, Russia, Moscow, Kashirskoe sh. 31 (Russian Federation)

    2014-07-14

    A two-component plasma model, which we call a “shelf Coulomb” model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The “shelf Coulomb” model can be compared to the classical two-component (electron-proton) model where charges with zero size interact via the classical Coulomb law, with an important difference for the interaction of opposite charges: electrons and protons interact via the Coulomb law at large distances between particles, while the interaction potential is cut off at small distances. The cut-off distance is defined by an arbitrary ε parameter, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_B T, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem works in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first order gas-liquid type phase transition with the critical point being in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume v* = 1/γ³ and the reduced temperature T* = ε⁻¹.

  8. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model.

    Science.gov (United States)

    Butlitsky, M A; Zelener, B B; Zelener, B V

    2014-07-14

    A two-component plasma model, which we call a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model where charges with zero size interact via the classical Coulomb law, with an important difference for the interaction of opposite charges: electrons and protons interact via the Coulomb law at large distances between particles, while the interaction potential is cut off at small distances. The cut-off distance is defined by an arbitrary ε parameter, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_B T, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem works in this model. All the calculations were carried out over a wide range of the dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first order gas-liquid type phase transition with the critical point being in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume v* = 1/γ³ and the reduced temperature T* = ε⁻¹.
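
    Schematically, the pair potentials of such a "shelf Coulomb" model can be written (in Gaussian units, with r_0 denoting the temperature-dependent cut-off implied by the ε parameter above; the exact definition is given in the paper) as

      $u_{++}(r) = u_{--}(r) = \dfrac{e^2}{r}, \qquad u_{+-}(r) = \begin{cases} -e^2/r, & r \ge r_0 \\ -e^2/r_0, & r < r_0, \end{cases}$

    so that opposite charges follow the Coulomb law at large separations while the attraction is capped ("shelved") at short range.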

  9. Three-body recombination of two-component cold atomic gases into deep dimers in an optical model

    DEFF Research Database (Denmark)

    Mikkelsen, Mathias; Jensen, A. S.; Fedorov, D. V.

    2015-01-01

    We consider three-body recombination into deep dimers in a mass-imbalanced two-component atomic gas. We use an optical model where a phenomenological imaginary potential is added to the lowest adiabatic hyper-spherical potential. The consequent imaginary part of the energy eigenvalue corresponds to the decay rate or recombination probability of the three-body system. The method is formulated in detail and the relevant qualitative features are discussed as functions of scattering lengths and masses. We use a zero-range model in analyses of recent recombination data. The dominating scattering length... The Efimov scaling between recombination peaks is calculated and shown to depend on both scattering lengths. Recombination is predicted to be largest for heavy-heavy-light systems. Universal properties of the optical parameters are indicated. We compare to available experiments and find in general very...

  10. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative relationship between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.
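
    For reference, the two-component normal mixture density and the log-likelihood maximized in such an analysis are

      $f(y) = \pi\, \mathcal{N}(y;\mu_1,\sigma_1^2) + (1-\pi)\, \mathcal{N}(y;\mu_2,\sigma_2^2), \qquad \ell(\theta) = \sum_{i=1}^{n} \log f(y_i),$

    with $\theta = (\pi, \mu_1, \mu_2, \sigma_1, \sigma_2)$ estimated, for example, by the EM algorithm or a Newton-Raphson scheme.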

  11. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among

  12. Maladaptive correlates of the failure to forgive self and others: further evidence for a two-component model of forgiveness.

    Science.gov (United States)

    Ross, Scott R; Hertenstein, Matthew J; Wrobel, Thomas A

    2007-04-01

    In a sample composed of 162 young adults, we examined the generalizability of an orthogonal, 2-component model of forgiveness previously reported by Ross, Kendall, Matters, Rye, and Wrobel (2004). Furthermore, we examined the relationship of these two components with maladaptive personality characteristics as measured by the Schedule for Nonadaptive and Adaptive Personality (SNAP; Clark, 1993), with an emphasis on Five-factor model markers of personality. Using multiple measures of forgiveness, principal components analysis supported a 2-component model representing self-forgiveness and other forgiveness. Despite the independence of self-forgiveness and other forgiveness, zero order correlations with SNAP scales supported convergent more than discriminant validity. In contrast, hierarchical multiple regression analyses emphasized the discriminant validity of self-forgiveness and other forgiveness. Among indices of Neuroticism, Extraversion, and Agreeableness, Negative Temperament (+) was the sole predictor of self-forgiveness. In contrast, Positive Temperament (+), Aggression (-), and Histrionic PD (-) were most associated with other forgiveness. Overall, these findings support the validity of these factors and highlight the importance of self-forgiveness in clinical assessment.

  13. A two-component jet model based on the Blandford-Znajek and Blandford-Payne processes

    CERN Document Server

    Xie, Wei; Zou, Yuan-Chuan; Wang, Ding-Xiong; Wu, Qingwen; Wang, Jiu-Zhou

    2012-01-01

    We propose a two-component jet model consistent with the observations of several gamma ray bursts (GRBs) and active galactic nuclei (AGNs). The jet consists of inner and outer components, which are supposed to be driven by the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes, respectively. The baryons in the BP jet are accelerated centrifugally via the magnetic field anchored in the accretion disk. The BZ jet is assumed to be entrained in a fraction of accreting matter leaving the inner edge of the accretion disk, and the baryons are accelerated in the conversion from electromagnetic energy to kinetic energy. By fitting the Lorentz factors of some GRBs (GRB 030329, GRB 051221A, GRB 080413B) and AGNs (Cen A, Mkn 501 and Mkn 421) with this model, we constrain the physical parameters related to the accretion and outflow of these two kinds of objects. We conclude that the spine/sheath structure of the jet from these sources can be interpreted naturally by the BZ and BP processes.

  14. A two-component jet model based on the Blandford-Znajek and Blandford-Payne processes

    Science.gov (United States)

    Xie, Wei; Lei, Wei-Hua; Zou, Yuan-Chuan; Wang, Ding-Xiong; Wu, Qingwen; Wang, Jiu-Zhou

    2012-07-01

    We propose a two-component jet model consistent with the observations of several gamma ray bursts (GRBs) and active galactic nuclei (AGNs). The jet consists of inner and outer components, which are supposed to be driven by the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes, respectively. The baryons in the BP jet are accelerated centrifugally via the magnetic field anchored in the accretion disk. The BZ jet is assumed to be entrained in a fraction of accreting matter leaving the inner edge of the accretion disk, and the baryons are accelerated in the conversion from electromagnetic energy to kinetic energy. By fitting the Lorentz factors of some GRBs (GRB 030329, GRB 051221A and GRB 080413B) and AGNs (Cen A, Mkn 501 and Mkn 421) with this model, we constrain the physical parameters related to the accretion and outflow of these two kinds of objects. We conclude that the spine/sheath structure of the jet from these sources can be interpreted naturally by the BZ and BP processes.

  15. A two-component jet model based on the Blandford-Znajek and Blandford-Payne processes

    Institute of Scientific and Technical Information of China (English)

    Wei Xie; Wei-Hua Lei; Yuan-Chuan Zou; Ding-Xiong Wang; Qingwen Wu; Jiu-Zhou Wang

    2012-01-01

    We propose a two-component jet model consistent with the observations of several gamma ray bursts (GRBs) and active galactic nuclei (AGNs). The jet consists of inner and outer components, which are supposed to be driven by the Blandford-Znajek (BZ) and Blandford-Payne (BP) processes, respectively. The baryons in the BP jet are accelerated centrifugally via the magnetic field anchored in the accretion disk. The BZ jet is assumed to be entrained in a fraction of accreting matter leaving the inner edge of the accretion disk, and the baryons are accelerated in the conversion from electromagnetic energy to kinetic energy. By fitting the Lorentz factors of some GRBs (GRB 030329, GRB 051221A and GRB 080413B) and AGNs (Cen A, Mkn 501 and Mkn 421) with this model, we constrain the physical parameters related to the accretion and outflow of these two kinds of objects. We conclude that the spine/sheath structure of the jet from these sources can be interpreted naturally by the BZ and BP processes.

  16. High Energy Positrons and Gamma Radiation from Decaying Constituents of a two-component Dark Atom Model

    CERN Document Server

    Belotsky, K; Kouvaris, C; Laletin, M

    2015-01-01

    We study a two-component dark matter candidate inspired by the Minimal Walking Technicolor model. Dark matter consists of a dominant SIMP-like dark atom component made of bound states between primordial helium nuclei and a doubly charged technilepton, and a small WIMP-like component made of another dark atom bound state between a doubly charged technibaryon and a technilepton. This scenario is consistent with direct-search experimental findings because the dominant SIMP component interacts too strongly to reach the depths of current detectors with sufficient energy to recoil, and the WIMP-like component is too small to cause a significant number of events. In this context a metastable technibaryon that decays to $e^+e^+$, $\mu^+ \mu^+$ and $\tau^+ \tau^+$ can in principle explain the observed positron excess by AMS-02 and PAMELA, while being consistent with the photon flux observed by FERMI/LAT. We scan the parameters of the model and we find the best possible fit to the latest experimental data. We find that th...

  17. High-power gas-discharge excimer ArF, KrCl, KrF and XeCl lasers utilising two-component gas mixtures without a buffer gas

    Energy Technology Data Exchange (ETDEWEB)

    Razhev, A M; Kargapol' tsev, E S [Institute of Laser Physics, Siberian Branch, Russian Academy of Sciences, Novosibirsk (Russian Federation); Churkin, D S [Novosibirsk State University, Novosibirsk (Russian Federation)

    2016-03-31

    Results are presented of an experimental study of the influence of the gas mixture (laser active medium) composition on the output energy and total efficiency of gas-discharge excimer lasers based on ArF* (193 nm), KrCl* (222 nm), KrF* (248 nm) and XeCl* (308 nm) molecules operating without a buffer gas. The optimal ratios of the gas components of the active medium (from the viewpoint of maximum output energy) are found, which provide efficient operation of the laser sources. It is experimentally confirmed that, for gas-discharge excimer lasers based on inert-gas halides, the presence of a buffer gas in the active medium is not a necessary condition for efficient operation. For the first time, in two-component gas mixtures of repetitively pulsed gas-discharge excimer lasers on electronic transitions of the excimer molecules ArF*, KrCl*, KrF* and XeCl*, a laser pulse energy of up to 170 mJ was obtained under pumping by a transverse volume electric discharge in a low-pressure gas mixture without a buffer gas, and a high pulsed output power (of up to 24 MW) was obtained at a FWHM duration of the KrF-laser pulse of 7 ns. The maximum total efficiency obtained in the experiment with two-component gas mixtures of KrF and XeCl lasers was 0.8%.

  18. High-power gas-discharge excimer ArF, KrCl, KrF and XeCl lasers utilising two-component gas mixtures without a buffer gas

    Science.gov (United States)

    Razhev, A. M.; Kargapol'tsev, E. S.; Churkin, D. S.

    2016-03-01

    Results are presented of an experimental study of the influence of the gas mixture (laser active medium) composition on the output energy and total efficiency of gas-discharge excimer lasers based on ArF* (193 nm), KrCl* (222 nm), KrF* (248 nm) and XeCl* (308 nm) molecules operating without a buffer gas. The optimal ratios of the gas components of the active medium (from the viewpoint of maximum output energy) are found, which provide efficient operation of the laser sources. It is experimentally confirmed that, for gas-discharge excimer lasers based on inert-gas halides, the presence of a buffer gas in the active medium is not a necessary condition for efficient operation. For the first time, in two-component gas mixtures of repetitively pulsed gas-discharge excimer lasers on electronic transitions of the excimer molecules ArF*, KrCl*, KrF* and XeCl*, a laser pulse energy of up to 170 mJ was obtained under pumping by a transverse volume electric discharge in a low-pressure gas mixture without a buffer gas, and a high pulsed output power (of up to 24 MW) was obtained at a FWHM duration of the KrF-laser pulse of 7 ns. The maximum total efficiency obtained in the experiment with two-component gas mixtures of KrF and XeCl lasers was 0.8%.

  19. A Binomial Mixture Model for Classification Performance: A Commentary on Waxman, Chambers, Yntema, and Gelman (1989).

    Science.gov (United States)

    Thomas, Hoben

    1989-01-01

    Individual differences in children's performance on a classification task are modeled by a two-component binomial mixture distribution. The model accounts for the data well, with variance accounted for ranging from 87 to 95 percent.
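
    A minimal EM sketch for fitting such a two-component binomial mixture is given below. The number of items per child, the starting values and the simulated data are assumptions for illustration; this is not the commentary's own estimation procedure.

```python
import numpy as np
from scipy.stats import binom

def em_binomial_mixture(x, n_trials, n_iter=500, tol=1e-10):
    """EM for a two-component binomial mixture (illustrative sketch).

    x        -- array of correct-answer counts, one per child
    n_trials -- number of classification items per child (assumed equal here)
    """
    w, p = np.array([0.5, 0.5]), np.array([0.3, 0.8])   # assumed starting values
    ll_old = -np.inf
    for _ in range(n_iter):
        dens = np.vstack([w[k] * binom.pmf(x, n_trials, p[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0)                    # E-step: responsibilities
        nk = resp.sum(axis=1)
        w = nk / len(x)                                   # M-step: weights
        p = (resp * x).sum(axis=1) / (nk * n_trials)      # M-step: success probabilities
        ll = np.log(dens.sum(axis=0)).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, p

rng = np.random.default_rng(1)
x = np.concatenate([rng.binomial(10, 0.25, 60), rng.binomial(10, 0.85, 40)])
print(em_binomial_mixture(x, n_trials=10))
```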

  20. Diagnostics for the structure of AGNs' broad line regions with reverberation mapping data: confirmation of the two-component broad line region model

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    We re-examine the ten Reverberation Mapping (RM) sources with public data based on the two-component model of the Broad Line Region (BLR). In fitting their broad Hβ lines, six of them only need one Gaussian component, one of them has a double-peak profile, one has an irregular profile, and only two of them need two components, i.e., a Very Broad Gaussian Component (VBGC) and an Inter-Mediate Gaussian Component (IMGC). The Gaussian components are assumed to come from two distinct regions in the two-component model; they are the Very Broad Line Region (VBLR) and the Inter-Mediate Line Region (IMLR). The two sources with a two-component profile are Mrk 509 and NGC 4051. The time lags of the two components of both sources satisfy t_IMLR/t_VBLR = V²_VBLR/V²_IMLR, where t_IMLR and t_VBLR are the lags of the two components while V_IMLR and V_VBLR represent the mean gas velocities of the two regions, supporting the two-component model of the BLR of Active Galactic Nuclei (AGNs). The fact that most of these ten sources only have the VBGC confirms the assumption that RM mainly measures the radius of the VBLR; consequently, the radius obtained from the R-L relationship mainly represents the radius of the VBLR. Moreover, NGC 4051, with a lag of about 5 days in the one-component model, is an outlier on the R-L relationship as shown in Kaspi et al. (2005); however, this problem disappears in our two-component model with lags of about 2 and 6 days for the VBGC and IMGC, respectively.
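
    A quick arithmetic illustration of the lag-velocity relation quoted above, using the approximate NGC 4051 lags of 2 and 6 days; the implied velocity ratio is only an illustration, not a measured value.

```python
# t_IMLR / t_VBLR = (V_VBLR / V_IMLR)^2, so the velocity ratio follows from the lag ratio
t_vblr, t_imlr = 2.0, 6.0                      # days (approximate lags quoted above)
velocity_ratio = (t_imlr / t_vblr) ** 0.5
print(f"V_VBLR / V_IMLR = {velocity_ratio:.2f}")   # ~1.73, i.e. the VBLR gas moves ~sqrt(3) faster
```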

  1. Multilevel Mixture Factor Models

    Science.gov (United States)

    Varriale, Roberta; Vermunt, Jeroen K.

    2012-01-01

    Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…

  2. A two-component model of host–parasitoid interactions: determination of the size of inundative releases of parasitoids in biological pest control

    NARCIS (Netherlands)

    Grasman, J.; Herwaarden, van O.A.; Hemerik, L.; Lenteren, van J.C.

    2001-01-01

    A two-component differential equation model is formulated for a host–parasitoid interaction. Transient dynamics and population crashes of this system are analysed using differential inequalities. Two different cases can be distinguished: either the intrinsic growth rate of the host population is sma

  3. A two-component dark matter model with real singlet scalars confronting GeV γ-ray excess from galactic centre and Fermi bubble

    Indian Academy of Sciences (India)

    Debasish Majumdar; Kamakshya Prasad Modak; Subhendu Rakshit

    2016-02-01

    We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments. The γ-ray flux is computed from DM annihilation in this framework and is then compared with the Fermi-LAT observations from the galactic centre region and the Fermi bubble.

  4. Modeling the monthly mean soil-water balance with a statistical-dynamical ecohydrology model as coupled to a two-component canopy model

    Directory of Open Access Journals (Sweden)

    J. P. Kochendorfer

    2010-10-01

    Full Text Available The statistical-dynamical annual water balance model of Eagleson (1978) is a pioneering work in the analysis of climate, soil and vegetation interactions. This paper describes several enhancements and modifications to the model that improve its physical realism at the expense of its mathematical elegance and analytical tractability. In particular, the analytical solutions for the root zone fluxes are re-derived using separate potential rates of transpiration and bare-soil evaporation. Those potential rates, along with the rate of evaporation from canopy interception, are calculated using the two-component Shuttleworth-Wallace (1985) canopy model. In addition, the soil column is divided into two layers, with the upper layer representing the dynamic root zone. The resulting ability to account for changes in root-zone water storage allows for implementation at the monthly timescale. This new version of the Eagleson model is coined the Statistical-Dynamical Ecohydrology Model (SDEM). The ability of the SDEM to capture the seasonal dynamics of the local-scale soil-water balance is demonstrated for two grassland sites in the US Great Plains. Sensitivity of the results to variations in peak green leaf area index (LAI) suggests that the mean peak green LAI is determined by some minimum in root zone soil moisture during the growing season. That minimum appears to be close to the soil matric potential at which the dominant grass species begins to experience water stress and well above the wilting point, thereby suggesting an ecological optimality hypothesis in which the need to avoid water-stress-induced leaf abscission is balanced by the maximization of carbon assimilation (and associated transpiration). Finally, analysis of the sensitivity of model-determined peak green LAI to soil texture shows that the coupled model is able to reproduce the so-called "inverse texture effect", which consists of the observation that natural vegetation in dry climates tends

  5. Modeling the monthly mean soil-water balance with a statistical-dynamical ecohydrology model as coupled to a two-component canopy model

    Directory of Open Access Journals (Sweden)

    J. P. Kochendorfer

    2008-03-01

    Full Text Available The statistical-dynamical annual water balance model of Eagleson (1978) is a pioneering work in the analysis of climate, soil and vegetation interactions. This paper describes several enhancements and modifications to the model that improve its physical realism at the expense of its mathematical elegance and analytical tractability. In particular, the analytical solutions for the root zone fluxes are re-derived using separate potential rates of transpiration and bare-soil evaporation. Those potential rates, along with the rate of evaporation from canopy interception, are calculated using the two-component Shuttleworth-Wallace (1985) canopy model. In addition, the soil column is divided into two layers, with the upper layer representing the dynamic root zone. The resulting ability to account for changes in root-zone water storage allows for implementation at the monthly timescale. This new version of the Eagleson model is coined the Statistical-Dynamical Ecohydrology Model (SDEM). The ability of the SDEM to capture the seasonal dynamics of the local-scale soil-water balance is demonstrated for two grassland sites in the US Great Plains. Sensitivity of the results to variations in peak green Leaf Area Index (LAI) suggests that the mean peak green LAI is determined by some minimum in root zone soil moisture during the growing season. That minimum appears to be close to the soil matric potential at which the dominant grass species begins to experience water stress and well above the wilting point, thereby suggesting an ecological optimality hypothesis in which the need to avoid water-stress-induced leaf abscission is balanced by the maximization of carbon assimilation (and associated transpiration). Finally, analysis of the sensitivity of model-determined peak green LAI to soil texture shows that the coupled model is able to reproduce the so-called "inverse texture effect", which consists of the observation that natural vegetation in dry climates tends

  6. Approximation of the breast height diameter distribution of two-cohort stands by mixture models I Parameter estimation

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2013-01-01

    The study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions, and analysed a variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...

  7. A model system for pathogen detection using a two-component bacteriophage/bioluminescent signal amplification assay

    Science.gov (United States)

    Bright, Nathan G.; Carroll, Richard J.; Applegate, Bruce M.

    2004-03-01

    Microbial contamination has become a mounting concern over the last decade due to an increased emphasis on minimally processed food products, specifically produce, and the recognition of foodborne pathogens such as Campylobacter jejuni, Escherichia coli O157:H7, and Listeria monocytogenes. This research investigates a detection approach that couples bacteriophage pathogen specificity with a bacterial bioluminescent bioreporter utilizing the quorum-sensing molecule from Vibrio fischeri, N-(3-oxohexanoyl)-homoserine lactone (3-oxo-C6-HSL). The 3-oxo-C6-HSL molecules diffuse out of the target cell after infection and induce bioluminescence from a population of 3-oxo-C6-HSL bioreporters (ROLux). E. coli phage M13, a well-characterized bacteriophage, offers a model system for testing the use of bacteriophage for pathogen detection through cell-to-cell communication via a LuxR/3-oxo-C6-HSL system. Simulated temperate phage assays tested the functionality of the ROLux reporter and the production of 3-oxo-C6-HSL by various test strains. These assays showed detection limits of 10² cfu after 24 hours in a variety of detection formats. Assays incorporating the bacteriophage M13-luxI with the ROLux reporter and a known population of target cells were subsequently developed and have shown consistent detection limits of 10⁵ cfu of target organisms. The measurable light response from high concentrations of target cells was almost immediate, suggesting that an enrichment step could further improve detection limits and reduce assay time.

  8. AN ENTROPIC ORDER QUANTITY MODEL WITH FUZZY HOLDING COST AND FUZZY DISPOSAL COST FOR PERISHABLE ITEMS UNDER TWO COMPONENT DEMAND AND DISCOUNTED SELLING PRICE

    Directory of Open Access Journals (Sweden)

    P.K. Tripathy

    2008-07-01

    Full Text Available A new type of replenishment policy is suggested in an entropic order quantity model for a perishable product possessing fuzzy holding cost and fuzzy disposal cost. This model represents an appropriate combination of two-component demand with discounted selling price, particularly over a finite time horizon. Its main aim lies in the entropic cost over the cycle time, which is a key feature of specific perishable products like fruits, vegetables, foodstuffs, fish, etc. To handle this multiplicity of objectives pragmatically, an entropic order quantity model with discounted selling price during pre- and post-deterioration of perishable items is proposed to optimize its payoff. The model is demonstrated by analysis, which reveals some important characteristics of the discounted structure. Furthermore, numerical experiments are conducted to evaluate the difference between the crisp and fuzzy cases in the EOQ and EnOQ separately. This paper explores the economy of investing in the economics of lot sizing in the fuzzy EOQ, crisp EOQ and crisp EnOQ models. The proposed paper presents a pragmatic alternative to other approaches based on a two-component demand function, with very sound theoretical underpinnings but with few possibilities of actually being put into practice. The results indicate that this can become a good model and can be replicated by researchers in the neighbourhood of its possible extensions.

  9. Mathematical model of the component mixture distribution in the molten cast iron during centrifugation (sedimentation)

    Science.gov (United States)

    Bikulov, R. A.; Kotlyar, L. M.

    2014-12-01

    For the development and management of the manufacturing processes of axisymmetric articles with a compositional structure by the centrifugal casting method [1,2,3,4], it is necessary to create a generalized mathematical model of the dynamics of the component mixture in molten cast iron during centrifugation. In this article, based on an analysis of the dynamics of a two-component mixture during sedimentation, a method of successive approximations is developed to determine the distribution of a multicomponent mixture during centrifugation in a parabolic crucible.

  10. Essays on Finite Mixture Models

    NARCIS (Netherlands)

    A. van Dijk (Bram)

    2009-01-01

    Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the

  11. Essays on Finite Mixture Models

    NARCIS (Netherlands)

    A. van Dijk (Bram)

    2009-01-01

    Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the sam

  12. Spatial mixture multiscale modeling for aggregated health data.

    Science.gov (United States)

    Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin

    2016-09-01

    One of the main goals in spatial epidemiology is to study the geographical pattern of disease risks. For such purpose, the convolution model composed of correlated and uncorrelated components is often used. However, one of the two components could be predominant in some regions. To investigate the predominance of the correlated or uncorrelated component for multiple scale data, we propose four different spatial mixture multiscale models by mixing spatially varying probability weights of correlated (CH) and uncorrelated heterogeneities (UH). The first model assumes that there is no linkage between the different scales and, hence, we consider independent mixture convolution models at each scale. The second model introduces linkage between finer and coarser scales via a shared uncorrelated component of the mixture convolution model. The third model is similar to the second model but the linkage between the scales is introduced through the correlated component. Finally, the fourth model accommodates for a scale effect by sharing both CH and UH simultaneously. We applied these models to real and simulated data, and found that the fourth model is the best model followed by the second model.

  13. Replenishment policy for Entropic Order Quantity (EnOQ) model with two-component demand and partial backlogging under inflation

    Directory of Open Access Journals (Sweden)

    Bhanupriya Dash

    2017-09-01

    Full Text Available Background: The replenishment policy for an entropic order quantity model with two-component demand and partial backlogging under inflation is an important subject in stock management. Methods: In this paper, an inventory model was developed for non-instantaneous deteriorating items with a stock-dependent consumption rate and partial backlogging, considering in addition the effect of inflation and the time value of money on the replenishment policy with zero lead time. A profit maximization model is formulated by considering the effects of partial backlogging under inflation with cash discounts. Further, numerical examples are presented to evaluate the relative performance of the entropic order quantity and EOQ models separately. A numerical example is presented to demonstrate the developed model and to illustrate the procedure. Lingo 13.0 software was used to derive the optimal order quantity and total cost of inventory. Finally, a sensitivity analysis of the optimal solution with respect to different parameters of the system is carried out. Results and conclusions: The obtained inventory model is very useful in retail business. This model can be extended to total backorder.

  14. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...

  15. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  16. A generalized two-component model of solar wind turbulence and ab initio diffusion mean free paths and drift lengthscales of cosmic rays

    CERN Document Server

    Wiengarten, Tobias; Engelbrecht, Eugene; Fichtner, Horst; Kleimann, Jens; Scherer, Klaus

    2016-01-01

    We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic (MHD) equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations, by, for example, velocity shear and pickup ions, is included. Numerical solutions to the new model are obtained using the Cronos framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab i...

  17. The weathervane model, a functional and structural organization of the two-component alkanesulfonate oxidoreductase SsuD from Xanthomonas citri

    Energy Technology Data Exchange (ETDEWEB)

    Pegos, V.R. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Oliveira, P.S.L.; Balan, A. [Laboratorio Nacional de Biociencias - LNBIO, Campinas, SP (Brazil)

    2012-07-01

    Full text: In Xanthomonas citri, the phytopathogen responsible for citrus canker disease, we identified in the ssuABCDE operon the genes encoding the alkanesulfonate ABC transporter as well as the two enzymes responsible for oxidoreduction of the respective substrates. The SsuD and SsuE proteins represent a two-component system that can be assigned to the group of FMNH2-dependent monooxygenases. However, despite the biochemical information about SsuD and SsuE orthologs from Escherichia coli, there is no structural information on how the two proteins work together. In this work, we used ultracentrifugation, SAXS data and molecular modeling to construct a structural/functional model, which consists of eight molecules organized in a weathervane shape. In this model, the SsuD ligand-binding site for the NADPH2 and FMN substrates is clearly exposed, in a way that might allow protein-protein interactions with SsuE. Moreover, based on molecular dynamics simulations of SsuD in the apo state, docked with NADPH2, FMN or both substrates, we characterized the residues of the pocket, the mechanism of substrate interaction and the transfer of electrons from NADPH2 to FMN. This is the first report that links functional and biochemical data with structural analyses. (author)

  18. Two-component mantle melting-mixing model for the generation of mid-ocean ridge basalts: Implications for the volatile content of the Pacific upper mantle

    Science.gov (United States)

    Shimizu, Kei; Saal, Alberto E.; Myers, Corinne E.; Nagle, Ashley N.; Hauri, Erik H.; Forsyth, Donald W.; Kamenetsky, Vadim S.; Niu, Yaoling

    2016-03-01

    We report major, trace, and volatile element (CO2, H2O, F, Cl, S) contents and Sr, Nd, and Pb isotopes of mid-ocean ridge basalt (MORB) glasses from the Northern East Pacific Rise (NEPR) off-axis seamounts, the Quebrada-Discovery-GoFar (QDG) transform fault system, and the Macquarie Island. The incompatible trace element (ITE) contents of the samples range from highly depleted (DMORB, Th/La ⩽ 0.035) to enriched (EMORB, Th/La ⩾ 0.07), and the isotopic composition spans the entire range observed in EPR MORB. Our data suggest that at the time of melt generation, the source that generated the EMORB was essentially peridotitic, and that the composition of NMORB might not represent melting of a single upper mantle source (DMM), but rather mixing of melts from a two-component mantle (depleted and enriched DMM or D-DMM and E-DMM, respectively). After filtering the volatile element data for secondary processes (degassing, sulfide saturation, assimilation of seawater-derived component, and fractional crystallization), we use the volatiles to ITE ratios of our samples and a two-component mantle melting-mixing model to estimate the volatile content of the D-DMM (CO2 = 22 ppm, H2O = 59 ppm, F = 8 ppm, Cl = 0.4 ppm, and S = 100 ppm) and the E-DMM (CO2 = 990 ppm, H2O = 660 ppm, F = 31 ppm, Cl = 22 ppm, and S = 165 ppm). Our two-component mantle melting-mixing model reproduces the kernel density estimates (KDE) of Th/La and 143Nd/144Nd ratios for our samples and for EPR axial MORB compiled from the literature. This model suggests that: (1) 78% of the Pacific upper mantle is highly depleted (D-DMM) while 22% is enriched (E-DMM) in volatile and refractory ITE, (2) the melts produced during variable degrees of melting of the E-DMM controls most of the MORB geochemical variation, and (3) a fraction (∼65% to 80%) of the low degree EMORB melts (produced by ∼1.3% melting) may escape melt aggregation by freezing at the base of the oceanic lithosphere, significantly enriching it in
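
    As a simple numerical illustration of two-endmember mixing with the source compositions quoted above, the sketch below combines the D-DMM and E-DMM volatile contents in the 78%/22% proportions estimated in the abstract. This linear source mixing is only an illustration; the paper's melting-mixing model mixes melts rather than sources.

```python
# D-DMM and E-DMM volatile contents (ppm) as quoted in the abstract
d_dmm = {"CO2": 22.0, "H2O": 59.0, "F": 8.0, "Cl": 0.4, "S": 100.0}
e_dmm = {"CO2": 990.0, "H2O": 660.0, "F": 31.0, "Cl": 22.0, "S": 165.0}
f_depleted = 0.78   # fraction of highly depleted mantle estimated in the abstract

# linear mass-balance mixture of the two source endmembers (illustrative only)
bulk = {k: f_depleted * d_dmm[k] + (1.0 - f_depleted) * e_dmm[k] for k in d_dmm}
print({k: round(v, 1) for k, v in bulk.items()})   # bulk-source volatile contents, ppm
```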

  19. Numerical modeling of Non-isothermal two-phase two-component flow process with phase change phenomena in the porous media

    Science.gov (United States)

    Huang, Y.; Shao, H.; Thullner, M.; Kolditz, O.

    2014-12-01

    In applications to deep geothermal reservoirs, thermal recovery processes, and contaminated groundwater sites, multiphase multicomponent flow and transport are often considered the most important underlying physical processes. In particular, the behavior of phase appearance and disappearance is critical to the performance of many geo-reservoirs, and there is great interest in the scientific community in simulating this coupled process. This work is devoted to the modeling and simulation of two-phase, two-component flow and transport in porous media, where phase-change behavior under non-isothermal conditions is considered. We have implemented the algorithm developed by Marchand et al. in the open-source scientific software OpenGeoSys. The governing equations are formulated in terms of the molar fraction of the light component and the mean pressure as persistent primary variables, which leads to a fully coupled nonlinear PDE system. One of the important advantages of this approach is that it avoids switching primary variables between single-phase and two-phase zones, so that this uniform system can be applied to describe phase-change behavior. On the other hand, closure relationships are formulated, using the approach of complementarity constraints, to close the equation system with respect to the remaining unknown variables. For the numerical scheme, the standard Galerkin finite element method is applied for space discretization, a fully implicit scheme for time discretization, and the Newton-Raphson method for the global linearization as well as for the closure relationships. The model is verified on a test case developed to simulate the heat pipe problem. This benchmark involves two-phase two-component flow in saturated/unsaturated porous media under non-isothermal conditions, including phase change and mineral-water geochemical reactive transport processes. The simulation results will be
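
    A generic Newton-Raphson iteration of the kind used for the global linearization is sketched below. The toy two-unknown residual stands in for the coupled pressure/molar-fraction system; it is an illustration only, not the OpenGeoSys implementation.

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for a nonlinear system R(u) = 0 (illustrative)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        du = np.linalg.solve(jacobian(u), -r)   # solve J(u) du = -R(u)
        u = u + du
    return u

# hypothetical 2x2 system standing in for the coupled primary variables
R = lambda u: np.array([u[0]**2 + u[1] - 2.0, u[0] - u[1]**3])
J = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, -3.0 * u[1]**2]])
print(newton_raphson(R, J, [2.0, 0.5]))   # converges to (1, 1)
```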

  20. Quasiparticle density of states of 2H-NbSe2 single crystals revealed by low-temperature specific heat measurements according to a two-component model

    Institute of Scientific and Technical Information of China (English)

    Yan Jing; Shan Lei; Wang Yue; Xiao Zhi-Li; Wen Hai-Hu

    2008-01-01

    Low-temperature specific heat in a dichalcogenide superconductor 2H-NbSe2 is measured in various magnetic fields. It is found that the specific heat can be described very well by a simple model concerning two components corresponding to vortex normal core and ambient superconducting region, separately. For calculating the specific heat outside the vortex core region, we use the Bardeen-Cooper-Schrieffer (BCS) formalism under the assumption of a narrow distribution of the superconducting gaps. The field-dependent vortex core size in the mixed state of 2H-NbSe2, determined by using this model, can explain the nonlinear field dependence of specific heat coefficient γ(H), which is in good agreement with the previous experimental results and more formal calculations. With the high-temperature specific heat data, we can find that, in the multi-band superconductor 2H-NbSe2, the recovered density of states (or Fermi surface) below Tc under a magnetic field seems not to be gapped again by the charge density wave (CDW) gap, which suggests that the superconducting gap and the CDW gap may open on different Fermi surface sheets.

  1. Learning and evolution in bacterial taxis: an operational amplifier circuit modeling the computational dynamics of the prokaryotic 'two component system' protein network.

    Science.gov (United States)

    Di Paola, Vieri; Marijuán, Pedro C; Lahoz-Beltra, Rafael

    2004-01-01

    Adaptive behavior in unicellular organisms (i.e., bacteria) depends on highly organized networks of proteins governing purposefully the myriad of molecular processes occurring within the cellular system. For instance, bacteria are able to explore the environment within which they develop by utilizing the motility of their flagellar system as well as a sophisticated biochemical navigation system that samples the environmental conditions surrounding the cell, searching for nutrients or moving away from toxic substances or dangerous physical conditions. In this paper we discuss how proteins of the intervening signal transduction network could be modeled as artificial neurons, simulating the dynamical aspects of the bacterial taxis. The model is based on the assumption that, in some important aspects, proteins can be considered as processing elements or McCulloch-Pitts artificial neurons that transfer and process information from the bacterium's membrane surface to the flagellar motor. This simulation of bacterial taxis has been carried out on a hardware realization of a McCulloch-Pitts artificial neuron using an operational amplifier. Based on the behavior of the operational amplifier we produce a model of the interaction between CheY and FliM, elements of the prokaryotic two component system controlling chemotaxis, as well as a simulation of learning and evolution processes in bacterial taxis. On the one side, our simulation results indicate that, computationally, these protein 'switches' are similar to McCulloch-Pitts artificial neurons, suggesting a bridge between evolution and learning in dynamical systems at cellular and molecular levels and the evolutive hardware approach. On the other side, important protein 'tactilizing' properties are not tapped by the model, and this suggests further complexity steps to explore in the approach to biological molecular computing.
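
    The threshold-unit abstraction described above can be illustrated with a few lines of code. The weights, inputs and threshold below are hypothetical values chosen only to show the McCulloch-Pitts behaviour, not parameters from the paper or from the actual CheY-FliM interaction.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (returns 1) when the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# toy illustration: a unit standing in for the FliM switch, driven by CheY-P "inputs"
chey_p_levels = [1, 1, 0]      # hypothetical binary signals from phosphorylated CheY
weights = [0.6, 0.6, 0.3]      # assumed coupling strengths
print(mcculloch_pitts(chey_p_levels, weights, threshold=1.0))   # 1 -> motor switches rotation
```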

  2. Equation-free analysis of two-component system signalling model reveals the emergence of co-existing phenotypes in the absence of multistationarity.

    Directory of Open Access Journals (Sweden)

    Rebecca B Hoyle

    Full Text Available Phenotypic differences of genetically identical cells under the same environmental conditions have been attributed to the inherent stochasticity of biochemical processes. Various mechanisms have been suggested, including the existence of alternative steady states in regulatory networks that are reached by means of stochastic fluctuations, long transient excursions from a stable state to an unstable excited state, and the switching on and off of a reaction network according to the availability of a constituent chemical species. Here we analyse a detailed stochastic kinetic model of two-component system signalling in bacteria, and show that alternative phenotypes emerge in the absence of these features. We perform a bifurcation analysis of deterministic reaction rate equations derived from the model, and find that they cannot reproduce the whole range of qualitative responses to external signals demonstrated by direct stochastic simulations. In particular, the mixed mode, where stochastic switching and a graded response are seen simultaneously, is absent. However, probabilistic and equation-free analyses of the stochastic model that calculate stationary states for the mean of an ensemble of stochastic trajectories reveal that slow transcription of either response regulator or histidine kinase leads to the coexistence of an approximate basal solution and a graded response that combine to produce the mixed mode, thus establishing its essential stochastic nature. The same techniques also show that stochasticity results in the observation of an all-or-none bistable response over a much wider range of external signals than would be expected on deterministic grounds. Thus we demonstrate the application of numerical equation-free methods to a detailed biochemical reaction network model, and show that it can provide new insight into the role of stochasticity in the emergence of phenotypic diversity.

  3. On the characterization of dynamic supramolecular systems: a general mathematical association model for linear supramolecular copolymers and application on a complex two-component hydrogen-bonding system.

    Science.gov (United States)

    Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth

    2007-01-01

    A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers without the recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding for instance the degree of polymerization, the distribution of a given monomer in different copolymers as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated to each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4 + 5 could be characterized with respect to the above-mentioned parameters.

  4. Two-way and three-way approaches to ultra high performance liquid chromatography-photodiode array dataset for the quantitative resolution of a two-component mixture containing ciprofloxacin and ornidazole.

    Science.gov (United States)

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2016-09-01

    Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole.
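
    The trilinear (PARAFAC) decomposition of the elution-time x wavelength x sample data cube can be sketched with a generic alternating-least-squares CP routine. The code below is an illustrative numpy implementation on a synthetic rank-2 cube with hypothetical elution and concentration profiles; it is not the authors' software or the exact chromatographic data.

```python
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-R CP (PARAFAC) decomposition of a 3-way array by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        # each factor is updated by solving the corresponding linear least-squares problem
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# synthetic rank-2 "chromatographic" cube: two analytes with Gaussian elution profiles
t = np.linspace(0.0, 1.0, 50)
time_prof = np.stack([np.exp(-((t - 0.4) / 0.05)**2), np.exp(-((t - 0.6) / 0.05)**2)], axis=1)
spec_prof = np.abs(np.random.default_rng(1).standard_normal((30, 2)))   # hypothetical spectra
conc = np.array([[1.0, 0.5], [0.8, 1.2], [0.3, 0.9]])                   # 3 samples x 2 analytes
X = np.einsum('ir,jr,kr->ijk', time_prof, spec_prof, conc)

A, B, C = cp_als(X, rank=2)
print(C)   # recovered relative concentrations (up to scaling and permutation)
```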

  5. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  6. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  7. On the mixture model for multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Manninen, M.; Taivassalo, V. [VTT Energy, Espoo (Finland). Nuclear Energy; Kallio, S. [Aabo Akademi, Turku (Finland)

    1996-12-31

    Numerical flow simulation utilising a full multiphase model is impractical for a suspension possessing wide distributions in the particle size or density. Various approximations are usually made to simplify the computational task. In the simplest approach, the suspension is represented by a homogeneous single-phase system and the influence of the particles is taken into account in the values of the physical properties. This study concentrates on the derivation and closing of the model equations. The validity of the mixture model is also carefully analysed. Starting from the continuity and momentum equations written for each phase in a multiphase system, the field equations for the mixture are derived. The mixture equations largely resemble those for a single-phase flow but are represented in terms of the mixture density and velocity. The volume fraction for each dispersed phase is solved from a phase continuity equation. Various approaches applied in closing the mixture model equations are reviewed. An algebraic equation is derived for the velocity of a dispersed phase relative to the continuous phase. Simplifications made in calculating the relative velocity restrict the applicability of the mixture model to cases in which the particles reach the terminal velocity in a short time period compared to the characteristic time scale of the flow of the mixture. (75 refs.)
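
    The mixture model's algebraic slip closure assumes the dispersed particles reach their terminal velocity quickly. As a hedged illustration of that kind of closure, the sketch below evaluates a Stokes-regime terminal (slip) velocity; this particular formula is a standard textbook expression chosen for illustration and is not necessarily the closure derived in the cited report.

```python
def stokes_slip_velocity(d_p, rho_p, rho_m, mu_m, g=9.81):
    """Terminal (slip) velocity of a small particle in the Stokes-drag regime.

    d_p   -- particle diameter [m]
    rho_p -- particle density [kg/m^3]
    rho_m -- carrier/mixture density [kg/m^3]
    mu_m  -- dynamic viscosity of the carrier [Pa s]
    """
    return d_p**2 * (rho_p - rho_m) * g / (18.0 * mu_m)

# 100-micron sand grain settling in water (illustrative values): ~9 mm/s
print(stokes_slip_velocity(d_p=1e-4, rho_p=2650.0, rho_m=1000.0, mu_m=1e-3))
```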

  8. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and the antioxidant response were determined. Metal mixture treatment induces an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Co-treatment with N-acetyl-cysteine affects the transforming capacity: while a diminution was found in initiation, a total block of the transforming capacity was observed in the promotion phase. Our results suggest that the oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  9. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  10. Identifiability of large phylogenetic mixture models.

    Science.gov (United States)

    Rhodes, John A; Sullivant, Seth

    2012-01-01

    Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  11. Flexible Rasch Mixture Models with Package psychomix

    Directory of Open Access Journals (Sweden)

    Hannah Frick

    2012-05-01

    Full Text Available Measurement invariance is an important assumption in the Rasch model and mixture models constitute a flexible way of checking for a violation of this assumption by detecting unobserved heterogeneity in item response data. Here, a general class of Rasch mixture models is established and implemented in R, using conditional maximum likelihood estimation of the item parameters (given the raw scores) along with flexible specification of two model building blocks: (1) Mixture weights for the unobserved classes can be treated as model parameters or based on covariates in a concomitant variable model. (2) The distribution of raw score probabilities can be parametrized in two possible ways, either using a saturated model or a specification through mean and variance. The function raschmix() in the R package psychomix provides these models, leveraging the general infrastructure for fitting mixture models in the flexmix package. Usage of the function and its associated methods is illustrated on artificial data as well as empirical data from a study of verbally aggressive behavior.

  12. Itinerant Ferromagnetism in a Polarized Two-Component Fermi Gas

    DEFF Research Database (Denmark)

    Massignan, Pietro; Yu, Zhenhua; Bruun, Georg

    2013-01-01

    We analyze when a repulsively interacting two-component Fermi gas becomes thermodynamically unstable against phase separation. We focus on the strongly polarized limit, where the free energy of the homogeneous mixture can be calculated accurately in terms of well-defined quasiparticles, the repul...

  13. Lattice Model for water-solute mixtures

    OpenAIRE

    Furlan, A. P.; Almarza, N. G.; M. C. Barbosa

    2016-01-01

    A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy of interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert and hydrophobic interactions. Extensive Monte Carlo simulations were carried out and the behavior of pure components and the excess proper...

  14. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  15. Mixture model analysis of complex samples

    NARCIS (Netherlands)

    Wedel, M; ter Hofstede, F; Steenkamp, JBEM

    1998-01-01

    We investigate the effects of a complex sampling design on the estimation of mixture models. An approximate or pseudo likelihood approach is proposed to obtain consistent estimates of class-specific parameters when the sample arises from such a complex design. The effects of ignoring the sample desi

  16. The Supervised Learning Gaussian Mixture Model

    Institute of Scientific and Technical Information of China (English)

    马继涌; 高文

    1998-01-01

    The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters in the model are derived only from the training samples in one class, without taking into account the effect of the sample distributions of other classes; hence, its recognition accuracy is sometimes not ideal. This paper introduces an approach for estimating the parameters of the GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM. An experimental example has shown its effectiveness. The experimental results have shown that the recognition accuracy derived by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of the Multilayer Perceptron (MLP).
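
    The contrast between an unsupervised GMM and a label-informed Gaussian mixture classifier can be illustrated with scikit-learn as below. The class-conditional variant (one GMM per class combined via Bayes' rule) is only a rough analogue of using label information and is not the SLGMM algorithm of the paper; the data set is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# synthetic two-class data standing in for a pattern-recognition task
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# unsupervised GMM: clusters need not align with the class labels
unsup = GaussianMixture(n_components=2, random_state=0).fit(X_tr)
pred = unsup.predict(X_te)
print("unsupervised cluster agreement:",
      max(np.mean(pred == y_te), np.mean(pred != y_te)))

# class-conditional GMMs fitted with the labels, combined via Bayes' rule
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X_tr[y_tr == c]) for c in (0, 1)}
priors = {c: np.mean(y_tr == c) for c in (0, 1)}
scores = np.column_stack([models[c].score_samples(X_te) + np.log(priors[c]) for c in (0, 1)])
print("label-informed accuracy:", np.mean(scores.argmax(axis=1) == y_te))
```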

  17. Two component theory and electron magnetic moment

    NARCIS (Netherlands)

    Veltman, M.J.G.

    1998-01-01

    The two-component formulation of quantum electrodynamics is studied. The relation with the usual Dirac formulation is exhibited, and the Feynman rules for the two-component form of the theory are presented in terms of familiar objects. The transformation from the Dirac theory to the two-component th

  19. Population mixture model for nonlinear telomere dynamics

    Science.gov (United States)

    Itzkovitz, Shalev; Shlush, Liran I.; Gluck, Dan; Skorecki, Karl

    2008-12-01

    Telomeres are DNA repeats protecting chromosomal ends which shorten with each cell division, eventually leading to cessation of cell growth. We present a population mixture model that predicts an exponential decrease in telomere length with time. We analytically solve the dynamics of the telomere length distribution. The model provides an excellent fit to available telomere data and accounts for the previously unexplained observation of telomere elongation following stress and bone marrow transplantation, thereby providing insight into the nature of the telomere clock.

  20. Self-assembly models for lipid mixtures

    Science.gov (United States)

    Singh, Divya; Porcar, Lionel; Butler, Paul; Perez-Salas, Ursula

    2006-03-01

    Solutions of mixed long and short (detergent-like) phospholipids, referred to as "bicelle" mixtures in the literature, are known to form a variety of different morphologies depending on their total lipid composition and temperature, in a complex phase diagram. Some of these morphologies have been found to orient in a magnetic field, and consequently bicelle mixtures are widely used to study the structure of soluble as well as membrane-embedded proteins using NMR. In this work, we report on the low-temperature phase of the DMPC and DHPC bicelle mixture, where there is agreement on the discoid structures but where molecular packing models are still being contested. The most widely accepted packing arrangement, first proposed by Vold and Prosser, had the lipids completely segregated in the disk: DHPC in the rim and DMPC in the disk. Using data from small angle neutron scattering (SANS) experiments, we show how the radius of the planar domain of the disks is governed by the effective molar ratio qeff of lipids in the aggregate and not by the molar ratio q (q = [DMPC]/[DHPC]) as has been understood previously. We propose a new quantitative (packing) model and show that in this self-assembly scheme, qeff is the real determinant of disk sizes. Based on qeff, a master equation can then scale the radii of disks from mixtures with varying q and total lipid concentration.

  1. Two-component Fermi gas in a Harmonic Trap

    CERN Document Server

    Yi, X X; Cui, H T; Zhang, C M

    2002-01-01

    We consider a mixture of two-component Fermi gases at low temperature. The density profile of this degenerate Fermi gas is calculated under the semiclassical approximation. The results show that the fermion-fermion interactions make a large correction to the density profile at low temperature. The phase separation of such a mixture is also discussed for both attractive and repulsive interatomic interactions, and the numerical calculations demonstrate the existence of a stable temperature region $T_{c1} < T < T_{c2}$ for such a mixture. In addition, we give the critical temperature of the BCS-type transition in this system beyond the semiclassical approximation.

  2. Hierarchical mixture models for assessing fingerprint individuality

    OpenAIRE

    Dass, Sarat C.; Li, Mingfei

    2009-01-01

    The study of fingerprint individuality aims to determine to what extent a fingerprint uniquely identifies an individual. Recent court cases have highlighted the need for measures of fingerprint individuality when a person is identified based on fingerprint evidence. The main challenge in studies of fingerprint individuality is to adequately capture the variability of fingerprint features in a population. In this paper hierarchical mixture models are introduced to infer the extent of individua...

  3. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters.
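
    As an illustration of the approach described above, the following minimal Python sketch fits a three-component Gaussian mixture to simulated RR-interval data with scikit-learn; the data, seeds and component values are hypothetical and chosen only to make the example self-contained.

```python
# Minimal sketch: fit a three-component Gaussian mixture to synthetic
# RR-interval (heart rate variability) data. The data here are simulated
# for illustration; the original study used recorded heartbeat series.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic RR intervals (seconds): three overlapping regimes.
rr = np.concatenate([
    rng.normal(0.80, 0.03, 2000),   # resting
    rng.normal(0.70, 0.05, 1000),   # mild load
    rng.normal(0.95, 0.04, 500),    # slow phase
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, c in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.3f} s  sd={np.sqrt(c):.3f} s")
```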

  4. Bayesian mixture models for partially verified data

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Browne, William J.; Nielsen, Søren Saxmose;

    2013-01-01

    Bayesian mixture models can be used to discriminate between the distributions of continuous test responses for different infection stages. These models are particularly useful in case of chronic infections with a long latent period, like Mycobacterium avium subsp. paratuberculosis (MAP) infection ... for some individuals, in order to minimize this loss in the discriminatory power. The distribution of the continuous antibody response against MAP has been obtained for healthy, MAP-infected and MAP-infectious cows of different age groups. The overall power of the milk-ELISA to discriminate between healthy...

  5. Video compressive sensing using Gaussian mixture models.

    Science.gov (United States)

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2014-11-01

    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
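
    The "analytic expressions" referred to above follow from the fact that a Gaussian mixture prior combined with linear measurements yields a Gaussian mixture posterior. The sketch below illustrates that MMSE reconstruction step on a toy problem; it is not the authors' implementation, and the dimensions, prior parameters and noise level are hypothetical.

```python
# Illustrative sketch: MMSE reconstruction of a signal x from compressed
# measurements y = A x + noise when x follows a known Gaussian mixture prior.
# The posterior is again a Gaussian mixture, so the estimate is a weighted
# sum of per-component posterior means.
import numpy as np

def gmm_mmse(y, A, weights, means, covs, noise_var):
    log_w, post_means = [], []
    for w, mu, S in zip(weights, means, covs):
        Sy = A @ S @ A.T + noise_var * np.eye(len(y))      # marginal covariance of y
        Sy_inv = np.linalg.inv(Sy)
        resid = y - A @ mu
        # log of w * N(y; A mu, Sy), dropping the shared 2*pi constant
        log_w.append(np.log(w) - 0.5 * (np.linalg.slogdet(Sy)[1]
                                        + resid @ Sy_inv @ resid))
        post_means.append(mu + S @ A.T @ Sy_inv @ resid)   # E[x | y, component]
    log_w = np.array(log_w)
    resp = np.exp(log_w - log_w.max())
    resp /= resp.sum()                                     # component responsibilities
    return sum(r * m for r, m in zip(resp, post_means))

# Toy usage with hypothetical dimensions.
rng = np.random.default_rng(1)
n, m, K = 16, 6, 3
A = rng.normal(size=(m, n))
means = [rng.normal(size=n) for _ in range(K)]
covs = [np.eye(n) * s for s in (0.5, 1.0, 2.0)]
x_true = means[1] + rng.normal(scale=1.0, size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = gmm_mmse(y, A, [1 / 3] * 3, means, covs, 1e-4)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```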

  6. [Comparison of two spectral mixture analysis models].

    Science.gov (United States)

    Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wang, Li-Ming

    2009-10-01

    A spectral mixture analysis experiment was designed to compare the spectral unmixing effects of linear spectral mixture analysis (LSMA) and constrained linear spectral mixture analysis (CLSMA). In the experiment, red, green, blue and yellow colors were printed on a coarse album as four end members. Thirty-nine mixed samples were made according to each end member's different percentage in one pixel. Then, a field spectrometer was positioned over the center of each mixed sample to measure the spectra one by one. The inversion percentage of each end member in the pixel was extracted using the LSMA and CLSMA models. Finally, the normalized mean squared error between the inversion and real percentages was calculated to compare the two models' effects on spectral unmixing. Results from the experiment showed that the total error of LSMA was 0.30087 and that of CLSMA was 0.37552 when all bands in the spectrum were used, so the error of LSMA was 0.075 lower than that of CLSMA for the whole bands of the four end members' spectra. On the other hand, the total error of LSMA was 0.28095 and that of CLSMA was 0.29805 after band selection, so the error of LSMA was 0.017 lower than that of CLSMA when band selection was performed. Therefore, whether all or selected bands were used, the accuracy of LSMA was better than that of CLSMA, because instrument and human errors introduced during spectrum measurement meant that the measured data could not meet the strict requirements of CLSMA, which reduced its accuracy. Furthermore, the total error of LSMA using selected bands was 0.02 less than that using the whole bands, and the total error of CLSMA using selected bands was 0.077 less than that using the whole bands. So, in the same model, spectral unmixing using selected bands to reduce the correlation of the end members' spectra was superior to that using the whole bands.
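
    For readers unfamiliar with the two schemes, the sketch below contrasts unconstrained least-squares unmixing (LSMA) with a constrained variant on synthetic spectra. Treating the constraints as non-negativity plus sum-to-one is an assumption about the study's exact CLSMA formulation, and the endmember spectra are simulated.

```python
# Sketch of the two unmixing schemes on synthetic spectra.
# LSMA: ordinary least-squares abundances; CLSMA (assumed here): abundances
# constrained to be non-negative and sum to one.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_bands, n_end = 50, 4
E = rng.uniform(0.1, 0.9, size=(n_bands, n_end))        # endmember spectra (columns)
a_true = np.array([0.4, 0.3, 0.2, 0.1])
r = E @ a_true + rng.normal(scale=0.01, size=n_bands)   # mixed-pixel spectrum

# LSMA: unconstrained least squares
a_lsma, *_ = np.linalg.lstsq(E, r, rcond=None)

# CLSMA: non-negative, sum-to-one least squares via SLSQP
obj = lambda a: np.sum((E @ a - r) ** 2)
res = minimize(obj, np.full(n_end, 1 / n_end), method="SLSQP",
               bounds=[(0, 1)] * n_end,
               constraints={"type": "eq", "fun": lambda a: a.sum() - 1})
print("true  :", a_true)
print("LSMA  :", np.round(a_lsma, 3))
print("CLSMA :", np.round(res.x, 3))
```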

  7. Investigation of a Gamma model for mixture STR samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Lauritzen, Steffen L.

    The behaviour of the PCR Amplification Kit, when used for mixture STR samples, is investigated. A model based on the Gamma distribution is fitted to the amplifier output for constructed mixtures, and the assumptions of the model are evaluated via residual analysis.

  8. Two-component Duality and Strings

    CERN Document Server

    Freund, Peter G O

    2007-01-01

    A phenomenologically successful two-component hadronic duality picture led to Veneziano's amplitude, the fundamental first step to string theory. This picture is briefly recalled and its two components are identified as the open strings (mesons and baryons) and closed strings (Pomeron).

  9. Simulation of mixture microstructures via particle packing models and their direct comparison with real mixtures

    Science.gov (United States)

    Gulliver, Eric A.

    The objective of this thesis is to identify and develop techniques providing direct comparison between simulated and real packed particle mixture microstructures containing submicron-sized particles. This entailed devising techniques for simulating powder mixtures, producing real mixtures with known powder characteristics, sectioning real mixtures, interrogating mixture cross-sections, evaluating and quantifying the mixture interrogation process, and comparing interrogation results between mixtures. A drop and roll-type particle-packing model was used to generate simulations of random mixtures. The simulated mixtures were then evaluated to establish that they were not segregated and free from gross defects. A powder processing protocol was established to provide real mixtures for direct comparison and for use in evaluating the simulation. The powder processing protocol was designed to minimize differences between measured particle size distributions and the particle size distributions in the mixture. A sectioning technique was developed that was capable of producing distortion-free cross-sections of fine-scale particulate mixtures. Tessellation analysis was used to interrogate mixture cross-sections, and statistical quality control charts were used to evaluate different types of tessellation analysis and to establish the importance of differences between simulated and real mixtures. The particle-packing program generated crescent-shaped pores below large particles but otherwise realistic-looking mixture microstructures. Focused ion beam milling was the only technique capable of sectioning particle compacts in a manner suitable for stereological analysis. Johnson-Mehl and Voronoi tessellation of the same cross-sections produced tessellation tiles with different tile-area populations. Control chart analysis showed Johnson-Mehl tessellation measurements are superior to Voronoi tessellation measurements for detecting variations in mixture microstructure, such as altered

  10. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    ... accurate predictions of the thermodynamic properties and phase equilibria of mixtures containing CO2 are challenging with classical models such as the Soave-Redlich-Kwong (SRK) equation of state (EoS). This is believed to be due to the fact that CO2 has a large quadrupole moment which the classical models do not explicitly account for. In this thesis, in an attempt to obtain a physically more consistent model, the cubic-plus-association (CPA) EoS is extended to include quadrupolar interactions. The new quadrupolar CPA (qCPA) can be used with the experimental value of the quadrupole moment... The models performed satisfactorily and predicted the general behavior of the systems, but qCPA used fewer adjustable parameters to achieve similar predictions. It has been demonstrated that qCPA is a promising model which, compared to CPA, systematically improves the predictions of the experimentally determined phase...

  11. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah

    2015-05-01

    Full Text Available We present the properties of a bathtub curve reliability model having both a sufficient adaptability and a minimal number of parameters, introduced by Idée and Pierrat (2010). This one is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model with a squared-error loss function and a non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  12. Mixture latent autoregressive models for longitudinal data

    CERN Document Server

    Bartolucci, Francesco; Pennoni, Fulvia

    2011-01-01

    Many relevant statistical and econometric models for the analysis of longitudinal data include a latent process to account for the unobserved heterogeneity between subjects in a dynamic fashion. Such a process may be continuous (typically an AR(1)) or discrete (typically a Markov chain). In this paper, we propose a model for longitudinal data which is based on a mixture of AR(1) processes with different means and correlation coefficients, but with equal variances. This model belongs to the class of models based on a continuous latent process, and then it has a natural interpretation in many contexts of application, but it is more flexible than other models in this class, reaching a goodness-of-fit similar to that of a discrete latent process model, with a reduced number of parameters. We show how to perform maximum likelihood estimation of the proposed model by the joint use of an Expectation-Maximisation algorithm and a Newton-Raphson algorithm, implemented by means of recursions developed in the hidden Mark...

  13. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    ... i.e. the window length. In this work we use the Wishart Mixture Model (WMM) as a probabilistic model for dFC based on variational inference. The framework admits arbitrary window lengths and numbers of dynamic components and includes the static one-component model as a special case. We exploit that the WMM

  14. A two-component model for the electron distribution function in a high-current pseudospark or back-lighted thyratron

    Science.gov (United States)

    Bauer, Hannes R.; Kirkman, George; Gundersen, Martin A.

    1990-04-01

    Temperature, energy, and densities of two electron distribution function components, including an isotropic bulk part and an anisotropic beam, are analyzed for a hydrogen pseudospark and/or back-lighted thyratron switch plasma with a peak electron density of 1-3 × 10^15 cm^-3 and a peak current density of about 10 kA/cm^2. Estimates of a very small cathode-fall width during the conduction phase and high electric field strengths lead to the injection of an electron beam with energies of about 100 eV and density (1-10) × 10^13 cm^-3 into a Maxwellian bulk plasma. Collisional and radiative processes of monoenergetic beam electrons, bulk plasma electrons and ions, and atomic hydrogen are modeled by a set of rate equations, and line intensity ratios are compared with measurements. Under these high-current conditions, for an initial density nH2 = 10^16 cm^-3 and an electron temperature of 0.8-1 eV, the estimated beam density is about (1-10) × 10^13 cm^-3.

  15. A structural model of anti-anti-σ inhibition by a two-component receiver domain: the PhyR stress response regulator

    Energy Technology Data Exchange (ETDEWEB)

    Herrou, Julien; Foreman, Robert; Fiebig, Aretha; Crosson, Sean (UC)

    2012-05-09

    PhyR is a hybrid stress regulator conserved in α-proteobacteria that contains an N-terminal σ-like (SL) domain and a C-terminal receiver domain. Phosphorylation of the receiver domain is known to promote binding of the SL domain to an anti-σ factor. PhyR thus functions as an anti-anti-σ factor in its phosphorylated state. We present genetic evidence that Caulobacter crescentus PhyR is a phosphorylation-dependent stress regulator that functions in the same pathway as σT and its anti-σ factor, NepR. Additionally, we report the X-ray crystal structure of PhyR at 1.25 Å resolution, which provides insight into the mechanism of anti-anti-σ regulation. Direct intramolecular contact between the PhyR receiver and SL domains spans regions σ2 and σ4, likely serving to stabilize the SL domain in a closed conformation. The molecular surface of the receiver domain contacting the SL domain is the structural equivalent of α4-β5-α5, which is known to undergo dynamic conformational change upon phosphorylation in a diverse range of receiver proteins. We propose a structural model of PhyR regulation in which receiver phosphorylation destabilizes the intramolecular interaction between SL and receiver domains, thereby permitting regions σ2 and σ4 in the SL domain to open about a flexible connector loop and bind the anti-σ factor.

  17. Mixture Model and MDSDCA for Textual Data

    Science.gov (United States)

    Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît

    E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.

  18. Mixtures of multiplicative cascade models in geochemistry

    Directory of Open Access Journals (Sweden)

    F. P. Agterberg

    2007-05-01

    Full Text Available Multifractal modeling of geochemical map data can help to explain the nature of frequency distributions of element concentration values for small rock samples and their spatial covariance structure. Useful frequency distribution models are the lognormal and Pareto distributions, which plot as straight lines on logarithmic probability and log-log paper, respectively. The model of de Wijs is a simple multiplicative cascade resulting in a discrete logbinomial distribution that closely approximates the lognormal. In this model, smaller blocks resulting from dividing larger blocks into parts have concentration values with constant ratios that are scale-independent. The approach can be modified by adopting random variables for these ratios. Other modifications include a single cascade model with ratio parameters that depend on the magnitude of the concentration value. The Turcotte model, which is another variant of the model of de Wijs, results in a Pareto distribution. Often a single straight line on logarithmic probability or log-log paper does not provide a good fit to observed data and two or more distributions should be fitted. For example, geochemical background and anomalies (extremely high values) have separate frequency distributions for concentration values and for local singularity coefficients. Mixtures of distributions can be simulated by adding the results of separate cascade models. Regardless of the properties of the background, an unbiased estimate can be obtained of the parameter of the Pareto distribution characterizing anomalies in the upper tail of the element concentration frequency distribution or the lower tail of the local singularity distribution. Computer simulation experiments and practical examples are used to illustrate the approach.

  19. Empirical profile mixture models for phylogenetic reconstruction

    National Research Council Canada - National Science Library

    Si Quang, Le; Gascuel, Olivier; Lartillot, Nicolas

    2008-01-01

    Motivation: Previous studies have shown that accounting for site-specific amino acid replacement patterns using mixtures of stationary probability profiles offers a promising approach for improving...

  20. Fitting a mixture model by expectation maximization to discover motifs in biopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)

    1994-12-31

    The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.
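
    A heavily simplified sketch of the underlying two-component mixture idea: each fixed-width sequence window is attributed either to a position-specific motif model or to a background model, and EM re-estimates both together with the mixing weight. The full algorithm's handling of overlapping windows, motif erasing and width selection is omitted, and the toy sequences are invented.

```python
# Simplified two-component mixture EM over fixed-width DNA windows:
# component 1 is a position-specific motif model, component 2 a background
# model; EM updates both plus the mixing weight lambda.
import numpy as np

ALPHABET = "ACGT"

def one_hot(windows):
    idx = np.array([[ALPHABET.index(c) for c in w] for w in windows])
    X = np.zeros((len(windows), idx.shape[1], 4))
    X[np.arange(len(windows))[:, None], np.arange(idx.shape[1]), idx] = 1.0
    return X   # shape (n_windows, width, 4)

def em_motif(windows, n_iter=50, seed=0):
    X = one_hot(windows)
    rng = np.random.default_rng(seed)
    motif = rng.dirichlet(np.ones(4), size=X.shape[1])   # position-specific frequencies
    background = np.full(4, 0.25)
    lam = 0.1                                            # prior prob. a window is motif
    for _ in range(n_iter):
        # E-step: posterior probability that each window comes from the motif
        ll_motif = (X * np.log(motif)).sum(axis=(1, 2))
        ll_bg = (X * np.log(background)).sum(axis=(1, 2))
        log_odds = np.log(lam) + ll_motif - (np.log(1 - lam) + ll_bg)
        z = 1.0 / (1.0 + np.exp(-log_odds))
        # M-step: weighted letter counts (small pseudocount for stability)
        motif = (X * z[:, None, None]).sum(axis=0) + 0.1
        motif /= motif.sum(axis=1, keepdims=True)
        background = (X * (1 - z)[:, None, None]).sum(axis=(0, 1)) + 0.1
        background /= background.sum()
        lam = z.mean()
    return motif, background, lam, z

windows = ["ACGTA", "ACGTT", "ACGTC", "TTTTT", "GGGGA", "ACGTA", "CCCCC"]
motif, bg, lam, z = em_motif(windows)
print("estimated motif fraction:", round(lam, 2))
print("window motif posteriors:", np.round(z, 2))
```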

  1. Modeling methods for mixture-of-mixtures experiments applied to a tablet formulation problem.

    Science.gov (United States)

    Piepel, G F

    1999-01-01

    During the past few years, statistical methods for the experimental design, modeling, and optimization of mixture experiments have been widely applied to drug formulation problems. Different methods are required for mixture-of-mixtures (MoM) experiments in which a formulation is a mixture of two or more "major" components, each of which is a mixture of one or more "minor" components. Two types of MoM experiments are briefly described. A tablet formulation optimization example from a 1997 article in this journal is used to illustrate one type of MoM experiment and corresponding empirical modeling methods. Literature references that discuss other methods for MoM experiments are also provided.

  2. Goal-Directed Aiming: Two Components but Multiple Processes

    Science.gov (United States)

    Elliott, Digby; Hansen, Steve; Grierson, Lawrence E. M.; Lyons, James; Bennett, Simon J.; Hayes, Spencer J.

    2010-01-01

    This article reviews the behavioral literature on the control of goal-directed aiming and presents a multiple-process model of limb control. The model builds on recent variants of Woodworth's (1899) two-component model of speed-accuracy relations in voluntary movement and incorporates ideas about dynamic online limb control based on prior…

  3. Inhibitors targeting two-component signal transduction.

    Science.gov (United States)

    Watanabe, Takafumi; Okada, Ario; Gotoh, Yasuhiro; Utsumi, Ryutaro

    2008-01-01

    A two-component signal transduction system (TCS) is an attractive target for antibacterial agents. In this chapter, we review the TCS inhibitors developed during the past decade and introduce novel drug discovery systems to isolate the inhibitors of the YycG/YycF system, an essential TCS for bacterial growth, in an effort to develop a new class of antibacterial agents.

  4. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  5. Species Tree Inference Using a Mixture Model.

    Science.gov (United States)

    Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens

    2015-09-01

    Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic

  6. A computer graphical user interface for survival mixture modelling of recurrent infections.

    Science.gov (United States)

    Lee, Andy H; Zhao, Yun; Yau, Kelvin K W; Ng, S K

    2009-03-01

    Recurrent infections data are commonly encountered in medical research, where the recurrent events are characterised by an acute phase followed by a stable phase after the index episode. Two-component survival mixture models, in both proportional hazards and accelerated failure time settings, are presented as a flexible method of analysing such data. To account for the inherent dependency of the recurrent observations, random effects are incorporated within the conditional hazard function, in the manner of generalised linear mixed models. Assuming a Weibull or log-logistic baseline hazard in both mixture components of the survival mixture model, an EM algorithm is developed for the residual maximum quasi-likelihood estimation of fixed effect and variance component parameters. The methodology is implemented as a graphical user interface coded using Microsoft Visual C++. An application to modelling recurrent urinary tract infections in elderly women is illustrated, where significant individual variations are evident at both acute and stable phases. The survival mixture methodology developed enables practitioners to identify pertinent risk factors affecting the recurrent times and to draw valid conclusions inferred from these correlated and heterogeneous survival data.
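
    As a rough illustration of a two-component survival mixture (without the random effects, censoring handling or EM machinery described above), the sketch below fits a mixture of two Weibull densities to synthetic recurrence times by direct maximum likelihood. All data and starting values are hypothetical.

```python
# Minimal sketch: two-component Weibull mixture fitted by direct MLE
# (no censoring, no random effects) to synthetic recurrence times.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Synthetic recurrence times: an "acute" and a "stable" component.
t = np.concatenate([weibull_min.rvs(1.5, scale=5.0, size=300, random_state=1),
                    weibull_min.rvs(2.0, scale=40.0, size=200, random_state=2)])

def neg_log_lik(params):
    logit_p, log_k1, log_s1, log_k2, log_s2 = params
    p = 1.0 / (1.0 + np.exp(-logit_p))
    pdf1 = weibull_min.pdf(t, np.exp(log_k1), scale=np.exp(log_s1))
    pdf2 = weibull_min.pdf(t, np.exp(log_k2), scale=np.exp(log_s2))
    return -np.sum(np.log(p * pdf1 + (1 - p) * pdf2 + 1e-300))

res = minimize(neg_log_lik, x0=[0.0, 0.0, np.log(3.0), 0.0, np.log(30.0)],
               method="Nelder-Mead", options={"maxiter": 5000})
logit_p, *rest = res.x
print("mixing proportion:", round(1 / (1 + np.exp(-logit_p)), 2))
print("shape/scale pairs:", np.round(np.exp(rest), 2))
```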

  7. Itinerant ferromagnetism in a polarized two-component Fermi gas.

    Science.gov (United States)

    Massignan, Pietro; Yu, Zhenhua; Bruun, Georg M

    2013-06-07

    We analyze when a repulsively interacting two-component Fermi gas becomes thermodynamically unstable against phase separation. We focus on the strongly polarized limit, where the free energy of the homogeneous mixture can be calculated accurately in terms of well-defined quasiparticles, the repulsive polarons. Phase diagrams as a function of polarization, temperature, mass imbalance, and repulsive polaron energy, as well as scattering length and range parameter, are provided. We show that the lifetime of the repulsive polaron increases significantly with the interaction range and the mass of the minority atoms, raising the prospects of detecting the transition to the elusive itinerant ferromagnetic state with ultracold atoms.

  8. Learning High-Dimensional Mixtures of Graphical Models

    CERN Document Server

    Anandkumar, A; Kakade, S M

    2012-01-01

    We consider the problem of learning mixtures of discrete graphical models in high dimensions and propose a novel method for estimating the mixture components with provable guarantees. The method proceeds mainly in three stages. In the first stage, it estimates the union of the Markov graphs of the mixture components (referred to as the union graph) via a series of rank tests. It then uses this estimated union graph to compute the mixture components via a spectral decomposition method. The spectral decomposition method was originally proposed for latent class models, and we adapt this method for learning the more general class of graphical model mixtures. In the end, the method produces tree approximations of the mixture components via the Chow-Liu algorithm. Our output is thus a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. When the union graph has sparse node separators, we prove that our method has sample and computational complexities scaling as poly(p, ...

  9. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, a number which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
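
    The q(q+1)/2 count comes from the q linear terms plus the q(q-1)/2 cross-product terms of the Scheffé quadratic mixture model. The short sketch below builds that model matrix for a few values of q to show how quickly it grows; the example blends are random and purely illustrative.

```python
# Scheffe second-order mixture model: q linear terms x_i plus q(q-1)/2
# cross-products x_i*x_j, i.e. q(q+1)/2 terms in total.
import itertools
import numpy as np

def scheffe_quadratic_matrix(X):
    """X: (n_runs, q) array of mixture proportions (rows sum to 1)."""
    q = X.shape[1]
    cross = [X[:, i] * X[:, j] for i, j in itertools.combinations(range(q), 2)]
    return np.column_stack([X] + cross)

for q in (3, 5, 10):
    X = np.random.default_rng(0).dirichlet(np.ones(q), size=4)  # 4 example blends
    M = scheffe_quadratic_matrix(X)
    print(f"q = {q:2d}: {M.shape[1]} model terms (= q(q+1)/2 = {q * (q + 1) // 2})")
```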

  10. A stochastic evolutionary model generating a mixture of exponential distributions

    CERN Document Server

    Fenner, Trevor; Loizou, George

    2015-01-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in Fenner et al. (2015) so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
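
    A small, self-contained sketch of the kind of fit reported above: an EM algorithm for a two-component exponential mixture applied to synthetic lifetime data (the actual query data set is not reproduced here, and the rates are invented).

```python
# EM for a two-component exponential mixture on synthetic "lifetime" data.
import numpy as np

rng = np.random.default_rng(0)
t = np.concatenate([rng.exponential(2.0, 700),     # short-lived items
                    rng.exponential(30.0, 300)])   # long-lived items

w, lam = 0.5, np.array([1.0, 0.05])                # initial weight and rates
for _ in range(200):
    # E-step: responsibility of the first (fast-decay) component
    d1 = w * lam[0] * np.exp(-lam[0] * t)
    d2 = (1 - w) * lam[1] * np.exp(-lam[1] * t)
    r = d1 / (d1 + d2)
    # M-step: update mixing weight and rate parameters
    w = r.mean()
    lam = np.array([r.sum() / (r * t).sum(), (1 - r).sum() / ((1 - r) * t).sum()])

print(f"weight = {w:.2f}, component means = {1 / lam[0]:.1f}, {1 / lam[1]:.1f}")
```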

  11. Decomposition driven interface evolution for layers of binary mixtures: I. Model derivation and stratified base states

    CERN Document Server

    Thiele, Uwe; Frastia, Lubor

    2007-01-01

    A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.

  12. Detection of unobserved heterogeneity with growth mixture models

    OpenAIRE

    Jost Reinecke; Luca Mariotti

    2009-01-01

    Latent growth curve models as structural equation models are extensively discussed in various research fields (Duncan et al., 2006). Recent methodological and statistical extensions focus on the consideration of unobserved heterogeneity in empirical data. Muthén extended the classical structural equation approach by mixture components, i.e. categorical latent classes (Muthén 2002, 2004, 2007). The paper will discuss applications of growth mixture models with data from one of the first panel...

  13. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for asparta

  14. Approximation of the breast height diameter distribution of two-cohort stands by mixture models III Kernel density estimators vs mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  15. Two Component Signal Transduction in Desulfovibrio Species

    Energy Technology Data Exchange (ETDEWEB)

    Luning, Eric; Rajeev, Lara; Ray, Jayashree; Mukhopadhyay, Aindrila

    2010-05-17

    The environmentally relevant Desulfovibrio species are sulfate-reducing bacteria that are of interest in the bioremediation of heavy metal contaminated water. Among these, the genome of D. vulgaris Hildenborough encodes a large number of two-component systems consisting of 72 putative response regulators (RR) and 64 putative histidine kinases (HK), the majority of which are uncharacterized. We classified the D. vulgaris Hildenborough RRs based on their output domains and compared the distribution of RRs in other sequenced Desulfovibrio species. We have successfully purified most RRs and several HKs as His-tagged proteins. We performed phospho-transfer experiments to verify relationships between cognate pairs of HK and RR, and we have also mapped a few non-cognate HK-RR pairs. Presented here are our discoveries from the Desulfovibrio RR categorization and results from the in vitro studies using purified His-tagged D. vulgaris HKs and RRs.

  16. Two-Component Description for Relativistic Fermions

    Institute of Scientific and Technical Information of China (English)

    CHEN Yu-Qi; SANG Wen-Long; YANG Lan-Fei

    2009-01-01

    We propose a two-component form to describe massive relativistic fermions in gauge theories. Relations between the Green's functions in this form and those in the conventional four-component form are derived. It is shown that the S-matrix elements in both forms are exactly the same. The description of the fermion in the new form simplifies significantly the γ-matrix algebra of the four-component form. In particular, in perturbative calculations the propagator of the fermion is a scalar function. As examples, we use this form to reproduce the relativistic spectrum of the hydrogen atom, the S-matrix of e+e- → μ+μ-, and the QED one-loop vacuum polarization of the photon.

  17. Modeling and interpreting biological effects of mixtures in the environment: introduction to the metal mixture modeling evaluation project.

    Science.gov (United States)

    Van Genderen, Eric; Adams, William; Dwyer, Robert; Garman, Emily; Gorsuch, Joseph

    2015-04-01

    The fate and biological effects of chemical mixtures in the environment are receiving increased attention from the scientific and regulatory communities. Understanding the behavior and toxicity of metal mixtures poses unique challenges for incorporating metal-specific concepts and approaches, such as bioavailability and metal speciation, in multiple-metal exposures. To avoid the use of oversimplified approaches to assess the toxicity of metal mixtures, a collaborative 2-yr research project and multistakeholder group workshop were conducted to examine and evaluate available higher-tiered chemical speciation-based metal mixtures modeling approaches. The Metal Mixture Modeling Evaluation project and workshop achieved 3 important objectives related to modeling and interpretation of biological effects of metal mixtures: 1) bioavailability models calibrated for single-metal exposures can be integrated to assess mixture scenarios; 2) the available modeling approaches perform consistently well for various metal combinations, organisms, and endpoints; and 3) several technical advancements have been identified that should be incorporated into speciation models and environmental risk assessments for metals.

  18. Simulation of rheological behavior of asphalt mixture with lattice model

    Institute of Scientific and Technical Information of China (English)

    杨圣枫; 杨新华; 陈传尧

    2008-01-01

    A three-dimensional (3D) lattice model for predicting the rheological behavior of asphalt mixtures was presented. In this model asphalt mixtures were described as a two-phase composite material consisting of asphalt sand and coarse aggregates distributed randomly. Asphalt sand was regarded as a viscoelastic material and aggregates as an elastic material. The rheological response of asphalt mixture subjected to different constant stresses was simulated. The calibrated overall creep strain shows a good approximation to experimental results.

  19. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    2011, and compute dollar losses and implied standard deviation losses. We compare our results to those of existing mixture models and other benchmarks like component models and jump models. Using the model confidence set test, the overall dollar root mean squared error of the best performing benchmark...

  20. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    Science.gov (United States)

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
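
    A stripped-down illustration of the sampling scheme: the sketch below runs a Gibbs sampler for a homoscedastic two-component normal mixture without the genetic and permanent-environment random effects of the full model; the data and flat priors are synthetic and chosen only for simplicity.

```python
# Minimal Gibbs sampler for a homoscedastic two-component normal mixture
# ("healthy" vs "diseased" scores), as a simplified stand-in for the model above.
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(3.0, 1.0, 800),    # "healthy" scores
                    rng.normal(6.0, 1.0, 200)])   # "diseased" scores
n = len(y)

mu = np.array([y.min(), y.max()])   # component means (healthy, diseased)
sigma2, p = 1.0, 0.5                # shared variance, P(diseased)
for _ in range(2000):
    # 1) sample component labels given the current parameters
    like_d = p * np.exp(-(y - mu[1]) ** 2 / (2 * sigma2))
    like_h = (1 - p) * np.exp(-(y - mu[0]) ** 2 / (2 * sigma2))
    z = rng.random(n) < like_d / (like_d + like_h)          # True -> "diseased"
    # 2) sample parameters given the labels (flat priors, conjugate updates)
    for k, mask in enumerate([~z, z]):
        m = mask.sum()
        mu[k] = rng.normal(y[mask].mean(), np.sqrt(sigma2 / m)) if m else mu[k]
    resid = y - np.where(z, mu[1], mu[0])
    sigma2 = (resid @ resid / 2) / rng.gamma(n / 2)          # inverse-gamma draw
    p = rng.beta(1 + z.sum(), 1 + n - z.sum())

print("final draw:", np.round(mu, 2), round(float(np.sqrt(sigma2)), 2), round(float(p), 2))
```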

  1. Conversion of syngas to liquid hydrocarbons over a two-component (Cr2O3-ZnO and ZSM-5 zeolite) catalyst: kinetic modelling and catalyst deactivation

    Energy Technology Data Exchange (ETDEWEB)

    Erena, J.; Arandes, J.M.; Bilbao, J.; Gayubo, A.G. [Universidad del Pais Vasco, Bilbao (Spain). Dept. de Ingeneria Quimica; De Lasa, H.I. [University of Western Ontario, London, ONT (Canada). Chemical Reactor Engineering Centre

    2000-05-01

    The present study describes the kinetics of syngas transformation into liquid hydrocarbons (boiling point in the gasoline range) using as catalyst a mixture of a metallic component, Cr2O3-ZnO, and of an acidic component, ZSM-5 zeolite. Experimental results were obtained in an isothermal fixed-bed integral reactor. The validity of several kinetic models, available for methanol synthesis, is analysed and modifications are proposed. These changes involve a rate equation with a CO2 concentration-dependent term. Catalyst deactivation is also evaluated and the effect of the operating conditions on coke deposition is established. Moreover, the rate of CO conversion and the change of catalytic activity with time-on-stream were described using a kinetic model showing a weak influence of temperature.

  2. Tobacco two-component gene NTHK2

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    By using a previously isolated tobacco two-component gene NTHK1 as a probe, we screened a cDNA library and obtained a homologous gene designated NTHK2. Sequencing analysis revealed that NTHK2 encoded a putative ethylene receptor homolog and contained a histidine kinase domain and a receiver domain. In the histidine kinase domain, the histidine at the phosphorylation site was replaced by an asparagine. Southern analysis indicated that NTHK2 was present at low copy number in the tobacco genome. The expression of NTHK2 was studied using a competitive RT-PCR method. It was found that, in young flower buds, NTHK2 was expressed abundantly, while in other organs or tissues, it was expressed at a low level. When leaves were subjected to wounding (cutting) treatment, NTHK2 expression was increased. When tobacco seedlings were stressed with PEG and heat shock, NTHK2 transcription was also enhanced. Other treatments showed little effect. These results indicated that NTHK2 might be involved in the developmental processes and in plant responses to some environmental stresses.

  3. Proper Versus Improper Mixtures in the ESR Model

    CERN Document Server

    Garola, Claudio

    2011-01-01

    The interpretation of mixtures is problematic in quantum mechanics (QM) because of nonobjectivity of properties. The ESR model restores objectivity reinterpreting quantum probabilities as conditional on detection and embodying the mathematical formalism of QM into a broader noncontextual (hence local) framework. We have recently provided a Hilbert space representation of the generalized observables that appear in the ESR model. We show here that each proper mixture is represented by a family of density operators parametrized by the macroscopic properties characterizing the physical system $\\Omega$ that is considered and that each improper mixture is represented by a single density operator which coincides with the operator that represents it in QM. The new representations avoid the problems mentioned above and entail some predictions that differ from the predictions of QM. One can thus contrive experiments for distinguishing empirically proper from improper mixtures, hence for confirming or disproving the ESR...

  4. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.

  5. Stochastic downscaling of precipitation with neural network conditional mixture models

    Science.gov (United States)

    Carreau, Julie; Vrac, Mathieu

    2011-10-01

    We present a new class of stochastic downscaling models, the conditional mixture models (CMMs), which builds on neural network models. CMMs are mixture models whose parameters are functions of predictor variables. These functions are implemented with a one-layer feed-forward neural network. By combining the approximation capabilities of mixtures and neural networks, CMMs can, in principle, represent arbitrary conditional distributions. We evaluate the CMMs at downscaling precipitation data at three stations in the French Mediterranean region. A discrete (Dirac) component is included in the mixture to handle the "no-rain" events. Positive rainfall is modeled with a mixture of continuous densities, which can be either Gaussian, log-normal, or hybrid Pareto (an extension of the generalized Pareto). CMMs are stochastic weather generators in the sense that they provide a model for the conditional density of local variables given large-scale information. In this study, we did not look for the most appropriate set of predictors, and we settled for a decent set as the basis to compare the downscaling models. The set of predictors includes the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses sea level pressure fields on a 6 × 6 grid cell region surrounding the stations plus three date variables. We compare the three distribution families of CMMs with a simpler benchmark model, which is more common in the downscaling community. The difference between the benchmark model and CMMs is that positive rainfall is modeled with a single Gamma distribution. The results show that CMM with hybrid Pareto components outperforms both the CMM with Gaussian components and the benchmark model in terms of log-likelihood. However, there is no significant difference with the log-normal CMM. In general, the additional flexibility of mixture models, as opposed to using a single distribution, allows us to better represent the
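
    The benchmark model mentioned above pairs a point mass at zero for dry days with a single Gamma density for positive amounts. The sketch below fits that structure to synthetic daily precipitation, leaving out the conditioning on large-scale predictors that the downscaling models use; all values are illustrative.

```python
# Sketch of a benchmark-style precipitation model: a Dirac component at zero
# for dry days plus one Gamma density for wet-day amounts (no predictors).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic daily precipitation (mm): ~60% dry days, Gamma-distributed wet days.
precip = np.where(rng.random(3000) < 0.6, 0.0, rng.gamma(0.8, 6.0, 3000))

p_dry = np.mean(precip == 0.0)
wet = precip[precip > 0]
shape, _, scale = stats.gamma.fit(wet, floc=0)      # fit Gamma to wet-day amounts

def sample_day(rng):
    return 0.0 if rng.random() < p_dry else rng.gamma(shape, scale)

print(f"P(dry) = {p_dry:.2f}, Gamma shape = {shape:.2f}, scale = {scale:.2f}")
print("simulated week (mm):", np.round([sample_day(rng) for _ in range(7)], 1))
```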

  6. Count data modeling and classification using finite mixtures of distributions.

    Science.gov (United States)

    Bouguila, Nizar

    2011-02-01

    In this paper, we consider the problem of constructing accurate and flexible statistical representations for count data, which we often confront in many areas such as data mining, computer vision, and information retrieval. In particular, we analyze and compare several generative approaches widely used for count data clustering, namely multinomial, multinomial Dirichlet, and multinomial generalized Dirichlet mixture models. Moreover, we propose a clustering approach via a mixture model based on a composition of the Liouville family of distributions, from which we select the Beta-Liouville distribution, and the multinomial. The novel proposed model, which we call multinomial Beta-Liouville mixture, is optimized by deterministic annealing expectation-maximization and minimum description length, and strives to achieve a high accuracy of count data clustering and model selection. An important feature of the multinomial Beta-Liouville mixture is that it has fewer parameters than the recently proposed multinomial generalized Dirichlet mixture. The performance evaluation is conducted through a set of extensive empirical experiments, which concern text and image texture modeling and classification and shape modeling, and highlights the merits of the proposed models and approaches.
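
    As a baseline for the count-data mixtures discussed above, the following sketch implements EM for a plain multinomial mixture on synthetic "documents"; the Dirichlet and Beta-Liouville variants replace the component distributions but keep the same E-step/M-step structure. The vocabulary, topics and counts are invented.

```python
# EM for a plain multinomial mixture: cluster count vectors (e.g. word counts).
import numpy as np

def multinomial_mixture_em(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, V = X.shape
    theta = rng.dirichlet(np.ones(V), size=K)       # per-cluster word probabilities
    pi = np.full(K, 1.0 / K)                        # cluster weights
    for _ in range(n_iter):
        # E-step: responsibilities in the log domain (multinomial coefficient cancels)
        log_r = np.log(pi) + X @ np.log(theta).T    # shape (n, K)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and word probabilities (small pseudocount)
        pi = r.mean(axis=0)
        theta = r.T @ X + 1e-3
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r

rng = np.random.default_rng(1)
# Two synthetic "topics" over a 6-word vocabulary.
t1, t2 = rng.dirichlet(np.ones(6)), rng.dirichlet(np.ones(6))
X = np.vstack([rng.multinomial(50, t1, size=40), rng.multinomial(50, t2, size=40)])
pi, theta, r = multinomial_mixture_em(X, K=2)
print("cluster weights:", np.round(pi, 2))
print("first/last document assignments:", r[0].argmax(), r[-1].argmax())
```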

  7. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    framework. The global form of heterogeneity is incorporated in a Hedonic Price Index model that encompasses a nonlinear function of the geographical coordinates of each dwelling. The local form of heterogeneity is subsequently modeled as a Finite Mixture Model for the residuals of the Hedonic Index...

  8. Novel mixture model for the representation of potential energy surfaces

    Science.gov (United States)

    Pham, Tien Lam; Kino, Hiori; Terakura, Kiyoyuki; Miyake, Takashi; Dam, Hieu Chi

    2016-10-01

    We demonstrate that knowledge of chemical physics on a materials system can be automatically extracted from first-principles calculations using a data mining technique; this information can then be utilized to construct a simple empirical atomic potential model. By using unsupervised learning of the generative Gaussian mixture model, physically meaningful patterns of atomic local chemical environments can be detected automatically. Based on the obtained information regarding these atomic patterns, we propose a chemical-structure-dependent linear mixture model for estimating the atomic potential energy. Our experiments show that the proposed mixture model significantly improves the accuracy of the prediction of the potential energy surface for complex systems that possess a large diversity in their local structures.

  9. Finite mixture varying coefficient models for analyzing longitudinal heterogenous data.

    Science.gov (United States)

    Lu, Zhaohua; Song, Xinyuan

    2012-03-15

    This paper aims to develop a mixture model to study heterogeneous longitudinal data on the treatment effect of heroin use from a California Civil Addict Program. Each component of the mixture is characterized by a varying coefficient mixed effect model. We use the Bayesian P-splines approach to approximate the varying coefficient functions. We develop Markov chain Monte Carlo algorithms to estimate the smooth functions, unknown parameters, and latent variables in the model. We use modified deviance information criterion to determine the number of components in the mixture. A simulation study demonstrates that the modified deviance information criterion selects the correct number of components and the estimation of unknown quantities is accurate. We apply the proposed model to the heroin treatment study. Furthermore, we identify heterogeneous longitudinal patterns.

  10. Phylogenetic mixture models can reduce node-density artifacts.

    Science.gov (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark

    2008-04-01

    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the

  11. Community Detection Using Multilayer Edge Mixture Model

    CERN Document Server

    Zhang, Han; Lai, Jian-Huang; Yu, Philip S

    2016-01-01

    A wide range of complex systems can be modeled as networks with corresponding constraints on the edges and nodes, which have been extensively studied in recent years. Nowadays, with the progress of information technology, systems that contain the information collected from multiple perspectives have been generated. The conventional models designed for single perspective networks fail to depict the diverse topological properties of such systems, so multilayer network models aiming at describing the structure of these networks emerge. As a major concern in network science, decomposing the networks into communities, which usually refers to closely interconnected node groups, extracts valuable information about the structure and interactions of the network. Unlike the contention of dozens of models and methods in conventional single-layer networks, methods aiming at discovering the communities in the multilayer networks are still limited. In order to help explore the community structure in multilayer networks, we...

  12. Modeling Biodegradation Kinetics on Benzene and Toluene and Their Mixture

    Directory of Open Access Journals (Sweden)

    Aparecido N. Módenes

    2007-10-01

    The objective of this work was to model the biodegradation kinetics of the toxic compounds toluene and benzene as pure substrates and in a mixture. As a control, the Monod and Andrews models were used. To predict substrate interactions, more sophisticated inhibition and competition models, and the SKIP (sum kinetics interactions parameters) model, were applied. The models were evaluated against experimental data on Pseudomonas putida F1 activity published in the literature. For parameter identification, the global method of particle swarm optimization (PSO) was applied. The simulation results show that the biodegradation of a pure toxic substrate is best described by the Andrews model, while the biodegradation of a mixture of toxic substrates is best modeled by the modified competitive inhibition and SKIP models. The developed software can be used as a toolbox of kinetic models for industrial wastewater treatment, for process design and optimization.
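
    For reference, the two control models named above have standard textbook forms; the following sketch writes them as simple Python functions, with purely hypothetical parameter values.

    def monod(S, mu_max, Ks):
        """Monod specific growth rate: mu = mu_max * S / (Ks + S)."""
        return mu_max * S / (Ks + S)

    def andrews(S, mu_max, Ks, Ki):
        """Andrews (Haldane) model with substrate inhibition:
        mu = mu_max * S / (Ks + S + S**2 / Ki)."""
        return mu_max * S / (Ks + S + S**2 / Ki)

    # Hypothetical parameter values, for illustration only
    print(monod(S=20.0, mu_max=0.8, Ks=5.0))
    print(andrews(S=20.0, mu_max=0.8, Ks=5.0, Ki=100.0))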

  13. Binding of Solvent Molecules to a Protein Surface in Binary Mixtures Follows a Competitive Langmuir Model.

    Science.gov (United States)

    Kulschewski, Tobias; Pleiss, Jürgen

    2016-09-06

    The binding of solvent molecules to a protein surface was modeled by molecular dynamics simulations of Candida antarctica (C. antarctica) lipase B in binary mixtures of water, methanol, and toluene. Two models were analyzed: a competitive Langmuir model which assumes identical solvent binding sites with a different affinity toward water (KWat), methanol (KMet), and toluene (KTol) and a competitive Langmuir model with an additional interaction between free water and already bound water (KWatWat). The numbers of protein-bound molecules of both components of a binary mixture were determined for different compositions as a function of their thermodynamic activities in the bulk phase, and the binding constants were simultaneously fitted to the six binding curves (two components of three different mixtures). For both Langmuir models, the values of KWat, KMet, and KTol were highly correlated. The highest binding affinity was found for methanol, which was almost 4-fold higher than the binding affinities of water and toluene (KMet ≫ KWat ≈ KTol). Binding of water was dominated by the water-water interaction (KWatWat). Even for the three protein surface patches of highest water affinity, the binding affinity of methanol was 2-fold higher than water and 8-fold higher than toluene (KMet > KWat > KTol). The Langmuir model provides insights into the protein destabilizing mechanism of methanol, which has a high binding affinity toward the protein surface. Thus, destabilizing solvents compete with intraprotein interactions and disrupt the tertiary structure. In contrast, benign solvents such as water or toluene have a low affinity toward the protein surface. Water is a special solvent: only a few water molecules bind directly to the protein; most water molecules bind to already bound water molecules, thus forming water patches. A quantitative mechanistic model of protein-solvent interactions that includes competition and miscibility of the components contributes a robust basis
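
    A minimal numerical sketch of a competitive Langmuir isotherm of the kind described (fraction of identical surface sites occupied by each solvent as a function of its bulk activity) is given below; the binding constants and activities are hypothetical, and the additional water-water interaction term of the extended model is omitted.

    def competitive_langmuir(a, K):
        """Fractional occupancy of identical sites by each of several solvents.

        a, K: dicts of thermodynamic activity and binding constant per solvent.
        theta_i = K_i * a_i / (1 + sum_j K_j * a_j)
        """
        denom = 1.0 + sum(K[s] * a[s] for s in a)
        return {s: K[s] * a[s] / denom for s in a}

    # Hypothetical constants reflecting the reported ordering K_Met >> K_Wat ~ K_Tol
    theta = competitive_langmuir(a={'water': 0.3, 'methanol': 0.5, 'toluene': 0.2},
                                 K={'water': 1.0, 'methanol': 4.0, 'toluene': 1.0})
    print(theta)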

  14. Statistical Compressed Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2011-01-01

    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution, and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...
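
    For a single Gaussian signal model, a decoder implemented via linear filtering can be written as the Gaussian posterior-mean estimate; the sketch below illustrates that idea under an assumed additive-noise model and arbitrary toy dimensions, and is not claimed to reproduce the paper's exact decoder.

    import numpy as np

    def gaussian_map_decode(y, Phi, mu, Sigma, noise_var=1e-6):
        """Posterior-mean (linear) decoder for x ~ N(mu, Sigma) observed as y = Phi x + noise."""
        S = Phi @ Sigma @ Phi.T + noise_var * np.eye(Phi.shape[0])
        gain = Sigma @ Phi.T @ np.linalg.solve(S, y - Phi @ mu)
        return mu + gain

    # Tiny illustration with random data (dimensions are arbitrary)
    rng = np.random.default_rng(1)
    N, k = 32, 8                                  # signal dimension and number of measurements
    Sigma = np.diag(np.linspace(1.0, 0.01, N))    # assumed signal covariance
    mu = np.zeros(N)
    x = rng.multivariate_normal(mu, Sigma)
    Phi = rng.normal(size=(k, N)) / np.sqrt(k)
    y = Phi @ x
    x_hat = gaussian_map_decode(y, Phi, mu, Sigma)
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))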

  15. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    Science.gov (United States)

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…

  16. Multi-resolution image segmentation based on Gaussian mixture model

    Institute of Scientific and Technical Information of China (English)

    Tang Yinggan; Liu Dong; Guan Xinping

    2006-01-01

    The mixture-model-based image segmentation method, which assumes that image pixels are independent and does not consider the positional relationship between pixels, is not robust to noise and usually leads to misclassification. A new segmentation method, called the multi-resolution Gaussian mixture model method, is proposed. First, an image pyramid is constructed and a son-father link relationship is built between the levels of the pyramid. Then the mixture model segmentation method is applied to the top level. The segmentation result on the top level is passed top-down to the bottom level according to the son-father link relationship between levels. The proposed method considers not only local but also global image information; it overcomes the effect of noise and obtains better segmentation results. Experimental results demonstrate its effectiveness.
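
    The basic building block of such a method, a pixel-intensity Gaussian mixture segmentation at a single resolution, can be sketched as below with scikit-learn; the pyramid construction and the son-father propagation are not reproduced, and the synthetic image is only for illustration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic two-region grayscale image standing in for real input
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(0.2, 0.05, (64, 64)),
                          rng.normal(0.8, 0.05, (64, 64))], axis=0)

    # Fit a pixel-intensity GMM and label each pixel with its most likely component
    pixels = img.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(img.shape)
    print(np.bincount(labels.ravel()))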

  17. A Gamma Model for Mixture STR Samples

    DEFF Research Database (Denmark)

    Christensen, Susanne; Bøttcher, Susanne Gammelgaard; Morling, Niels

    This project investigates the behavior of the PCR Amplification Kit. A number of known DNA profiles are mixed two by two in "known" proportions and analyzed. Gamma distribution models are fitted to the resulting data to learn to what extent the actual mixing proportions can be rediscovered in the amplifier output, and thereby the question of confidence in separate DNA profiles suggested by an output is addressed.

  18. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    Science.gov (United States)

    2008-10-01

    diffusion, including metabolic loss via the cytochrome P-450 system, described by non-linear Michaelis-Menten kinetics as shown in the following... point. Inhalation and iv were the dose routes for the rat study. The modelers used saturable (Michaelis-Menten) kinetics as well as a second... Michaelis-Menten liver metabolic constants for n-decane have been measured (Km = 1.5 mg/L and Vmax = 0.4 mg/hour) using rat liver slices in a vial

  19. Hidden Markov Models with Factored Gaussian Mixtures Densities

    Institute of Scientific and Technical Information of China (English)

    LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua

    2004-01-01

    We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models(HMMs), which uses the factorial learning in the HMM framework. We derive the reestimation formulas for estimating the factorized parameters by the Expectation Maximization (EM) algorithm. We conduct several experiments to compare the performance of this model structure with Factorial Hidden Markov Models(FHMMs) and HMMs, some conclusions and promising empirical results are presented.

  20. A stochastic evolutionary model generating a mixture of exponential distributions

    Science.gov (United States)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
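
    The survival function of an exponential mixture, as used above, is simply a weighted sum of exponential decays; a small sketch (with hypothetical weights and rates) follows.

    import numpy as np

    def exp_mixture_survival(t, weights, rates):
        """S(t) = sum_i w_i * exp(-lambda_i * t) for an exponential mixture."""
        t = np.asarray(t, dtype=float)
        weights = np.asarray(weights, dtype=float)
        rates = np.asarray(rates, dtype=float)
        return np.exp(-np.outer(t, rates)) @ weights

    # Hypothetical two-component mixture evaluated at a few time points
    t = np.linspace(0, 24, 5)
    print(exp_mixture_survival(t, weights=[0.7, 0.3], rates=[0.5, 0.05]))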

  1. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  2. Hard-sphere kinetic models for inert and reactive mixtures

    Science.gov (United States)

    Polewczak, Jacek

    2016-10-01

    I consider stochastic variants of a simple reacting sphere (SRS) kinetic model (Xystris and Dahler 1978 J. Chem. Phys. 68 387-401, Qin and Dahler 1995 J. Chem. Phys. 103 725-50, Dahler and Qin 2003 J. Chem. Phys. 118 8396-404) for dense reacting mixtures. In contrast to line-of-center models of chemical reactions, in the SRS kinetic model the microscopic reversibility (detailed balance) can be easily shown to be satisfied, and thus all mathematical aspects of the model can be fully justified. In the SRS model, the molecules behave as if they were single mass points with two internal states. Collisions may alter the internal states of the molecules, and this occurs when the kinetic energy associated with the reactive motion exceeds the activation energy. Reactive and non-reactive collision events are considered to be hard sphere-like. I consider a four-component mixture A, B, A*, B*, in which the chemical reactions are of the type A + B ⇌ A* + B*, with A* and B* being species distinct from A and B. This work extends joint work with George Stell to kinetic models of dense inert and reactive mixtures. The idea of introducing a smearing-type effect in the collisional process results in a new class of stochastic kinetic models for both inert and reactive mixtures. In this paper the important new mathematical properties of such systems of kinetic equations are proven. New results for the stochastic revised Enskog system for inert mixtures are also provided.

  3. Robust estimation of unbalanced mixture models on samples with outliers.

    Science.gov (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-01

    Mixture models are often used to compactly represent samples from heterogeneous sources. However, in the real world, samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be obtained by high density data sensors such as imaging devices. Estimation of unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator incorporating trimming of the outliers based on component-wise confidence level ordering of observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets, one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, while the other contains structural magnetic resonance images of the brain with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method is capable of robustly estimating unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples in which the outlier fraction cannot be estimated in advance.

  4. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%.

  5. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemysław Biecek

    2012-04-01

    Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we equipped the package also with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model-fitting in all modeling variants. The package can also be applied to selection of the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  6. Simulating asymmetric colloidal mixture with adhesive hard sphere model.

    Science.gov (United States)

    Jamnik, A

    2008-06-21

    Monte Carlo simulation and Percus-Yevick (PY) theory are used to investigate the structural properties of a two-component system of Baxter adhesive fluids with size asymmetry between the particles of the two components, mimicking an asymmetric binary colloidal mixture. The radial distribution functions for all possible species pairs, g11(r), g22(r), and g12(r), exhibit discontinuities at the interparticle distances corresponding to certain combinations of n and m values (n and m being integers) in the sum nσ1 + mσ2 (σ1 and σ2 being the hard-core diameters of the individual components), as a consequence of the impulse character of the 1-1, 2-2, and 1-2 attractive interactions. In contrast to the PY theory, which predicts delta function peaks in the shape of gij(r) only at distances that are multiples of the molecular sizes, corresponding to different linear structures of successively connected particles, the simulation results reveal additional peaks at intermediate distances originating from the formation of rigid clusters of various geometries.

  7. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a non-parametric family. The main results are consistency of the non-parametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  8. Comparing State SAT Scores Using a Mixture Modeling Approach

    Science.gov (United States)

    Kim, YoungKoung Rachel

    2009-01-01

    Presented at the national conference for AERA (American Educational Research Association) in April 2009. The large variability of SAT taker population across states makes state-by-state comparisons of the SAT scores challenging. Using a mixture modeling approach, therefore, the current study presents a method of identifying subpopulations in terms…

  9. Detecting Social Desirability Bias Using Factor Mixture Models

    Science.gov (United States)

    Leite, Walter L.; Cooper, Lou Ann

    2010-01-01

    Based on the conceptualization that social desirable bias (SDB) is a discrete event resulting from an interaction between a scale's items, the testing situation, and the respondent's latent trait on a social desirability factor, we present a method that makes use of factor mixture models to identify which examinees are most likely to provide…

  10. Graphene Oxide: A One- versus Two-Component Material.

    Science.gov (United States)

    Naumov, Anton; Grote, Fabian; Overgaard, Marc; Roth, Alexandra; Halbig, Christian E; Nørgaard, Kasper; Guldi, Dirk M; Eigler, Siegfried

    2016-09-14

    The structure of graphene oxide (GO) is a matter of discussion. While established GO models are based on functional groups attached to the carbon framework, another frequently used model claims that GO consists of two components, a slightly oxidized graphene core and highly oxidized molecular species, oxidative debris (OD), adsorbed on it. Those adsorbents are claimed to be the origin for optical properties of GO. Here, we examine this model by preparing GO with a low degree of functionalization, combining it with OD and studying the optical properties of both components and their combination in an artificial two-component system. The analyses of absorption and emission spectra as well as lifetime measurements reveal that properties of the combined system are distinctly different from those of GO. That confirms structural models of GO as a separate oxygenated hexagonal carbon framework with optical properties governed by its internal structure rather than the presence of OD. Understanding the structure of GO allows further reliable interpretation of its optical and electronic properties and enables controlled processing of GO.

  11. An integral equation model for warm and hot dense mixtures

    CERN Document Server

    Starrett, C E; Daligault, J; Hamel, S

    2014-01-01

    In Starrett and Saumon [Phys. Rev. E 87, 013104 (2013)] a model for the calculation of electronic and ionic structures of warm and hot dense matter was described and validated. In that model the electronic structure of one "atom" in a plasma is determined using a density functional theory based average-atom (AA) model, and the ionic structure is determined by coupling the AA model to integral equations governing the fluid structure. That model was for plasmas with one nuclear species only. Here we extend it to treat plasmas with many nuclear species, i.e. mixtures, and apply it to a carbon-hydrogen mixture relevant to inertial confinement fusion experiments. Comparison of the predicted electronic and ionic structures with orbital-free and Kohn-Sham molecular dynamics simulations reveals excellent agreement wherever chemical bonding is not significant.

  12. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict...

  13. TASI 2011 lectures notes: two-component fermion notation and supersymmetry

    OpenAIRE

    Martin, Stephen P.

    2012-01-01

    These notes, based on work with Herbi Dreiner and Howie Haber, discuss how to do practical calculations of cross sections and decay rates using two-component fermion notation, as appropriate for supersymmetry and other beyond-the-Standard-Model theories. Included are a list of two-component fermion Feynman rules for the Minimal Supersymmetric Standard Model, and some example calculations.

  14. A general mixture model for sediment laden flows

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping; Bombardelli, Fabián

    2017-09-01

    A mixture model for general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion, without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.

  15. Adaptive mixture observation models for multiple object tracking

    Institute of Scientific and Technical Information of China (English)

    CUI Peng; SUN LiFeng; YANG ShiQiang

    2009-01-01

    Multiple object tracking (MOT) poses many difficulties to conventional well-studied single object tracking (SOT) algorithms, such as severe expansion of the configuration space, high complexity of motion conditions, and visual ambiguities among nearby targets, among which the visual ambiguity problem is the central challenge. In this paper, we address this problem by embedding adaptive mixture observation models (AMOM) into a mixture tracker which is implemented in the Particle Filter framework. In AMOM, the extracted multiple features for appearance description are combined according to their discriminative power between ambiguity-prone objects, where the discriminability of features is evaluated by online entropy-based feature selection techniques. The introduction of AMOM can help to surmount the incapability of the conventional mixture tracker in handling object occlusions, and meanwhile retain its merits of flexibility and high efficiency. The final experiments show significant improvement in MOT scenarios compared with other methods.

  16. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  17. The physical model for research of behavior of grouting mixtures

    Science.gov (United States)

    Hajovsky, Radovan; Pies, Martin; Lossmann, Jaroslav

    2016-06-01

    The paper describes a physical model designed to verify the behavior of grouting mixtures applied below the groundwater level. The physical model was set up to determine the propagation of a grouting mixture in a given environment. The extent of grouting in this environment is determined from measurements of humidity and temperature, using combined sensors located within preinstalled special measurement probes around the grouting needle. Humidity was measured by the combined capacitive sensor DTH-1010; temperature was gathered by an NTC thermistor. The humidity sensors measured the time at which the grouting mixture reached the sensor location, while the NTC thermistors measured temperature changes in time, starting from the initiation of injection. This helped to develop a 3D map showing the distribution of the grouting mixture through the environment. The measurement was carried out by a dedicated primary measurement module capable of connecting four humidity and temperature sensors. This module also converts these physical signals into unified analogue signals that are brought to the analogue input terminals of a programmable automation controller (PAC) WinPAC-8441. The controller handles the measurement itself, as well as archiving and visualization of all data. A detailed description of the complete measurement system and its evaluation in the form of 3D animations and graphs is given in the full paper.

  18. Landmine detection using mixture of discrete hidden Markov models

    Science.gov (United States)

    Frigui, Hichem; Hamdi, Anis; Missaoui, Oualid; Gader, Paul

    2009-05-01

    We propose a landmine detection algorithm that uses a mixture of discrete hidden Markov models. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification could be achieved through clustering in the parameter space or in the feature space. However, this approach is inappropriate as it is not trivial to define a meaningful distance metric for model parameters or sequence comparison. Our proposed approach is based on clustering in the log-likelihood space, and has two main steps. First, one HMM is fit to each of the R individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an R×R log-likelihood distance matrix that is partitioned into K groups using a hierarchical clustering algorithm. In the second step, we pool the sequences, according to the cluster they belong to, into K groups, and we fit one HMM to each group. The mixture of these K HMMs is used to build a descriptive model of the data. An artificial neural network is then used to fuse the outputs of the K models. Results on large and diverse Ground Penetrating Radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mine and one model for the background.
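
    The clustering-in-log-likelihood-space step can be sketched as below; fit_hmm and log_likelihood are hypothetical placeholders for whatever HMM library is used, and the hierarchical clustering relies on SciPy. This is an illustration of the general recipe, not the authors' code.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_sequences_by_likelihood(sequences, fit_hmm, log_likelihood, K):
        """Cluster sequences in log-likelihood space.

        fit_hmm(seq)           -> an HMM fitted to one sequence (hypothetical hook)
        log_likelihood(hmm, s) -> log p(s | hmm)                (hypothetical hook)
        """
        R = len(sequences)
        models = [fit_hmm(s) for s in sequences]
        # R x R matrix of log-likelihoods of each sequence under each fitted model
        L = np.array([[log_likelihood(m, s) for s in sequences] for m in models])
        # Symmetrize into a dissimilarity and cut the hierarchy into K groups
        D = -(L + L.T) / 2.0
        D -= D.min()
        Z = linkage(D[np.triu_indices(R, k=1)], method='average')
        return fcluster(Z, t=K, criterion='maxclust')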

  19. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.

  20. No electrostatic supersolitons in two-component plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Verheest, Frank, E-mail: frank.verheest@ugent.be [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281, B–9000 Gent (Belgium); School of Chemistry and Physics, University of KwaZulu-Natal, Durban 4000 (South Africa); Lakhina, Gurbax S., E-mail: lakhina@iigm.iigs.res.in [Indian Institute of Geomagnetism, New Panvel (W), Navi Mumbai (India); Hellberg, Manfred A., E-mail: hellberg@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Durban 4000 (South Africa)

    2014-06-15

    The concept of acoustic supersolitons was introduced for a very specific plasma with five constituents, and discussed only for a single set of plasma parameters. Supersolitons are characterized by having subsidiary extrema on the sides of a typical bipolar electric field signature, or by association with a root beyond double layers in the fully nonlinear Sagdeev pseudopotential description. It was subsequently found that supersolitons could exist in several plasma models having three constituent species, rather than four or five. In the present paper, it is proved that standard two-component plasma models cannot generate supersolitons, by recalling and extending results already in the literature, and by establishing the necessary properties of a more recent model.

  1. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    Directory of Open Access Journals (Sweden)

    Gonzalo Vegas-Sanchez-Ferrero

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has been shown to characterize the speckle behavior in tissues very accurately. However, it fails to describe the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and not attractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions, and (2) a Generalized Gamma Mixture Model (GGMM). These mixture models are of great value in ultrasound imaging when the received signal is characterized by tissues of different natures. We show that a better speckle characterization is achieved when using GG and GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.

  2. Modeling, clustering, and segmenting video with mixtures of dynamic textures.

    Science.gov (United States)

    Chan, Antoni B; Vasconcelos, Nuno

    2008-05-01

    A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (e.g. fire, steam, water, vehicle and pedestrian traffic, etc.). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (e.g. optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.

  3. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although the normalized L2 distance was slightly inferior to the Kullback-Leibler distance with respect to classification performance, it has the advantage of obeying the triangle inequality, which allows for efficient searching.
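
    Between two single Gaussians the Kullback-Leibler divergence has a closed form, which is the usual building block when approximating distances between full mixture models (for which no closed form exists); the sketch below shows that closed form and is not tied to the paper's exact procedure.

    import numpy as np

    def kl_gaussian(mu0, S0, mu1, S1):
        """KL( N(mu0, S0) || N(mu1, S1) ) in closed form."""
        d = mu0.shape[0]
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0)
                      + diff @ S1_inv @ diff
                      - d
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    # Toy example with two 2-D Gaussians
    mu0, S0 = np.zeros(2), np.eye(2)
    mu1, S1 = np.ones(2), 2.0 * np.eye(2)
    print(kl_gaussian(mu0, S0, mu1, S1))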

  4. Detecting Clusters in Atom Probe Data with Gaussian Mixture Models.

    Science.gov (United States)

    Zelenty, Jennifer; Dahl, Andrew; Hyde, Jonathan; Smith, George D W; Moody, Michael P

    2017-04-01

    Accurately identifying and extracting clusters from atom probe tomography (APT) reconstructions is extremely challenging, yet critical to many applications. Currently, the most prevalent approach to detect clusters is the maximum separation method, a heuristic that relies heavily upon parameters manually chosen by the user. In this work, a new clustering algorithm, Gaussian mixture model Expectation Maximization Algorithm (GEMA), was developed. GEMA utilizes a Gaussian mixture model to probabilistically distinguish clusters from random fluctuations in the matrix. This machine learning approach maximizes the data likelihood via expectation maximization: given atomic positions, the algorithm learns the position, size, and width of each cluster. A key advantage of GEMA is that atoms are probabilistically assigned to clusters, thus reflecting scientifically meaningful uncertainty regarding atoms located near precipitate/matrix interfaces. GEMA outperforms the maximum separation method in cluster detection accuracy when applied to several realistically simulated data sets. Lastly, GEMA was successfully applied to real APT data.
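
    The soft (probabilistic) atom-to-cluster assignment highlighted above can be illustrated with scikit-learn's GaussianMixture on synthetic 3-D positions, as sketched below; this is only a toy stand-in for GEMA, and the data, component count, and covariance choice are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for solute atom positions: two dense clusters in a dilute matrix
    rng = np.random.default_rng(0)
    positions = np.vstack([rng.normal([0, 0, 0], 0.5, (200, 3)),
                           rng.normal([5, 5, 5], 0.5, (200, 3)),
                           rng.uniform(-10, 15, (400, 3))])

    gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0).fit(positions)
    proba = gmm.predict_proba(positions)   # probabilistic (soft) atom-to-cluster assignment
    print(gmm.means_)
    print(proba[:3].round(2))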

  5. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    Science.gov (United States)

    2007-09-01

    Figure 1 shows, for each algorithm, the point cloud with each point colored and marked differently according to its classification. Figure 1: Clustering of a spiral and a plane; results with different algorithms (this is a color figure). Due to the statistical nature of the R-TPMM

  6. Analysis of Forest Foliage Using a Multivariate Mixture Model

    Science.gov (United States)

    Hlavka, C. A.; Peterson, David L.; Johnson, L. F.; Ganapol, B.

    1997-01-01

    Data with wet chemical measurements and near-infrared spectra of ground leaf samples were analyzed to test a multivariate regression technique for estimating component spectra, which is based on a linear mixture model for absorbance. The resulting unmixed spectra for carbohydrates, lignin, and protein resemble the spectra of extracted plant starches, cellulose, lignin, and protein. The unmixed protein spectrum has prominent absorption features at wavelengths which have been associated with nitrogen bonds.
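
    The linear absorbance mixture idea can be sketched as follows: with known component concentrations C and measured mixture absorbances A ≈ C S, the component spectra S follow from least squares. The array names and synthetic data below are illustrative assumptions, not the study's data.

    import numpy as np

    # A: (n_samples, n_wavelengths) absorbance spectra of leaf samples
    # C: (n_samples, n_components) wet-chemistry concentrations (e.g. carbohydrate, lignin, protein)
    rng = np.random.default_rng(0)
    S_true = rng.random((3, 50))            # synthetic "component spectra" for illustration
    C = rng.random((30, 3))
    A = C @ S_true + 0.01 * rng.normal(size=(30, 50))

    # Estimate component spectra from the linear mixture model A ≈ C S
    S_hat, *_ = np.linalg.lstsq(C, A, rcond=None)
    print(np.abs(S_hat - S_true).max())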

  7. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  8. Estimator Convergence in Mixture Models Based on Missing Data

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-11-01

    A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model is developed for cure rate analysis based on missing data. Several methods can be used to analyze missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values in four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat from step 2 using the new model. The study shows that, under the EM algorithm, the log-likelihood for the missing data increases after each iteration of the algorithm; consequently, the sequence of likelihood values converges provided the likelihood is bounded.
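
    The property stated above, that the log-likelihood is non-decreasing over EM iterations, can be checked numerically on a toy example; the sketch below uses a two-component Gaussian mixture rather than the cure-rate model, purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 100)])

    # Two-component Gaussian mixture EM; track the log-likelihood at every iteration
    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    loglik = []
    for _ in range(30):
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        loglik.append(np.log(dens.sum(axis=1)).sum())
        r = dens / dens.sum(axis=1, keepdims=True)        # E-step: responsibilities
        nk = r.sum(axis=0)                                # M-step: weights, means, std devs
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    print(np.all(np.diff(loglik) >= -1e-8))  # monotone up to numerical noise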

  9. Induced polarization of clay-sand mixtures. Experiments and modelling.

    Science.gov (United States)

    Okay, G.; Leroy, P.

    2012-04-01

    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-Montmorillonite) in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100 % in volume of the sand-clay mixture) and salinities (distilled water, 0.1 g/L, 1 g/L, and 10 g/L NaCl solution). Induced polarization measurements were performed with a cylindrical four-electrode sample-holder associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the CEC of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the dependence on frequency of the quadrature conductivity of sand-kaolinite mixtures is more important than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent on the pore fluid salinity except at very low clay contents. The experimental data show good agreement with predicted values given by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model includes also the effect of the grain size distribution upon the complex conductivity spectra.

  10. Sand - rubber mixtures submitted to isotropic loading: a minimal model

    Science.gov (United States)

    Platzer, Auriane; Rouhanifar, Salman; Richard, Patrick; Cazacliu, Bogdan; Ibraim, Erdin

    2017-06-01

    The volume of scrap tyres, an undesired urban waste, is increasing rapidly in every country. Mixing sand and rubber particles as a lightweight backfill is one of the possible alternatives to avoid stockpiling them in the environment. This paper presents a minimal model aiming to capture the evolution of the void ratio of sand-rubber mixtures undergoing an isotropic compression loading. It is based on the idea that, submitted to a pressure, the rubber chips deform and partially fill the porous space of the system, leading to a decrease of the void ratio with increasing pressure. Our simple approach is capable of reproducing experimental data for two types of sand (a rounded one and a sub-angular one) and up to mixtures composed of 50% of rubber.

  11. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Science.gov (United States)

    Holmes, Ian; Harris, Keith; Quince, Christopher

    2012-01-01

    We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. This data can be represented as a frequency matrix giving the number of times each taxa is observed in each sample. The samples have different size, and the matrix is sparse, as communities are diverse and skewed to rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities', and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework' (http://code.google.com/p/microbedmm/). This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogenous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle (AKP)' applied to microbial communities: disturbed states having many more configurations than undisturbed. We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with a more variable
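
    The within-component likelihood underlying DMM is the Dirichlet-multinomial distribution; a sketch of its log-probability for one sample's taxa counts is given below (our own implementation of the standard formula, not the authors' software).

    import numpy as np
    from scipy.special import gammaln

    def dirichlet_multinomial_logpmf(x, alpha):
        """log P(x | alpha) for Dirichlet-multinomial counts x and concentrations alpha."""
        x = np.asarray(x, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        N, A = x.sum(), alpha.sum()
        return (gammaln(N + 1) - gammaln(x + 1).sum()
                + gammaln(A) - gammaln(N + A)
                + (gammaln(x + alpha) - gammaln(alpha)).sum())

    # Hypothetical taxa counts for one sample and component hyperparameters
    print(dirichlet_multinomial_logpmf([10, 0, 3, 7], [0.5, 0.5, 1.0, 2.0]))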

  12. Dirichlet multinomial mixtures: generative models for microbial metagenomics.

    Directory of Open Access Journals (Sweden)

    Ian Holmes

    We introduce Dirichlet multinomial mixtures (DMM) for the probabilistic modelling of microbial metagenomics data. This data can be represented as a frequency matrix giving the number of times each taxa is observed in each sample. The samples have different size, and the matrix is sparse, as communities are diverse and skewed to rare taxa. Most methods used previously to classify or cluster samples have ignored these features. We describe each community by a vector of taxa probabilities. These vectors are generated from one of a finite number of Dirichlet mixture components each with different hyperparameters. Observed samples are generated through multinomial sampling. The mixture components cluster communities into distinct 'metacommunities', and, hence, determine envirotypes or enterotypes, groups of communities with a similar composition. The model can also deduce the impact of a treatment and be used for classification. We wrote software for the fitting of DMM models using the 'evidence framework' (http://code.google.com/p/microbedmm/). This includes the Laplace approximation of the model evidence. We applied the DMM model to human gut microbe genera frequencies from Obese and Lean twins. From the model evidence four clusters fit this data best. Two clusters were dominated by Bacteroides and were homogenous; two had a more variable community composition. We could not find a significant impact of body mass on community structure. However, Obese twins were more likely to derive from the high variance clusters. We propose that obesity is not associated with a distinct microbiota but increases the chance that an individual derives from a disturbed enterotype. This is an example of the 'Anna Karenina principle (AKP)' applied to microbial communities: disturbed states having many more configurations than undisturbed. We verify this by showing that in a study of inflammatory bowel disease (IBD) phenotypes, ileal Crohn's disease (ICD) is associated with

  13. Two-component jet simulations: Combining analytical and numerical approaches

    CERN Document Server

    Matsakos, T; Trussoni, E; Tsinganos, K; Vlahakis, N; Sauty, C; Mignone, A

    2009-01-01

    Recent observations as well as theoretical studies of YSO jets suggest the presence of two steady components: a disk wind type outflow needed to explain the observed high mass loss rates and a stellar wind type outflow probably accounting for the observed stellar spin down. In this framework, we construct numerical two-component jet models by properly mixing an analytical disk wind solution with a complementary analytically derived stellar outflow. Their combination is controlled by both spatial and temporal parameters, in order to address different physical conditions and time variable features. We study the temporal evolution and the interaction of the two jet components on both small and large scales. The simulations reach steady state configurations close to the initial solutions. Although time variability is not found to considerably affect the dynamics, flow fluctuations generate condensations, whose large scale structures have a strong resemblance to observed YSO jet knots.

  14. Efficient two-component relativistic method for large systems

    Energy Technology Data Exchange (ETDEWEB)

    Nakai, Hiromi [Department of Chemitsry and Biochemistry, School of Advanced Science and Engineering, Waseda University, Tokyo 169-8555 (Japan); Research Institute for Science and Engineering, Waseda University, Tokyo 169-8555 (Japan); CREST, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Elements Strategy Initiative for Catalysts and Batteries (ESICB), Kyoto University, Katsura, Kyoto 615-8520 (Japan)

    2015-12-31

    This paper reviews a series of theoretical studies to develop efficient two-component (2c) relativistic method for large systems by the author’s group. The basic theory is the infinite-order Douglas-Kroll-Hess (IODKH) method for many-electron Dirac-Coulomb Hamiltonian. The local unitary transformation (LUT) scheme can effectively produce the 2c relativistic Hamiltonian, and the divide-and-conquer (DC) method can achieve linear-scaling of Hartree-Fock and electron correlation methods. The frozen core potential (FCP) theoretically connects model potential calculations with the all-electron ones. The accompanying coordinate expansion with a transfer recurrence relation (ACE-TRR) scheme accelerates the computations of electron repulsion integrals with high angular momenta and long contractions.

  15. Budding Transition of Asymmetric Two-component Lipid Domains

    CERN Document Server

    Wolff, Jean; Andelman, David

    2016-01-01

    We propose a model that accounts for the budding transition of asymmetric two-component lipid domains, where the two monolayers (leaflets) have different average compositions controlled by independent chemical potentials. Assuming a coupling between the local curvature and local lipid composition in each of the leaflets, we discuss the morphology and thermodynamic behavior of asymmetric lipid domains. The membrane free-energy contains three contributions: the bending energy, the line tension, and a Landau free-energy for a lateral phase separation. Within a mean-field treatment, we obtain various phase diagrams containing fully budded, dimpled, and flat states as a function of the two leaflet compositions. The global phase behavior is analyzed, and depending on system parameters, the phase diagrams include one-phase, two-phase and three-phase regions. In particular, we predict various phase coexistence regions between different morphologies of domains, which may be observed in multi-component membranes or ves...
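
    A generic way to write the three free-energy contributions named above (bending energy, line tension, and a Landau expansion with a curvature-composition coupling) is sketched in LaTeX below; it is only meant to fix notation under assumed symbols, and the paper's actual functional may differ in form and in its coupling terms.

    F \;=\; \int_{\mathcal{A}} \frac{\kappa}{2}\,(2H)^{2}\,\mathrm{d}A
        \;+\; \sigma \oint_{\partial\mathcal{A}} \mathrm{d}\ell
        \;+\; \int_{\mathcal{A}} \left[ \frac{a}{2}\,\phi^{2} + \frac{b}{4}\,\phi^{4} - \Lambda\,\phi\,H \right] \mathrm{d}A

    Here H is the mean curvature, κ the bending rigidity, σ the line tension along the domain boundary, φ the local composition field in a leaflet, and Λ the assumed curvature-composition coupling constant.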

  16. The mechanism of signal transduction by two-component systems.

    Science.gov (United States)

    Casino, Patricia; Rubio, Vicente; Marina, Alberto

    2010-12-01

    Two-component systems, composed of a homodimeric histidine kinase (HK) and a response regulator (RR), are major signal transduction devices in bacteria. Typically the signal triggers HK autophosphorylation at one His residue, followed by phosphoryl transfer from the phospho-His to an Asp residue in the RR. Signal extinction frequently involves phospho-RR dephosphorylation by a phosphatase activity of the HK. Our understanding of these reactions and of the determinants of partner specificity among HK-RR couples has been greatly increased by recent crystal structures and biochemical experiments on HK-RR complexes. Cis-autophosphorylation (one subunit phosphorylates itself) occurs in some HKs while trans-autophosphorylation takes place in others. We review and integrate this new information, discuss the mechanism of the three reactions and propose a model for transmembrane signaling by these systems. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. The Spectral Mixture Models: A Minimum Information Divergence Approach

    Science.gov (United States)

    2010-04-01

    [Fragmentary abstract text: the report discusses spectral mixture models and model selection, noting that various information criteria, such as the Akaike and Bayesian information criteria, have been proposed, while developing a metric that measures the fitness of different models is beyond the scope of the discussion.]

  18. Shear viscosity of liquid mixtures Mass dependence

    CERN Document Server

    Kaushal, R

    2002-01-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. The theoretical results are in good agreement with computer simulation results for the Ar-Kr mixture. The mass dependence of shear viscosity for different mole fractions shows that deviations from the ideal linear model arise even from the mass difference between the two species of the fluid mixture. At higher mass ratios the shear viscosity of the mixture is not explained by any of the empirical models.

  19. Determining of migraine prognosis using latent growth mixture models

    Institute of Scientific and Technical Information of China (English)

    Bahar Tasdelen; Aynur Ozge; Hakan Kaleagasi; Semra Erdogan; Tufan Mengi

    2011-01-01

    Background This paper presents a retrospective study to classify patients into treatment subtypes according to baseline and longitudinally observed values, taking into account heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, because it considers the population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. Methods The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described separately for the severity, frequency, and duration of headache, and the probabilities of each subgroup were estimated by using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. Results While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Conclusions Nausea, vomiting and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  20. Background Subtraction with Dirichlet Process Mixture Models.

    Science.gov (United States)

    Haines, Tom S F; Tao Xiang

    2014-04-01

    Video analysis often begins with background subtraction. This problem is often approached in two steps: a background model followed by a regularisation scheme. A model of the background allows it to be distinguished on a per-pixel basis from the foreground, whilst the regularisation combines information from adjacent pixels. We present a new method based on Dirichlet process Gaussian mixture models, which are used to estimate per-pixel background distributions. It is followed by probabilistic regularisation. Using a non-parametric Bayesian method allows per-pixel mode counts to be automatically inferred, avoiding over- or under-fitting. We also develop novel model learning algorithms for continuous update of the model in a principled fashion as the scene changes. These key advantages enable us to outperform the state-of-the-art alternatives on four benchmarks.
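
    As a rough illustration of the per-pixel idea (not the authors' algorithm), the sketch below models a single pixel's colour history with a truncated Dirichlet process Gaussian mixture from scikit-learn and flags low-density new observations as foreground; the pixel history, priors and log-density threshold are all invented for the example.

```python
# Rough per-pixel illustration: fit a truncated Dirichlet-process Gaussian
# mixture to one pixel's colour history, then flag low-density new
# observations as foreground.  Data and threshold are arbitrary choices.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
history = np.vstack([rng.normal((90, 120, 80), 4, size=(400, 3)),   # main background mode
                     rng.normal((60, 60, 200), 6, size=(100, 3))])  # e.g. a flickering light

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level; effective modes inferred
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full", max_iter=500, random_state=0)
dpgmm.fit(history)

new_pixels = np.array([[92, 118, 78], [250, 30, 40]], dtype=float)
log_density = dpgmm.score_samples(new_pixels)
is_foreground = log_density < -20.0                    # hypothetical threshold
print(is_foreground)                                   # expect [False, True]
```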

  1. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods when multiple nano-devices communicate with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
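
    A toy numerical sketch of the threshold-selection step described above (not the thesis' derivation): the conditional densities of the received signal for bits 0 and 1 are taken as two hypothetical one-dimensional Gaussian mixtures, and the decision threshold is chosen on a grid to minimise the resulting error probability.

```python
# Toy sketch: treat the received molecular signal under bit 0 and bit 1 as
# two hypothetical one-dimensional Gaussian mixtures (ISI smears each bit
# into two components) and pick the threshold minimising the error
# probability on a grid.  All signal parameters below are made up.
import numpy as np
from scipy.stats import norm

def mix_cdf(x, weights, means, sigmas):
    """CDF of a one-dimensional Gaussian mixture."""
    return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

bit0 = dict(weights=[0.7, 0.3], means=[10.0, 18.0], sigmas=[3.0, 4.0])
bit1 = dict(weights=[0.6, 0.4], means=[35.0, 45.0], sigmas=[5.0, 6.0])

grid = np.linspace(0.0, 60.0, 2001)
# P(error | threshold t) = 0.5 * [P(signal > t | bit 0) + P(signal <= t | bit 1)]
p_err = 0.5 * ((1.0 - mix_cdf(grid, **bit0)) + mix_cdf(grid, **bit1))
threshold = grid[int(np.argmin(p_err))]
print(f"approximate optimal threshold: {threshold:.1f}, error probability: {p_err.min():.4f}")
```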

  2. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid......, alcohols and glycols) are considered, the importance of cross-association is investigated. The cross-association is accounted for either via combining rules or using a cross-solvation energy obtained from experimental spectroscopic or calorimetric data or from ab initio calculations. In both cases two...

  3. An Introductory Idea for Teaching Two-Component Phase Diagrams

    Science.gov (United States)

    Peckham, Gavin D.; McNaught, Ian J.

    2011-01-01

    The teaching of two-component phase diagrams has attracted little attention in this "Journal," and it is hoped that this article will make a useful contribution. Current physical chemistry textbooks describe two-component phase diagrams adequately, but do so in a piecemeal fashion one section at a time; first solid-liquid equilibria, then…

  4. Two-component micro injection moulding for hearing aid applications

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Marhöfer, David Maximilian

    2012-01-01

    Two-component (2k) injection moulding is an important process technique at the present state of technology, and it is growing rapidly in the field of precision micro moulding. Besides combining different material properties in the same product, two-component moulding can eliminate many assembly s...

  5. Feedback Control of Two-Component Regulatory Systems.

    Science.gov (United States)

    Groisman, Eduardo A

    2016-09-08

    Two-component systems are a dominant form of bacterial signal transduction. The prototypical two-component system consists of a sensor that responds to a specific input(s) by modifying the output of a cognate regulator. Because the output of a two-component system is the amount of phosphorylated regulator, feedback mechanisms may alter the amount of regulator, and/or modify the ability of a sensor or other proteins to alter the phosphorylation state of the regulator. Two-component systems may display intrinsic feedback whereby the amount of phosphorylated regulator changes under constant inducing conditions and without the participation of additional proteins. Feedback control allows a two-component system to achieve particular steady-state levels, to reach a given steady state with distinct dynamics, to express coregulated genes in a given order, and to activate a regulator to different extents, depending on the signal acting on the sensor.

  6. Modeling human mortality using mixtures of bathtub shaped failure distributions.

    Science.gov (United States)

    Bebbington, Mark; Lai, Chin-Diew; Zitikis, Ricardas

    2007-04-07

    Aging and mortality are usually modeled by the Gompertz-Makeham distribution, where the mortality rate accelerates with age in adult humans. The resulting parameters are interpreted as the frailty and the decrease in vitality with age. This fits life data from 'westernized' societies well, where the data are accurate, of high resolution, and show the effects of high quality post-natal care. We show, however, that when the data are of lower resolution, and contain considerable structure in the infant mortality, the fit can be poor. Moreover, the Gompertz-Makeham distribution is consistent with neither the force of natural selection, nor the recently identified 'late life mortality deceleration'. Although actuarial models such as the Heligman-Pollard distribution can, in theory, achieve an improved fit, the lack of a closed form for the survival function makes fitting extremely arduous, and the biological interpretation can be lacking. We show that a mixture, assigning mortality to exogenous or endogenous causes and using the reduced additive and flexible Weibull distributions, models human mortality well over the entire life span. The components of the mixture are asymptotically consistent with the reliability and biological theories of aging. The relative simplicity of the mixture distribution makes feasible a technique where the curvature functions of the corresponding survival and hazard rate functions are used to identify the beginning and the end of various life phases, such as infant mortality, the end of the force of natural selection, and late life mortality deceleration. We illustrate our results with a comparative analysis of Canadian and Indonesian mortality data.
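
    The following minimal sketch conveys the two-component idea numerically, using ordinary Weibull survival functions as stand-ins for the paper's reduced additive and flexible Weibull components; the mixture weight and all parameters are purely illustrative.

```python
# Minimal numeric sketch of a two-component mortality mixture: overall
# survival is a weighted mix of an "exogenous" component dominating early
# life and an "endogenous" (senescent) component dominating old age.
# Standard Weibulls stand in for the paper's actual components.
import numpy as np

def weibull_survival(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

def mixture_hazard(t, p, comp1, comp2, dt=1e-3):
    """Hazard h(t) = -d/dt log S(t) of the two-component mixture, numerically."""
    S = lambda u: p * weibull_survival(u, *comp1) + (1 - p) * weibull_survival(u, *comp2)
    return -(np.log(S(t + dt)) - np.log(S(t))) / dt

ages = np.array([1.0, 20.0, 50.0, 80.0])
# component 1: decreasing hazard (infant/exogenous); component 2: increasing hazard (senescent)
h = mixture_hazard(ages, p=0.05, comp1=(0.5, 2.0), comp2=(6.0, 85.0))
print(np.round(h, 4))   # bathtub-like: high early, low in midlife, rising in old age
```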

  7. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  8. Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

    DEFF Research Database (Denmark)

    Katsarakis, Nikolaos; Pnevmatikakis, Aristodemos; Tan, Zheng-Hua

    2016-01-01

    Adaptive foreground segmentation is traditionally performed using Stauffer & Grimson’s algorithm that models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we provide an enhancement of the algorithm by adding two important dynamic...... elements to the baseline algorithm: The learning rate can change across space and time, while the Gaussian distributions can be merged together if they become similar due to their adaptation process. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated...
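
    For reference, a baseline Stauffer & Grimson-style per-pixel mixture segmentation is available in OpenCV as the MOG2 background subtractor (Zivkovic's variant); the snippet below shows plain usage with a single, spatially constant learning rate, i.e. without the enhancements described above. The file name and parameter values are placeholders.

```python
# Baseline per-pixel Gaussian mixture segmentation via OpenCV's MOG2
# subtractor, shown only as a point of reference for the enhanced method
# described above.  "video.avi" and the parameters are placeholders.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("video.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame, learningRate=0.005)   # 0..255 mask, 127 marks shadows
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
cap.release()
```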

  9. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Full Text Available Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making it unsuitable for dealing with recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in mixture copula functions can be efficiently estimated by a routine expectation–maximization algorithm. A heuristic search algorithm based on Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by the best-scoring network out of multiple predictions from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.

  10. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five different methods presented, one method, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the other methods. This method of Gaussian reduction iteratively reduces the size of a GMM by successively merging pairs of component densities. Pairs are selected for merger by using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
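
    A small sketch of the moment-matched pairwise merge and the Kullback-Leibler-based merge cost that this kind of Gaussian mixture reduction relies on (following the usual reading of Runnalls' criterion); it is an isolated illustration, not the paper's speaker-verification pipeline.

```python
# Sketch of the moment-preserving pairwise merge used in Runnalls-style
# Gaussian mixture reduction, with the KL-based cost for choosing which
# pair to merge next.  Illustration only; weights/means/covariances are toys.
import numpy as np

def merge(w1, m1, P1, w2, m2, P2):
    """Moment-matched merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d = (m1 - m2).reshape(-1, 1)
    P = (w1 * P1 + w2 * P2) / w + (w1 * w2 / w**2) * (d @ d.T)
    return w, m, P

def merge_cost(w1, m1, P1, w2, m2, P2):
    """Upper bound on the KL discrimination incurred by merging the pair."""
    w, _, P = merge(w1, m1, P1, w2, m2, P2)
    _, ld = np.linalg.slogdet(P)
    _, ld1 = np.linalg.slogdet(P1)
    _, ld2 = np.linalg.slogdet(P2)
    return 0.5 * (w * ld - w1 * ld1 - w2 * ld2)

# Two nearby components cost little to merge, two distant ones much more.
I = np.eye(2)
print(merge_cost(0.5, np.zeros(2), I, 0.5, 0.2 * np.ones(2), I))
print(merge_cost(0.5, np.zeros(2), I, 0.5, 3.0 * np.ones(2), I))
```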

  11. Nonlinear sensor fault diagnosis using mixture of probabilistic PCA models

    Science.gov (United States)

    Sharifi, Reza; Langari, Reza

    2017-02-01

    This paper presents a methodology for sensor fault diagnosis in nonlinear systems using a Mixture of Probabilistic Principal Component Analysis (MPPCA) models. This methodology separates the measurement space into several locally linear regions, each of which is associated with a Probabilistic PCA (PPCA) model. Using the transformation associated with each PPCA model, a parity relation scheme is used to construct a residual vector. Bayesian analysis of the residuals forms the basis for detection and isolation of sensor faults across the entire range of operation of the system. The resulting method is demonstrated in its application to sensor fault diagnosis of a fully instrumented HVAC system. The results show accurate detection of sensor faults under the assumption that a single sensor is faulty.

  12. Gaussian Mixture Model and Rjmcmc Based RS Image Segmentation

    Science.gov (United States)

    Shi, X.; Zhao, Q. H.

    2017-09-01

    The image segmentation method based on the Gaussian Mixture Model (GMM) has two problems: 1) the number of components is usually fixed, i.e., a fixed number of classes, and 2) GMM is sensitive to image noise. This paper proposes a remote sensing (RS) image segmentation method that combines GMM with reversible jump Markov Chain Monte Carlo (RJMCMC). In the proposed algorithm, GMM is designed to model the distribution of pixel intensity in the RS image, and the number of components is treated as a random variable. A prior distribution is built for each parameter. In order to improve noise resistance, a Gibbs function is used to model the prior distribution of the GMM weight coefficients. According to Bayes' theorem, the posterior distribution is built. RJMCMC is used to simulate the posterior distribution and estimate its parameters. Finally, an optimal segmentation is obtained for the RS image. Experimental results show that the proposed algorithm can converge to the optimal number of classes and obtain ideal segmentation results.
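
    For orientation, the sketch below performs only the much simpler fixed-class variant: a plain Gaussian mixture fitted to the pixel intensities of a synthetic image. The paper's actual contributions, a random number of components sampled with RJMCMC and a Gibbs prior on the weights for noise robustness, are not reproduced here.

```python
# Greatly simplified sketch: segment an image by fitting a plain Gaussian
# mixture to pixel intensities with a fixed number of classes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 100x100 "remote sensing" band with three noisy intensity classes.
image = np.concatenate([rng.normal(60, 8, 4000),
                        rng.normal(120, 10, 3000),
                        rng.normal(200, 12, 3000)]).reshape(100, 100)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
print(np.unique(labels, return_counts=True))
```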

  13. Refining personality disorder subtypes and classification using finite mixture modeling.

    Science.gov (United States)

    Yun, Rebecca J; Stern, Barry L; Lenzenweger, Mark F; Tiersky, Lana A

    2013-04-01

    The current Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic system for Axis II disorders continues to be characterized by considerable heterogeneity and poor discriminant validity. Such problems impede accurate personality disorder (PD) diagnosis. As a result, alternative assessment tools are often used in conjunction with the DSM. One popular framework is the object relational model developed by Kernberg and his colleagues (J. F. Clarkin, M. F. Lenzenweger, F. Yeomans, K. N. Levy, & O. F. Kernberg, 2007, An object relations model of borderline pathology, Journal of Personality Disorders, Vol. 21, pp. 474-499; O. F. Kernberg, 1984, Severe Personality Disorders, New Haven, CT: Yale University Press; O. F. Kernberg & E. Caligor, 2005, A psychoanalytic theory of personality disorders, in M. F. Lenzenweger & J. F. Clarkin, Eds., Major Theories of Personality Disorder, New York, NY: Guilford Press). Drawing on this model and empirical studies thereof, the current study attempted to clarify Kernberg's (1984) PD taxonomy and identify subtypes within a sample with varying levels of personality pathology using finite mixture modeling. Subjects (N = 141) were recruited to represent a wide range of pathology. The finite mixture modeling results indicated that 3 components were harbored within the variables analyzed. Group 1 was characterized by low levels of antisocial, paranoid, and aggressive features, and Group 2 was characterized by elevated paranoid features. Group 3 revealed the highest levels across the 3 variables. The validity of the obtained solution was then evaluated by reference to a variety of external measures that supported the validity of the identified grouping structure. Findings generally appear congruent with previous research, which argued that a PD taxonomy based on paranoid, aggressive, and antisocial features is a viable supplement to current diagnostic systems. Our study suggests that Kernberg's object relational model offers a

  14. A model for steady flows of magma-volatile mixtures

    CERN Document Server

    Belan, Marco

    2012-01-01

    A general one-dimensional model for the steady adiabatic motion of liquid-volatile mixtures in vertical ducts with varying cross-section is presented. The liquid contains a dissolved part of the volatile and is assumed to be incompressible and in thermomechanical equilibrium with a perfect gas phase, which is generated by the exsolution of the same volatile. An inverse problem approach is used: the pressure along the duct is set as an input datum, and the other physical quantities are obtained as output. This fluid-dynamic model is intended as an approximate description of magma-volatile mixture flows of interest to geophysics and planetary sciences. It is implemented as a symbolic code, where each line stands for an analytic expression, whether algebraic or differential, which is managed by the software kernel independently of the numerical value of each variable. The code is versatile and user-friendly and makes it possible to check the consequences of different hypotheses even in its early steps. Only the las...

  15. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  16. Implementation of Two Component Advective Flow Solution in XSPEC

    CERN Document Server

    Debnath, Dipak; Mondal, Santanu

    2014-01-01

    Spectral and temporal properties of black hole candidates can be explained reasonably well using the Chakrabarti-Titarchuk solution of two-component advective flow (TCAF). This model requires two accretion rates, namely, the Keplerian disk accretion rate and the halo accretion rate, the latter being composed of a sub-Keplerian, low angular momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo rate (which creates the Compton cloud region) with respect to the Keplerian disk rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time, we have made it user friendly by implementing it into the XSPEC software of GSFC/NASA. This enables any user to extract physical parameters of the accretion flows, such as the two accretion rates, the shock location, the shock strength, etc., for any black hole candidate. We provide some examples of fitting a few cases usin...

  17. Mixture of a seismicity model based on the rate-and-state friction and ETAS model

    Science.gov (United States)

    Iwata, T.

    2015-12-01

    Currently the ETAS model [Ogata, 1988, JASA] is considered to be a standard model of seismicity. However, because the ETAS model is a purely statistical one, the physics-based seismicity model derived from rate-and-state friction (hereafter referred to as the Dieterich model) [Dieterich, 1994, JGR] is frequently examined. The original version of the Dieterich model has several problems in its application to real earthquake sequences, and therefore modifications have been made in previous studies. Iwata [2015, Pageoph] is one such study and shows that the Dieterich model is significantly improved as a result of including the effect of secondary aftershocks (i.e., aftershocks caused by previous aftershocks). However, the performance of the ETAS model is still superior to that of the improved Dieterich model. For further improvement, a mixture of the Dieterich and ETAS models is examined in this study. To achieve the mixture, the seismicity rate is represented as a sum of the ETAS and Dieterich models, whose weights are given as k and 1-k, respectively. This mixture model is applied to the aftershock sequences of the 1995 Kobe and 2004 Mid-Niigata earthquakes, which were analyzed in Iwata [2015]. Additionally, the sequence of the Matsushiro earthquake swarm in central Japan, 1965-1970, is also analyzed. The value of k and the parameters of the ETAS and Dieterich models are estimated by means of the maximum likelihood method, and the model performances are assessed on the basis of AIC. For the two aftershock sequences, the AIC values of the ETAS model are around 3-9 smaller (i.e., better) than those of the mixture model. On the contrary, for the Matsushiro swarm, the AIC value of the mixture model is 5.8 smaller than that of the ETAS model, indicating that the mixture of the two models results in a significant improvement of the seismicity model.
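
    Schematically, the approach combines the two conditional intensity functions with weights k and 1-k and compares models through the point-process log-likelihood and AIC; the sketch below uses arbitrary placeholder rate functions and synthetic event times rather than fitted ETAS or Dieterich intensities, and the parameter counts are illustrative.

```python
# Schematic of the mixture idea: lambda_mix(t) = k*lambda_ETAS(t) + (1-k)*lambda_Dieterich(t),
# compared via point-process log-likelihood and AIC.  Rates, events and
# parameter counts below are placeholders, not fitted models.
import numpy as np

def mixed_rate(t, k, rate_etas, rate_dieterich):
    return k * rate_etas(t) + (1.0 - k) * rate_dieterich(t)

def log_likelihood(event_times, t_end, rate, n_grid=20000):
    """Log-likelihood of a temporal point process: sum log(rate) - integral(rate)."""
    grid = np.linspace(0.0, t_end, n_grid)
    integral = np.trapz(rate(grid), grid)
    return np.sum(np.log(rate(np.asarray(event_times)))) - integral

rate_a = lambda t: 5.0 / (t + 0.1)                  # placeholder decaying rate
rate_b = lambda t: 3.0 * np.exp(-t / 20.0) + 0.05   # placeholder decaying rate
events = np.sort(np.random.default_rng(0).uniform(0.0, 100.0, 200))

for k, n_par in [(1.0, 5), (0.0, 4), (0.3, 10)]:    # illustrative parameter counts
    ll = log_likelihood(events, 100.0, lambda t: mixed_rate(t, k, rate_a, rate_b))
    print(f"k={k:.1f}  AIC={2 * n_par - 2 * ll:.1f}")
```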

  18. MODELLING AND PARAMETER ESTIMATION IN REACTIVE CONTINUOUS MIXTURES: THE CATALYTIC CRACKING OF ALKANES. PART I

    Directory of Open Access Journals (Sweden)

    PEIXOTO F. C.

    1999-01-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture. An explicit solution is found and experimental data on the catalytic cracking of a mixture of alkanes are used for deactivation and kinetic parameter estimation.

  19. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-01-01

    Using Gaussian Mixture Model (GMM) and Expectation Maximization Algorithm, we perform an analysis of time duration (T90) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. Bayesian Information Criterion has been used to compare between different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.

  20. Classifying gamma-ray bursts with Gaussian Mixture Model

    Science.gov (United States)

    Zhang, Zhi-Bin; Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-11-01

    Using Gaussian Mixture Model (GMM) and expectation-maximization algorithm, we perform an analysis of time duration (T90) for Compton Gamma Ray Observatory (CGRO)/BATSE, Swift/BAT and Fermi/GBM gamma-ray bursts (GRBs). The T90 distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. Bayesian information criterion has been used to compare between different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
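
    The BIC-based component selection described in these records can be illustrated in a few lines of scikit-learn; the log T90 values below are synthetic stand-ins for the BATSE/Swift/Fermi catalogues.

```python
# Minimal illustration of the model-selection step: fit Gaussian mixtures
# with 1-3 components to log10(T90) durations and compare them with BIC.
# The durations are synthetic, not catalogue values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
log_t90 = np.concatenate([rng.normal(-0.5, 0.5, 120),    # "short" bursts
                          rng.normal(1.5, 0.5, 380)])    # "long" bursts
X = log_t90.reshape(-1, 1)

for n in (1, 2, 3):
    gmm = GaussianMixture(n_components=n, random_state=0).fit(X)
    print(f"{n} component(s): BIC = {gmm.bic(X):.1f}")
# The lowest BIC (here expected for two components) indicates the preferred model.
```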

  1. Mixtures of Polya trees for flexible spatial frailty survival modelling.

    Science.gov (United States)

    Zhao, Luping; Hanson, Timothy E; Carlin, Bradley P

    2009-06-01

    Mixtures of Polya trees offer a very flexible nonparametric approach for modelling time-to-event data. Many such settings also feature spatial association that requires further sophistication, either at the point level or at the lattice level. In this paper, we combine these two aspects within three competing survival models, obtaining a data analytic approach that remains computationally feasible in a fully hierarchical Bayesian framework using Markov chain Monte Carlo methods. We illustrate our proposed methods with an analysis of spatially oriented breast cancer survival data from the Surveillance, Epidemiology and End Results program of the National Cancer Institute. Our results indicate appreciable advantages for our approach over competing methods that impose unrealistic parametric assumptions, ignore spatial association or both.

  2. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  3. Role of functionality in two-component signal transduction: A stochastic study

    Science.gov (United States)

    Maity, Alok Kumar; Bandyopadhyay, Arnab; Chaudhury, Pinaki; Banik, Suman K.

    2014-03-01

    We present a stochastic formalism for signal transduction processes in a bacterial two-component system. Using elementary mass action kinetics, the proposed model takes care of signal transduction in terms of a phosphotransfer mechanism between the cognate partners of a two-component system, viz., the sensor kinase and the response regulator. Based on the difference in functionality of the sensor kinase, the noisy phosphotransfer mechanism has been studied for monofunctional and bifunctional two-component systems using the formalism of the linear noise approximation. Steady-state analysis of both models quantifies different physically realizable quantities, e.g., the variance, the Fano factor (variance/mean), and mutual information. The resultant data reveal that both systems reliably transfer information of extracellular environment under low external stimulus and in a high-kinase-and-phosphatase regime. We extend our analysis further by studying the role of the two-component system in downstream gene regulation.

  4. Advances in Behavioral Genetics Modeling Using Mplus: Applications of Factor Mixture Modeling to Twin Data

    National Research Council Canada - National Science Library

    Muthen, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-01-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder...

  5. Receptor domains of two-component signal transduction systems.

    Science.gov (United States)

    Perry, Julie; Koteva, Kalinka; Wright, Gerard

    2011-05-01

    Two-component signal transduction systems are found ubiquitously in prokaryotes, and in archaea, fungi, yeast and some plants, where they regulate physiologic and molecular processes at both transcriptional and post-transcriptional levels. Two-component systems sense changes in environmental conditions when a specific ligand binds to the receptor domain of the histidine kinase sensory component. The structures of many histidine kinase receptors are known, including those which sense extracellular and cytoplasmic signals. In this review, we discuss the basic architecture of two-component signalling circuits, including known system ligands, structure and function of both receptor and signalling domains, the chemistry of phosphotransfer, and cross-talk between different two-component pathways. Given the importance of these systems in regulating cellular responses, many biochemical techniques have been developed for their study and analysis. We therefore also review current methods used to study two-component signalling, including a new affinity-based proteomics approach used to study inducible resistance to the antibiotic vancomycin through the VanSR two-component signal transduction system.

  6. The Escherichia coli BarA-UvrY two-component system is a virulence determinant in the urinary tract

    Directory of Open Access Journals (Sweden)

    Georgellis Dimitris

    2006-03-01

    Full Text Available Abstract Background The Salmonella enterica BarA-SirA, the Erwinia carotovora ExpS-ExpA, the Vibrio cholerae BarA-VarA and the Pseudomonas spp. GacS-GacA all belong to the same orthologous family of two-component systems as the Escherichia coli BarA-UvrY. In the first four species it has been demonstrated that disruption of this two-component system leads to a clear reduction in virulence of the bacteria. Our aim was to determine whether the Escherichia coli BarA-UvrY two-component system is connected with virulence using a monkey cystitis model. Results Cystitis was generated in Macaca fascicularis monkeys by infecting the bladder with a 1:1 mixture of the uropathogenic Escherichia coli isolate DS17 and a derivative in which the uvrY gene had been disrupted with a kanamycin resistance gene. Urine was collected by bladder puncture at subsequent time intervals and the relative amount of the uvrY mutant was determined. This showed that inactivation of the UvrY response regulator leads to reduced fitness. In similar competitions in culture flasks with Luria Broth (LB) the uvrY mutant instead had a higher fitness than the wild type. When the competitions were done in flasks with human urine the uvrY mutant initially had a lower fitness. This was followed by a fluctuation in the level of the mutant in the long-term culture, with a pattern that was specific for the individual urines that were tested. Addition of LB to the different urine competition cultures, however, clearly led to a consistently higher fitness of the uvrY mutant. Conclusion This paper demonstrates that the BarA-UvrY two-component system is a determinant for virulence in a monkey cystitis model. The observed competition profiles strengthen our previous hypothesis that disruption of the BarA-UvrY two-component system impairs the ability of the bacteria to switch between different carbon sources. The urine in the bladder contains several different carbon sources and its composition changes over

  7. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing thyroid hormone synthesis. The present study used the intrafollicular T4 content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas the response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO inhibitors and sodium-iodide symporter (NIS) inhibitors]. However, the CA model provided better predictions of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO inhibitor)-KClO4 (NIS inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition can be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of
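
    As a generic sketch of the two reference models compared above, the snippet below evaluates a concentration addition (CA) prediction of a mixture ECx and a response addition (RA) prediction of a mixture effect for a hypothetical binary mixture with Hill-type concentration-response curves; all curve parameters are invented and are not the study's data.

```python
# Generic CA / RA reference-model sketch for a hypothetical binary mixture
# with Hill-type concentration-response curves.  Parameters are invented.
import numpy as np

def hill_effect(c, ec50, slope):
    """Fractional effect (0-1) of a single compound at concentration c."""
    return c**slope / (c**slope + ec50**slope)

def ec_x(x, ec50, slope):
    """Concentration of a single compound producing fractional effect x."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

def ca_mixture_ecx(x, fractions, ec50s, slopes):
    """Concentration addition: 1 / ECx_mix = sum_i p_i / ECx_i."""
    return 1.0 / sum(p / ec_x(x, e, s) for p, e, s in zip(fractions, ec50s, slopes))

def ra_mixture_effect(c_total, fractions, ec50s, slopes):
    """Response addition (independent action): E_mix = 1 - prod_i (1 - E_i(p_i * c_total))."""
    surv = [1.0 - hill_effect(p * c_total, e, s) for p, e, s in zip(fractions, ec50s, slopes)]
    return 1.0 - np.prod(surv)

# Hypothetical binary mixture (e.g. a TPO inhibitor plus an NIS inhibitor).
fractions, ec50s, slopes = [0.5, 0.5], [10.0, 40.0], [1.5, 2.0]
print("CA-predicted mixture EC10:", round(ca_mixture_ecx(0.10, fractions, ec50s, slopes), 2))
print("RA-predicted effect at 30 units:", round(ra_mixture_effect(30.0, fractions, ec50s, slopes), 3))
```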

  8. Two-component perfect fluid in FRW universe

    CERN Document Server

    ,

    2012-01-01

    We propose a cosmological model which makes it possible to describe on an equal footing the evolution of matter in the universe on the time interval from inflation till the domination of dark energy. The matter is considered as a two-component perfect fluid imitated by homogeneous scalar fields between which there is energy exchange. Dark energy is represented by the cosmological constant, which is supposed invariable during the whole evolution of the universe. The matter changes its equation of state with time, so that the era of radiation domination in the early universe smoothly passes into the era of a pressureless gas, which then passes into the late-time epoch, when the matter is represented by a gas of low-velocity cosmic strings. The inflationary phase is described as an analytic continuation of the energy density in the very early universe into the region of small negative values of the parameter which characterizes the typical time of energy transfer from one matter component to another. The Hubble expansion ra...

  9. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females.

  10. Direct molecular dynamics simulation of liquid-solid phase equilibria for two-component plasmas.

    Science.gov (United States)

    Schneider, A S; Hughto, J; Horowitz, C J; Berry, D K

    2012-06-01

    We determine the liquid-solid phase diagram for carbon-oxygen and oxygen-selenium plasma mixtures using two-phase molecular dynamics simulations. We identify liquid, solid, and interface regions using a bond angle metric. To study finite-size effects, we perform 27,648- and 55,296-ion simulations. To help monitor nonequilibrium effects, we calculate diffusion constants D(i). For the carbon-oxygen system we find that D(O) for oxygen ions in the solid is much smaller than D(C) for carbon ions and that both diffusion constants are 80 or more times smaller than diffusion constants in the liquid phase. There is excellent agreement between our carbon-oxygen phase diagram and that predicted by Medin and Cumming. This suggests that errors from finite-size and nonequilibrium effects are small and that the carbon-oxygen phase diagram is now accurately known. The oxygen-selenium system is a simple two-component model for more complex rapid proton capture nucleosynthesis ash compositions for an accreting neutron star. Diffusion of oxygen, in a predominantly selenium crystal, is remarkably fast, comparable to diffusion in the liquid phase. We find a somewhat lower melting temperature for the oxygen-selenium system than that predicted by Medin and Cumming. This is probably because of electron screening effects.

  11. Adhesion-induced phase behavior of two-component membranes and vesicles.

    Science.gov (United States)

    Rouhiparkouhi, Tahereh; Weikl, Thomas R; Discher, Dennis E; Lipowsky, Reinhard

    2013-01-22

    The interplay of adhesion and phase separation is studied theoretically for two-component membranes that can phase separate into two fluid phases such as liquid-ordered and liquid-disordered phases. Many adhesion geometries provide two different environments for these membranes and then partition the membranes into two segments that differ in their composition. Examples are provided by adhering vesicles, by hole- or pore-spanning membranes, and by membranes supported by chemically patterned surfaces. Generalizing a lattice model for binary mixtures to these adhesion geometries, we show that the phase behavior of the adhering membranes depends, apart from composition and temperature, on two additional parameters, the area fraction of one membrane segment and the affinity contrast between the two segments. For the generic case of non-vanishing affinity contrast, the adhering membranes undergo two distinct phase transitions and the phase diagrams in the composition/temperature plane have a generic topology that consists of two two-phase coexistence regions separated by an intermediate one-phase region. As a consequence, phase separation and domain formation is predicted to occur separately in each of the two membrane segments but not in both segments simultaneously. Furthermore, adhesion is also predicted to suppress the phase separation process for certain regions of the phase diagrams. These generic features of the adhesion-induced phase behavior are accessible to experiment.

  12. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  13. Mixture models versus free energy of hydration models for waste glass durability

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T.; Masuga, P.

    1996-03-01

    Two approaches for modeling high-level waste glass durability as a function of glass composition are compared. The mixture approach utilizes first-order mixture (FOM) or second-order mixture (SOM) polynomials in composition, whereas the free energy of hydration (FEH) approach assumes durability is linearly related to the FEH of glass. Both approaches fit their models to data using least squares regression. The mixture and FEH approaches are used to model glass durability as a function of glass composition for several simulated waste glass data sets. The resulting FEH and FOM model coefficients and goodness-of-fit statistics are compared, both within and across data sets. The goodness-of-fit statistics show that the FOM model fits/predicts durability in each data set better (sometimes much better) than the FEH model. Considerable differences also exist between some FEH and FOM model component coefficients for each of the data sets. These differences are due to the mixture approach having a greater flexibility to account for the effect of a glass component depending on the level and range of the component and on the levels of other glass components. The mixture approach can also account for higher-order (e.g., curvilinear or interactive) effects of components, whereas the FEH approach cannot. SOM models were developed for three of the data sets, and are shown to improve on the corresponding FOM models. Thus, the mixture approach has much more flexibility than the FEH approach for approximating the relationship between glass composition and durability for various glass composition regions.
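
    A first-order (Scheffé-type) mixture model of the kind compared here can be fitted by ordinary least squares on the component fractions with no intercept; the compositions and durability responses in the sketch below are made up for illustration.

```python
# Illustrative fit of a first-order (Scheffe-type) mixture model: durability
# is regressed on glass-component mass fractions with no intercept, so each
# coefficient is the predicted response of the pure component.  Data are toys.
import numpy as np

# Columns: fractions of, say, SiO2, B2O3, Na2O, "other" (rows sum to 1).
X = np.array([[0.55, 0.15, 0.20, 0.10],
              [0.50, 0.20, 0.20, 0.10],
              [0.60, 0.10, 0.15, 0.15],
              [0.45, 0.25, 0.20, 0.10],
              [0.50, 0.15, 0.25, 0.10],
              [0.55, 0.20, 0.15, 0.10]])
y = np.array([1.2, 1.6, 0.9, 2.1, 1.8, 1.1])     # e.g. log normalized release

coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # no intercept: first-order mixture form
print(dict(zip(["SiO2", "B2O3", "Na2O", "other"], np.round(coef, 2))))
print("predicted durability of an average glass:",
      round(float(X.mean(axis=0) @ coef), 2))
```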

  14. Improved model for mixtures of polymers and hard spheres

    Science.gov (United States)

    D'Adamo, Giuseppe; Pelissetto, Andrea

    2016-12-01

    Extensive Monte Carlo simulations are used to investigate how model systems of mixtures of polymers and hard spheres approach the scaling limit. We represent polymers as lattice random walks of length L with an energy penalty w for each intersection (Domb-Joyce model), interacting with hard spheres of radius R_c via a hard-core pair potential of range R_mon + R_c, where R_mon is identified as the monomer radius. We show that the mixed polymer-colloid interaction gives rise to new confluent corrections. The leading ones scale as L^(-ν), where ν ≈ 0.588 is the usual Flory exponent. Finally, we determine optimal values of the model parameters w and R_mon that guarantee the absence of the two leading confluent corrections. This improved model shows a significantly faster convergence to the asymptotic limit L → ∞ and is amenable for extensive and accurate numerical simulations at finite density, with only a limited computational effort.

  15. Compressive sensing by learning a Gaussian mixture model from measurements.

    Science.gov (United States)

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence

    2015-01-01

    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.

  16. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  17. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture proportion

  18. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds...... (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary...

  19. Fully Bayesian mixture model for differential gene expression: simulations and model checks.

    Science.gov (United States)

    Lewin, Alex; Bochkina, Natalia; Richardson, Sylvia

    2007-01-01

    We present a Bayesian hierarchical model for detecting differentially expressed genes using a mixture prior on the parameters representing differential effects. We formulate an easily interpretable 3-component mixture to classify genes as over-expressed, under-expressed and non-differentially expressed, and model gene variances as exchangeable to allow for variability between genes. We show how the proportion of differentially expressed genes, and the mixture parameters, can be estimated in a fully Bayesian way, extending previous approaches where this proportion was fixed and empirically estimated. Good estimates of the false discovery rates are also obtained. Different parametric families for the mixture components can lead to quite different classifications of genes for a given data set. Using Affymetrix data from a knock out and wildtype mice experiment, we show how predictive model checks can be used to guide the choice between possible mixture priors. These checks show that extending the mixture model to allow extra variability around zero instead of the usual point mass null fits the data better. A software package for R is available.

  20. Two component systems: physiological effect of a third component.

    Directory of Open Access Journals (Sweden)

    Baldiri Salvado

    Full Text Available Signal transduction systems mediate the response and adaptation of organisms to environmental changes. In prokaryotes, this signal transduction is often done through Two Component Systems (TCS). These TCS are phosphotransfer protein cascades, and in their prototypical form they are composed of a kinase that senses the environmental signals (SK) and a response regulator (RR) that regulates the cellular response. This basic motif can be modified by the addition of a third protein that interacts either with the SK or the RR in a way that could change the dynamic response of the TCS module. In this work we aim at understanding the effect of such an additional protein (which we call the "third component") on the functional properties of a prototypical TCS. To do so we build mathematical models of TCS with alternative designs for their interaction with that third component. These mathematical models are analyzed in order to identify the differences in dynamic behavior inherent to each design, with respect to functionally relevant properties such as sensitivity to changes in either the parameter values or the molecular concentrations, temporal responsiveness, possibility of multiple steady states, or stochastic fluctuations in the system. The differences are then correlated to the physiological requirements that impinge on the functioning of the TCS. This analysis sheds light on both the dynamic behavior of synthetically designed TCS and the conditions under which natural selection might favor each of the designs. We find that a third component that modulates SK activity increases the parameter space where a bistable response of the TCS module to signals is possible, if the SK is monofunctional, but decreases it when the SK is bifunctional. The presence of a third component that modulates RR activity decreases the parameter space where a bistable response of the TCS module to signals is possible.

  1. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to the living species. In this work, the environmental risk was assessed addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from textile, metal-mechanic and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation of the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  2. Circulation Condition of Two-component Bose-Einstein Condensate

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In the report we point out that there exists an intrinsic difference in the internal symmetry of the two-component spin-1/2 Bose condensates from that of spinor Bose condensates of the atoms with hyperfine states of nonzero integer spins, which gives rise to a new topological constraint on the circulation for these two-component spin-1/2 Bose condensates. It is shown that the SU(2) symmetry of the spin-1/2 Bose condensate implies a

  3. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
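
    The record above is truncated, but the core idea (maximum likelihood for a finite mixture via EM) can be illustrated on the simplest case. The sketch below runs EM for a two-component Gaussian mixture; it is a minimal stand-in, not the generalized linear mixture of the paper, and the initialisation and fixed iteration count are arbitrary choices.

```python
# Minimal EM sketch for a two-component Gaussian mixture (illustration only;
# the paper embeds finite mixtures within the wider GLM framework).
import numpy as np

def em_two_gaussians(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    # crude initialisation from data quantiles and overall spread
    pi, mu, sigma = 0.5, np.quantile(x, [0.25, 0.75]), np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities of the two components for every observation
        dens = np.array([pi, 1 - pi]) * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weight, means and standard deviations
        nk = resp.sum(axis=0)
        pi = nk[0] / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 0.8, 200)])
print(em_two_gaussians(data))
```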

  4. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical method used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results. In addition, the Bayesian method also shows consistency, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because an incorrect choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia. Lastly, the results show a negative relationship between rubber price and stock market price for all selected countries.
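
    A minimal sketch of the BIC-based choice of the number of components is given below, using scikit-learn's maximum-likelihood GaussianMixture as a stand-in for the Bayesian fit described in the abstract; the data are synthetic and only illustrative.

```python
# Sketch: choose the number of mixture components by BIC. GaussianMixture here
# is a maximum-likelihood stand-in for the Bayesian fit in the abstract, and
# the "returns" series is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(-0.02, 0.01, 400),
                          rng.normal(0.01, 0.02, 600)]).reshape(-1, 1)

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(returns).bic(returns)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
print(bic, "-> selected number of components:", best_k)
```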

  5. Two-component membrane material properties and domain formation from dissipative particle dynamics.

    Science.gov (United States)

    Illya, G; Lipowsky, R; Shillcock, J C

    2006-09-21

    The material parameters (area stretch modulus and bending rigidity) of two-component amphiphilic membranes are determined from dissipative particle dynamics simulations. The preferred area per molecule for each species is varied so as to produce homogeneous mixtures or nonhomogeneous mixtures that form domains. If the latter mixtures are composed of amphiphiles with the same tail length, but different preferred areas per molecule, their material parameters increase monotonically as a function of composition. By contrast, mixtures of amphiphiles that differ in both tail length and preferred area per molecule form both homogeneous and nonhomogeneous mixtures that both exhibit smaller values of their material properties compared to the corresponding pure systems. When the same nonhomogeneous mixtures of amphiphiles are assembled into planar membrane patches and vesicles, the resulting domain shapes are different when the bending rigidities of the domains are sufficiently different. Additionally, both bilayer and monolayer domains are observed in vesicles. We conclude that the evolution of the domain shapes is influenced by the high curvature of the vesicles in the simulation, a result that may be relevant for biological vesicle membranes.

  6. Two component permeation through thin zeolite MFI membranes

    NARCIS (Netherlands)

    Keizer, K.; Burggraaf, A.J.; Vroon, Z.A.E.P.; Verweij, H.

    1998-01-01

    Two component permeation measurements have been performed by the Wicke-Kallenbach method on a thin (3 μm) zeolite MFI (Silicalite-1) membrane with molecules of different kinetic diameters, d(k). The membrane was supported by a flat porous α-Al2O3 substrate. The results obtained could be classified i

  7. two component permeation through thin zeolite MFI membranes

    NARCIS (Netherlands)

    Keizer, Klaas; Burggraaf, Anthonie; Burggraaf, A.J.; Vroon, Z.A.E.P.; Vroon, Z.A.E.P.; Verweij, H.

    1998-01-01

    Two component permeation measurements have been performed by the Wicke–Kallenbach method on a thin (3 μm) zeolite MFI (Silicalite-1) membrane with molecules of different kinetic diameters, dk. The membrane was supported by a flat porous α-Al2O3 substrate. The results obtained could be classified in s

  8. TWO-COMPONENT JETS AND THE FANAROFF-RILEY DICHOTOMY

    NARCIS (Netherlands)

    Meliani, Z.; Keppens, R.; Sauty, C.

    2010-01-01

    Transversely stratified jets are observed in many classes of astrophysical objects, ranging from young stellar objects, mu-quasars, to active galactic nuclei and even in gamma-ray bursts. Theoretical arguments support this transverse stratification of jets with two components induced by intrinsic fe

  9. Two component injection moulding: Present and future perspectives

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard

    2009-01-01

    Two component injection moulding has widespread industrial applications. Still the technology is yet to gain its full potential in highly demanding and technically challenging applications areas. The smart use of this technology can open the doors for cost effective and convergent manufacturing...

  10. Entanglement Properties in Two-Component Bose-Einstein Condensate

    Science.gov (United States)

    Jiang, Di-You

    2016-10-01

    We investigate entanglement inseparability and bipartite entanglement in a two-component Bose-Einstein condensate in the presence of nonlinear interatomic and interspecies interactions. Entanglement inseparability and bipartite entanglement have similar properties. More entanglement can be generated by adjusting the nonlinear interatomic interaction, and the time interval of the entanglement can be controlled by adjusting the interspecies interaction.

  11. A small protein that mediates the activation of a two-component system by another two-component system

    OpenAIRE

    Kox, Linda F.F.; Wösten, Marc M. S. M.; Groisman, Eduardo A.

    2000-01-01

    The PmrA–PmrB two-component system of Salmonella enterica controls resistance to the peptide antibiotic polymyxin B and to several antimicrobial proteins from human neutrophils. Transcription of PmrA-activated genes is induced by high iron, but can also be promoted by growth in low magnesium in a process that requires another two-component system, PhoP–PhoQ. Here, we define the genetic basis for the interaction between the PhoP–PhoQ and PmrA–PmrB systems. We have identified pmrD as a PhoP-act...

  12. Regression mixture models : Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  13. A person-fit index for polytomous Rasch models, latent class models, and their mixture generalizations

    NARCIS (Netherlands)

    von Davier, M; Molenaar, IW

    2003-01-01

    A normally distributed person-fit index is proposed for detecting aberrant response patterns in latent class models and mixture distribution IRT models for dichotomous and polytomous data. This article extends previous work on the null distribution of person-fit indices for the dichotomous Rasch mod

  14. Strained and unconstrained multivariate normal finite mixture modeling of Piagetian data.

    NARCIS (Netherlands)

    Dolan, C.V.; Jansen, B.R.J.; van der Maas, H.L.J.

    2004-01-01

    We present the results of multivariate normal mixture modeling of Piagetian data. The sample consists of 101 children, who carried out a (pseudo-)conservation computer task on four occasions. We fitted both cross-sectional mixture models, and longitudinal models based on a Markovian transition

  15. Global cross-calibration of Landsat spectral mixture models

    CERN Document Server

    Sousa, Daniel

    2016-01-01

    Data continuity for the Landsat program relies on accurate cross-calibration among sensors. The Landsat 8 OLI has been shown to exhibit superior performance to the sensors on Landsats 4-7 with respect to radiometric calibration, signal to noise, and geolocation. However, improvements to the positioning of the spectral response functions on the OLI have resulted in known biases for commonly used spectral indices because the new band responses integrate absorption features differently from previous Landsat sensors. The objective of this analysis is to quantify the impact of these changes on linear spectral mixture models that use imagery collected by different Landsat sensors. The 2013 underflight of Landsat 7 and 8 provides an opportunity to cross calibrate the spectral mixing spaces of the ETM+ and OLI sensors using near-simultaneous acquisitions from a wide variety of land cover types worldwide. We use 80,910,343 pairs of OLI and ETM+ spectra to characterize the OLI spectral mixing space and perform a cross-...

  16. Fuzzy local Gaussian mixture model for brain MR image segmentation.

    Science.gov (United States)

    Ji, Zexuan; Xia, Yong; Sun, Quansen; Chen, Qiang; Xia, Deshen; Feng, David Dagan

    2012-05-01

    Accurate brain tissue segmentation from magnetic resonance (MR) images is an essential step in quantitative brain image analysis. However, due to the existence of noise and intensity inhomogeneity in brain MR images, many segmentation algorithms suffer from limited accuracy. In this paper, we assume that the local image data within each voxel's neighborhood satisfy the Gaussian mixture model (GMM), and thus propose the fuzzy local GMM (FLGMM) algorithm for automated brain MR image segmentation. This algorithm estimates the segmentation result that maximizes the posterior probability by minimizing an objective energy function, in which a truncated Gaussian kernel function is used to impose the spatial constraint and fuzzy memberships are employed to balance the contribution of each GMM. We compared our algorithm to state-of-the-art segmentation approaches in both synthetic and clinical data. Our results show that the proposed algorithm can largely overcome the difficulties raised by noise, low contrast, and bias field, and substantially improve the accuracy of brain MR image segmentation.
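
    The FLGMM algorithm itself is not reproduced here; the sketch below only shows the plain intensity-based GMM clustering that FLGMM extends, so the local spatial kernel, fuzzy memberships and bias-field handling from the paper are deliberately absent.

```python
# Baseline illustration only: plain GMM clustering of voxel intensities.
# This is the starting point that FLGMM extends; it does NOT implement the
# fuzzy local constraints or bias-field handling described in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_intensities(image, n_tissues=3):
    """Label each voxel with its most likely Gaussian intensity class."""
    intensities = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(intensities)
    return gmm.predict(intensities).reshape(image.shape)

# toy 2-D "image" with three intensity levels plus noise
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(m, 5.0, (40, 120)) for m in (30, 90, 150)], axis=0)
labels = segment_intensities(img)
print(np.unique(labels, return_counts=True))
```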

  17. The dynamics of nonstationary solutions in one-dimensional two-component Bose-Einstein condensates

    Institute of Scientific and Technical Information of China (English)

    Lü Bin-Bin; Hao Xue; Tian Qiang

    2011-01-01

    This paper investigates the dynamical properties of nonstationary solutions in one-dimensional two-component Bose-Einstein condensates. It gives three kinds of stationary solutions to this model and develops a general method of constructing nonstationary solutions. It obtains the unique features about general evolution and soliton evolution of nonstationary solutions in this model.

  18. Advances in behavioral genetics modeling using Mplus: applications of factor mixture modeling to twin data.

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Rebollo, Irene

    2006-06-01

    This article discusses new latent variable techniques developed by the authors. As an illustration, a new factor mixture model is applied to the monozygotic-dizygotic twin analysis of binary items measuring alcohol-use disorder. In this model, heritability is simultaneously studied with respect to latent class membership and within-class severity dimensions. Different latent classes of individuals are allowed to have different heritability for the severity dimensions. The factor mixture approach appears to have great potential for the genetic analyses of heterogeneous populations. Generalizations for longitudinal data are also outlined.

  19. Numerical simulation of slurry jets using mixture model

    Directory of Open Access Journals (Sweden)

    Wen-xin HUAI

    2013-01-01

    Full Text Available Slurry jets in a static uniform environment were simulated with a two-phase mixture model in which flow-particle interactions were considered. A standard k-ε turbulence model was chosen to close the governing equations. The computational results were in agreement with previous laboratory measurements. The characteristics of the two-phase flow field and the influences of hydraulic and geometric parameters on the distribution of the slurry jets were analyzed on the basis of the computational results. The calculated results reveal that if the initial velocity of the slurry jet is high, the jet spreads less in the radial direction. When the slurry jet is less influenced by the ambient fluid (when the Stokes number St is relatively large), the turbulent kinetic energy k and turbulent dissipation rate ε, which are relatively concentrated around the jet axis, decrease more rapidly after the slurry jet passes through the nozzle. For different values of St, the radial distributions of streamwise velocity and particle volume fraction are both self-similar and fit a Gaussian profile after the slurry jet fully develops. The decay rate of the particle velocity is lower than that of water velocity along the jet axis, and the axial distributions of the centerline particle streamwise velocity are self-similar along the jet axis. The pattern of particle dispersion depends on the Stokes number St. When St = 0.39, the particle dispersion along the radial direction is considerable, and the relative velocity is very low due to the low dynamic response time. When St = 3.08, the dispersion of particles along the radial direction is very little, and most of the particles have high relative velocities along the streamwise direction.

  20. A two-component NZRI metamaterial based rectangular cloak

    Science.gov (United States)

    Islam, Sikder Sunbeam; Faruque, Mohammd Rashed Iqbal; Islam, Mohammad Tariqul

    2015-10-01

    A new two-component, near zero refractive index (NZRI) metamaterial is presented for electromagnetic rectangular cloaking operation in the microwave range. In the basic design, a pi-shaped metamaterial was developed and its characteristics were investigated for wave propagation along the two major axes (x- and z-axis) through the material. For the z-axis wave propagation, it shows more than 2 GHz bandwidth, and for the x-axis wave propagation, it exhibits more than 1 GHz bandwidth of NZRI property. The metamaterial was then utilized in designing a rectangular cloak where a metal cylinder was cloaked perfectly in the C-band area of the microwave regime. Experimental results were provided for the metamaterial and the cloak, and these results were compared with the simulated results. This is a novel and promising design for its two-component NZRI characteristics and rectangular cloaking operation in the electromagnetic paradigm.

  1. On a periodic two-component Hunter-Saxton equation

    CERN Document Server

    Kohlmann, Martin

    2011-01-01

    We determine the solution of the geodesic equation associated with a periodic two-component Hunter-Saxton system on a semidirect product obtained from the diffeomorphism group of the circle, modulo rigid rotations, and a space of scalar functions. In particular, we compute the time of breakdown of the geodesic flow. As a further goal, we establish a local well-posedness result for the two-component Hunter-Saxton system in the smooth category. The paper gets in line with some recent results for the generalized Hunter-Saxton equation provided by Escher, Wu and Wunsch in [J. Escher, Preprint 2010] and [H. Wu, M. Wunsch, arXiv:1009.1688v1 [math.AP

  2. Two Component Injection Moulding for Moulded Interconnect Devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    The moulded interconnect devices (MIDs) contain huge possibilities for many applications in micro electro-mechanical-systems because of their potential in reducing the number of components, process steps and finally in miniaturization of the product. Among the available MID process chains, two...... component (2k) injection moulding is one of the most industrially adaptive processes. However, the use of two component injection moulding for MID fabrication, with circuit patterns in sub-millimeter range, is still a big challenge. This book searches for the technical difficulties associated...... with the process and makes attempts to overcome those challenges. In search of suitable polymer materials for MID applications, potential materials are characterized in terms of polymer-polymer bond strength, polymer-polymer interface quality and selective metallization. The experimental results find the factors...

  3. Two-component microinjection moulding for MID fabrication

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2010-01-01

    Moulded interconnect devices (MIDs) are plastic substrates with electrical infrastructure. The fabrication of MIDs is usually based on injection moulding, and different process chains may be identified from this starting point. The use of MIDs has been driven primarily by the automotive sector......, but recently, the medical sector seems more and more interested. In particular, the possibility of miniaturisation of three-dimensional components with electrical infrastructure is attractive. The present paper describes possible manufacturing routes and challenges of miniaturised MIDs based on two......-component injection moulding and subsequent metallisation. This technology promises cost effective and convergent manufacturing approaches for both macro- and microapplications. This paper presents the results of industrial MID production based on two-component injection moulding and discusses the important issues...

  4. The Fractional Virial Potential Energy in Two-Component Systems

    Directory of Open Access Journals (Sweden)

    Caimmi, R.

    2008-12-01

    Full Text Available Two-component systems are conceived as macrogases, and the related equation of state is expressed using the virial theorem for subsystems, under the restriction of homeoidally striated density profiles. Explicit calculations are performed for a useful reference case and a few cases of astrophysical interest, both with and without truncation radius. Shallower density profiles are found to yield an equation of state, $\phi=\phi(y,m)$, characterized (for assigned values of the fractional mass, $m=M_j/M_i$) by the occurrence of two extremum points, a minimum and a maximum, as found in an earlier attempt. Steeper density profiles produce a similar equation of state, which implies that a special value of $m$ is related to a critical curve where the above mentioned extremum points reduce to a single horizontal inflexion point, and curves below the critical one show no extremum points. The similarity of the isofractional mass curves to van der Waals' isothermal curves suggests the possibility of a phase transition in a bell-shaped region of the $({\sf O}y\phi)$ plane, where the fractional truncation radius along a selected direction is $y=R_j/R_i$, and the fractional virial potential energy is $\phi=(E_{ji})_\mathrm{vir}/(E_{ij})_\mathrm{vir}$. Further investigation is devoted to mass distributions described by Hernquist (1990) density profiles, for which an additional relation can be used to represent a sample of $N=16$ elliptical galaxies (EGs) on the $({\sf O}y\phi)$ plane. Even if the evolution of elliptical galaxies and their hosting dark matter (DM) haloes, in the light of the model, has been characterized by equal fractional mass, $m$, and equal scaled truncation radius, or concentration, $\Xi_u=R_u/r_u^\dagger$, $u=i,j$, still it cannot be considered as strictly homologous, due to different values of fractional truncation radii, $y$, or fractional scaling radii, $y^\dagger=r_j^\dagger/r_i^\dagger$, deduced from sample objects.

  5. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was

  6. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians......

  7. Modeling of pharmaceuticals mixtures toxicity with deviation ratio and best-fit functions models.

    Science.gov (United States)

    Wieczerzak, Monika; Kudłak, Błażej; Yotova, Galina; Nedyalkova, Miroslava; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek

    2016-11-15

    The present study deals with assessment of ecotoxicological parameters of 9 drugs (diclofenac (sodium salt), oxytetracycline hydrochloride, fluoxetine hydrochloride, chloramphenicol, ketoprofen, progesterone, estrone, androstenedione and gemfibrozil), present in the environmental compartments at specific concentration levels, and their pairwise combinations against Microtox® and XenoScreen YES/YAS® bioassays. As the quantitative assessment of ecotoxicity of drug mixtures is a complex and sophisticated topic, in the present study we used two major approaches to gain specific information on the mutual impact of two separate drugs present in a mixture. The first approach is well documented in many toxicological studies and follows the procedure for assessing three types of models, namely concentration addition (CA), independent action (IA) and simple interaction (SI), by calculation of a model deviation ratio (MDR) for each one of the experiments carried out. The second approach was based on the assumption that the mutual impact in each mixture of two drugs could be described by a best-fit model function with calculation of a weight (regression coefficient or other model parameter) for each of the participants in the mixture, or by correlation analysis. It was shown that the sign and the absolute value of the weight or the correlation coefficient could be a reliable measure for the impact of either drug A on drug B or, vice versa, of B on A. The results of the studies justify the statement that both approaches give a similar assessment of the mode of mutual interaction of the drugs studied. It was found that most of the drug mixtures exhibit independent action, and only a few of the mixtures show synergistic or dependent action. Copyright © 2016. Published by Elsevier B.V.
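
    For orientation, the sketch below evaluates the two standard reference models named in the abstract (concentration addition and independent action) and a model deviation ratio. The formulas are the common textbook forms and all numeric values are invented; the exact formulation and data used in the paper may differ.

```python
# Hedged sketch of the concentration-addition (CA) and independent-action (IA)
# reference models and a model deviation ratio (MDR). Textbook formulas only;
# the EC50 values and effect levels below are invented, not the paper's data.
import numpy as np

def ca_ec50(fractions, ec50s):
    """Predicted mixture EC50 under concentration addition."""
    fractions, ec50s = np.asarray(fractions, float), np.asarray(ec50s, float)
    return 1.0 / np.sum(fractions / ec50s)

def ia_effect(single_effects):
    """Predicted mixture effect (0..1) under independent action."""
    return 1.0 - np.prod(1.0 - np.asarray(single_effects, float))

def mdr(predicted_ec50, observed_ec50):
    """MDR > 1 suggests the mixture is more toxic than the model predicts."""
    return predicted_ec50 / observed_ec50

predicted = ca_ec50(fractions=[0.5, 0.5], ec50s=[2.0, 8.0])     # mg/L, invented
print(predicted, mdr(predicted, observed_ec50=1.6), ia_effect([0.2, 0.3]))
```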

  8. Packing characteristics of two-component bilayers composed of ester- and ether-linked phospholipids.

    Science.gov (United States)

    Batenjany, M M; O'Leary, T J; Levin, I W; Mason, J T

    1997-01-01

    The miscibility properties of ether- and ester-linked phospholipids in two-component, fully hydrated bilayers have been studied by differential scanning calorimetry (DSC) and Raman spectroscopy. Mixtures of 1,2-di-O-hexadecyl-rac-glycero-3-phosphocholine (DHPC) with 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine (DPPE) and of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) with 1,2-di-O-hexadecyl-sn-glycero-3-phosphoethanolamine (DHPE) have been investigated. The phase diagram for the DPPC/DHPE mixtures indicates that these two phospholipids are miscible in all proportions in the nonrippled bilayer gel phase. In contrast, the DHPC/DPPE mixtures display two regions of gel phase immiscibility between 10 and 30 mol% DPPE. Raman spectroscopic measurements of DHPC/DPPE mixtures in the C-H stretching mode region suggest that this immiscibility arises from the formation of DHPC-rich interdigitated gel phase domains with strong lateral chain packing interactions at temperatures below 27 degrees C. However, in the absence of interdigitation, our findings, and those of others, lead to the conclusion that the miscibility properties of mixtures of ether- and ester-linked phospholipids are determined by the nature of the phospholipid headgroups and are independent of the character of the hydrocarbon chain linkages. Thus it seems unlikely that the ether linkage has any significant effect on the miscibility properties of phospholipids in biological membranes. PMID:9083673

  9. Interaction Analysis of a Two-Component System Using Nanodiscs.

    Directory of Open Access Journals (Sweden)

    Patrick Hörnschemeyer

    Full Text Available Two-component systems are the major means by which bacteria couple adaptation to environmental changes. All utilize a phosphorylation cascade from a histidine kinase to a response regulator, and some also employ an accessory protein. The system-wide signaling fidelity of two-component systems is based on preferential binding between the signaling proteins. However, information on the interaction kinetics between membrane embedded histidine kinase and its partner proteins is lacking. Here, we report the first analysis of the interactions between the full-length membrane-bound histidine kinase CpxA, which was reconstituted in nanodiscs, and its cognate response regulator CpxR and accessory protein CpxP. Using surface plasmon resonance spectroscopy in combination with interaction map analysis, the affinity of membrane-embedded CpxA for CpxR was quantified, and found to increase by tenfold in the presence of ATP, suggesting that a considerable portion of phosphorylated CpxR might be stably associated with CpxA in vivo. Using microscale thermophoresis, the affinity between CpxA in nanodiscs and CpxP was determined to be substantially lower than that between CpxA and CpxR. Taken together, the quantitative interaction data extend our understanding of the signal transduction mechanism used by two-component systems.

  10. Rewiring the specificity of two-component signal transduction systems.

    Science.gov (United States)

    Skerker, Jeffrey M; Perchuk, Barrett S; Siryaporn, Albert; Lubin, Emma A; Ashenberg, Orr; Goulian, Mark; Laub, Michael T

    2008-06-13

    Two-component signal transduction systems are the predominant means by which bacteria sense and respond to environmental stimuli. Bacteria often employ tens or hundreds of these paralogous signaling systems, comprised of histidine kinases (HKs) and their cognate response regulators (RRs). Faithful transmission of information through these signaling pathways and avoidance of detrimental crosstalk demand exquisite specificity of HK-RR interactions. To identify the determinants of two-component signaling specificity, we examined patterns of amino acid coevolution in large, multiple sequence alignments of cognate kinase-regulator pairs. Guided by these results, we demonstrate that a subset of the coevolving residues is sufficient, when mutated, to completely switch the substrate specificity of the kinase EnvZ. Our results shed light on the basis of molecular discrimination in two-component signaling pathways, provide a general approach for the rational rewiring of these pathways, and suggest that analyses of coevolution may facilitate the reprogramming of other signaling systems and protein-protein interactions.

  11. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    NARCIS (Netherlands)

    M.G. de Jong (Martijn); J-B.E.M. Steenkamp (Jan-Benedict)

    2009-01-01

    textabstractWe present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups

  12. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  14. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Finite mixture models have been used to generate sub-pixel land cover classifications; traditionally these make use of mixtures of normal distributions. However, such models fail to represent many land cover classes accurately, as these are usually...

  15. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
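
    For reference, the quantity being bounded is presumably the standard (Amari-type) α-divergence, whose Hellinger-type integral is shown below; normalisation conventions vary between authors, so this is only the common textbook form rather than the exact definition used in the paper.

```latex
% Common textbook form of the alpha-divergence between densities p and q;
% conventions for the normalising factor differ between authors.
\[
  D_\alpha(p \,\|\, q) \;=\;
  \frac{1}{\alpha(1-\alpha)}
  \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, \mathrm{d}x \right),
  \qquad \alpha \notin \{0, 1\},
\]
% with the Kullback--Leibler divergences recovered in the limits
% alpha -> 1 and alpha -> 0, and (up to a constant) the squared Hellinger
% distance at alpha = 1/2.
```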

  16. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    -alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... of the model of relevance to the petroleum and chemical industries: high pressure vapor-liquid and liquid-liquid equilibrium in alcohol-containing mixtures, mixtures with gas hydrate inhibitors and mixtures with polar and hydrogen bonding chemicals including organic acids. Some comparisons with conventional...

  17. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  18. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into dis

  19. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    Science.gov (United States)

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC] Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  20. Modulational instability of two-component Bose-Einstein condensates in an optical lattice

    CERN Document Server

    Jin, G R; Nahm, K; Jin, Guang-Ri; Kim, Chul Koo; Nahm, Kyun

    2004-01-01

    We study modulational instability of two-component Bose-Einstein condensates in a deep optical lattice, which is modelled as a coupled discrete nonlinear Schrödinger equation. The excitation spectrum and the modulational instability condition of the total system are presented analytically. In the long-wavelength limit, our results agree with the homogeneous two-component Bose-Einstein condensates case. The discreteness effects result in the appearance of the modulational instability for the condensates in the miscible region. The numerical calculations confirm our analytical results and show that the interspecies coupling can transfer the instability from one component to another.

  1. Stochastic study of information transmission and population stability in a generic bacterial two-component system

    CERN Document Server

    Mapder, Tarunendu; Banik, Suman K

    2016-01-01

    Studies on the role of fluctuations in signal propagation and gene regulation in a monoclonal bacterial population have been extensively pursued using the machinery of the two-component system. The bacterial two-component system exploits noise through its inherent plasticity. Fluctuations propagate through the phosphotransfer module and the feedback mechanism during gene regulation. To observe the noisy kinetics closely, the generic cascade needs stochastic investigation at the mRNA and protein levels. To this end, we propose a theoretical framework to investigate noisy signal transduction in a generic two-component system. The model shows reliability in information transmission through quantification of several statistical measures. We further extend our analysis to observe the protein distribution in a population of cells. Through numerical simulation, we identify the regime of the kinetic parameter set that generates a stability switch in the steady state distribution of prot...

  2. Two-Component Signal Transduction Systems in the Cyanobacterium Synechocystis sp. PCC 6803

    Institute of Scientific and Technical Information of China (English)

    LIU Xingguo; HUANG Wei; WU Qingyu

    2006-01-01

    Two-component systems are signal transduction systems which enable bacteria to regulate cellular functions in response to changing environmental conditions. The unicellular Synechocystis sp. PCC 6803 has become a model organism for a range of biochemical and molecular biology studies aiming at investigating environmental stress response. The publication of the complete genome sequence of the cyanobacterium Synechocystis sp. PCC 6803 provided a tremendous stimulus for research in this field, and at least 80 open reading frames were identified as members of the two-component signal transduction systems in this single species of cyanobacteria. To date, functional roles have been determined for only a limited number of such proteins. This review summarizes our current knowledge about the two-component signal transduction systems in Synechocystis sp. PCC 6803 and describes recent achievements in elucidating the functional roles of these systems.

  3. Bayesian mixture modeling using a hybrid sampler with application to protein subfamily identification.

    Science.gov (United States)

    Fong, Youyi; Wakefield, Jon; Rice, Kenneth

    2010-01-01

    Predicting protein function is essential to advancing our knowledge of biological processes. This article is focused on discovering the functional diversification within a protein family. A Bayesian mixture approach is proposed to model a protein family as a mixture of profile hidden Markov models. For a given mixture size, a hybrid Markov chain Monte Carlo sampler comprising both Gibbs sampling steps and hierarchical clustering-based split/merge proposals is used to obtain posterior inference. Inference for mixture size concentrates on comparing the integrated likelihoods. The choice of priors is critical with respect to the performance of the procedure. Through simulation studies, we show that 2 priors that are based on independent data sets allow correct identification of the mixture size, both when the data are homogeneous and when the data are generated from a mixture. We illustrate our method using 2 sets of real protein sequences.

  4. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    Full Text Available We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.

  5. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  6. A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures

    DEFF Research Database (Denmark)

    Zou, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    In this research work, we assumed that the densities of each component in a mixture are linearly distributed across the interface between the coexisting vapor and liquid phases, and we developed a linear gradient theory model for computing interfacial tensions of mixtures, especially mixtures...... with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile....

  7. Impacts of photon bending on observational aspects of Two Component Advective Flow

    CERN Document Server

    Chatterjee, Arka

    2016-01-01

    The nature of photon trajectories in a curved spacetime around black holes is studied without constraining their motion to any plane. Impacts of photon bending are separately scrutinized for the Keplerian and CENBOL components of the Two Component Advective Flow (TCAF) model. Parameters like redshift, bolometric flux, temperature profile and time of arrival of photons are also computed.

  8. Adaptive Mixture Modelling Metropolis Methods for Bayesian Analysis of Non-linear State-Space Models.

    Science.gov (United States)

    Niemi, Jarad; West, Mike

    2010-06-01

    We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.

  9. Modelling viscosity and mass fraction of bitumen - diluent mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Miadonye, A.; Latour, N.; Puttagunta, V.R. [Lakehead Univ., Thunder Bay, ON (Canada)

    1999-07-01

    In recovery of bitumen in oil sands extraction, the reduction of the viscosity is important above and below ground. The addition of liquid diluent breaks down or weakens the intermolecular forces that create a high viscosity in bitumen. The addition of even 5% of diluent can cause a viscosity reduction in excess of 8%, thus facilitating the in situ recovery and pipeline transportation of bitumen. Knowledge of bitumen - diluent viscosity is highly important because without it, determination of upgrading processes, in situ recovery, well simulation, heat transfer, fluid flow and a variety of other engineering problems would be difficult or impossible to solve. The development of a simple correlation to predict the viscosity of binary mixtures of bitumen - diluent in any proportion is described. The developed correlation used to estimate the viscosities and mass fractions of bitumen - diluent mixtures was within acceptable limits of error. For the prediction of mixture viscosities, the developed correlation gave the best results with an overall average absolute deviation of 12% compared to those of Chironis (17%) and Cragoe (23%). Predictions of diluent mass fractions yielded a much better result with an overall average absolute deviation of 5%. The unique features of the correlation include its computational simplicity, its applicability to mixtures at temperatures other than 30 degrees C, and the fact that only the bitumen and diluent viscosities are needed to make predictions. It is the only correlation capable of predicting viscosities of mixtures, as well as diluent mass fractions required to reduce bitumen viscosity to pumping viscosities. The prediction of viscosities at 25, 60.3, and 82.6 degrees C produced excellent results, particularly at high temperatures with an average absolute deviation of below 10%. 11 refs., 3 figs., 8 tabs.
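
    The correlation developed in the paper is not reproduced in the abstract; the sketch below only illustrates the kind of calculation involved, using a generic Arrhenius-type log-linear blending rule. The function names, the mixing rule itself and all numeric values are assumptions for illustration, not the authors' correlation.

```python
# Illustration only: a generic Arrhenius-type log-linear blending rule for the
# viscosity of a bitumen-diluent mixture, and its inversion to estimate the
# diluent mass fraction needed to reach a target viscosity. This is NOT the
# correlation developed in the paper; all values are invented.
import numpy as np

def blend_viscosity(mu_bitumen, mu_diluent, w_diluent):
    """ln(mu_mix) = (1 - w) ln(mu_bitumen) + w ln(mu_diluent)."""
    return np.exp((1.0 - w_diluent) * np.log(mu_bitumen)
                  + w_diluent * np.log(mu_diluent))

def diluent_fraction_for_target(mu_bitumen, mu_diluent, mu_target):
    """Invert the same rule for the diluent mass fraction giving mu_target."""
    return ((np.log(mu_bitumen) - np.log(mu_target))
            / (np.log(mu_bitumen) - np.log(mu_diluent)))

print(blend_viscosity(50_000.0, 1.5, 0.05))                    # mPa.s, invented
print(diluent_fraction_for_target(50_000.0, 1.5, 350.0))
```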

  10. Unsupervised Segmentation of Spectral Images with a Spatialized Gaussian Mixture Model and Model Selection

    Directory of Open Access Journals (Sweden)

    Cohen S.X.

    2014-03-01

    Full Text Available In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modelled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct some numerical experiments of unsupervised segmentation from a real dataset.

  11. Two-Component Multi-Parameter Time-Frequency Electromagnetics

    Institute of Scientific and Technical Information of China (English)

    HuangZhou; DongWeibin; HeTiezhi

    2003-01-01

    The two-component multi-parameter time-frequency electromagnetic method, used for the development of oilfields, makes use of both the traditional individual conductivity parameters of oil-producing layers and the dispersion information of the conductivity, i.e., the induced polarization parameter. The frequency-domain dispersion data is used to delineate the contacts between oil and water and the time domain dBz/dt component is used to estimate the depths to the unknown reservoirs so as to offer significant data in many aspects for oil exploration and detection.

  12. Two component micro injection moulding for moulded interconnect devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    2008-01-01

    Moulded interconnect devices (MIDs) contain huge possibilities for many applications in micro electro-mechanical-systems because of their capability of reducing the number of components, process steps and finally in miniaturization of the product. Among the available MID process chains, two...... and a reasonable adhesion between them. • Selective metallization of the two component plastic part (coating one polymer with metal and leaving the other one uncoated) To overcome these two main issues in MID fabrication for micro applications, the current Ph.D. project explores the technical difficulties...

  13. Interaction potentials and thermodynamic properties of two component semiclassical plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ramazanov, T. S.; Moldabekov, Zh. A.; Ismagambetova, T. N. [Al-Farabi Kazakh National University, IETP, 71 al-Farabi Av., Almaty 050040 (Kazakhstan); Gabdullin, M. T. [Al-Farabi Kazakh National University, NNLOT, 71 al-Farabi Av., Almaty 050040 (Kazakhstan)

    2014-01-15

    In this paper, the effective interaction potential in two component semiclassical plasma, taking into account the long-range screening and the quantum-mechanical diffraction effects at short distances, is obtained on the basis of dielectric response function method. The structural properties of the semiclassical plasma are considered. The thermodynamic characteristics (the internal energy and the equation of state) are calculated using two methods: the method of effective potentials and the method of micropotentials with screening effect taken into account by the Ornstein-Zernike equation in the HNC approximation.

  14. Two component micro injection molding for MID fabrication

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2009-01-01

    Molded Interconnect Devices (MIDs) are plastic substrates with electrical infrastructure. The fabrication of MIDs is usually based on injection molding and different process chains may be identified from this starting point. The use of MIDs has been driven primarily by the automotive sector......, but recently the medical sector seems more and more interested. In particular the possibility of miniaturization of 3D components with electrical infrastructure is attractive. The paper describes possible manufacturing routes and challenges of miniaturized MIDs based on two component micro injection molding...

  15. Mixture Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2007-12-01

    A mixture experiment involves combining two or more components in various proportions or amounts and then measuring one or more responses for the resulting end products. Other factors that affect the response(s), such as process variables and/or the total amount of the mixture, may also be studied in the experiment. A mixture experiment design specifies the combinations of mixture components and other experimental factors (if any) to be studied and the response variable(s) to be measured. Mixture experiment data analyses are then used to achieve the desired goals, which may include (i) understanding the effects of components and other factors on the response(s), (ii) identifying components and other factors with significant and nonsignificant effects on the response(s), (iii) developing models for predicting the response(s) as functions of the mixture components and any other factors, and (iv) developing end-products with desired values and uncertainties of the response(s). Given a mixture experiment problem, a practitioner must consider the possible approaches for designing the experiment and analyzing the data, and then select the approach best suited to the problem. Eight possible approaches include 1) component proportions, 2) mathematically independent variables, 3) slack variable, 4) mixture amount, 5) component amounts, 6) mixture process variable, 7) mixture of mixtures, and 8) multi-factor mixture. The article provides an overview of the mixture experiment designs, models, and data analyses for these approaches.
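
    As a concrete illustration of approach 1 (component proportions), the sketch below fits a Scheffé quadratic mixture model, a standard model form for such designs, by ordinary least squares; the simplex-centroid design points and responses are invented for the example and are not drawn from the article.

```python
# Hedged sketch: fit a Scheffe quadratic mixture model (a standard model form
# for the component-proportions approach) by ordinary least squares.
# Design points and responses are invented for illustration.
import numpy as np
from itertools import combinations

def scheffe_quadratic_design(X):
    """Columns: x1..xq plus all pairwise products xi*xj (no intercept)."""
    X = np.asarray(X, dtype=float)
    pairs = [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

# simplex-centroid style design for a 3-component mixture (proportions sum to 1)
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1/3, 1/3, 1/3]])
y = np.array([4.1, 5.0, 3.2, 6.0, 3.9, 4.6, 5.1])   # invented responses

Z = scheffe_quadratic_design(X)
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(dict(zip(["b1", "b2", "b3", "b12", "b13", "b23"], np.round(coef, 3))))
```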

  16. Numerical Simulation of Water Jet Flow Using Diffusion Flux Mixture Model

    Directory of Open Access Journals (Sweden)

    Zhi Shang

    2014-01-01

    Full Text Available A multidimensional diffusion flux mixture model was developed to simulate water jet two-phase flows. By modifying the gravity term using the gradients of the mixture velocity, the centrifugal force on the water droplets could be taken into account. The slip velocities between the continuous phase (gas) and the dispersed phase (water droplets) could be calculated through multidimensional diffusion flux velocities based on the modified multidimensional drift flux model. The model was validated through numerical simulations, by comparison with experiments and with simulations of the traditional algebraic slip mixture model for a water mist spray.

  17. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  18. Evolution of two-component signal transduction systems.

    Science.gov (United States)

    Capra, Emily J; Laub, Michael T

    2012-01-01

    To exist in a wide range of environmental niches, bacteria must sense and respond to a variety of external signals. A primary means by which this occurs is through two-component signal transduction pathways, typically composed of a sensor histidine kinase that receives the input stimuli and then phosphorylates a response regulator that effects an appropriate change in cellular physiology. Histidine kinases and response regulators have an intrinsic modularity that separates signal input, phosphotransfer, and output response; this modularity has allowed bacteria to dramatically expand and diversify their signaling capabilities. Recent work has begun to reveal the molecular basis by which two-component proteins evolve. How and why do orthologous signaling proteins diverge? How do cells gain new pathways and recognize new signals? What changes are needed to insulate a new pathway from existing pathways? What constraints are there on gene duplication and lateral gene transfer? Here, we review progress made in answering these questions, highlighting how the integration of genome sequence data with experimental studies is providing major new insights.

  19. The Evolution of Two-Component Signal Transduction Systems

    Science.gov (United States)

    Capra, Emily J.; Laub, Michael T.

    2014-01-01

    To exist in a wide range of environmental niches, bacteria must sense and respond to a myriad of external signals. A primary means by which this occurs is through two-component signal transduction pathways, typically comprised of a histidine kinase that receives the input stimuli and a response regulator that effects an appropriate change in cellular physiology. Histidine kinases and response regulators have an intrinsic modularity that separates signal input, phosphotransfer, and output response; this modularity has allowed bacteria to dramatically expand and diversify their signaling capabilities. Recent work has begun to reveal the molecular basis by which two-component proteins evolve. How and why do orthologous signaling proteins diverge? How do cells gain new pathways and recognize new signals? What changes are needed to insulate a new pathway from existing pathways? What constraints are there on gene duplication and lateral gene transfer? Here, we review progress made in answering these questions, highlighting how the integration of genome sequence data with experimental studies is providing major new insights. PMID:22746333

  20. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties for ...

  1. A Lattice Boltzmann Model of Binary Fluid Mixture

    CERN Document Server

    Orlandini, E; Yeomans, J M; Orlandini, Enzo; Swift, Michael R.

    1995-01-01

    We introduce a lattice Boltzmann scheme for simulating an immiscible binary fluid mixture. Our collision rules are derived from a macroscopic thermodynamic description of the fluid in a way motivated by the Cahn-Hilliard approach to non-equilibrium dynamics. This ensures that a thermodynamically consistent state is reached in equilibrium. The non-equilibrium dynamics is investigated numerically and found to agree with simple analytic predictions in both the one-phase and the two-phase region of the phase diagram.

  2. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  3. A Bayesian estimation on right censored survival data with mixture and non-mixture cured fraction model based on Beta-Weibull distribution

    Science.gov (United States)

    Yusuf, Madaki Umar; Bakar, Mohd. Rizam B. Abu

    2016-06-01

    Models for survival data that include the proportion of individuals who are not subject to the event under study are known as cure fraction models or long-term survival models. The two most common models used to estimate the cure fraction are the mixture model and the non-mixture model. In this work, we present mixture and non-mixture cure fraction models for survival data based on the beta-Weibull distribution. This four-parameter distribution has been proposed as an alternative extension of the Weibull distribution in the analysis of lifetime data. This approach allows the inclusion of covariates in the models, where the estimation of the parameters was obtained under a Bayesian approach using Gibbs sampling methods.
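
    For orientation (standard forms; the authors' exact beta-Weibull parameterization may differ), the mixture cure model writes the population survival function as $S_{pop}(t) = \pi + (1-\pi)\,S(t)$, where $\pi$ is the cured fraction and $S(t)$ the survival function of susceptible individuals, while the non-mixture (promotion-time) model uses $S_{pop}(t) = \exp\{-\theta\,F(t)\}$ with $F(t) = 1 - S(t)$; covariates typically enter through $\pi$ or $\theta$ via a logit or log link.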

  4. A Bayesian Mixture Model for PoS Induction Using Multiple Features

    OpenAIRE

    Christodoulopoulos, Christos; Goldwater, Sharon; Steedman, Mark

    2011-01-01

    In this paper we present a fully unsupervised syntactic class induction system formulated as a Bayesian multinomial mixture model, where each word type is constrained to belong to a single class. By using a mixture model rather than a sequence model (e.g., HMM), we are able to easily add multiple kinds of features, including those at both the type level (morphology features) and token level (context and alignment features, the latter from parallel corpora). Using only context features, our sy...

  5. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.

  6. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. Ice cream model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization treatments: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe tip with a diameter of 12.7 mm was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. From the results it can be concluded that all model mixtures show non-Newtonian, dilatant-type behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, and significantly lower viscosities for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, foam stability index and minimal viscosity. Ice cream model mixtures treated only with ultrasound showed a minimal increase in foam volume, while the highest increase in foam volume was observed for the ice cream mixture treated with the combination of mechanical and ultrasound treatment. Also, ice cream mixtures containing a higher amount of protein showed higher foam stability. It was determined that the optimal treatment time is 10 minutes.
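
    As a sketch of how the reported flow index and consistency coefficient relate to measurements (the shear data below are hypothetical, not the study's), the power-law model $\tau = K\,\dot{\gamma}^{\,n}$ can be fitted in log-log form:

      import numpy as np

      # Hypothetical shear-rate (1/s) and shear-stress (Pa) data for a dilatant mixture
      gamma_dot = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
      tau = np.array([0.9, 2.1, 6.5, 15.2, 35.8, 110.0])

      # Ostwald-de Waele model: tau = K * gamma_dot**n  ->  log(tau) = log(K) + n*log(gamma_dot)
      n, logK = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
      K = np.exp(logK)
      print(f"consistency coefficient K = {K:.3f} Pa*s^n, flow index n = {n:.3f}")
      # n > 1 indicates dilatant behavior; apparent viscosity = K * gamma_dot**(n - 1)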

  7. Irreversible Processes in a Universe modelled as a mixture of a Chaplygin gas and radiation

    CERN Document Server

    Kremer, G M

    2003-01-01

    The evolution of a Universe modelled as a mixture of a Chaplygin gas and radiation is determined by taking into account irreversible processes. This mixture could interpolate periods of a radiation dominated, a matter dominated and a cosmological constant dominated Universe. The results of a Universe modelled by this mixture are compared with the results of a mixture whose constituents are radiation and quintessence. Among other results it is shown that: (a) for both models there exists a period of a past deceleration with a present acceleration; (b) the slope of the acceleration of the Universe modelled as a mixture of a Chaplygin gas with radiation is more pronounced than that modelled as a mixture of quintessence and radiation; (c) the energy density of the Chaplygin gas tends to a constant value at earlier times than the energy density of quintessence does; (d) the energy density of radiation for both mixtures coincide and decay more rapidly than the energy densities of the Chaplygin gas and of quintessen...
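
    For reference, the single Chaplygin gas obeys the equation of state $p = -A/\rho$ with $A > 0$; in a flat Friedmann background its energy density evolves as $\rho(a) = \sqrt{A + B/a^{6}}$, behaving like pressureless matter ($\rho \propto a^{-3}$) at small scale factor and tending to the constant $\sqrt{A}$ (cosmological-constant-like) at late times. These are the standard single-fluid relations only; the model above additionally couples the gas to radiation and includes irreversible (non-equilibrium pressure) terms.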

  8. Two-component systems and toxinogenesis regulation in Clostridium botulinum.

    Science.gov (United States)

    Connan, Chloé; Popoff, Michel R

    2015-05-01

    Botulinum neurotoxins (BoNTs) are the most potent toxins ever known. They are mostly produced by Clostridium botulinum but also by other clostridia. BoNTs associate with non-toxic proteins (ANTPs) to form complexes of various sizes. Toxin production is highly regulated through complex networks of regulatory systems involving an alternative sigma factor, BotR, and at least 6 recently described two-component systems (TCSs). TCSs allow bacteria to sense environmental changes and to respond to various stimuli by regulating the expression of specific genes at a transcriptional level. Several environmental stimuli have been identified to positively or negatively regulate toxin synthesis; however, the link between environmental stimuli and TCSs is still elusive. This review aims to highlight the role of TCSs as a central point in the regulation of toxin production in C. botulinum.

  9. Exact two-component relativistic energy band theory and application

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Rundong; Zhang, Yong; Xiao, Yunlong; Liu, Wenjian, E-mail: liuwj@pku.edu.cn [Beijing National Laboratory for Molecular Sciences, Institute of Theoretical and Computational Chemistry, State Key Laboratory of Rare Earth Materials Chemistry and Applications, College of Chemistry and Molecular Engineering, and Center for Computational Science and Engineering, Peking University, Beijing 100871 (China)

    2016-01-28

    An exact two-component (X2C) relativistic density functional theory in terms of atom-centered basis functions is proposed for relativistic calculations of band structures and structural properties of periodic systems containing heavy elements. Due to finite radial extensions of the local basis functions, the periodic calculation is very much the same as a molecular calculation, except only for an Ewald summation for the Coulomb potential of fluctuating periodic monopoles. For comparison, the nonrelativistic and spin-free X2C counterparts are also implemented in parallel. As a first and pilot application, the band gaps, lattice constants, cohesive energies, and bulk moduli of AgX (X = Cl, Br, I) are calculated to compare with other theoretical results.

  10. Dynamics of two-component membranes surrounded by viscoelastic media.

    Science.gov (United States)

    Komura, Shigeyuki; Yasuda, Kento; Okamoto, Ryuichi

    2015-11-01

    We discuss the dynamics of two-component fluid membranes which are surrounded by viscoelastic media. We assume that membrane-embedded proteins can diffuse laterally and induce a local membrane curvature. The mean squared displacement of a tagged membrane segment is obtained as a generalized Einstein relation. When the elasticity of the surrounding media obeys a power-law behavior in frequency, an anomalous diffusion of the membrane segment is predicted. We also consider the situation where the proteins generate active non-equilibrium forces. The generalized Einstein relation is further modified by an effective temperature that depends on the force dipole energy. The obtained generalized Einstein relations are useful for membrane microrheology experiments.

  11. Exact two-component relativistic energy band theory and application.

    Science.gov (United States)

    Zhao, Rundong; Zhang, Yong; Xiao, Yunlong; Liu, Wenjian

    2016-01-28

    An exact two-component (X2C) relativistic density functional theory in terms of atom-centered basis functions is proposed for relativistic calculations of band structures and structural properties of periodic systems containing heavy elements. Due to finite radial extensions of the local basis functions, the periodic calculation is very much the same as a molecular calculation, except only for an Ewald summation for the Coulomb potential of fluctuating periodic monopoles. For comparison, the nonrelativistic and spin-free X2C counterparts are also implemented in parallel. As a first and pilot application, the band gaps, lattice constants, cohesive energies, and bulk moduli of AgX (X = Cl, Br, I) are calculated to compare with other theoretical results.

  12. Recent advances in description of few two-component fermions

    CERN Document Server

    Kartavtsev, O I

    2012-01-01

    An overview of recent advances in the description of few two-component fermion systems is presented. The zero-range interaction limit is generally considered in order to discuss the principal aspects of the few-body dynamics. Significant attention is paid to the detailed description of two identical fermions of mass $m$ and a distinct particle of mass $m_1$; two universal $L^P = 1^-$ bound states arise as the mass ratio $m/m_1$ increases up to the critical value $\mu_c \approx 13.607$, beyond which the Efimov effect takes place. The topics considered include the rigorous treatment of the few-fermion problem in the zero-range interaction limit, low-dimensional results, the four-body energy spectrum, the crossover of the energy spectra for $m/m_1$ near the critical value $\mu_c$, and properties of potential-dependent states. Finally, open problems awaiting solution are listed.

  13. Molecular Mechanisms of Two-Component Signal Transduction.

    Science.gov (United States)

    Zschiedrich, Christopher P; Keidel, Victoria; Szurmant, Hendrik

    2016-09-25

    Two-component systems (TCS) comprising sensor histidine kinases and response regulator proteins are among the most important players in bacterial and archaeal signal transduction and also occur in reduced numbers in some eukaryotic organisms. Given their importance to cellular survival, virulence, and cellular development, these systems are among the most scrutinized bacterial proteins. In the recent years, a flurry of bioinformatics, genetic, biochemical, and structural studies have provided detailed insights into many molecular mechanisms that underlie the detection of signals and the generation of the appropriate response by TCS. Importantly, it has become clear that there is significant diversity in the mechanisms employed by individual systems. This review discusses the current knowledge on common themes and divergences from the paradigm of TCS signaling. An emphasis is on the information gained by a flurry of recent structural and bioinformatics studies.

  14. Bond strength of two component injection moulded MID

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2006-01-01

    Most products of the future will require industrially adapted, cost effective production processes and on this issue two-component (2K) injection moulding is a potential candidate for MID manufacturing. MID based on a 2K injection moulded plastic part with selectively metallised circuit tracks allows the integration of electrical and mechanical functionalities in a real 3D structure. If 2K injection moulding is applied with two polymers, of which one is plateable and the other is not, it will be possible to make 3D electrical structures directly on the component. To be applicable in the real engineering field, the two different plastic materials in the MID structure require good bonding between them. This paper finds suitable combinations of materials for MIDs from both the bond strength and the metallisation view-point. Plastic parts were made by two-shot injection moulding and the effects of some important process...

  15. Determinants of specificity in two-component signal transduction.

    Science.gov (United States)

    Podgornaia, Anna I; Laub, Michael T

    2013-04-01

    Maintaining the faithful flow of information through signal transduction pathways is critical to the survival and proliferation of organisms. This problem is particularly challenging as many signaling proteins are part of large, paralogous families that are highly similar at the sequence and structural levels, increasing the risk of unwanted cross-talk. To detect environmental signals and process information, bacteria rely heavily on two-component signaling systems comprised of sensor histidine kinases and their cognate response regulators. Although most species encode dozens of these signaling pathways, there is relatively little cross-talk, indicating that individual pathways are well insulated and highly specific. Here, we review the molecular mechanisms that enforce this specificity. Further, we highlight recent studies that have revealed how these mechanisms evolve to accommodate the introduction of new pathways by gene duplication. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Rewiring two-component signal transduction with small RNAs.

    Science.gov (United States)

    Göpel, Yvonne; Görke, Boris

    2012-04-01

    Bacterial two-component systems (TCSs) and small regulatory RNAs (sRNAs) form densely interconnected networks that integrate and transduce information from the environment into fine-tuned changes of gene expression. Many TCSs control target genes indirectly through regulation of sRNAs, which in turn regulate gene expression by base-pairing with mRNAs or targeting a protein. Conversely, sRNAs may control TCS synthesis, thereby recruiting the TCS regulon to other regulatory networks. Several TCSs control expression of multiple homologous sRNAs providing the regulatory networks with further flexibility. These sRNAs act redundantly, additively or hierarchically on targets. The regulatory speed of sRNAs and their unique features in gene regulation make them ideal players extending the flexibility, dynamic range or timing of TCS signaling. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Auxiliary phosphatases in two-component signal transduction.

    Science.gov (United States)

    Silversmith, Ruth E

    2010-04-01

    Signal termination in two-component systems occurs by loss of the phosphoryl group from the response regulator protein. This review explores our current understanding of the structures, catalytic mechanisms and means of regulation of the known families of phosphatases that catalyze response regulator dephosphorylation. The CheZ and CheC/CheX/FliY families, despite different overall structures, employ identical catalytic strategies using an amide side chain to orient a water molecule for in-line attack of the aspartyl phosphate. Spo0E phosphatases contain sequence and structural features that suggest a strategy similar to the chemotaxis phosphatases but the mechanism used by the Rap phosphatases is not yet elucidated. Identification of features shared by phosphatase families may aid in the identification of currently unrecognized classes of response regulator phosphatases. Copyright 2010 Elsevier Ltd. All rights reserved.

  18. How insects overcome two-component plant chemical defence

    DEFF Research Database (Denmark)

    Pentzold, Stefan; Zagrobelny, Mika; Rook, Frederik;

    2014-01-01

    Insect herbivory is often restricted by glucosylated plant chemical defence compounds that are activated by plant β-glucosidases to release toxic aglucones upon plant tissue damage. Such two-component plant defences are widespread in the plant kingdom and examples of these classes of compounds are alkaloid, benzoxazinoid, cyanogenic and iridoid glucosides as well as glucosinolates and salicinoids. Conversely, many insects have evolved a diversity of counteradaptations to overcome this type of constitutive chemical defence. Here we discuss that such counter-adaptations occur at different time points ...-component chemical defence. These adaptations include host plant choice, non-disruptive feeding guilds and various physiological adaptations as well as metabolic enzymatic strategies of the insect’s digestive system. Furthermore, insect adaptations often act in combination, may exist in both generalists...

  19. Parallel TREE code for two-component ultracold plasma analysis

    Science.gov (United States)

    Jeon, Byoungseon; Kress, Joel D.; Collins, Lee A.; Grønbech-Jensen, Niels

    2008-02-01

    The TREE method has been widely used for long-range interaction N-body problems. We have developed a parallel TREE code for two-component classical plasmas with open boundary conditions and highly non-uniform charge distributions. The program efficiently handles millions of particles evolved over long relaxation times requiring millions of time steps. Appropriate domain decomposition and dynamic data management were employed, and large-scale parallel processing was achieved using an intermediate level of granularity of domain decomposition and ghost TREE communication. Even though the computational load is not fully distributed in fine grains, high parallel efficiency was achieved for ultracold plasma systems of charged particles. As an application, we performed simulations of an ultracold neutral plasma with a half million particles and a half million time steps. For the long temporal trajectories of relaxation between heavy ions and light electrons, large configurations of ultracold plasmas can now be investigated, which was not possible in past studies.

  20. Modeling adsorption of liquid mixtures on porous materials

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2009-01-01

    The multicomponent potential theory of adsorption (MPTA), which was previously applied to adsorption from gases, is extended onto adsorption of liquid mixtures on porous materials. In the MPTA, the adsorbed fluid is considered as an inhomogeneous liquid with thermodynamic properties that depend on the distance from the solid surface (or position in the porous space). The theory describes the two kinds of interactions present in the adsorbed fluid, i.e. the fluid-fluid and fluid-solid interactions, by means of an equation of state and interaction potentials, respectively. The proposed extension...

  1. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Ca...... forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications....

  2. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...
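
    A minimal sketch of the GMM step, using scikit-learn on synthetic two-dimensional features (the relational-interest-point extraction and the robot-specific pipeline are not reproduced here):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic 2-D feature vectors standing in for demonstrated behaviour data
      rng = np.random.default_rng(0)
      demo = np.vstack([
          rng.normal(loc=[0.0, 0.0], scale=0.1, size=(100, 2)),
          rng.normal(loc=[1.0, 0.5], scale=0.2, size=(100, 2)),
      ])

      # Fit a two-component Gaussian mixture to characterize the behaviour
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      gmm.fit(demo)

      # Log-likelihood of new observations under the learned model
      print(gmm.score_samples(np.array([[0.05, -0.02], [2.0, 2.0]])))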

  3. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model, a calculation method was obtained and used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline when the temperature-pressure and hydraulic conditions change.

  4. Optimal Penalty Functions Based on MCMC for Testing Homogeneity of Mixture Models

    Directory of Open Access Journals (Sweden)

    Rahman Farnoosh

    2012-07-01

    Full Text Available This study provides an estimate of the penalty function for testing homogeneity of mixture models based on Markov chain Monte Carlo simulation. The penalty function is treated as a parametric function, and the parameter determining its shape, together with the parameters of the mixture models, is estimated by a Bayesian approach. Different mixtures of uniform distributions are used as priors. Several simulation examples are performed to confirm the efficiency of the present work in comparison with previous approaches.

  5. Scattering for mixtures of hard spheres: comparison of total scattering intensities with model.

    Science.gov (United States)

    Anderson, B J; Gopalakrishnan, V; Ramakrishnan, S; Zukoski, C F

    2006-03-01

    The angular dependence of the intensity of x-rays scattered from binary and ternary hard sphere mixtures is investigated and compared to the predictions of two scattering models. Mixture ratio and total volume fraction dependent effects are investigated for size ratios equal to 0.51 and 0.22. Comparisons of model predictions with experimental results indicate the significant impact of the role of particle size distributions in interpreting the angular dependence of the scattering at wave vectors probing density fluctuations intermediate between the sizes of the particles in the mixture.

  6. A General Nonlinear Fluid Model for Reacting Plasma-Neutral Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Meier, E T; Shumlak, U

    2012-04-06

    A generalized, computationally tractable fluid model for capturing the effects of neutral particles in plasmas is derived. The model derivation begins with Boltzmann equations for singly charged ions, electrons, and a single neutral species. Electron-impact ionization, radiative recombination, and resonant charge exchange reactions are included. Moments of the reaction collision terms are detailed. Moments of the Boltzmann equations for electron, ion, and neutral species are combined to yield a two-component plasma-neutral fluid model. Separate density, momentum, and energy equations, each including reaction transfer terms, are produced for the plasma and neutral equations. The required closures for the plasma-neutral model are discussed.

  7. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    OpenAIRE

    Lei Wang; Satoshi Uchida

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen as the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classificat...
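
    A minimal sub-pixel unmixing sketch under the linear spectral mixture model, using non-negative least squares with hypothetical endmember spectra (not the MODIS bands or endmembers used in the study):

      import numpy as np
      from scipy.optimize import nnls

      # Columns: hypothetical endmembers (e.g. rice, water, built-up); rows: spectral bands
      E = np.array([
          [0.05, 0.02, 0.20],
          [0.08, 0.03, 0.22],
          [0.40, 0.04, 0.25],
          [0.45, 0.05, 0.28],
      ])
      pixel = np.array([0.18, 0.20, 0.28, 0.31])  # observed mixed-pixel reflectance

      # Non-negative abundances; renormalize so the fractions sum to one
      f, _ = nnls(E, pixel)
      fractions = f / f.sum()
      print(fractions)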

  8. MULTIPLE REFLECTION EFFECTS IN NONLINEAR MIXTURE MODEL FOR HYPERSPECTRAL IMAGE ANALYSIS

    OpenAIRE

    Liu, C. Y.; Ren, H.

    2016-01-01

    Hyperspectral spectrometers can record electromagnetic energy with hundreds or thousands of spectral channels. With such high spectral resolution, the spectral information has better capability for material identification. Because of the spatial resolution, one pixel in hyperspectral images usually covers several meters, and it may contain more than one material. Therefore, the mixture model must be considered. Linear mixture model (LMM) has been widely used for remote sensing target...

  9. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only...... components and to only one parameter for mixtures consisting of non-polar components. A possibility of complete prediction of the parameters is discussed....

  10. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...... categorizing only the most extreme SCS observations as mastitic, and such cases of subclinical infections may be the most closely related to clinical (treated) mastitis...

  11. Dynamical principles of two-component genetic oscillators.

    Directory of Open Access Journals (Sweden)

    Raúl Guantes

    2006-03-01

    Full Text Available Genetic oscillators based on the interaction of a small set of molecular components have been shown to be involved in the regulation of the cell cycle, the circadian rhythms, or the response of several signaling pathways. Uncovering the functional properties of such oscillators then becomes important for the understanding of these cellular processes and for the characterization of fundamental properties of more complex clocks. Here, we show how the dynamics of a minimal two-component oscillator is drastically affected by its genetic implementation. We consider a repressor and activator element combined in a simple logical motif. While activation is always exerted at the transcriptional level, repression is alternatively operating at the transcriptional (Design I) or post-translational (Design II) level. These designs display differences on basic oscillatory features and on their behavior with respect to molecular noise or entrainment by periodic signals. In particular, Design I induces oscillations with large activator amplitudes and arbitrarily small frequencies, and acts as an "integrator" of external stimuli, while Design II shows emergence of oscillations with finite, and less variable, frequencies and smaller amplitudes, and detects better frequency-encoded signals ("resonator"). Similar types of stimulus response are observed in neurons, and thus this work enables us to connect very different biological contexts. These dynamical principles are relevant for the characterization of the physiological roles of simple oscillator motifs, the understanding of core machineries of complex clocks, and the bio-engineering of synthetic oscillatory circuits.
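
    The sketch below is a deliberately simplified activator-repressor ODE model (hypothetical parameters, loosely in the spirit of a transcriptional design); it is not the authors' model, and sustained oscillations appear only for suitable parameter choices:

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, alpha=50.0, beta=5.0, gamma=1.0, delta=0.2, K=1.0, n=2):
          """Activator A drives both genes; repressor R removes active A."""
          A, R = y
          act = A**n / (K**n + A**n)          # Hill-type transcriptional activation
          dA = 0.5 + alpha * act - gamma * A - beta * A * R
          dR = alpha * act - delta * R
          return [dA, dR]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.1], max_step=0.1)
      print(sol.y[:, -5:])  # late-time trajectory of (A, R)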

  12. Hamiltonian of a homogeneous two-component plasma.

    Science.gov (United States)

    Essén, Hanno; Nordmark, A

    2004-03-01

    The Hamiltonian of one- and two-component plasmas is calculated in the negligible radiation Darwin approximation. Since the Hamiltonian is the phase space energy of the system, its form indicates, according to statistical mechanics, the nature of the thermal equilibrium that plasmas strive to attain. The main issue is the length scale of the magnetic interaction energy. In the past a screening length $\lambda = 1/\sqrt{r_e n}$, with $n$ the number density and $r_e$ the classical electron radius, has been derived. We address the question whether the corresponding longer screening range obtained from the classical proton radius is physically relevant, and the answer is affirmative. Starting from the Darwin Lagrangian it is nontrivial to find the Darwin Hamiltonian of a macroscopic system. For a homogeneous system we resolve the difficulty by temporarily approximating the particle number density by a smooth constant density. This leads to a Yukawa-type screened vector potential. The nontrivial problem of finding the corresponding, divergence-free, Coulomb gauge version is solved.
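
    In the notation of the abstract, $\lambda_e = 1/\sqrt{r_e n}$ with $r_e = e^2/(m_e c^2)$; replacing the classical electron radius by the classical proton radius $r_p = r_e\, m_e/m_p$ gives $\lambda_p = \lambda_e \sqrt{m_p/m_e} \approx 43\,\lambda_e$, which is the longer screening range whose physical relevance the paper argues for.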

  13. The multi-step phosphorelay mechanism of unorthodox two-component systems in E. coli realizes ultrasensitivity to stimuli while maintaining robustness to noises.

    Science.gov (United States)

    Kim, Jeong-Rae; Cho, Kwang-Hyun

    2006-12-01

    E. coli has two-component systems composed of histidine kinase proteins and response regulator proteins. For a given extracellular stimulus, a histidine kinase senses the stimulus, autophosphorylates and then passes the phosphates to the cognate response regulators. The histidine kinase in an orthodox two-component system has only one histidine domain where the autophosphorylation occurs, but a histidine kinase in some unusual two-component systems (unorthodox two-component systems) has two histidine domains and one aspartate domain. So, the unorthodox two-component systems have more complex phosphorelay mechanisms than orthodox two-component systems. In general, the two-component systems are required to promptly respond to external stimuli for survival of E. coli. In this respect, the complex multi-step phosphorelay mechanism seems to be disadvantageous, but there are several unorthodox two-component systems in E. coli. In this paper, we investigate the reason why such unorthodox two-component systems are present in E. coli. For this purpose, we have developed simplified mathematical models of both orthodox and unorthodox two-component systems and analyzed their dynamical characteristics through extensive computer simulations. We have finally revealed that the unorthodox two-component systems realize ultrasensitive responses to external stimuli and also more robust responses to noises than the orthodox two-component systems.

  14. Histidine Phosphotransfer Proteins in Fungal Two-Component Signal Transduction Pathways

    OpenAIRE

    2013-01-01

    The histidine phosphotransfer (HPt) protein Ypd1 is an important participant in the Saccharomyces cerevisiae multistep two-component signal transduction pathway and, unlike the expanded histidine kinase gene family, is encoded by a single gene in nearly all model and pathogenic fungi. Ypd1 is essential for viability in both S. cerevisiae and in Cryptococcus neoformans. These and other aspects of Ypd1 biology, combined with the availability of structural and mutational data in S. cerevisiae, s...

  15. A Possible Two-Component Structure of the Non-Perturbative Pomeron

    CERN Document Server

    Gauron, P; Gauron, Pierre; Nicolescu, Basarab

    2000-01-01

    We propose a QCD-inspired two-component Pomeron form which gives an excellent description of the proton-proton, $\pi$-proton, kaon-proton, $\gamma$-proton and $\gamma\gamma$ total cross sections. Our fit has a better $\chi^2$/dof for a smaller number of parameters as compared with the PDG fit. Our 2-Pomeron form is fully compatible with weak Regge exchange-degeneracy, universality, Regge factorization and the generalized vector dominance model.

  16. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods that are based on non-equilibrium thermodynamics or algebraic correlations or principles of the artificial neural network. The book will serve as single complete reference to understand the theoretical derivations of thermodiffusion models and its application to different types of multi-component mixtures. An exhaustive discussion of these is used to give a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  17. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of state, Influence Parameter. In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed a simplified gradient theory (SGT) model and used it to predict the surface tensions of 34 binary mixtures with an overall average absolute deviation of 3.46%. The results show good agreement between the predicted and experimental surface tensions. Next, the SGT model was applied to correlate surface tensions of binary mixtures containing alcohols, water or/and glycerol...

  18. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.;

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even...... studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved....

  19. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Nsiri Benayad

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, and the PSNR reached 3 dB.

  20. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies.

    Science.gov (United States)

    Lin, Lin; Chan, Cliburn; West, Mike

    2016-01-01

    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets.

  1. Volumetric Properties of Chloroalkanes + Amines Mixtures: Theoretical Analysis Using the ERAS-Model

    Science.gov (United States)

    Tôrres, R. B.; Hoga, H. E.; Magalhães, J. G.; Volpe, P. L. O.

    2009-08-01

    In this study, experimental data of excess molar volumes of {dichloromethane (DCM), or trichloromethane (TCM) + n-butylamine (n-BA), or +s-butylamine (s-BA), or +t-butylamine (t-BA), or +diethylamine (DEA), or +triethylamine (TEA)} mixtures as a function of composition have been used to test the applicability of the extended real associated solution model (ERAS-Model). The values of the excess molar volume were negative for (DCM + t-BA, or +DEA, or +TEA and TCM + n-BA, or +s-BA, or +DEA, or +TEA) mixtures and present sigmoid curves for (DCM + n-BA, or +s-BA) mixtures over the complete mole-fraction range. The agreement between theoretical and experimental results is discussed in terms of cross-association between the components present in the mixtures.

  2. Kinetic Modeling of Gasoline Surrogate Components and Mixtures under Engine Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Mehl, M; Pitz, W J; Westbrook, C K; Curran, H J

    2010-01-11

    Real fuels are complex mixtures of thousands of hydrocarbon compounds including linear and branched paraffins, naphthenes, olefins and aromatics. It is generally agreed that their behavior can be effectively reproduced by simpler fuel surrogates containing a limited number of components. In this work, an improved version of the kinetic model by the authors is used to analyze the combustion behavior of several components relevant to gasoline surrogate formulation. Particular attention is devoted to linear and branched saturated hydrocarbons (PRF mixtures), olefins (1-hexene) and aromatics (toluene). Model predictions for pure components, binary mixtures and multicomponent gasoline surrogates are compared with recent experimental information collected in rapid compression machine, shock tube and jet stirred reactors covering a wide range of conditions pertinent to internal combustion engines (3-50 atm, 650-1200K, stoichiometric fuel/air mixtures). Simulation results are discussed focusing attention on the mixing effects of the fuel components.

  3. Some covariance models based on normal scale mixtures

    CERN Document Server

    Schlather, Martin

    2011-01-01

    Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.

  4. Mixture Models for the Analysis of Repeated Count Data.

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Böckenholt, U

    1995-01-01

    Repeated count data showing overdispersion are commonly analysed by using a Poisson model with a varying intensity parameter, resulting in a mixed model. A mixed model with a gamma distribution for the Poisson parameter does not adequately fit a data set on 721 children's spelling errors. An
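
    For context (a standard result, not necessarily the precise alternative adopted in the study): if $y \mid \lambda \sim \mathrm{Poisson}(\lambda)$ and $\lambda \sim \mathrm{Gamma}(k, \theta)$ with shape $k$ and scale $\theta$, then marginally $y$ is negative binomial with mean $\mu = k\theta$ and variance $\mu + \mu^2/k > \mu$, which is how a gamma-mixed Poisson accommodates overdispersion in repeated counts.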

  5. Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity

    Science.gov (United States)

    2011-01-01

    Modeling the Thermodynamic and Transport Properties of Decahydronaphthalene/Propane Mixtures: Phase Equilibria, Density, and Viscosity. Keywords: phase equilibria; modified Sanchez-Lacombe equation of state.

  6. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of moments of the mixture's dimensionless concentration distribution function (DCDF is found. The time behavior of the DCDF is recovered with successive estimations of scaled gamma distributions using the moments time data.
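
    A minimal sketch of the moment-matching step (standard method of moments; the paper's scaled-gamma parameterization may differ): given the first two moments $m_1(t)$ and $m_2(t)$ of the DCDF, a gamma density with shape $k = m_1^2/(m_2 - m_1^2)$ and scale $\theta = (m_2 - m_1^2)/m_1$ reproduces them exactly, so integrating the moment dynamics in time yields a time-evolving gamma approximation of the distribution.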

  7. A mixture model for the joint analysis of latent developmental trajectories and survival

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Hout, A. van den

    2011-01-01

    A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous time survival data, and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in

  9. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  10. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a mean to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable to the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
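
    Schematically (generic notation, not necessarily the authors' symbols), the real-system output is represented as $y(x) = \sum_{k=1}^{K} w_k(x)\, f_k(x, \theta_k) + \delta(x) + \varepsilon$, where $f_k$ are the available computer models with calibration parameters $\theta_k$, the input-dependent weights satisfy $w_k(x) \ge 0$ and $\sum_k w_k(x) = 1$, $\delta(x)$ is a possible discrepancy term, and $\varepsilon$ is observation noise; all unknowns are inferred jointly in the Bayesian framework.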

  11. Trapping of two-component matter-wave solitons by mismatched optical lattices

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Z.; Law, K.J.H. [Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-4515 (United States); Kevrekidis, P.G. [Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-4515 (United States)], E-mail: kevrekid@gmail.com; Malomed, B.A. [Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978 (Israel)

    2008-05-26

    We consider a one-dimensional model of a two-component Bose-Einstein condensate in the presence of periodic external potentials of opposite signs, acting on the two species. The interaction between the species is attractive, while intra-species interactions may be attractive too [the system of the bright-bright (BB) type], or of opposite signs in the two components [the gap-bright (GB) type]. We identify the existence and stability domains for soliton complexes of the BB and GB types. The evolution of unstable solitons leads to the establishment of oscillatory states. The increase of the strength of the nonlinear attraction between the species results in symbiotic stabilization of the complexes, despite the fact that one component is centered around a local maximum of the respective periodic potential.

  12. Structural insight into partner specificity and phosphoryl transfer in two-component signal transduction.

    Science.gov (United States)

    Casino, Patricia; Rubio, Vicente; Marina, Alberto

    2009-10-16

    The chief mechanism used by bacteria for sensing their environment is based on two conserved proteins: a sensor histidine kinase (HK) and an effector response regulator (RR). The signal transduction process involves highly conserved domains of both proteins that mediate autokinase, phosphotransfer, and phosphatase activities whose output is a finely tuned RR phosphorylation level. Here, we report the structure of the complex between the entire cytoplasmic portion of Thermotoga maritima class I HK853 and its cognate, RR468, as well as the structure of the isolated RR468, both free and BeF(3)(-) bound. Our results provide insight into partner specificity in two-component systems, recognition of the phosphorylation state of each partner, and the catalytic mechanism of the phosphatase reaction. Biochemical analysis shows that the HK853-catalyzed autokinase reaction proceeds by a cis autophosphorylation mechanism within the HK subunit. The results suggest a model for the signal transduction mechanism in two-component systems.

  13. A hydrodynamic scheme for two-component winds from hot stars

    CERN Document Server

    Votruba, V; Kubát, J; Rätzel, D

    2007-01-01

    We have developed a time-dependent two-component hydrodynamics code to simulate radiatively-driven stellar winds from hot stars. We use a time-explicit van Leer scheme to solve the hydrodynamic equations of a two-component stellar wind. Dynamical friction due to Coulomb collisions between the passive bulk plasma and the line-scattering ions is treated by a time-implicit, semi-analytic method using a polynomial fit to the Chandrasekhar function. This gives stable results despite the stiffness of the problem. This method was applied to model stars with winds that are both poorly and well-coupled. While for the former case we reproduce the mCAK solution, for the latter case our solution leads to wind decoupling.

  14. Structure-reactivity modeling using mixture-based representation of chemical reactions

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-07-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between the descriptors of products and reactants. This reaction representation does not need explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimation of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) labeling of the reaction center.
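
    A minimal sketch of the mixture-based representation, using hypothetical fragment-count descriptors in place of SiRMS:

      import numpy as np

      def reaction_descriptor(reactant_descs, product_descs, mode="difference"):
          """Combine per-molecule descriptor vectors into one reaction vector."""
          r = np.sum(reactant_descs, axis=0)   # descriptor of the reactant mixture
          p = np.sum(product_descs, axis=0)    # descriptor of the product mixture
          if mode == "difference":
              return p - r                     # products minus reactants
          return np.concatenate([r, p])        # concatenated representation

      # Toy example: five hypothetical fragment counts per molecule
      reactants = [np.array([2, 0, 1, 0, 3]), np.array([0, 1, 0, 0, 1])]
      products = [np.array([1, 1, 1, 1, 2]), np.array([1, 0, 0, 0, 2])]
      print(reaction_descriptor(reactants, products, "difference"))
      print(reaction_descriptor(reactants, products, "concatenate"))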

  15. Microstructure modeling and virtual test of asphalt mixture based on three-dimensional discrete element method

    Institute of Scientific and Technical Information of China (English)

    马涛; 张德育; 张垚; 赵永利; 黄晓明

    2016-01-01

    The objective of this work is to model the microstructure of asphalt mixture and build virtual test for asphalt mixture by using Particle Flow Code in three dimensions (PFC3D) based on three-dimensional discrete element method. A randomly generating algorithm was proposed to capture the three-dimensional irregular shape of coarse aggregate. And then, modeling algorithm and method for graded aggregates were built. Based on the combination of modeling of coarse aggregates, asphalt mastic and air voids, three-dimensional virtual sample of asphalt mixture was modeled by using PFC3D. Virtual tests for penetration test of aggregate and uniaxial creep test of asphalt mixture were built and conducted by using PFC3D. By comparison of the testing results between virtual tests and actual laboratory tests, the validity of the microstructure modeling and virtual test built in this study was verified. Additionally, compared with laboratory test, the virtual test is easier to conduct and has less variability. It is proved that microstructure modeling and virtual test based on three-dimensional discrete element method is a promising way to conduct research of asphalt mixture.

  16. The CpxRA two-component system is essential for Citrobacter rodentium virulence.

    Science.gov (United States)

    Thomassin, Jenny-Lee; Giannakopoulou, Natalia; Zhu, Lei; Gross, Jeremy; Salmon, Kristiana; Leclerc, Jean-Mathieu; Daigle, France; Le Moual, Hervé; Gruenheid, Samantha

    2015-05-01

    Citrobacter rodentium is a murine intestinal pathogen used as a model for the foodborne human pathogens enterohemorrhagic Escherichia coli and enteropathogenic E. coli. During infection, these pathogens use two-component signal transduction systems to detect and adapt to changing environmental conditions. In E. coli, the CpxRA two-component signal transduction system responds to envelope stress by modulating the expression of a myriad of genes. Quantitative real-time PCR showed that cpxRA was expressed in the colon of C57BL/6J mice infected with C. rodentium. To determine whether CpxRA plays a role during C. rodentium infection, a cpxRA deletion strain was generated and found to have a colonization defect during infection. This defect was independent of an altered growth rate or a defective type III secretion system, and single-copy chromosomal complementation of cpxRA restored virulence. The C. rodentium strains were then tested in C3H/HeJ mice, a lethal intestinal infection model. Mice infected with the ΔcpxRA strain survived infection, whereas mice infected with the wild-type or complemented strains succumbed to infection. Furthermore, we found that the cpxRA expression level was higher during early infection than at a later time point. Taken together, these data demonstrate that the CpxRA two-component signal transduction system is essential for the in vivo virulence of C. rodentium. In addition, these data suggest that fine-tuned cpxRA expression is important for infection. This is the first study that identifies a C. rodentium two-component transduction system required for pathogenesis. This study further indicates that CpxRA is an interesting target for therapeutics against enteric pathogens.

  17. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties in the commercial software PFC3D, including the slip model, linear stiffness-contact model, and contact bond model. A macro-scale Burger's model was first established and the input parameters of Burger's contact model were calibrated by adjusting them so that the model fitted the experimental data for the complex modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted ...

  18. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  19. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  20. Treatment of nonignorable missing data when modeling unobserved heterogeneity with finite mixture models.

    Science.gov (United States)

    Lehmann, Thomas; Schlattmann, Peter

    2017-01-01

    Multiple imputation has become a widely accepted technique to deal with the problem of incomplete data. Typically, imputation of missing values and the statistical analysis are performed separately. Therefore, the imputation model has to be consistent with the analysis model. If the data are analyzed with a mixture model, the parameter estimates are usually obtained iteratively. Thus, if the data are missing not at random, parameter estimation and treatment of missingness should be combined. We solve both problems by simultaneously imputing values using the data augmentation method and estimating parameters using the EM algorithm. This iterative procedure ensures that the missing values are properly imputed given the current parameter estimates. Properties of the parameter estimates were investigated in a simulation study. The results are illustrated using data from the National Health and Nutrition Examination Survey.

  1. A class-adaptive spatially variant mixture model for image segmentation.

    Science.gov (United States)

    Nikou, Christophoros; Galatsanos, Nikolaos P; Likas, Aristidis C

    2007-04-01

    We propose a new approach for image segmentation based on a hierarchical and spatially variant mixture model. According to this model, the pixel labels are random variables and a smoothness prior is imposed on them. The main novelty of this work is a new family of smoothness priors for the label probabilities in spatially variant mixture models. These Gauss-Markov random field-based priors allow all their parameters to be estimated in closed form via the maximum a posteriori (MAP) estimation using the expectation-maximization methodology. Thus, it is possible to introduce priors with multiple parameters that adapt to different aspects of the data. Numerical experiments are presented where the proposed MAP algorithms were tested in various image segmentation scenarios. These experiments demonstrate that the proposed segmentation scheme compares favorably to both standard and previous spatially constrained mixture model-based segmentation.

  2. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  3. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng

    2004-12-01

    We propose a self-splitting Gaussian mixture learning (SGML) algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM) based on a self-splitting validity measure, the Bayesian information criterion (BIC). It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of the SGML for model-based clustering and for automatically determining the model complexity of the speaker GMMs for speaker identification.
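
    A minimal sketch of the BIC-guided growth idea, using scikit-learn's GaussianMixture rather than the authors' SGML implementation: components are added one at a time and growth stops once the BIC no longer improves. The splitting rule of SGML itself is not reproduced here.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def grow_gmm_by_bic(X, max_components=10, random_state=0):
            # Start from one component and keep adding components while BIC improves.
            best = GaussianMixture(n_components=1, random_state=random_state).fit(X)
            best_bic = best.bic(X)
            for k in range(2, max_components + 1):
                cand = GaussianMixture(n_components=k, random_state=random_state).fit(X)
                cand_bic = cand.bic(X)
                if cand_bic >= best_bic:
                    break                        # no improvement: stop growing
                best, best_bic = cand, cand_bic
            return best

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
        print(grow_gmm_by_bic(X).n_components)   # typically finds 2 components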

  4. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  5. Mixture Based Outlier Filtration

    Directory of Open Access Journals (Sweden)

    P. Pecherková

    2006-01-01

    The success or failure of adaptive control algorithms – especially those designed using the Linear Quadratic Gaussian criterion – depends on the quality of the process data used for model identification. One of the most harmful types of process data corruption is outliers, i.e. ‘wrong data’ lying far away from the range of the real data. The presence of outliers in the data negatively affects the estimation of the system dynamics, and this effect is magnified when the outliers are grouped into blocks. In this paper, we propose an algorithm for outlier detection and removal. It is based on modelling the corrupted data by a two-component probabilistic mixture: the first component of the mixture models uncorrupted process data, while the second models outliers. When the outlier component is detected to be active, a prediction from the uncorrupted-data component is computed and used as a reconstruction of the observed data. The resulting reconstruction filter is compared to standard methods on simulated and real data. The filter exhibits excellent properties, especially in the case of blocks of outliers.
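
    The following toy sketch illustrates the two-component idea on scalar data: a two-component Gaussian mixture is fitted, the wider component is taken to represent outliers, and flagged samples are replaced. The paper reconstructs observations with a prediction from the uncorrupted-data model; here the clean-component mean is used as a crude stand-in, and all thresholds and values are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def filter_outliers(y, threshold=0.5):
            # Fit a two-component mixture; treat the wider component as outliers and
            # replace flagged samples by the mean of the normal component.
            y = np.asarray(y, dtype=float).reshape(-1, 1)
            gmm = GaussianMixture(n_components=2, random_state=0).fit(y)
            outlier_comp = int(np.argmax(gmm.covariances_.ravel()))
            normal_comp = 1 - outlier_comp
            responsibility = gmm.predict_proba(y)[:, outlier_comp]
            cleaned = y.ravel().copy()
            cleaned[responsibility > threshold] = gmm.means_[normal_comp, 0]
            return cleaned

        rng = np.random.default_rng(1)
        data = rng.normal(0.0, 1.0, 300)
        data[50:60] = 40.0                       # a block of outliers
        print(filter_outliers(data)[50:60])      # replaced by roughly the clean mean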

  6. Isothermal (vapour + liquid) equilibrium of (cyclic ethers + chlorohexane) mixtures: Experimental results and SAFT modelling

    Energy Technology Data Exchange (ETDEWEB)

    Bandres, I.; Giner, B.; Lopez, M.C.; Artigas, H. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain); Lafuente, C. [Departamento de Quimica Organica y Quimica Fisica, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)], E-mail: celadi@unizar.es

    2008-08-15

    Experimental data for the isothermal (vapour + liquid) equilibrium of mixtures formed by several cyclic ethers (tetrahydrofuran, tetrahydropyran, 1,3-dioxolane, and 1,4-dioxane) and chlorohexane at temperatures of (298.15 and 328.15) K are presented. The experimental results have been discussed in terms of both the molecular characteristics of the pure compounds and the potential intermolecular interactions between them, using thermodynamic information on the mixtures obtained earlier. Furthermore, the influence of temperature on the (vapour + liquid) equilibrium of these mixtures has been explored and discussed. Transferable parameters of the SAFT-VR approach together with standard combining rules have been used to model the phase equilibrium of the mixtures, providing a description of their (vapour + liquid) equilibrium that is in excellent agreement with the experimental data.

  7. Modeling dependence based on mixture copulas and its application in risk management

    Institute of Scientific and Technical Information of China (English)

    OUYANG Zi-sheng; LIAO Hui; YANG Xiang-qun

    2009-01-01

    This paper is concerned with the statistical modeling of the dependence structure of multivariate financial data using copulas, and with the application of copula functions in VaR valuation. After introducing the pure copula method and the maximum and minimum mixture copula method, the authors present a new algorithm based on more generalized mixture copula functions and the dependence measure, and apply the method to a portfolio of the Shanghai stock composite index and the Shenzhen stock component index. Comparing the results from the various methods, one finds that the mixture copula method is better than the pure Gaussian copula method and the maximum and minimum mixture copula method at different VaR levels.

  8. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb, 50 ppb v/v) of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture and for a limited number of basic mixture components.
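
    Interpretation by principal component analysis, as mentioned above, can be sketched in a few lines; the sensor response matrix below is random placeholder data standing in for the six-sensor measurements.

        import numpy as np
        from sklearn.decomposition import PCA

        # rows = measurements of the model reference mixtures,
        # columns = responses of the six semiconductor sensors (placeholder values)
        rng = np.random.default_rng(0)
        responses = rng.normal(size=(30, 6))

        scores = PCA(n_components=2).fit_transform(responses)
        print(scores[:3])   # first two principal-component scores of three samples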

  9. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.

  10. Motif Yggdrasil: sampling sequence motifs from a tree mixture model.

    Science.gov (United States)

    Andersson, Samuel A; Lagergren, Jens

    2007-06-01

    In phylogenetic footprinting, putative regulatory elements are found in upstream regions of orthologous genes by searching for common motifs. Motifs in different upstream sequences are subject to mutations along the edges of the corresponding phylogenetic tree; consequently, taking advantage of the tree in the motif search is an appealing idea. We describe the Motif Yggdrasil sampler, the first Gibbs sampler based on a general tree that uses unaligned sequences. Previous tree-based Gibbs samplers have assumed a star-shaped tree or partially aligned upstream regions. We give a probabilistic model (MY model) describing upstream sequences with regulatory elements and build a Gibbs sampler with respect to this model. The model allows toggling, i.e., the restriction of a position to a subset of nucleotides, but requires neither aligned sequences nor edge lengths, which may be difficult to come by. We apply the collapsing technique to eliminate the need to sample nuisance parameters, and give a derivation of the predictive update formula. We show that the MY model improves the modeling of difficult motif instances and that the use of the tree achieves a substantial increase in the nucleotide-level correlation coefficient both for synthetic data and for 37 bacterial lexA genes. We investigate the sensitivity to errors in the tree and show that, even with random trees, the MY sampler still performs similarly to the original version.

  11. Bioinformatics analysis of two-component regulatory systems in Staphylococcus epidermidis

    Institute of Scientific and Technical Information of China (English)

    QIN Zhiqiang; ZHONG Yang; ZHANG Jian; HE Youyu; WU Yang; JIANG Juan; CHEN Jiemin; LUO Xiaomin; QU Di

    2004-01-01

    Sixteen pairs of two-component regulatory systems are identified by bioinformatics analysis in the genome of the Staphylococcus epidermidis ATCC12228 strain, which was newly sequenced by our laboratory for Medical Molecular Virology and the Chinese National Human Genome Center at Shanghai. Comparative analysis of the two-component regulatory systems in S. epidermidis with those of S. aureus and Bacillus subtilis shows that these systems may regulate some important biological functions in S. epidermidis, e.g. growth, biofilm formation, and expression of virulence factors. Two conserved domains, i.e. the HATPase_c and REC domains, are found in all 16 pairs of two-component proteins. Homology modelling analysis indicates that there are 4 similar HATPase_c domain structures of histidine kinases and 13 similar REC domain structures of response regulators, and that there is one AMP-PNP binding pocket in the HATPase_c domain and three active aspartate residues in the REC domain. A preliminary experiment reveals that the bioinformatics analysis of the conserved domain structures of the two-component regulatory systems in S. epidermidis may provide useful information for the discovery of potential drug targets.

  12. Solvable model of a trapped mixture of Bose-Einstein condensates

    Science.gov (United States)

    Klaiman, Shachar; Streltsov, Alexej I.; Alon, Ofir E.

    2017-01-01

    A mixture of two kinds of identical bosons held in a harmonic potential and interacting by harmonic particle-particle interactions is discussed. This is an exactly-solvable model of a mixture of two trapped Bose-Einstein condensates which allows us to examine analytically various properties. Generalizing the treatments in Cohen and Lee (1985) and Osadchii and Muraktanov (1991), closed form expressions for the mixture's frequencies and ground-state energy and wave-function, and the lowest-order densities are obtained and analyzed for attractive and repulsive intra-species and inter-species particle-particle interactions. A particular mean-field solution of the corresponding Gross-Pitaevskii theory is also found analytically. This allows us to compare properties of the mixture at the exact, many-body and mean-field levels, both for finite systems and at the limit of an infinite number of particles. We discuss the renormalization of the mixture's frequencies at the mean-field level. Mainly, we hereby prove that the exact ground-state energy per particle and lowest-order intra-species and inter-species densities per particle converge at the infinite-particle limit (when the products of the number of particles times the intra-species and inter-species interaction strengths are held fixed) to the results of the Gross-Pitaevskii theory for the mixture. Finally and on the other end, we use the mixture's and each species' center-of-mass operators to show that the Gross-Pitaevskii theory for mixtures is unable to describe the variance of many-particle operators in the mixture, even in the infinite-particle limit. The variances are computed both in position and momentum space and the respective uncertainty products compared and discussed. The role of the center-of-mass separability and, for generically trapped mixtures, inseparability is elucidated when contrasting the variance at the many-body and mean-field levels in a mixture. Our analytical results show that many

  13. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for the numerical solution. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones within a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography reaction model are coupled. The bed load transport rate and the suspended load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing towards wave propagation, indicating an onshore migration tendency. The computational results also show that

  14. Use of a Modified Vector Model for Odor Intensity Prediction of Odorant Mixtures

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2015-03-01

    Odor intensity (OI) indicates the perceived intensity of an odor by the human nose, and it is usually rated by specialized assessors. In order to avoid restrictions on assessor participation in OI evaluations, the Vector Model, which calculates the OI of a mixture as the vector sum of its unmixed components' odor intensities, was modified. Based on a detected linear relation between the OI and the logarithm of the odor activity value (OAV), a ratio between the chemical concentration and the odor threshold of an individual odorant, the OI of each unmixed component was replaced with the logarithm of its OAV. The interaction coefficient (cos α), which represents the degree of interaction between two constituents, was also measured in a simplified way. Through a series of odor intensity matching tests for binary, ternary and quaternary odor mixtures, the modified Vector Model provided an effective way of relating the OI of an odor mixture to the lnOAV values of its constituents. Thus, the OI of an odor mixture could be directly predicted by employing the modified Vector Model after the usual quantitative analysis. Furthermore, the modified Vector Model was considered applicable to odor mixtures consisting of odorants with the same chemical functional groups and similar molecular structures.
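
    A sketch of the modified Vector Model for a binary mixture follows: each unmixed intensity is represented by ln(OAV) and the two terms are combined as a vector sum with the interaction coefficient cos α. The OAV values below are hypothetical, and the OI scale calibration used in the paper is not reproduced.

        import math

        def binary_mixture_oi(oav_1, oav_2, cos_alpha):
            # Vector sum of ln(OAV) terms with interaction coefficient cos_alpha.
            a, b = math.log(oav_1), math.log(oav_2)
            return math.sqrt(a * a + b * b + 2.0 * a * b * cos_alpha)

        # hypothetical odorants: OAV = concentration / odour threshold
        print(binary_mixture_oi(oav_1=120.0, oav_2=45.0, cos_alpha=0.4))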

  15. A homogenized constrained mixture (and mechanical analog) model for growth and remodeling of soft tissue.

    Science.gov (United States)

    Cyron, C J; Aydin, R C; Humphrey, J D

    2016-12-01

    Most mathematical models of the growth and remodeling of load-bearing soft tissues are based on one of two major approaches: a kinematic theory that specifies an evolution equation for the stress-free configuration of the tissue as a whole or a constrained mixture theory that specifies rates of mass production and removal of individual constituents within stressed configurations. The former is popular because of its conceptual simplicity, but relies largely on heuristic definitions of growth; the latter is based on biologically motivated micromechanical models, but suffers from higher computational costs due to the need to track all past configurations. In this paper, we present a temporally homogenized constrained mixture model that combines advantages of both classical approaches, namely a biologically motivated micromechanical foundation, a simple computational implementation, and low computational cost. As illustrative examples, we show that this approach describes well both cell-mediated remodeling of tissue equivalents in vitro and the growth and remodeling of aneurysms in vivo. We also show that this homogenized constrained mixture model suggests an intimate relationship between models of growth and remodeling and viscoelasticity. That is, important aspects of tissue adaptation can be understood in terms of a simple mechanical analog model, a Maxwell fluid (i.e., spring and dashpot in series) in parallel with a "motor element" that represents cell-mediated mechanoregulation of extracellular matrix. This analogy allows a simple implementation of homogenized constrained mixture models within commercially available simulation codes by exploiting available models of viscoelasticity.

  16. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  17. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    CERN Document Server

    Scalas, E

    2006-01-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.

  18. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    Science.gov (United States)

    Scalas, Enrico

    2007-10-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.
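
    A Monte Carlo sketch of the idea, assuming the simplest reading of a mixture of compound Poisson processes: each simulated path follows one component drawn according to the mixture weights, with Poisson tick times and Gaussian log-price jumps. All weights, rates and jump sizes are illustrative, not calibrated to any market.

        import numpy as np

        def simulate_mixture_cpp(horizon=1.0, weights=(0.7, 0.3), rates=(50.0, 200.0),
                                 jump_sd=(0.001, 0.005), seed=0):
            # Draw one component according to the mixture weights, then simulate a
            # compound Poisson path: Poisson number of ticks, uniform tick times,
            # Gaussian log-price jumps.
            rng = np.random.default_rng(seed)
            k = rng.choice(len(weights), p=weights)
            n_ticks = rng.poisson(rates[k] * horizon)
            times = np.sort(rng.uniform(0.0, horizon, n_ticks))
            jumps = rng.normal(0.0, jump_sd[k], n_ticks)
            return times, np.cumsum(jumps)       # tick times and cumulative log-return

        times, log_returns = simulate_mixture_cpp()
        print(len(times), log_returns[-1] if len(log_returns) else 0.0)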

  19. Solvatochromic and Kinetic Response Models in (Ethyl Acetate + Chloroform or Methanol) Solvent Mixtures

    Directory of Open Access Journals (Sweden)

    L. R. Vottero

    2000-03-01

    The present work analyzes the solvent effects upon the solvatochromic response models for a set of chemical probes and the kinetic response models for an aromatic nucleophilic substitution reaction, in binary mixtures in which both pure components are able to form intersolvent complexes by hydrogen bonding.

  20. Detecting Gustatory–Olfactory Flavor Mixtures: Models of Probability Summation

    Science.gov (United States)

    Veldhuizen, Maria G.; Shepard, Timothy G.; Shavit, Adam Y.

    2012-01-01

    Odorants and flavorants typically contain many components. It is generally easier to detect multicomponent stimuli than to detect a single component, through either neural integration or probability summation (PS) (or both). PS assumes that the sensory effects of 2 (or more) stimulus components (e.g., gustatory and olfactory components of a flavorant) are detected in statistically independent channels, that each channel makes a separate decision whether a component is detected, and that the behavioral response depends solely on the separate decisions. Models of PS traditionally assume high thresholds for detecting each component, noise being irrelevant. The core assumptions may be adapted, however, to signal-detection theory, where noise limits detection. The present article derives predictions of high-threshold and signal-detection models of independent-decision PS in detecting gustatory–olfactory flavorants, comparing predictions in yes/no and 2-alternative forced-choice tasks using blocked and intermixed stimulus designs. The models also extend to measures of response times to suprathreshold flavorants. Predictions derived from high-threshold and signal-detection models differ markedly. Available empirical evidence on gustatory–olfactory flavor detection suggests that neither the high-threshold nor the signal-detection versions of PS can readily account for the results, which likely reflect neural integration in the flavor system. PMID:22075720
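
    For the high-threshold, independent-decision version of probability summation, the mixture detection probability has a simple closed form, sketched below; guessing corrections and the signal-detection variant discussed in the paper are omitted.

        def ps_high_threshold(p_taste, p_smell):
            # Detect the mixture if either the gustatory or the olfactory channel
            # independently detects its component.
            return p_taste + p_smell - p_taste * p_smell

        # e.g. each component detected 40% of the time when presented alone
        print(ps_high_threshold(0.4, 0.4))   # 0.64 for the gustatory-olfactory mixture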

  1. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  2. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx.

    Science.gov (United States)

    Grimm, Kevin J; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use.

  3. Predictions of Phase Distribution in Liquid-Liquid Two-Component Flow

    Science.gov (United States)

    Wang, Xia; Sun, Xiaodong; Duval, Walter M.

    2011-06-01

    Ground-based liquid-liquid two-component flow can be used to study reduced-gravity gas-liquid two-phase flows provided that the two liquids are immiscible with similar densities. In this paper, we present a numerical study of phase distribution in liquid-liquid two-component flows using the Eulerian two-fluid model in FLUENT, together with a one-group interfacial area transport equation (IATE) that takes into account fluid particle interactions, such as coalescence and disintegration. This modeling approach is expected to dynamically capture changes in the interfacial structure. We apply the FLUENT-IATE model to a water-Therminol 59® two-component vertical flow in a 25-mm inner diameter pipe, where the two liquids are immiscible with similar densities (3% difference at 20°C). This study covers bubbly (drop) flow and bubbly-to-slug flow transition regimes with area-averaged void (drop) fractions from 3 to 30%. Comparisons of the numerical results with the experimental data indicate that for bubbly flows, the predictions of the lateral phase distributions using the FLUENT-IATE model are generally more accurate than those using the model without the IATE. In addition, we demonstrate that the coalescence of fluid particles is dominated by wake entrainment and enhanced by increasing either the continuous or dispersed phase velocity. However, the predictions show disagreement with experimental data in some flow conditions for larger void fraction conditions, which fall into the bubbly-to-slug flow transition regime. We conjecture that additional fluid particle interaction mechanisms due to the change of flow regimes are possibly involved.

  4. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    Science.gov (United States)

    2014-06-27

    ... for unsupervised modeling of structured data like text documents, time series, and images. They are especially promising for large datasets, as ... non-convex unsupervised learning problems, frequently yielding poor solutions. While taking the best of multiple runs is possible, this is ...

  5. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2017-02-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid mass conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. Validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, which includes values similar to those of an industrial-scale steel converter. The k-ɛ turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with the staggered SIMPLE algorithm.

  6. A 2D Axisymmetric Mixture Multiphase Model for Bottom Stirring in a BOF Converter

    Science.gov (United States)

    Kruskopf, Ari

    2016-11-01

    A process model for a basic oxygen furnace (BOF) steel converter is in development. The model will take into account all the essential physical and chemical phenomena, while achieving real-time calculation of the process. The complete model will include a 2D axisymmetric turbulent multiphase flow model for the iron melt and argon gas mixture, a steel scrap melting model, and a chemical reaction model. A novel liquid mass conserving mixture multiphase model for a bubbling gas jet is introduced in this paper. The in-house implementation of the model is tested and validated in this article independently from the other parts of the full process model. Validation data comprise three different water models with different volume flow rates of air blown through a regular nozzle and a porous plug. The water models cover a wide range of the dimensionless number R_p, which includes values similar to those of an industrial-scale steel converter. The k-ɛ turbulence model is used with wall functions so that a coarse grid can be utilized. The model calculates a steady-state flow field for the gas/liquid mixture using the control volume method with the staggered SIMPLE algorithm.

  7. A generalized physiologically-based toxicokinetic modeling system for chemical mixtures containing metals

    Directory of Open Access Journals (Sweden)

    Isukapalli Sastry S

    2010-06-01

    Background: Humans are routinely and concurrently exposed to multiple toxic chemicals, including various metals and organics, often at levels that can cause adverse and potentially synergistic effects. However, toxicokinetic modeling studies of exposures to these chemicals are typically performed on a single-chemical basis. Furthermore, the attributes of available models for individual chemicals are commonly estimated specifically for the compound studied. As a result, the available models usually have parameters and even structures that are not consistent or compatible across the range of chemicals of concern. This fact precludes the systematic consideration of synergistic effects, and may also lead to inconsistencies in calculations of co-occurring exposures and corresponding risks. There is a need, therefore, for a consistent modeling framework that would allow the systematic study of cumulative risks from complex mixtures of contaminants. Methods: A Generalized Toxicokinetic Modeling system for Mixtures (GTMM) was developed and evaluated with case studies. The GTMM is physiologically based and uses a consistent, chemical-independent physiological description for integrating widely varying toxicokinetic models. It is modular and can be directly "mapped" to individual toxicokinetic models, while maintaining physiological consistency across different chemicals. Interaction effects of complex mixtures can be directly incorporated into the GTMM. Conclusions: The application of the GTMM to different individual metals and metal compounds showed that it explains available observational data as well as replicates the results from models that have been optimized for individual chemicals. The GTMM also made it feasible to model the toxicokinetics of complex, interacting mixtures of multiple metals and nonmetals in humans, based on available literature information. The GTMM provides a central component in the development of a "source

  8. Theory of phase equilibria for model mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants

    Science.gov (United States)

    Dos Ramos, María Carolina; Blas, Felipe J.

    2007-05-01

    An extension of the SAFT-VR equation of state, the so-called hetero-SAFT approach [Y. Peng, H. Zhao, and C. McCabe, Molec. Phys. 104, 571 (2006)], is used to examine the phase equilibria exhibited by a number of model binary mixtures of n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock surfactants. Despite the increasing recent interest in semifluorinated alkanes (or perfluoroalkylalkane diblock molecules), the phase behaviour of mixtures involving these molecules with n-alkanes or perfluoroalkanes is practically unknown from the experimental point of view. In this work, we use simple molecular models for n-alkanes, perfluoroalkanes and perfluoroalkylalkane diblock molecules to predict, from a molecular perspective, the phase behaviour of selected model mixtures of perfluoroalkylalkanes with n-alkanes and perfluoroalkanes. In particular, we focus our interest on the understanding of the microscopic conditions that control the liquid-liquid separation and the stabilization of these mixtures. n-Alkanes and perfluoroalkanes are modelled as tangentially bonded monomer segments with molecular parameters taken from the literature. The perfluoroalkylalkane diblock molecules are modelled as heterosegmented diblock chains, with parameters for the alkyl and perfluoroalkyl segments developed in earlier work. This simple approach, which was proposed in previous work [P. Morgado, H. Zhao, F. J. Blas, C. McCabe, L. P. N. Rebelo, and E. J. M. Filipe, J. Phys. Chem. B, 111, 2856], is now extended to describe model n-alkane (or perfluoroalkane) + perfluoroalkylalkane binary mixtures. We have obtained the phase behaviour of different mixtures and studied the effect of the molecular weight of n-alkanes and perfluoroalkanes on the type of phase behaviour observed in these mixtures. We have also analysed the effect of the number of alkyl and perfluoroalkyl chemical groups in the surfactant molecule on the phase behaviour. In addition to the usual vapour-liquid phase

  9. EXISTENCE AND REGULARITY OF SOLUTIONS TO MODEL FOR LIQUID MIXTURE OF 3HE-4HE

    Institute of Scientific and Technical Information of China (English)

    Luo Hong; Pu Zhilin

    2012-01-01

    Existence and regularity of solutions to a model for the liquid mixture of 3He-4He are considered in this paper. First, it is proved that this system possesses a unique global weak solution in H^1(Ω, C × R) by using the Galerkin method. Secondly, by using an iteration procedure and regularity estimates for the linear semigroups, it is proved that the model for the liquid mixture of 3He-4He has a unique solution in H^k(Ω, C × R) for all k ≥ 1.

  10. Non-racemic mixture model: a computational approach.

    Science.gov (United States)

    Polanco, Carlos; Buhse, Thomas

    2017-01-01

    The behavior of a slight chiral bias in favor of l-amino acids over d-amino acids was studied in an evolutionary mathematical model generating mixed chiral peptide hexamers. The simulations aimed to reproduce a very generalized prebiotic scenario involving a specified couple of amino acid enantiomers and a possible asymmetric amplification through autocatalytic peptide self-replication while forming small multimers of a defined length. Our simplified model allowed the observation of a small ascending but not conclusive tendency in the l-amino acid over the d-amino acid profile for the resulting mixed chiral hexamers in computer simulations of 100 peptide generations. This simulation was carried out by changing the chiral bias from 1% to 3%, in three stages of 15, 50 and 100 generations to observe any alteration that could mean a drastic change in behavior. So far, our simulations lead to the assumption that under the exposure of very slight non-racemic conditions, a significant bias between l- and d-amino acids, as present in our biosphere, was unlikely generated under prebiotic conditions if autocatalytic peptide self-replication was the main or the only driving force of chiral auto-amplification.

  11. A multiscale transport model for binary Lennard Jones mixtures in slit nanopores

    Science.gov (United States)

    Bhadauria, Ravi; Aluru, N. R.

    2016-11-01

    We present a quasi-continuum multiscale hydrodynamic transport model for one dimensional isothermal, non-reacting binary mixture confined in slit shaped nanochannels. We focus on species transport equation that includes the viscous dissipation and interspecies diffusion term of the Maxwell-Stefan form. Partial viscosity variation is modeled by van der Waals one fluid approximation and the Local Average Density Method. We use friction boundary conditions where the wall-species friction parameter is computed using a novel species specific Generalized Langevin Equation model. The transport model accuracy is tested by predicting the velocity profiles of Lennard-Jones (LJ) methane-hydrogen and LJ methane-argon mixtures in graphene slit channels of different width. The resultant slip length from the continuum model is found to be invariant of channel width for a fixed mixture molar concentration. The mixtures considered are observed to behave as single species pseudo fluid, with the friction parameter displaying a linear dependence on the molar composition. The proposed model yields atomistic level accuracy with continuum scale efficiency.

  12. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data.

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz

    2016-09-01

    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers the finite mixture of NRCMs that combine features of NRCMs with the idea of finite mixture (or latent class) models. The efficacy of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. Effectiveness of this model to work under real data analytic conditions was examined by executing a Monte Carlo simulation study. The simulation study was carried out using an R routine specifically developed for the purpose of this study. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.

  13. EEG Signal Classification With Super-Dirichlet Mixture Model

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Tan, Zheng-Hua; Prasad, Swati

    2012-01-01

    Classification of the Electroencephalogram (EEG) signal is a challenging task in brain-computer interface systems. The marginalized discrete wavelet transform (mDWT) coefficients extracted from the EEG signals have been frequently used in research since they reveal features related to the ... by the Dirichlet distribution, and the distribution of the mDWT coefficients from more than one channel is described by a super-Dirichlet mixture model (SDMM). The Fisher ratio and the generalization error estimation are applied to select relevant channels, respectively. Compared to the state-of-the-art support vector machine (SVM) based classifier, the SDMM based classifier performs more stably and shows a promising improvement, with both channel selection strategies.

  14. Land Cover Classification for Polarimetric SAR Images Based on Mixture Models

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2014-04-01

    In this paper, two mixture models are proposed for modeling heterogeneous regions in single-look and multi-look polarimetric SAR images, along with their corresponding maximum likelihood classifiers for land cover classification. The classical Gaussian and Wishart models are suitable for modeling scattering vectors and covariance matrices from homogeneous regions, while their performance deteriorates for regions that are heterogeneous. By comparison, the proposed mixture models reduce the modeling error by expressing the data distribution as a weighted sum of multiple component distributions. For single-look and multi-look polarimetric SAR data, complex Gaussian and complex Wishart components are adopted, respectively. Model parameters are determined by employing the expectation-maximization (EM) algorithm. Two maximum likelihood classifiers are then constructed based on the proposed mixture models. These classifiers are assessed using polarimetric SAR images from the RADARSAT-2 sensor of the Canadian Space Agency (CSA), the AIRSAR sensor of the Jet Propulsion Laboratory (JPL) and the EMISAR sensor of the Technical University of Denmark (DTU). Experimental results demonstrate that the new models fit heterogeneous regions better than the classical models and are especially appropriate for extremely heterogeneous regions, such as urban areas. The overall accuracy of land cover classification is also improved due to the more refined modeling.
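
    A simplified sketch of the per-class mixture classifier: one Gaussian mixture is fitted per land-cover class and a sample is assigned to the class whose mixture gives the highest log-likelihood. Real polarimetric SAR data would use complex Gaussian or Wishart components; real-valued features and random training data are used here purely for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_class_mixtures(X_train, y_train, n_components=3):
            # One Gaussian mixture per class, fitted on that class's samples only.
            return {c: GaussianMixture(n_components=n_components, random_state=0)
                       .fit(X_train[y_train == c])
                    for c in np.unique(y_train)}

        def classify(models, X):
            # Maximum-likelihood rule: pick the class whose mixture scores highest.
            classes = sorted(models)
            log_lik = np.column_stack([models[c].score_samples(X) for c in classes])
            return np.array(classes)[np.argmax(log_lik, axis=1)]

        # toy usage with two synthetic 'classes' of 3-feature samples
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
        y = np.repeat([0, 1], 100)
        print(classify(fit_class_mixtures(X, y), X[:5]))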

  15. Approximation of the breast height diameter distribution of two-cohort stands by mixture models II Goodness-of-fit tests

    Science.gov (United States)

    Rafal Podlaski; Francis .A. Roesch

    2013-01-01

    The goals of this study are (1) to analyse the accuracy of the approximation of empirical distributions of diameter at breast height (dbh) using two-component mixtures of either the Weibull distribution or the gamma distribution in two-cohort stands, and (2) to discuss the procedure of choosing goodness-of-fit tests. The study plots were...

  16. Kinetic model for astaxanthin aggregation in water-methanol mixtures

    Science.gov (United States)

    Giovannetti, Rita; Alibabaei, Leila; Pucciarelli, Filippo

    2009-07-01

    The aggregation of astaxanthin in hydrated methanol was studied kinetically in the temperature range from 10 °C to 50 °C, at different astaxanthin concentrations and solvent compositions. A kinetic model for the formation and transformation of astaxanthin aggregates has been proposed. Spectrophotometric studies showed that monomeric astaxanthin decayed to H-aggregates that afterwards formed J-aggregates when the water content was 50% and the temperature lower than 20 °C; at higher temperatures, very stable J-aggregates were formed directly. The monomer formed very stable H-aggregates when the water content was greater than 60%; in these conditions H-aggregates decayed into J-aggregates only when the temperature was at least 50 °C. Through these findings it was possible to establish that the aggregation reactions took place through a two-step consecutive reaction with first-order kinetic constants, and that the values of these constants depended on the solvent composition and temperature.
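
    The two-step consecutive first-order scheme mentioned above (monomer to H-aggregate to J-aggregate) has a standard closed-form solution, sketched below with arbitrary illustrative rate constants rather than the fitted ones.

        import numpy as np

        def consecutive_first_order(t, m0=1.0, k1=0.05, k2=0.01):
            # Closed-form concentrations for monomer -> H-aggregate -> J-aggregate
            # with first-order rate constants k1 and k2 (k1 != k2 assumed).
            t = np.asarray(t, dtype=float)
            monomer = m0 * np.exp(-k1 * t)
            h_agg = m0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
            j_agg = m0 - monomer - h_agg          # mass balance
            return monomer, h_agg, j_agg

        t = np.linspace(0.0, 300.0, 4)
        for label, conc in zip(("monomer", "H", "J"), consecutive_first_order(t)):
            print(label, np.round(conc, 3))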

  17. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
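
    A minimal sketch of the pattern-mixture idea in Python (the article itself works in SPSS): a dropout-pattern indicator and its interaction with time enter a linear mixed model as fixed effects, with a random intercept per subject. The synthetic long-format data and the column names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # toy long-format data: 40 subjects, up to 4 visits, half drop out early
        rng = np.random.default_rng(0)
        rows = []
        for subject in range(40):
            pattern = "dropout" if subject < 20 else "completer"
            n_visits = 2 if pattern == "dropout" else 4
            for t in range(n_visits):
                rows.append({"id": subject, "time": t, "pattern": pattern,
                             "y": 5.0 + 0.5 * t + rng.normal()})
        df = pd.DataFrame(rows)

        # pattern-mixture LMM: the missing-data pattern and its interaction with
        # time are fixed effects; a random intercept is fitted for each subject
        fit = smf.mixedlm("y ~ time * C(pattern)", data=df, groups=df["id"]).fit()
        print(fit.params)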

  18. A Mechanistic Modeling Framework for Predicting Metabolic Interactions in Complex Mixtures

    Science.gov (United States)

    Cheng, Shu

    2011-01-01

    Background: Computational modeling of the absorption, distribution, metabolism, and excretion of chemicals is now theoretically able to describe metabolic interactions in realistic mixtures of tens to hundreds of substances. That framework awaits validation. Objectives: Our objectives were to a) evaluate the conditions of application of such a framework, b) confront the predictions of a physiologically integrated model of benzene, toluene, ethylbenzene, and m-xylene (BTEX) interactions with observed kinetics data on these substances in mixtures and, c) assess whether improving the mechanistic description has the potential to lead to better predictions of interactions. Methods: We developed three joint models of BTEX toxicokinetics and metabolism and calibrated them using Markov chain Monte Carlo simulations and single-substance exposure data. We then checked their predictive capabilities for metabolic interactions by comparison with mixture kinetic data. Results: The simplest joint model (BTEX interacting competitively for cytochrome P450 2E1 access) gives qualitatively correct and quantitatively acceptable predictions (with at most 50% deviations from the data). More complex models with two pathways or back-competition with metabolites have the potential to further improve predictions for BTEX mixtures. Conclusions: A systems biology approach to large-scale prediction of metabolic interactions is advantageous on several counts and technically feasible. However, ways to obtain the required parameters need to be further explored. PMID:21835728

  19. Calculated flame temperature (CFT) modeling of fuel mixture lower flammability limits.

    Science.gov (United States)

    Zhao, Fuman; Rogers, William J; Mannan, M Sam

    2010-02-15

    Heat loss can affect experimental flammability limits, and it becomes indispensable to quantify flammability limits when apparatus quenching effect becomes significant. In this research, the lower flammability limits of binary hydrocarbon mixtures are predicted using calculated flame temperature (CFT) modeling, which is based on the principle of energy conservation. Specifically, the hydrocarbon mixture lower flammability limit is quantitatively correlated to its final flame temperature at non-adiabatic conditions. The modeling predictions are compared with experimental observations to verify the validity of CFT modeling, and the minor deviations between them indicated that CFT modeling can represent experimental measurements very well. Moreover, the CFT modeling results and Le Chatelier's Law predictions are also compared, and the agreement between them indicates that CFT modeling provides a theoretical justification for the Le Chatelier's Law.
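
    For comparison with the CFT predictions, Le Chatelier's rule for the lower flammability limit of a fuel mixture can be written in one line, as sketched below; the example composition and the individual LFL values are only typical figures, not data from the paper.

        def le_chatelier_lfl(mole_fractions, lfls):
            # LFL_mix = 1 / sum(y_i / LFL_i): y_i are fuel-basis mole fractions
            # (summing to 1) and LFL_i are the pure-component limits in vol.%.
            return 1.0 / sum(y / lfl for y, lfl in zip(mole_fractions, lfls))

        # e.g. a 60/40 methane-propane fuel; LFLs of 5.0 and 2.1 vol.% are typical
        print(round(le_chatelier_lfl([0.6, 0.4], [5.0, 2.1]), 2))   # about 3.2 vol.%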

  20. A joint finite mixture model for clustering genes from independent Gaussian and beta distributed data

    Directory of Open Access Journals (Sweden)

    Yli-Harja Olli

    2009-05-01

    Full Text Available Abstract Background Cluster analysis has become a standard computational method for gene function discovery as well as for more general exploratory data analysis. A number of different approaches have been proposed for that purpose, out of which different mixture models provide a principled probabilistic framework. Cluster analysis is increasingly often supplemented with multiple data sources nowadays, and these heterogeneous information sources should be used as efficiently as possible. Results This paper presents a novel Beta-Gaussian mixture model (BGMM) for clustering genes based on Gaussian distributed and beta distributed data. The proposed BGMM can be viewed as a natural extension of the beta mixture model (BMM) and the Gaussian mixture model (GMM). The proposed BGMM method differs from other mixture model based methods in its integration of two different data types into a single and unified probabilistic modeling framework, which provides a more efficient use of multiple data sources than methods that analyze different data sources separately. Moreover, BGMM provides an exceedingly flexible modeling framework since many data sources can be modeled as Gaussian or beta distributed random variables, and it can also be extended to integrate data that have other parametric distributions as well, which adds even more flexibility to this model-based clustering framework. We developed three types of estimation algorithms for BGMM, the standard expectation maximization (EM) algorithm, an approximated EM and a hybrid EM, and propose to tackle the model selection problem by well-known model selection criteria, for which we test the Akaike information criterion (AIC), a modified AIC (AIC3), the Bayesian information criterion (BIC), and the integrated classification likelihood-BIC (ICL-BIC). Conclusion Performance tests with simulated data show that combining two different data sources into a single mixture joint model greatly improves the clustering
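
    A minimal sketch of the core modelling idea (not the authors' implementation): within each cluster the Gaussian-distributed and beta-distributed features are combined into one likelihood, so the E-step responsibilities multiply the two densities, and BIC can then score candidate numbers of clusters.

    ```python
    import numpy as np
    from scipy.stats import norm, beta

    def responsibilities(x, y, weights, mu, sigma, a, b):
        """E-step of a K-component Beta-Gaussian mixture with one Gaussian feature x
        and one beta feature y (both arrays of length n); parameters are length-K."""
        dens = np.column_stack([
            weights[k] * norm.pdf(x, mu[k], sigma[k]) * beta.pdf(y, a[k], b[k])
            for k in range(len(weights))
        ])
        return dens / dens.sum(axis=1, keepdims=True)

    def bic(loglik, n_params, n_obs):
        """Bayesian information criterion used to choose the number of clusters."""
        return -2.0 * loglik + n_params * np.log(n_obs)
    ```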

  1. Statistical-thermodynamic model for light scattering from eye lens protein mixtures

    Science.gov (United States)

    Bell, Michael M.; Ross, David S.; Bautista, Maurino P.; Shahmohamad, Hossein; Langner, Andreas; Hamilton, John F.; Lahnovych, Carrie N.; Thurston, George M.

    2017-02-01

    We model light-scattering cross sections of concentrated aqueous mixtures of the bovine eye lens proteins γB- and α-crystallin by adapting a statistical-thermodynamic model of mixtures of spheres with short-range attractions. The model reproduces measured static light scattering cross sections, or Rayleigh ratios, of γB-α mixtures from dilute concentrations where light scattering intensity depends on molecular weights and virial coefficients, to realistically high concentration protein mixtures like those of the lens. The model relates γB-γB and γB-α attraction strengths and the γB-α size ratio to the free energy curvatures that set light scattering efficiency in tandem with protein refractive index increments. The model includes (i) hard-sphere α-α interactions, which create short-range order and transparency at high protein concentrations, (ii) short-range attractive plus hard-core γ-γ interactions, which produce intense light scattering and liquid-liquid phase separation in aqueous γ-crystallin solutions, and (iii) short-range attractive plus hard-core γ-α interactions, which strongly influence highly non-additive light scattering and phase separation in concentrated γ-α mixtures. The model reveals a new lens transparency mechanism, that prominent equilibrium composition fluctuations can be perpendicular to the refractive index gradient. The model reproduces the concave-up dependence of the Rayleigh ratio on α/γ composition at high concentrations, its concave-down nature at intermediate concentrations, non-monotonic dependence of light scattering on γ-α attraction strength, and more intricate, temperature-dependent features. We analytically compute the mixed virial series for light scattering efficiency through third order for the sticky-sphere mixture, and find that the full model represents the available light scattering data at concentrations several times those where the second and third mixed virial contributions fail. The model

  2. Topological phases of two-component bosons in species-dependent artificial gauge potentials

    Science.gov (United States)

    Wu, Ying-Hai; Shi, Tao

    2016-08-01

    We study bosonic atoms with two internal states in artificial gauge potentials whose strengths are different for the two components. A series of topological phases for such systems is proposed using the composite fermion theory and the parton construction. It is found in exact diagonalization that some of the proposed states may be realized for simple contact interaction between bosons. The ground states and low-energy excitations of these states are modeled using trial wave functions. The effective field theories for these states are also constructed and reveal some interesting properties.

  3. Numerical simulation of two-component flow fluid - fluid in the microchannel T- type

    Directory of Open Access Journals (Sweden)

    Shebeleva A.A.

    2015-01-01

    Full Text Available This work tests a methodology for calculating two-phase flows based on the volume-of-fluid (VOF) method together with the continuum surface force (CSF) procedure for accounting for surface tension in a microchannel. Using this methodology, mathematical modeling of two-component fluid-fluid flow in a T-type microchannel was carried out. The following flow regimes were studied: slug flow, rivulet flow, parallel flow, dispersed (droplet) flow, and plug flow. The numerical results were compared with experimental data, and satisfactory agreement between the calculated values and the experimental data was obtained.

  4. A hybrid finite mixture model for exploring heterogeneous ordering patterns of driver injury severity.

    Science.gov (United States)

    Ma, Lu; Wang, Guan; Yan, Xuedong; Weng, Jinxian

    2016-04-01

    Debates on the ordering patterns of crash injury severity are ongoing in the literature. Models without proper econometrical structures for accommodating the complex ordering patterns of injury severity could result in biased estimations and misinterpretations of factors. This study proposes a hybrid finite mixture (HFM) model aiming to capture heterogeneous ordering patterns of driver injury severity while enhancing modeling flexibility. It attempts to probabilistically partition samples into two groups in which one group represents an unordered/nominal data-generating process while the other represents an ordered data-generating process. Conceptually, the newly developed model offers flexible coefficient settings for mining additional information from crash data, and more importantly it allows the coexistence of multiple ordering patterns for the dependent variable. A thorough modeling performance comparison is conducted between the HFM model, and the multinomial logit (MNL), ordered logit (OL), finite mixture multinomial logit (FMMNL) and finite mixture ordered logit (FMOL) models. According to the empirical results, the HFM model presents a strong ability to extract information from the data, and more importantly to uncover heterogeneous ordering relationships between factors and driver injury severity. In addition, the estimated weight parameter associated with the MNL component in the HFM model is greater than the one associated with the OL component, which indicates a larger likelihood of the unordered pattern than the ordered pattern for driver injury severity.

  5. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments

    Science.gov (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd

    2015-01-01

    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  6. The Impact of Misspecifying Class-Specific Residual Variances in Growth Mixture Models

    Science.gov (United States)

    Enders, Craig K.; Tofighi, Davood

    2008-01-01

    The purpose of this study was to examine the impact of misspecifying a growth mixture model (GMM) by assuming that Level-1 residual variances are constant across classes, when they do, in fact, vary in each subpopulation. Misspecification produced bias in the within-class growth trajectories and variance components, and estimates were…

  7. Measurement error in earnings data : Using a mixture model approach to combine survey and register data

    NARCIS (Netherlands)

    Meijer, E.; Rohwedder, S.; Wansbeek, T.J.

    2012-01-01

    Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the me

  8. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    2002-01-01

    The marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite mix

  9. Comparison of criteria for choosing the number of classes in Bayesian finite mixture models

    NARCIS (Netherlands)

    K. Nasserinejad (Kazem); J.M. van Rosmalen (Joost); W. de Kort (Wim); E.M.E.H. Lesaffre (Emmanuel)

    2017-01-01

    textabstractIdentifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown th

  10. Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data

    Science.gov (United States)

    Lu, Zhenqiu Laura; Zhang, Zhiyong; Lubke, Gitta

    2011-01-01

    "Growth mixture models" (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class…

  11. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
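
    For orientation, the basic N-mixture likelihood that the authors extend with a hierarchical observation process can be sketched as below; this is a simplification (constant abundance rate and detection probability, no covariates or group response), not the authors' model.

    ```python
    import numpy as np
    from scipy.stats import binom, poisson

    def nmixture_loglik(counts, lam, p, n_max=200):
        """Log-likelihood of repeated counts (shape: n_sites x n_visits) under
        y[i, t] ~ Binomial(N_i, p) with N_i ~ Poisson(lam); the latent abundance
        N_i is marginalized by summing over 0..n_max."""
        support = np.arange(n_max + 1)
        prior = poisson.pmf(support, lam)                 # P(N_i = n)
        loglik = 0.0
        for site_counts in np.atleast_2d(counts):
            cond = np.prod([binom.pmf(y, support, p) for y in site_counts], axis=0)
            loglik += np.log(np.sum(prior * cond))        # marginal for this site
        return loglik
    ```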

  12. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  13. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models.

    Science.gov (United States)

    Wu, Jiao; Liu, Fang; Jiao, L C; Wang, Xiaodong; Hou, Biao

    2011-12-01

    Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption of the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Lots of multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to the image estimation problems. In this paper, the statistical structures of the wavelet coefficients are considered for CS reconstruction of images that are sparse or compressive in wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by means of modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms.

  14. Growth of Saccharomyces cerevisiae CBS 426 on mixtures of glucose and succinic acid: a model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, J.A.B.A.F.; Koellmann, C.J.W.; Dekkers-de Kok, H.E.; Roels, J.A.

    1984-03-01

    Saccharomyces cerevisiae CBS 426 was grown in continuous culture in a defined medium with a mixture of glucose and succinic acid as the carbon source. Growth on succinic acid was possible after long adaptation periods. The flows of glucose, succinic acid, oxygen, carbon dioxide, and biomass to and from the system were measured. It proved necessary to expand our previous model to accommodate the active transport of succinic acid by the cell. The values found for the efficiency of oxidative phosphorylation (P/O) and for the amount of ATP needed for production of biomass from monomers were the same as those found for substrate mixtures taken up passively. (Refs. 13).

  15. Numerical Investigation of Nanofluid Thermocapillary Convection Based on Two-Phase Mixture Model

    Science.gov (United States)

    Jiang, Yanni; Xu, Zelin

    2017-08-01

    Numerical investigation of nanofluid thermocapillary convection in a two-dimensional rectangular cavity was carried out, in which the two-phase mixture model was used to simulate the nanoparticles-fluid mixture flow, and the influences of volume fraction of nanoparticles on the flow characteristics and heat transfer performance were discussed. The results show that, with the increase of nanoparticle volume fraction, thermocapillary convection intensity weakens gradually, and the heat conduction effect strengthens; meanwhile, the temperature gradient at free surface increases but the free surface velocity decreases gradually. The average Nusselt number of hot wall and the total entropy generation decrease with nanoparticle volume fraction increasing.

  16. Infrared image segmentation based on region of interest extraction with Gaussian mixture modeling

    Science.gov (United States)

    Yeom, Seokwon

    2017-05-01

    Infrared (IR) imaging has the capability to detect thermal characteristics of objects under low-light conditions. This paper addresses IR image segmentation with Gaussian mixture modeling. An IR image is segmented with Expectation Maximization (EM) method assuming the image histogram follows the Gaussian mixture distribution. Multi-level segmentation is applied to extract the region of interest (ROI). Each level of the multi-level segmentation is composed of the k-means clustering, the EM algorithm, and a decision process. The foreground objects are individually segmented from the ROI windows. In the experiments, various methods are applied to the IR image capturing several humans at night.
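
    A minimal sketch of one level of the pipeline just described (k-means initialization, EM under a Gaussian mixture of pixel intensities, then a decision step), assuming the IR frame is available as a 2-D NumPy array; this is an illustration, not the authors' code.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def segment_ir(image, n_components=2):
        pixels = image.reshape(-1, 1).astype(float)
        # k-means supplies initial means for the EM algorithm
        init_means = KMeans(n_clusters=n_components, n_init=10).fit(pixels).cluster_centers_
        gmm = GaussianMixture(n_components=n_components, means_init=init_means).fit(pixels)
        labels = gmm.predict(pixels).reshape(image.shape)
        # decision step: treat the brightest (warmest) component as the region of interest
        roi_component = int(np.argmax(gmm.means_.ravel()))
        return labels == roi_component
    ```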

  17. GIS disconnector model performance with SF{sub 6}/N{sub 2} mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Gaillac, C. [Schneider Electric (France)

    1999-07-01

    The lightning impulse breakdown voltage of a model 145 kV GIS disconnector was studied using SF{sub 6}/N{sub 2} mixtures. Mixtures with between 0% and 15% SF{sub 6} were used. Sphere-sphere, point-plane and sphere-rod geometries were studied. In most cases, breakdown strength increased with both SF{sub 6} content and pressure. In the case of surface flashover, a pressure of about 8 bar with 15% SF{sub 6} gave roughly equivalent results to that of 4 bar pure SF{sub 6}. (author)

  18. PHASE TRANSITION PROPERTIES OF A TWO COMPONENT FINITE MAGNETIC SUPERLATTICE

    Institute of Scientific and Technical Information of China (English)

    WANG XIAO-GUANG; LIU NING-NING; PAN SHAO-HUA; YANG GUO-ZHEN

    2000-01-01

    We study an (l, n) finite superlattice, which consists of two alternating magnetic materials (components) of l and n atomic layers, respectively. Based on the Ising model, we examine the phase transition properties of the magnetic superlattice. Using the transfer matrix method, we derive the equation for the Curie temperature of the superlattice. Numerical results are obtained for the dependence of the Curie temperature on the thickness and exchange constants of the superlattice.

  19. Development and application of a multimetal multibiotic ligand model for assessing aquatic toxicity of metal mixtures.

    Science.gov (United States)

    Santore, Robert C; Ryan, Adam C

    2015-04-01

    A multimetal, multiple binding site version of the biotic ligand model (mBLM) has been developed for predicting and explaining the bioavailability and toxicity of mixtures of metals to aquatic organisms. The mBLM was constructed by combining information from single-metal BLMs to preserve compatibility between the single-metal and multiple-metal approaches. The toxicities from individual metals were predicted by assuming additivity of the individual responses. Mixture toxicity was predicted based on both dissolved metal and mBLM-normalized bioavailable metal. Comparison of the 2 prediction methods indicates that metal mixtures frequently appear to have greater toxicity than an additive estimation of individual effects on a dissolved metal basis. However, on an mBLM-normalized basis, mixtures of metals appear to be additive or less than additive. This difference results from interactions between metals and ligands in solutions including natural organic matter, processes that are accounted for in the mBLM. As part of the mBLM approach, a technique for considering variability was developed to calculate confidence bounds (called response envelopes) around the central concentration-response relationship. Predictions using the mBLM and response envelope were compared with observed toxicity for a number of invertebrate and fish species. The results show that the mBLM is a useful tool for considering bioavailability when assessing the toxicity of metal mixtures.

  20. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  1. A polynomial hyperelastic model for the mixture of fat and glandular tissue in female breast.

    Science.gov (United States)

    Calvo-Gallego, Jose L; Martínez-Reina, Javier; Domínguez, Jaime

    2015-09-01

    In the breast of adult women, glandular and fat tissues are intermingled and cannot be clearly distinguished. This work studies if this mixture can be treated as a homogenized tissue. A mechanical model is proposed for the mixture of tissues as a function of the fat content. Different distributions of individual tissues and geometries have been tried to verify the validity of the mixture model. A multiscale modelling approach was applied in a finite element model of a representative volume element (RVE) of tissue, formed by randomly assigning fat or glandular elements to the mesh. Both types of tissues have been assumed as isotropic, quasi-incompressible hyperelastic materials, modelled with a polynomial strain energy function, like the homogenized model. The RVE was subjected to several load cases from which the constants of the polynomial function of the homogenized tissue were fitted in the least squares sense. The results confirm that the fat volume ratio is a key factor in determining the properties of the homogenized tissue, but the spatial distribution of fat is not so important. Finally, a simplified model of a breast was developed to check the validity of the homogenized model in a geometry similar to the actual one.

  2. Sleep-promoting effects of the GABA/5-HTP mixture in vertebrate models.

    Science.gov (United States)

    Hong, Ki-Bae; Park, Yooheon; Suh, Hyung Joo

    2016-09-01

    The aim of this study was to investigate the sleep-promoting effect of combined γ-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) on sleep quality and quantity in vertebrate models. A pentobarbital-induced sleep test and electroencephalogram (EEG) analysis were applied to investigate sleep latency, duration, total sleeping time and sleep quality for the two amino acids and the GABA/5-HTP mixture. In addition, real-time PCR and HPLC analysis were applied to analyze the signaling pathway. The GABA/5-HTP mixture significantly regulated sleep latency and duration (p < 0.05), and the GABA/5-HTP mixture modulates both GABAergic and serotonergic signaling. Moreover, the sleep architecture can be controlled by the regulation of GABAA receptor and GABA content with 5-HTP.

  3. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    ... though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled based on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats where data about in vivo endocrine disrupting effects and information about human exposures was available, including phthalates, pesticides, UV-filters, bisphenol A, parabens and the drug paracetamol. The mixture ratio was chosen to reflect high end human intakes. To make decisions about the dose levels for studies in the rat, we employed the point of departure index (PODI) approach, which sums up ratios between estimated exposure levels and no-observed-adverse-effect-level (NOAEL) values of individual substances. For high end human exposures to the 13 selected chemicals, we calculated a PODI of 0.016. As only a PODI ...
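
    The PODI computation described above is a simple sum of exposure-to-NOAEL ratios; a sketch with placeholder numbers (not data from the study):

    ```python
    def podi(exposures, noaels):
        """Point of departure index: sum of exposure / NOAEL over the mixture components."""
        return sum(e / n for e, n in zip(exposures, noaels))

    # three hypothetical chemicals, exposures and NOAELs in the same units
    print(podi(exposures=[0.01, 0.002, 0.05], noaels=[5.0, 1.0, 10.0]))
    ```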

  4. Modelling of phase equilibria and related properties of mixtures involving lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa

    Many challenges involving physical and thermodynamic properties in the production of edible oils and biodiesel are observed, such as the availability of experimental data and reliable prediction. In the case of lipids, a lack of experimental data for pure components and also for their mixtures in the open literature was observed, which makes it necessary to develop reliable predictive models from limited data. One of the first steps of this project was the creation of a database containing properties of mixtures involved in tasks related to process design, simulation, and optimization as well as design of chemicals-based products. This database was combined with the existing lipids database of pure component properties. To contribute to the missing data, measurements of isobaric vapour-liquid equilibrium (VLE) data of two binary mixtures at two different pressures were performed using Differential Scanning

  5. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
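
    A minimal sketch of the core idea (not the authors' implementation): EM for a mixture of regression lines through the origin, where each component's slope plays the role of an isotope ratio; the fixed Gaussian noise scale `sigma` and the initialization are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_line_mixture(x, y, n_ratios=2, sigma=1.0, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        slopes = (y.mean() / x.mean()) * rng.uniform(0.5, 1.5, size=n_ratios)
        weights = np.full(n_ratios, 1.0 / n_ratios)
        for _ in range(n_iter):
            # E-step: responsibility of each line for each data point
            dens = np.column_stack([w * norm.pdf(y, m * x, sigma)
                                    for w, m in zip(weights, slopes)])
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted least-squares slopes and mixing weights
            slopes = (resp * (x * y)[:, None]).sum(0) / (resp * (x ** 2)[:, None]).sum(0)
            weights = resp.mean(axis=0)
        return slopes, weights
    ```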

  6. Absolutely stable solitons in two-component active systems

    CERN Document Server

    Malomed, B A; Malomed, Boris; Winful, Herbert

    1995-01-01

    As is known, a solitary pulse in the complex cubic Ginzburg-Landau (GL) equation is unstable. We demonstrate that a system of two linearly coupled GL equations with gain and dissipation in one subsystem and pure dissipation in another produces absolutely stable solitons and their bound states. The problem is solved in a fully analytical form by means of the perturbation theory. The soliton coexists with a stable trivial state; there is also an unstable soliton with a smaller amplitude, which is a separatrix between the two stable states. This model has a direct application in nonlinear fiber optics, describing an Erbium-doped laser based on a dual-core fiber.

  7. M3B: A coarse grain model for the simulation of oligosaccharides and their water mixtures.

    Science.gov (United States)

    Goddard, William A.; Cagin, Tahir; Molinero, Valeria

    2003-03-01

    Water and sugar dynamics in concentrated carbohydrate solutions are of utmost importance in food and pharmaceutical technology. Water diffusion in concentrated sugar mixtures can be slowed down by many orders of magnitude with respect to bulk water [1], making the simulation of these systems with atomistic detail extremely expensive for the required time-scales. We present a coarse grain model (M3B) for malto-oligosaccharides and their water mixtures. M3B speeds up molecular dynamics simulations about 500-1000 times with respect to the atomistic model while retaining enough detail to be mapped back to the atomistic structures with low uncertainty in the positions. The former characteristic allows the study of water and carbohydrate dynamics in supercooled and polydisperse mixtures with characteristic time scales above the nanosecond. The latter makes M3B well suited for combined atomistic-mesoscale simulations. We present the parameterization of the M3B force field for water and a family of technologically relevant glucose oligosaccharides, the alpha-(1->4) glucans. The coarse grain force field is completely parameterized from atomistic simulations to reproduce the density, cohesive energy and structural parameters of amorphous sugars. We will show that M3B is capable of describing the helical character of the higher oligosaccharides, and that the water structure in low moisture mixtures shows the same features obtained with the atomistic and M3B models. [1] R Parker, SG Ring: Carbohydr. Res. 273 (1995) 147-55.

  8. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, de la Garza phenomenon, Loewner order domination, Equivalence theorems for different optimality criteria and standard optimality results for single variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, Darroch-Waller model, log- contrast model, mixture-amount models, random coefficient models and multi-response model.  Robust mixture designs and mixture designs in blocks have been also reviewed. Moreover, some applications of mixture desig...

  9. Instabilities in relativistic two-component (super)fluids

    CERN Document Server

    Haber, Alexander; Stetina, Stephan

    2016-01-01

    We study two-fluid systems with nonzero fluid velocities and compute their sound modes, which indicate various instabilities. For the case of two zero-temperature superfluids we employ a microscopic field-theoretical model of two coupled bosonic fields, including an entrainment coupling and a non-entrainment coupling. We analyse the onset of the various instabilities systematically and point out that the dynamical two-stream instability can only occur beyond Landau's critical velocity, i.e., in an already energetically unstable regime. A qualitative difference is found for the case of two normal fluids, where certain transverse modes suffer a two-stream instability in an energetically stable regime if there is entrainment between the fluids. Since we work in a fully relativistic setup, our results are very general and of potential relevance for (super)fluids in neutron stars and, in the non-relativistic limit of our results, in the laboratory.

  10. Using a factor mixture modeling approach in alcohol dependence in a general population sample.

    Science.gov (United States)

    Kuo, Po-Hsiu; Aggen, Steven H; Prescott, Carol A; Kendler, Kenneth S; Neale, Michael C

    2008-11-01

    Alcohol dependence (AD) is a complex and heterogeneous disorder. The identification of more homogeneous subgroups of individuals with drinking problems and the refinement of the diagnostic criteria are inter-related research goals. They have the potential to improve our knowledge of etiology and treatment effects, and to assist in the identification of risk factors or specific genetic factors. Mixture modeling has advantages over traditional modeling that focuses on either the dimensional or categorical latent structure. The mixture modeling combines both latent class and latent trait models, but has not been widely applied in substance use research. The goal of the present study is to assess whether the AD criteria in the population could be better characterized by a continuous dimension, a few discrete subgroups, or a combination of the two. More than seven thousand participants were recruited from the population-based Virginia Twin Registry, and were interviewed to obtain DSM-IV (Diagnostic and Statistical Manual of Mental Disorder, version IV) symptoms and diagnosis of AD. We applied factor analysis, latent class analysis, and factor mixture models for symptom items based on the DSM-IV criteria. Our results showed that a mixture model with 1 factor and 3 classes for both genders fit well. The 3 classes were a non-problem drinking group and severe and moderate drinking problem groups. By contrast, models constrained to conform to DSM-IV diagnostic criteria were rejected by model fitting indices providing empirical evidence for heterogeneity in the AD diagnosis. Classification analysis showed different characteristics across subgroups, including alcohol-caused behavioral problems, comorbid disorders, age at onset for alcohol-related milestones, and personality. Clinically, the expanded classification of AD may aid in identifying suitable treatments, interventions and additional sources of comorbidity based on these more homogenous subgroups of alcohol use

  11. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    OpenAIRE

    Hiroki Yoshioka; Kenta Obata

    2011-01-01

    The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...
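
    A minimal sketch of the two-endmember linear mixture model that underlies FVC retrieval (observed reflectance as an area-weighted average of vegetation and soil endmember spectra); the reflectance values in the example are made up.

    ```python
    import numpy as np

    def fvc_from_reflectance(obs, veg, soil):
        """Least-squares vegetation-cover fraction from an observed spectrum and
        vegetation/soil endmember spectra, assuming obs is close to f*veg + (1-f)*soil."""
        obs, veg, soil = map(np.asarray, (obs, veg, soil))
        diff = veg - soil
        f = np.dot(obs - soil, diff) / np.dot(diff, diff)
        return float(np.clip(f, 0.0, 1.0))

    # illustrative red/NIR reflectances for a pixel and the two endmembers
    print(fvc_from_reflectance(obs=[0.08, 0.35], veg=[0.04, 0.50], soil=[0.15, 0.20]))
    ```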

  12. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Belmonte-Beitia, Juan [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Perez-Garcia, Victor M. [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain); Vekslerchik, Vadym [Departamento de Matematicas, E. T. S. de Ingenieros Industriales, Universidad de Castilla-La Mancha 13071, Ciudad Real (Spain)

    2007-05-15

    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves.

  13. Two-component jet simulations: I. Topological stability of analytical MHD outflow solutions

    CERN Document Server

    Matsakos, T; Vlahakis, N; Massaglia, S; Mignone, A; Trussoni, E

    2007-01-01

    Observations of collimated outflows in young stellar objects indicate that several features of the jets can be understood by adopting the picture of a two-component outflow, wherein a central stellar component around the jet axis is surrounded by an extended disk-wind. The precise contribution of each component may depend on the intrinsic physical properties of the YSO-disk system as well as its evolutionary stage. In this context, the present article starts a systematic investigation of two-component jet models via time-dependent simulations of two prototypical and complementary analytical solutions, each closely related to the properties of stellar-outflows and disk-winds. These models describe a meridionally and a radially self-similar exact solution of the steady-state, ideal hydromagnetic equations, respectively. By using the PLUTO code to carry out the simulations, the study focuses on the topological stability of each of the two analytical solutions, which are successfully extended to all space by remo...

  14. Numerical analysis of a non equilibrium two-component two-compressible flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2013-09-01

    We propose and analyze a finite volume scheme to simulate a non-equilibrium, two-component (water and hydrogen), two-phase (liquid and gas) flow model. In this model, the assumption of local mass non-equilibrium is ensured and thus the velocity of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is assumed finite. The proposed finite volume scheme is fully implicit in time, uses a phase-by-phase upwind approach in space, and discretizes the equations in their general form with gravity and capillary terms. We show that the proposed scheme satisfies the maximum principle for the saturation and the concentration of the dissolved hydrogen. We establish stability results on the velocity of each phase and on the discrete gradient of the concentration. We show the convergence of a subsequence to a weak solution of the continuous equations as the size of the discretization tends to zero. To our knowledge, this is the first convergence result for a finite volume scheme in the case of two-component, two-phase compressible flow in several space dimensions.

  15. Implications of Two-component Dark Matter Induced by Forbidden Channels and Thermal Freeze-out

    CERN Document Server

    Aoki, Mayumi

    2016-01-01

    We consider a model of two-component dark matter based on a hidden $U(1)_D$ symmetry, in which relic densities of the dark matter are determined by forbidden channels and thermal freeze-out. The hidden $U(1)_D$ symmetry is spontaneously broken to a residual $\mathbb{Z}_4$ symmetry, and the lightest $\mathbb{Z}_4$ charged particle can be a dark matter candidate. Moreover, depending on the mass hierarchy in the dark sector, we have two-component dark matter. We show that the relic density of the lighter dark matter component can be determined by forbidden annihilation channels which require larger couplings compared to the normal freeze-out mechanism. As a result, a large self-interaction of the lighter dark matter component can be induced, which may solve small scale problems of the $\Lambda$CDM model. On the other hand, the heavier dark matter component is produced by the normal freeze-out mechanism. We find that interesting implications emerge between the two dark matter components in this framework. We explore dete...

  16. Positive autoregulation shapes response timing and intensity in two-component signal transduction systems.

    Science.gov (United States)

    Mitrophanov, Alexander Y; Hadley, Tricia J; Groisman, Eduardo A

    2010-08-27

    Positive feedback loops are regulatory elements that can modulate expression output, kinetics and noise in genetic circuits. Transcriptional regulators participating in such loops are often expressed from two promoters, one constitutive and one autoregulated. Here, we investigate the interplay of promoter strengths and the intensity of the stimulus activating the transcriptional regulator in defining the output of a positively autoregulated genetic circuit. Using a mathematical model of two-component regulatory systems, which are present in all domains of life, we establish that positive feedback strongly affects the steady-state output levels at both low and high levels of stimulus if the constitutive promoter of the regulator is weak. By contrast, the effect of positive feedback is negligible when the constitutive promoter is sufficiently strong, unless the stimulus intensity is very high. Furthermore, we determine that positive feedback can affect both transient and steady state output levels even in the simplest genetic regulatory systems. We tested our modeling predictions by abolishing the positive feedback loop in the two-component regulatory system PhoP/PhoQ of Salmonella enterica, which resulted in diminished induction of PhoP-activated genes. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  17. A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results

    Science.gov (United States)

    Carrano, Charles S.; Rino, Charles L.

    2016-06-01

    We extend the power law phase screen theory for ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component inverse power law spectrum. The two-component model includes, as special cases, an unmodified power law and a modified power law with spectral break that may assume the role of an outer scale, intermediate break scale, or inner scale. As such, it provides a framework for investigating the effects of a spectral break on the scintillation statistics. Using this spectral model, we solve the fourth moment equation governing intensity variations following propagation through two-dimensional field-aligned irregularities in the ionosphere. A specific normalization is invoked that exploits self-similar properties of the structure to achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. The numerical algorithm is validated using new theoretical predictions for the behavior of the scintillation index and intensity correlation length under strong scatter conditions. A series of numerical experiments are conducted to investigate the morphologies of the intensity spectrum, scintillation index, and intensity correlation length as functions of the spectral indices and strength of scatter; retrieve phase screen parameters from intensity scintillation observations; explore the relative contributions to the scintillation due to large- and small-scale ionospheric structures; and quantify the conditions under which a general spectral break will influence the scintillation statistics.

  18. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM; Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  19. Mixture models of geometric distributions in genomic analysis of inter-nucleotide distances

    Directory of Open Access Journals (Sweden)

    Adelaide Valente Freitas

    2013-11-01

    Full Text Available The mapping defined by inter-nucleotide distances (InD) provides a reversible numerical representation of the primary structure of DNA. If nucleotides were independently placed along the genome, a finite mixture model of four geometric distributions could be fitted to the InD where the four marginal distributions would be the expected distributions of the four nucleotide types. We analyze a finite mixture model of geometric distributions (f_2), with marginals not explicitly addressed to the nucleotide types, as an approximation to the InD. We use BIC in the composite likelihood framework for choosing the number of components of the mixture and the EM algorithm for estimating the model parameters. Based on divergence profiles, an experimental study was carried out on the complete genomes of 45 species to evaluate f_2. Although the proposed model is not suited to the InD, our analysis shows that divergence profiles involving the empirical distribution of the InD are also exhibited by profiles involving f_2. It suggests that statistical regularities of the InD can be described by the model f_2. Some characteristics of the DNA sequences captured by the model f_2 are illustrated. In particular, clusterings of subgroups of eukaryotes (primates, mammalians, animals and plants) are detected.
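
    A minimal sketch of an EM fit for a K-component mixture of geometric distributions on inter-nucleotide distances d = 1, 2, 3, ... (plain maximum likelihood rather than the composite-likelihood/BIC procedure used in the paper):

    ```python
    import numpy as np

    def em_geometric_mixture(d, K=4, n_iter=200, seed=0):
        d = np.asarray(d, dtype=float)               # distances, support 1, 2, 3, ...
        rng = np.random.default_rng(seed)
        p = rng.uniform(0.05, 0.5, size=K)           # success probabilities per component
        w = np.full(K, 1.0 / K)                      # mixing weights
        for _ in range(n_iter):
            # E-step: responsibilities under pmf(d | p) = (1 - p)**(d - 1) * p
            dens = w * (1.0 - p) ** (d[:, None] - 1.0) * p
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: closed-form updates (geometric MLE is 1 / weighted mean distance)
            w = resp.mean(axis=0)
            p = resp.sum(axis=0) / (resp * d[:, None]).sum(axis=0)
        return w, p
    ```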

  20. A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL

    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang

    2012-01-01

    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of the model can directly influence the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct current and alternating current coefficients. A parameter estimation method based on the expectation maximization algorithm is used to estimate the Laplace distribution center of the direct current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating current coefficients. Experimental results suggest that the proposed mixture correlation noise model can accurately describe the heavy tail and sudden changes of the noise at high rates and yields a significant improvement in coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).

  1. Applicability of linearized Dusty Gas Model for multicomponent diffusion of gas mixtures in porous solids

    Directory of Open Access Journals (Sweden)

    Marković Jelena

    2007-01-01

    Full Text Available The transport of gaseous components through porous media can be described according to the well-known Fick model and its modifications. It is also known that Fick's law is not suitable for predicting the fluxes in multicomponent gas mixtures, except for binary mixtures. The model is nevertheless still frequently used in chemical engineering because of its simplicity. Unfortunately, besides Fick's model there is no generally accepted model for mass transport through porous media (membranes, catalysts, etc.). Numerous studies on transport through porous media reveal that the Dusty Gas Model (DGM) is superior in its ability to predict fluxes in multicomponent mixtures. Its wider application is limited by more complicated calculation procedures compared to Fick's model. It should be noted that there have been efforts to simplify the DGM in order to obtain satisfactorily accurate results. In this paper the linearized DGM, as the simplest form of the DGM, is tested under conditions of zero system pressure drop, small pressure drop, and different temperatures. Published experimental data are used in testing the accuracy of the linearized procedure. It is shown that this simplified procedure is accurate enough compared to the standard, more complicated calculations.

  2. Comparison of activity coefficient models for atmospheric aerosols containing mixtures of electrolytes, organics, and water

    Science.gov (United States)

    Tong, Chinghang; Clegg, Simon L.; Seinfeld, John H.

    Atmospheric aerosols generally comprise a mixture of electrolytes, organic compounds, and water. Determining the gas-particle distribution of volatile compounds, including water, requires equilibrium or mass transfer calculations, at the heart of which are models for the activity coefficients of the particle-phase components. We evaluate here the performance of four recent activity coefficient models developed for electrolyte/organic/water mixtures typical of atmospheric aerosols. Two of the models, the CSB model [Clegg, S.L., Seinfeld, J.H., Brimblecombe, P., 2001. Thermodynamic modelling of aqueous aerosols containing electrolytes and dissolved organic compounds. Journal of Aerosol Science 32, 713-738] and the aerosol diameter dependent equilibrium model (ADDEM) [Topping, D.O., McFiggans, G.B., Coe, H., 2005. A curved multi-component aerosol hygroscopicity model framework: part 2—including organic compounds. Atmospheric Chemistry and Physics 5, 1223-1242] treat ion-water and organic-water interactions but do not include ion-organic interactions; these can be referred to as "decoupled" models. The other two models, reparameterized Ming and Russell model 2005 [Raatikainen, T., Laaksonen, A., 2005. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest. Atmospheric Chemistry and Physics 5, 2475-2495] and X-UNIFAC.3 [Erdakos, G.B., Change, E.I., Pandow, J.F., Seinfeld, J.H., 2006. Prediction of activity coefficients in liquid aerosol particles containing organic compounds, dissolved inorganic salts, and water—Part 3: Organic compounds, water, and ionic constituents by consideration of short-, mid-, and long-range effects using X-UNIFAC.3. Atmospheric Environment 40, 6437-6452], include ion-organic interactions; these are referred to as "coupled" models. We address the question—Does the inclusion of a treatment of ion-organic interactions substantially improve the performance of the coupled models over

  3. A Note Comparing Component-Slope, Scheffé, and Cox Parameterizations of the Linear Mixture Experiment Model

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2006-05-01

    A mixture experiment involves combining two or more components in various proportions and collecting data on one or more responses. A linear mixture model may adequately represent the relationship between a response and mixture component proportions and be useful in screening the mixture components. The Scheffé and Cox parameterizations of the linear mixture model are commonly used for analyzing mixture experiment data. With the Scheffé parameterization, the fitted coefficient for a component is the predicted response at that pure component (i.e., single-component mixture). With the Cox parameterization, the fitted coefficient for a mixture component is the predicted difference in response at that pure component and at a pre-specified reference composition. This paper presents a new component-slope parameterization, in which the fitted coefficient for a mixture component is the predicted slope of the linear response surface along the direction determined by that pure component and at a pre-specified reference composition. The component-slope, Scheffé, and Cox parameterizations of the linear mixture model are compared and their advantages and disadvantages are discussed.
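
    For reference, the Scheffé form of the linear mixture model discussed above can be written as follows (generic notation, assumed rather than copied from the report); the Cox and component-slope forms re-express the same surface relative to a chosen reference composition.

    ```latex
    % Linear mixture model in Scheffé form for q components with proportions
    % x_1, ..., x_q constrained to the simplex:
    \[
      \mathrm{E}(y) \;=\; \sum_{i=1}^{q} \beta_i x_i ,
      \qquad \sum_{i=1}^{q} x_i = 1 , \quad x_i \ge 0 .
    \]
    % Here beta_i is the predicted response at the pure component (x_i = 1).
    ```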

  4. Exact periodic wave and soliton solutions in two-component Bose-Einstein condensates

    Institute of Scientific and Technical Information of China (English)

    Li Hua-Mei

    2007-01-01

    We present several families of exact solutions to a system of coupled nonlinear Schrödinger equations. The model describes a binary mixture of two Bose-Einstein condensates in a magnetic trap potential. Using a mapping deformation method, we find exact periodic wave and soliton solutions, including bright and dark soliton pairs.

  5. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro

    DEFF Research Database (Denmark)

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael;

    2013-01-01

    , antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone...

  6. A Systematic Investigation of Within-Subject and Between-Subject Covariance Structures in Growth Mixture Models

    Science.gov (United States)

    Liu, Junhui

    2012-01-01

    The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…

  7. Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples

    CERN Document Server

    Melchior, Peter

    2016-01-01

    We extend the common mixtures-of-Gaussians density estimation approach to account for a known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing Expectation-Maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis

  8. Modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C

    Energy Technology Data Exchange (ETDEWEB)

    Necula, H.; Badea, A. [Universite Politecnica de Bucarest (Romania). Faculte d' Energetique; Lallemand, M. [INSA, Villeurbanne (France). Centre de Thermique de Lyon; Marvillet, C. [CEA-Grenoble (France)

    2001-11-01

    This study concerns the steady state modelling of a shell-and-tube evaporator using the zeotropic mixture R-407C. In this local type model, the control volumes are a function of the geometric configuration of the evaporator in which baffles are fitted. The validation of the model has been made by comparison between theoretical and experimental results obtained from an experimental investigation with a refrigerating machine. For test conditions, the flow pattern has been identified from a flow pattern map as being stratified. Theoretical results show the effect of different parameters such as the saturation pressure, the inlet quality, etc. on the local variables (temperature, slip ratio). The effect of leakage on the mixture composition has also been investigated. (author)

  9. A lattice traffic model with consideration of preceding mixture traffic information

    Institute of Scientific and Technical Information of China (English)

    Li Zhi-Peng; Liu Fu-Qiang; Sun Jian

    2011-01-01

    In this paper, the lattice model is presented, incorporating not only site information about preceding cars but also relative currents in front. We derive the stability condition of the extended model by considering a small perturbation around the homogeneous flow solution and find that the improvement in the stability of traffic flow is obtained by taking into account preceding mixture traffic information. Direct simulations also confirm that the traffic jam can be suppressed efficiently by considering the relative currents ahead, just like incorporating site information in front. Moreover, from the nonlinear analysis of the extended models, the preceding mixture traffic information dependence of the propagating kink solutions for traffic jams is obtained by deriving the modified KdV equation near the critical point using the reductive perturbation method.

  10. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  11. Comparison of Criteria for Choosing the Number of Classes in Bayesian Finite Mixture Models.

    Science.gov (United States)

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Lesaffre, Emmanuel

    2017-01-01

    Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that in overfitted mixture models, the overfitted latent classes will asymptotically become empty under specific conditions for the prior of the class proportions. This result may be used to construct a criterion for finding the true number of latent classes, based on the removal of latent classes that have negligible proportions. Unlike some alternative criteria, this criterion can easily be implemented in complex statistical models such as latent class mixed-effects models and multivariate mixture models using standard Bayesian software. We performed an extensive simulation study to develop practical guidelines to determine the appropriate number of latent classes based on the posterior distribution of the class proportions, and to compare this criterion with alternative criteria. The performance of the proposed criterion is illustrated using a data set of repeatedly measured hemoglobin values of blood donors.
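
    A minimal sketch of the criterion described above (the threshold value and the toy posterior draws are illustrative, not the paper's recommendations): given MCMC draws of the class proportions from a deliberately overfitted mixture, the effective number of classes is the count of classes whose posterior proportion is not negligible.

        # Illustrative only: count non-empty classes from posterior draws of the
        # class proportions of an overfitted Bayesian mixture.
        import numpy as np

        def effective_num_classes(prop_draws, threshold=0.05):
            """prop_draws: (n_draws, K_max) array of sampled class proportions."""
            mean_props = prop_draws.mean(axis=0)
            return int((mean_props > threshold).sum()), mean_props

        rng = np.random.default_rng(1)
        draws = rng.dirichlet([50, 30, 0.5, 0.5, 0.5], size=2000)   # toy "posterior"
        k_hat, props = effective_num_classes(draws)
        print(k_hat, props.round(3))                                # expect k_hat = 2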

  12. A Bayesian threshold-normal mixture model for analysis of a continuous mastitis-related trait.

    Science.gov (United States)

    Ødegård, J; Madsen, P; Gianola, D; Klemetsdal, G; Jensen, J; Heringstad, B; Korsgaard, I R

    2005-07-01

    Mastitis is associated with elevated somatic cell count in milk, inducing a positive correlation between milk somatic cell score (SCS) and the absence or presence of the disease. In most countries, selection against mastitis has focused on selecting parents with genetic evaluations that have low SCS. Univariate or multivariate mixed linear models have been used for statistical description of SCS. However, an observation of SCS can be regarded as drawn from a 2- (or more) component mixture defined by the (usually) unknown health status of a cow at the test-day on which SCS is recorded. A hierarchical 2-component mixture model was developed, assuming that the health status affecting the recorded test-day SCS is completely specified by an underlying liability variable. Based on the observed SCS, inferences can be drawn about disease status and parameters of both SCS and liability to mastitis. The prior probability of putative mastitis was allowed to vary between subgroups (e.g., herds, families), by specifying fixed and random effects affecting both SCS and liability. Using simulation, it was found that a Bayesian model fitted to the data yielded parameter estimates close to their true values. The model provides selection criteria that are more appealing than selection for lower SCS. The proposed model can be extended to handle a wide range of problems related to genetic analyses of mixture traits.
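
    A stripped-down sketch of the structure described above (notation invented here; the paper's hierarchical specification, with fixed and random effects on both traits, is richer) is a two-component normal mixture whose mixing probability is driven by an underlying liability:

        f(y_{ij}) = (1 - p_{ij}) N(y_{ij}; \mu_{ij}, \sigma_0^2) + p_{ij} N(y_{ij}; \mu_{ij} + \delta, \sigma_1^2),
        p_{ij} = \Phi(\mathbf{x}_{ij}' \boldsymbol{\beta} + u_{ij}),

    where y_{ij} is the test-day SCS, p_{ij} is the probability that the record comes from a mastitic cow (obtained by thresholding the liability), and \delta is the SCS elevation associated with infection.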

  13. Stochastic simulation of prokaryotic two-component signalling indicates stochasticity-induced active-state locking and growth-rate dependent bistability

    NARCIS (Netherlands)

    K. Wei (Katy); M. Moinat (Maxim); T.R. Maarleveld (Timo); F.J. Bruggeman (Frank)

    2014-01-01

    Signal transduction by prokaryotes almost exclusively relies on two-component systems for sensing and responding to (extracellular) signals. Here, we use stochastic models of two-component systems to better understand the impact of stochasticity on the fidelity and robustness of signal

  14. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades, Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, few researchers consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand, Cr(VI) scarcely exists alone in wastewaters; it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  15. Investigation of Self-Assembly of Two-Component Organogel System Based on Trigonal Acids and Aminobenzothiazole Derivatives

    Directory of Open Access Journals (Sweden)

    Youbo Di

    2013-01-01

    Full Text Available We reported here the gelation behaviors of two-component organogel system based on different acids and aminobenzothiazole derivatives in various organic solvents. Their gelation behaviors in 20 solvents were tested as new organic gelators. It was shown that the molecular skeletons and substituted groups in these compounds played a crucial role in the gelation behavior of the mixtures. Only the binary mixture of 2-aminobenzothiazole and trigonal 1,3,5-benzenetricarboxylic acid with aromatic core could form organogels in ethanol and acetone. Morphological observations reveal that the microstructures of both xerogels showed similar wrinkle-shaped domains composed of sheet-like aggregates with many holes. Spectral studies reveal the hydrogen bonding interaction between the amide of the gelator and lamellar-like structure of the aggregates in both gels. The present investigation is a perspective to provide new clues for the design of new nanomaterials and functional textile materials with special microstructures.

  16. Two-component systems in microbial communities: approaches and resources for generating and analyzing metagenomic data sets.

    Science.gov (United States)

    Podar, Mircea

    2007-01-01

    Two-component signal transduction represents the main mechanism by which bacterial cells interact with their environment. The functional diversity of two-component systems and their relative importance in the different taxonomic groups and ecotypes of bacteria has become evident with the availability of several hundred genomic sequences. The vast majority of bacteria, including many high rank taxonomic units, while being components of complex microbial communities remain uncultured (i.e., have not been isolated or grown in the laboratory). Environmental genomic data from such communities are becoming available, and in addition to its profound impact on microbial ecology it will propel molecular biological disciplines beyond the traditional model organisms. This chapter describes the general approaches used in generating environmental genomic data and how that data can be used to advance the study of two component-systems and signal transduction in general.

  17. Extensions to Multivariate Space Time Mixture Modeling of Small Area Cancer Data

    Directory of Open Access Journals (Sweden)

    Rachel Carroll

    2017-05-01

    Full Text Available Oral cavity and pharynx cancer, even when considered together, is a fairly rare disease. Implementation of multivariate modeling with lung and bronchus cancer, as well as melanoma cancer of the skin, could lead to better inference for oral cavity and pharynx cancer. The multivariate structure of these models is accomplished via the use of shared random effects, as well as other multivariate prior distributions. The results in this paper indicate that care should be taken when executing these types of models, and that multivariate mixture models may not always be the ideal option, depending on the data of interest.

  18. Calculation of Surface Tensions of Polar Mixtures with a Simplified Gradient Theory Model

    DEFF Research Database (Denmark)

    Zuo, You-Xiang; Stenby, Erling Halfdan

    1996-01-01

    Key Words: Thermodynamics, Simplified Gradient Theory, Surface Tension, Equation of state, Influence Parameter.In this work, assuming that the number densities of each component in a mixture across the interface between the coexisting vapor and liquid phases are linearly distributed, we developed...... a simplified gradient theory (SGT) model for computing surface tensions. With this model, it is not required to solve the time-consuming density profile equations of the gradient theory model. The SRK EOS was applied to calculate the properties of the homogeneous fluid. First, the SGT model was used to predict...

  19. Analysis of Two-sample Censored Data Using a Semiparametric Mixture Model

    Institute of Scientific and Technical Information of China (English)

    Gang Li; Chien-tai Lin

    2009-01-01

    In this article we study a semiparametric mixture model for the two-sample problem with right censored data. The model implies that the densities for the continuous outcomes are related by a parametric tilt but otherwise unspecified. It provides a useful alternative to the Cox (1972) proportional hazards model for the comparison of treatments based on right censored survival data. We propose an iterative algorithm for the semiparametric maximum likelihood estimates of the parametric and nonparametric components of the model. The performance of the proposed method is studied using simulation. We illustrate our method in an application to melanoma.
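
    The phrase "related by a parametric tilt" is commonly formalized (the exact tilt function used in the paper may differ; this is a generic exponential-tilt / density-ratio sketch) as

        g(x) = \exp\{ \alpha + \beta x \} f(x),

    where f is the completely unspecified (nonparametric) density in one arm, g is the density in the other arm, and (\alpha, \beta) is the finite-dimensional parametric component estimated jointly with f.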

  20. Using a Genetic mixture model to study Phenotypic traits: Differential fecundity among Yukon river Chinook Salmon

    Science.gov (United States)

    Bromaghin, J.F.; Evenson, D.F.; McLain, T.H.; Flannery, B.G.

    2011-01-01

    Fecundity is a vital population characteristic that is directly linked to the productivity of fish populations. Historic data from Yukon River (Alaska) Chinook salmon Oncorhynchus tshawytscha suggest that length-adjusted fecundity differs among populations within the drainage and either is temporally variable or has declined. Yukon River Chinook salmon have been harvested in large-mesh gill-net fisheries for decades, and a decline in fecundity was considered a potential evolutionary response to size-selective exploitation. The implications for fishery conservation and management led us to further investigate the fecundity of Yukon River Chinook salmon populations. Matched observations of fecundity, length, and genotype were collected from a sample of adult females captured from the multipopulation spawning migration near the mouth of the Yukon River in 2008. These data were modeled by using a new mixture model, which was developed by extending the conditional maximum likelihood mixture model that is commonly used to estimate the composition of multipopulation mixtures based on genetic data. The new model facilitates maximum likelihood estimation of stock-specific fecundity parameters without first using individual assignment to a putative population of origin, thus avoiding potential biases caused by assignment error.The hypothesis that fecundity of Chinook salmon has declined was not supported; this result implies that fecundity exhibits high interannual variability. However, length-adjusted fecundity estimates decreased as migratory distance increased, and fecundity was more strongly dependent on fish size for populations spawning in the middle and upper portions of the drainage. These findings provide insights into potential constraints on reproductive investment imposed by long migrations and warrant consideration in fisheries management and conservation. The new mixture model extends the utility of genetic markers to new applications and can be easily adapted

  1. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
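
    For context, the single-organic Szyszkowski-Langmuir form underlying the weighted model (written here in generic textbook notation; the weighting of Henning et al. (2005) and the salt treatments discussed above modify it for multicomponent solutions) is

        \sigma = \sigma_0 - R T \Gamma_{\max} \ln( 1 + K C ),

    where \sigma_0 is the surface tension of the organic-free solution, C is the organic concentration, and \Gamma_{\max} and K are empirically fitted adsorption parameters.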

  2. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Directory of Open Access Journals (Sweden)

    A. N. Schwier

    2013-01-01

    Full Text Available Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2–6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well-described by a weighted Szyszkowski–Langmuir (S–L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling fits and goodness of fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.

  3. A Dirichlet process covarion mixture model and its assessments using posterior predictive discrepancy tests.

    Science.gov (United States)

    Zhou, Yan; Brinkmann, Henner; Rodrigue, Nicolas; Lartillot, Nicolas; Philippe, Hervé

    2010-02-01

    Heterotachy, the variation of substitution rate at a site across time, is a prevalent phenomenon in nucleotide and amino acid alignments, which may mislead probabilistic-based phylogenetic inferences. The covarion model is a special case of heterotachy, in which sites change between the "ON" state (allowing substitutions according to any particular model of sequence evolution) and the "OFF" state (prohibiting substitutions). In current implementations, the switch rates between ON and OFF states are homogeneous across sites, a hypothesis that has never been tested. In this study, we developed an infinite mixture model, called the covarion mixture (CM) model, which allows the covarion parameters to vary across sites, controlled by a Dirichlet process prior. Moreover, we combine the CM model with other approaches. We use a second independent Dirichlet process that models the heterogeneities of amino acid equilibrium frequencies across sites, known as the CAT model, and general rate-across-site heterogeneity is modeled by a gamma distribution. The application of the CM model to several large alignments demonstrates that the covarion parameters are significantly heterogeneous across sites. We describe posterior predictive discrepancy tests and use these to demonstrate the importance of these different elements of the models.

  4. Cure fraction estimation from the mixture cure models for grouped survival data.

    Science.gov (United States)

    Yu, Binbing; Tiwari, Ram C; Cronin, Kathleen A; Feuer, Eric J

    2004-06-15

    Mixture cure models are usually used to model failure time data with long-term survivors. These models have been applied to grouped survival data. The models provide simultaneous estimates of the proportion of the patients cured from disease and the distribution of the survival times for uncured patients (latency distribution). However, a crucial issue with mixture cure models is the identifiability of the cure fraction and parameters of kernel distribution. Cure fraction estimates can be quite sensitive to the choice of latency distributions and length of follow-up time. In this paper, sensitivity of parameter estimates under semi-parametric model and several most commonly used parametric models, namely lognormal, loglogistic, Weibull and generalized Gamma distributions, is explored. The cure fraction estimates from the model with generalized Gamma distribution is found to be quite robust. A simulation study was carried out to examine the effect of follow-up time and latency distribution specification on cure fraction estimation. The cure models with generalized Gamma latency distribution are applied to the population-based survival data for several cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program. Several cautions on the general use of cure model are advised.
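
    The basic decomposition behind these models (standard notation, not specific to the grouped-data likelihood developed in the paper) is

        S(t) = \pi + (1 - \pi) S_u(t),

    where \pi is the cure fraction and S_u(t) is the latency distribution, i.e. the survival function of the uncured patients, here taken to be lognormal, loglogistic, Weibull, or generalized gamma.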

  5. Symmetrization of excess Gibbs free energy: A simple model for binary liquid mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Castellanos-Suarez, Aly J., E-mail: acastell@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of); Garcia-Sucre, Maximo, E-mail: mgs@ivic.gob.v [Centro de Estudios Interdisciplinarios de la Fisica (CEIF), Instituto Venezolano de Investigaciones Cientificas (IVIC), Apartado 21827, Caracas 1020A (Venezuela, Bolivarian Republic of)

    2011-03-15

    A symmetric expression for the excess Gibbs free energy of liquid binary mixtures is obtained using an appropriate definition for the effective contact fraction. We have identified a mechanism of local segregation as the main cause of the contact fraction variation with the concentration. Starting from this mechanism we develop a simple model for describing binary liquid mixtures. In this model two parameters appear: one adjustable, and the other parameter depending on the first one. Following this procedure we reproduce the experimental data of (liquid + vapor) equilibrium with a degree of accuracy comparable to well-known more elaborated models. The way in which we take into account the effective contacts between molecules allows identifying the compound which may be considered to induce one of the following processes: segregation, anti-segregation and dispersion of the components in the liquid mixture. Finally, the simplicity of the model allows one to obtain only one resulting interaction energy parameter, which makes easier the physical interpretation of the results.

  6. Reconstruction of coronary artery centrelines from x-ray rotational angiography using a probabilistic mixture model

    Science.gov (United States)

    Ćimen, Serkan; Gooya, Ali; Frangi, Alejandro F.

    2016-03-01

    Three-dimensional reconstructions of coronary arterial trees from X-ray rotational angiography (RA) images have the potential to compensate the limitations of RA due to projective imaging. Most of the existing model based reconstruction algorithms are either based on forward-projection of a 3D deformable model onto X-ray angiography images or back-projection of 2D information extracted from X-ray angiography images to 3D space for further processing. All of these methods have their shortcomings such as dependency on accurate 2D centerline segmentations. In this paper, the reconstruction is approached from a novel perspective, and is formulated as a probabilistic reconstruction method based on mixture model (MM) representation of point sets describing the coronary arteries. Specifically, it is assumed that the coronary arteries could be represented by a set of 3D points, whose spatial locations denote the Gaussian components in the MM. Additionally, an extra uniform distribution is incorporated in the mixture model to accommodate outliers (noise, over-segmentation etc.) in the 2D centerline segmentations. Treating the given 2D centreline segmentations as data points generated from MM, the 3D means, isotropic variance, and mixture weights of the Gaussian components are estimated by maximizing a likelihood function. Initial results from a phantom study show that the proposed method is able to handle outliers in 2D centreline segmentations, which indicates the potential of our formulation. Preliminary reconstruction results in the clinical data are also presented.

  7. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

    Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
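
    A compact sketch of the hierarchical structure described above (notation chosen here for illustration, not the authors'):

        N_i ~ Poisson(\lambda_i),
        y_{ijt} | N_i ~ Binomial(N_i, \theta_{ij} p_{ijt}),

    where \lambda_i is the expected abundance at site i, \theta_{ij} is availability (the probability that an individual has not temporarily emigrated, e.g. is at the surface) in sampling period j, and p_{ijt} is the conditional detection probability on survey t; ignoring \theta confounds it with p, which is the bias the extended model avoids.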

  8. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2008-06-01

    Full Text Available MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen to be the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, possibly rice growth monitoring and yield forecasting on the regional scale.
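
    A generic fully constrained unmixing step of the kind used in such studies can be sketched as follows (the endmember spectra and mixed pixel below are synthetic placeholders, and this is not the authors' processing chain):

        # Solve pixel = E @ f for fractions f with f >= 0 and sum(f) ~ 1, using
        # non-negative least squares with a sum-to-one row appended.
        import numpy as np
        from scipy.optimize import nnls

        E = np.array([[0.05, 0.30, 0.45],     # rows: spectral bands
                      [0.10, 0.35, 0.40],     # columns: endmembers (e.g. rice,
                      [0.40, 0.25, 0.05],     # other vegetation, water)
                      [0.55, 0.20, 0.02]])
        pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]   # synthetic mixed pixel

        w = 1e3                                       # weight on the sum-to-one row
        A = np.vstack([E, w * np.ones((1, E.shape[1]))])
        b = np.append(pixel, w)
        fractions, _ = nnls(A, b)
        print(fractions.round(3))                     # approximately [0.6, 0.3, 0.1]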

  9. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power...... for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome...... of variance explained by genotyped SNPs, CD and SZ have a broadly dissimilar genetic architecture, due to differing mean effect size and proportion of non-null loci....

  10. Automated sleep spindle detection using IIR filters and a Gaussian Mixture Model.

    Science.gov (United States)

    Patti, Chanakya Reddy; Penzel, Thomas; Cvetkovic, Dean

    2015-08-01

    Sleep spindle detection using modern signal processing techniques such as the Short-Time Fourier Transform and Wavelet Analysis are common research methods. These methods are computationally intensive, especially when analysing data from overnight sleep recordings. The authors of this paper propose an alternative using pre-designed IIR filters and a multivariate Gaussian Mixture Model. Features extracted with IIR filters are clustered using a Gaussian Mixture Model without the use of any subject independent thresholds. The Algorithm was tested on a database consisting of overnight sleep PSG of 5 subjects and an online public spindles database consisting of six 30 minute sleep excerpts. An overall sensitivity of 57% and a specificity of 98.24% was achieved in the overnight database group and a sensitivity of 65.19% at a 16.9% False Positive proportion for the 6 sleep excerpts.

  11. Use of Linear Spectral Mixture Model to Estimate Rice Planted Area Based on MODIS Data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites. Linear spectral mixture models are applied to MODIS data for the sub-pixel classification of land covers. Shaoxing county of Zhejiang Province in China was chosen to be the study site and early rice was selected as the study crop. The derived proportions of land covers from MODIS pixels using linear spectral mixture models were compared with unsupervised classification derived from TM data acquired on the same day, which implies that MODIS data could be used as a satellite data source for rice cultivation area estimation, possibly rice growth monitoring and yield forecasting on the regional scale.

  12. A cross-association model for CO2-methanol and CO2-ethanol mixtures

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A cross-association model was proposed for CO2-alcohol mixtures based on the statistical associating fluid theory (SAFT). CO2 was treated as a pseudo-associating molecule, and both the self-association between alcohol hydroxyls and the cross-association between CO2 and alcohol hydroxyls were considered. The equilibrium properties from low temperature-pressure to high temperature-pressure conditions were investigated using this model. The calculated p-x and p-p diagrams of CO2-methanol and CO2-ethanol mixtures agreed with the experimental data. The results showed that when the cross-association was taken into account in the Helmholtz free energy, the calculated equilibrium properties could be significantly improved, and the erroneous prediction of three-phase equilibria and triple points in low-temperature regions could be avoided.

  13. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes are relevant statistical tasks in microarray studies. For effective gene selections, parametric empirical Bayes methods for ranking and selection of genes with largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than parametric prior distributions, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational statistics and data analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.

  14. A generalized longitudinal mixture IRT model for measuring differential growth in learning environments.

    Science.gov (United States)

    Kadengye, Damazo T; Ceulemans, Eva; Van den Noortgate, Wim

    2014-09-01

    This article describes a generalized longitudinal mixture item response theory (IRT) model that allows for detecting latent group differences in item response data obtained from electronic learning (e-learning) environments or other learning environments that result in large numbers of items. The described model can be viewed as a combination of a longitudinal Rasch model, a mixture Rasch model, and a random-item IRT model, and it includes some features of the explanatory IRT modeling framework. The model assumes the possible presence of latent classes in item response patterns, due to initial person-level differences before learning takes place, to latent class-specific learning trajectories, or to a combination of both. Moreover, it allows for differential item functioning over the classes. A Bayesian model estimation procedure is described, and the results of a simulation study are presented that indicate that the parameters are recovered well, particularly for conditions with large item sample sizes. The model is also illustrated with an empirical sample data set from a Web-based e-learning environment.

  15. Phase Equilibria of Water/CO2 and Water/n-Alkane Mixtures from Polarizable Models.

    Science.gov (United States)

    Jiang, Hao; Economou, Ioannis G; Panagiotopoulos, Athanassios Z

    2017-02-16

    Phase equilibria of water/CO2 and water/n-alkane mixtures over a range of temperatures and pressures were obtained from Monte Carlo simulations in the Gibbs ensemble. Three sets of Drude-type polarizable models for water, namely the BK3, GCP, and HBP models, were combined with a polarizable Gaussian charge CO2 (PGC) model to represent the water/CO2 mixture. The HBP water model describes hydrogen bonds between water and CO2 explicitly. All models underestimate CO2 solubility in water if standard combining rules are used for the dispersion interactions between water and CO2. With the dispersion parameters optimized to phase compositions, the BK3 and GCP models were able to represent the CO2 solubility in water; however, the water composition in the CO2-rich phase is systematically underestimated. Accurate representation of compositions for both water- and CO2-rich phases cannot be achieved even after optimizing the cross interaction parameters. By contrast, accurate compositions for both water- and CO2-rich phases were obtained with hydrogen bonding parameters determined from the second virial coefficient for water/CO2. Phase equilibria of water/n-alkane mixtures were also studied using the HBP water and an exponential-6 united-atom n-alkane model. The dispersion interactions between water and n-alkanes were optimized to Henry's constants of methane and ethane in water. The HBP water and united-atom n-alkane models underestimate water content in the n-alkane-rich phase; this underestimation is likely due to the neglect of electrostatic and induction energies in the united-atom model.

  16. The Precise Measurement of Vapor-Liquid Equilibrium Properties of the CO2/Isopentane Binary Mixture, and Fitted Parameters for a Helmholtz Energy Mixture Model

    Science.gov (United States)

    Miyamoto, H.; Shoji, Y.; Akasaka, R.; Lemmon, E. W.

    2017-10-01

    Natural working fluid mixtures, including combinations of CO2, hydrocarbons, water, and ammonia, are expected to have applications in energy conversion processes such as heat pumps and organic Rankine cycles. However, the available literature data, much of which were published between 1975 and 1992, do not incorporate the recommendations of the Guide to the Expression of Uncertainty in Measurement. Therefore, new and more reliable thermodynamic property measurements obtained with state-of-the-art technology are required. The goal of the present study was to obtain accurate vapor-liquid equilibrium (VLE) properties for complex mixtures based on two different gases with significant variations in their boiling points. Precise VLE data were measured with a recirculation-type apparatus with a 380 cm3 equilibration cell and two windows allowing observation of the phase behavior. This cell was equipped with recirculating and expansion loops that were immersed in temperature-controlled liquid and air baths, respectively. Following equilibration, the composition of the sample in each loop was ascertained by gas chromatography. VLE data were acquired for CO2/ethanol and CO2/isopentane binary mixtures within the temperature range from 300 K to 330 K and at pressures up to 7 MPa. These data were used to fit interaction parameters in a Helmholtz energy mixture model. Comparisons were made with the available literature data and values calculated by thermodynamic property models.

  17. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U.; Benz, P.; Mantharas, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

    The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling the ones prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous-homogeneous combustion and in particular on the means of homogeneous ignitions and stabilization. (author) 4 figs., 3 refs.

  18. Condition monitoring of oil-impregnated paper bushings using extension neural network, Gaussian mixture and hidden Markov models

    CSIR Research Space (South Africa)

    Miya, WS

    2008-10-01

    Full Text Available In this paper, a comparison between Extension Neural Network (ENN), Gaussian Mixture Model (GMM) and Hidden Markov model (HMM) is conducted for bushing condition monitoring. The monitoring process is a two-stage implementation of a classification...

  19. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  20. Novel pseudo-divergence of Gaussian mixture models based speaker clustering method

    Institute of Scientific and Technical Information of China (English)

    Wang Bo; Xu Yiqiong; Li Bicheng

    2006-01-01

    Serial structure is applied to speaker recognition to reduce algorithm delay and computational complexity. The speech is first classified into a speaker class, and the most likely speaker is then searched for inside that class. Differences between Gaussian Mixture Models (GMMs) are widely used for speaker classification. The paper proposes a novel pseudo-divergence measure, the ratio of inter-model dispersion to intra-model dispersion, to represent the difference between GMMs and to perform speaker clustering. The GMM components' weights, means, and variances are all involved in the dispersion. Experiments indicate that the measure represents the difference between GMMs well and improves the performance of speaker clustering.

  1. Comparisons between Hygroscopic Measurements and UNIFAC Model Predictions for Dicarboxylic Organic Aerosol Mixtures

    Directory of Open Access Journals (Sweden)

    Jae Young Lee

    2013-01-01

    Full Text Available Hygroscopic behavior was measured at 12°C over aqueous bulk solutions containing dicarboxylic acids, using a Baratron pressure transducer. Our experimental measurements of water activity for malonic acid solutions (0–10 mol/kg water) and glutaric acid solutions (0–5 mol/kg water) agreed to within 0.6% and 0.8% of the predictions using Peng’s modified UNIFAC model, respectively (except for the 10 mol/kg water value, which differed by 2%). However, for solutions containing mixtures of malonic/glutaric acids, malonic/succinic acids, and glutaric/succinic acids, the disagreements between the measurements and predictions using the ZSR model or Peng’s modified UNIFAC model are higher than those for the single-component cases. Measurements of the overall water vapor pressure for 50:50 molar mixtures of malonic/glutaric acids closely followed that for malonic acid alone. For mixtures of malonic/succinic acids and glutaric/succinic acids, the influence of a constant concentration of succinic acid on water uptake became more significant as the concentration of malonic acid or glutaric acid was increased.

  2. Performance of growth mixture models in the presence of time-varying covariates.

    Science.gov (United States)

    Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong

    2016-10-31

    Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.

  3. A Rough Set Bounded Spatially Constrained Asymmetric Gaussian Mixture Model for Image Segmentation.

    Science.gov (United States)

    Ji, Zexuan; Huang, Yubo; Sun, Quansen; Cao, Guo; Zheng, Yuhui

    2017-01-01

    Accurate image segmentation is an important issue in image processing, where Gaussian mixture models play an important part and have been proven effective. However, most Gaussian mixture model (GMM) based methods suffer from one or more limitations, such as limited noise robustness, over-smoothness for segmentations, and lack of flexibility to fit data. In order to address these issues, in this paper, we propose a rough set bounded asymmetric Gaussian mixture model with spatial constraint for image segmentation. First, based on our previous work where each cluster is characterized by three automatically determined rough-fuzzy regions, we partition the target image into three rough regions with two adaptively computed thresholds. Second, a new bounded indicator function is proposed to determine the bounded support regions of the observed data. The bounded indicator and posterior probability of a pixel that belongs to each sub-region is estimated with respect to the rough region where the pixel lies. Third, to further reduce over-smoothness for segmentations, two novel prior factors are proposed that incorporate the spatial information among neighborhood pixels, which are constructed based on the prior and posterior probabilities of the within- and between-clusters, and considers the spatial direction. We compare our algorithm to state-of-the-art segmentation approaches in both synthetic and real images to demonstrate the superior performance of the proposed algorithm.

  4. Initial data problems for the two-component Camassa-Holm system

    Directory of Open Access Journals (Sweden)

    Xiaohuan Wang

    2014-06-01

    Full Text Available This article concerns the study of some properties of the two-component Camassa-Holm system. By constructing two sequences of solutions of the two-component Camassa-Holm system, we prove that the solution map of the Cauchy problem of the two-component Camassa-Holm system is not uniformly continuous in $H^s(\mathbb{R})$, $s>5/2$.
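
    For reference, a commonly studied form of the system (conventions for signs and the coupling term vary between papers) is

        m_t + u m_x + 2 u_x m + \rho \rho_x = 0,   \rho_t + (u \rho)_x = 0,   m = u - u_{xx},

    whose well-posedness and stability properties in Sobolev spaces H^s are the subject of results such as the non-uniform continuity statement above.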

  5. Psychophysical model of chromatic perceptual transparency based on substractive color mixture.

    Science.gov (United States)

    Faul, Franz; Ekroll, Vebjørn

    2002-06-01

    Variants of Metelli's episcotister model, which are based on additive color mixture, have been found to describe the luminance conditions for perceptual transparency very accurately. However, the findings in the chromatic domain are not that clear-cut, since there exist chromatic stimuli that conform to the additive model but do not appear transparent. We present evidence that such failures are of a systematic nature, and we propose an alternative psychophysical model based on subtractive color mixture. Results of a computer simulation revealed that this model approximately describes color changes that occur when a surface is covered by a filter. We present the results of two psychophysical experiments with chromatic stimuli, in which we directly compared the predictions of the additive model and the predictions of the new model. These results show that the color relations leading to the perception of a homogeneous transparent layer conform very closely to the predictions of the new model and deviate systematically from the predictions of the additive model.

  6. Modelling plant interspecific interactions from experiments of perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-07-28

    The contribution of plant species richness to productivity and ecosystem functioning is a long-standing issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modelling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modelled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e. a model such that the interspecific interactions are more general than purely competitive. We estimated model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the experimental polyculture data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified which are the most productive species combinations. Our results demonstrate that a combination of experiments and modelling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular.
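
    As an illustration of the modelling approach (parameter values below are invented placeholders, not the estimates obtained from the Iowa experiment), a generalized Lotka-Volterra community can be simulated and its long-run biomasses compared across candidate mixtures:

        # Generalized Lotka-Volterra sketch:
        #   dN_i/dt = r_i * N_i * (1 - (N_i + sum_j a[i, j] * N_j) / K_i)
        # Negative a[i, j] values represent facilitation rather than competition.
        import numpy as np
        from scipy.integrate import solve_ivp

        r = np.array([1.0, 0.8, 1.2])                 # intrinsic growth rates (invented)
        K = np.array([500.0, 400.0, 650.0])           # monoculture carrying capacities
        a = np.array([[0.0,  0.4, -0.2],              # interaction coefficients,
                      [0.6,  0.0,  0.3],              # a[i, j] = effect of j on i
                      [0.1,  0.5,  0.0]])

        def glv(t, N):
            return r * N * (1.0 - (N + a @ N) / K)

        sol = solve_ivp(glv, (0.0, 50.0), y0=[10.0, 10.0, 10.0])
        print(sol.y[:, -1].round(1))                  # predicted biomass of each species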

  7. Analytical method for yrast line states in the interacting two-component Bose-Einstein condensate

    Institute of Scientific and Technical Information of China (English)

    解炳昊; 景辉

    2002-01-01

    The yrast spectrum for the harmonically trapped two-component Bose-Einstein condensate (BEC), omitting the difference between the two components, has been studied using an analytical method. The energy eigenstates and eigenvalues for L=0,1,2,3 are given. We illustrate that there are different eigenstate behaviours between the even L and odd L cases for the two-component BEC in two dimensions. Except for symmetric states, there are antisymmetric states for the permutation of the two components, which cannot reduce to those in a single condensate case when the value of L is odd.

  8. Accuracy assessment of linear spectral mixture model due to terrain undulation

    Science.gov (United States)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixed spectra are common in remote sensing because of limited spatial resolution and the heterogeneity of the land surface. Over the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simple and general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure for determining the proportions of endmembers (constituent materials) within a pixel from the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but research on LSMM has mostly focused on assessing nonlinear effects and on techniques for selecting endmembers; environmental conditions of the study area that can affect unmixing accuracy, such as atmospheric scattering and terrain undulation, have received little attention. This paper focuses on the accuracy uncertainty of LSMM caused by terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. Fractional abundances for different cover types were then extracted with LSMM from both the pre- and post-correction ASTER imagery. Regression analyses and an IKONOS image were used to assess the unmixing accuracy. Results showed that terrain undulation can dramatically constrain the application of LSMM in mountainous areas. Specifically, for vegetation abundances, removing terrain effects improved the unmixing accuracy (R2) by 17.6% (regression against NDVI) and 18.6% (regression against MVI), respectively. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be

  9. A Neural Network Based Hybrid Mixture Model to Extract Information from Non-linear Mixed Pixels

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2012-09-01

    Full Text Available Signals acquired by sensors in the real world are non-linear combinations, requiring non-linear mixture models to describe the resultant mixture spectra for the endmember's (pure pixel's) distribution. This communication discusses inferring class fractions through a novel hybrid mixture model (HMM). HMM is a three-step process, where the endmembers are first derived from the images themselves using the N-FINDR algorithm. These endmembers are used by the linear mixture model (LMM) in the second step, which provides an abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual ground proportions, are fed into a neural network based multi-layer perceptron (MLP) architecture to train the neurons. The neural output further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. HMM is first implemented and validated on simulated hyperspectral data of 200 bands and subsequently on real time MODIS data with a spatial resolution of 250 m. The results on computer-simulated data show that the method gives acceptable results for unmixing pixels, with an overall RMSE of 0.0089 ± 0.0022 with LMM and 0.0030 ± 0.0001 with the HMM when compared to actual class proportions. The unmixed MODIS images showed an overall RMSE with HMM of 0.0191 ± 0.022, compared to an overall RMSE of 0.2005 ± 0.41 for the LMM output alone, indicating that individual class abundances obtained from HMM are very close to the real observations.
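
    A compressed sketch of the three-step pipeline (endmember extraction, linear unmixing, neural refinement) is given below. It is an illustration under assumptions rather than the authors' implementation: random synthetic spectra stand in for imagery, plain least squares replaces the N-FINDR/LMM steps, and scikit-learn's MLPRegressor stands in for the MLP architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 3 endmembers, 20 bands, 500 pixels with mild non-linear mixing.
E = rng.uniform(0.0, 1.0, size=(20, 3))          # endmember spectra (step 1 output)
F_true = rng.dirichlet(np.ones(3), size=500)     # reference class fractions
X = F_true @ E.T + 0.05 * (F_true ** 2) @ E.T    # non-linearly mixed spectra
X += rng.normal(0.0, 0.01, X.shape)

# Step 2: linear mixture model -- unconstrained least-squares abundance estimates.
F_lmm = np.linalg.lstsq(E, X.T, rcond=None)[0].T

# Step 3: an MLP refines the LMM abundances towards the reference fractions
# (the "training samples representing actual ground proportions").
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(F_lmm[:400], F_true[:400])
F_hmm = mlp.predict(F_lmm[400:])

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("LMM RMSE:", rmse(F_lmm[400:], F_true[400:]))
print("HMM RMSE:", rmse(F_hmm, F_true[400:]))
```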

  10. A proposed experimental platform for measuring the properties of warm dense mixtures: Testing the applicability of the linear mixing model

    Science.gov (United States)

    Hawreliak, James

    2017-06-01

    This paper presents a proposed experimental technique for investigating the impact of chemical interactions in warm dense liquid mixtures. It uses experimental equation of state (EOS) measurements of warm dense liquid mixtures with different compositions to determine the deviation from the linear mixing model. Statistical mechanics is used to derive the EOS of a mixture with a constant-pressure linear mixing term (Amagat's rule) and an interspecies interaction term. A ratio between the particle densities of two different mixture compositions, K(P, T)_{i:ii}, is defined. By comparing this ratio for a range of mixtures, the impact of interspecies interactions can be studied. Hydrodynamic simulations of mixtures with different carbon/hydrogen ratios are used to demonstrate the application of this proposed technique to multiple-shock and ramp compression experiments. The limit of the pressure correction due to interspecies interactions that can be measured with this methodology is determined by the uncertainty in the density measurement.
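
    As a rough sketch of the comparison being proposed (not the author's exact derivation), Amagat's constant-pressure linear mixing rule with an interaction correction, and the density ratio between two mixture compositions, can be written as follows; the correction term ΔV_int and the notation are assumptions introduced here for illustration.

```latex
% Amagat's rule (ideal linear mixing at constant P, T) plus an interspecies interaction term
V_{\mathrm{mix}}(P,T) = \sum_{k} x_k\, V_k(P,T) + \Delta V_{\mathrm{int}}(P,T),
\qquad
K(P,T)_{i:ii} = \frac{n_{i}(P,T)}{n_{ii}(P,T)}
```

    A measured K that departs from the value predicted with ΔV_int = 0 then signals interspecies interactions.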

  11. Granular mixtures modeled as elastic hard spheres subject to a drag force.

    Science.gov (United States)

    Vega Reyes, Francisco; Garzó, Vicente; Santos, Andrés

    2007-06-01

    Granular gaseous mixtures under rapid flow conditions are usually modeled as a multicomponent system of smooth inelastic hard disks (two dimensions) or spheres (three dimensions) with constant coefficients of normal restitution α_ij. In the low density regime an adequate framework is provided by the set of coupled inelastic Boltzmann equations. Due to the intricacy of the inelastic Boltzmann collision operator, in this paper we propose a simpler model of elastic hard disks or spheres subject to the action of an effective drag force, which mimics the effect of dissipation present in the original granular gas. For each collision term ij, the model has two parameters: a dimensionless factor β_ij modifying the collision rate of the elastic hard spheres, and the drag coefficient ζ_ij. Both parameters are determined by requiring that the model reproduces the collisional transfers of momentum and energy of the true inelastic Boltzmann operator, yielding β_ij = (1 + α_ij)/2 and ζ_ij ∝ 1 − α_ij², where the proportionality constant is a function of the partial densities, velocities, and temperatures of species i and j. The Navier-Stokes transport coefficients for a binary mixture are obtained from the model by application of the Chapman-Enskog method. The three coefficients associated with the mass flux are the same as those obtained from the inelastic Boltzmann equation, while the remaining four transport coefficients show a generally good agreement, especially in the case of the thermal conductivity. The discrepancies between both descriptions are seen to be similar to those found for monocomponent gases. Finally, the approximate decomposition of the inelastic Boltzmann collision operator is exploited to construct a model kinetic equation for granular mixtures as a direct extension of a known kinetic model for elastic collisions.
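
    The mapping from the restitution coefficients to the effective elastic-plus-drag parameters can be written down in a few lines; in the sketch below the prefactor nu0, standing in for the unspecified proportionality constant (which in the model depends on the partial densities, velocities, and temperatures), is a hypothetical placeholder.

```python
import numpy as np

def effective_parameters(alpha, nu0=1.0):
    """Map the coefficients of normal restitution alpha_ij of an inelastic
    hard-sphere mixture to the parameters of the equivalent model of elastic
    spheres subject to a drag force: collision-rate factors beta_ij and drag
    coefficients zeta_ij (nu0 is a placeholder prefactor)."""
    alpha = np.asarray(alpha, dtype=float)
    beta = (1.0 + alpha) / 2.0        # reproduces the collisional momentum transfer
    zeta = nu0 * (1.0 - alpha ** 2)   # reproduces the collisional energy dissipation
    return beta, zeta

# Binary mixture: alpha_11, alpha_12 (= alpha_21), alpha_22
alpha_ij = np.array([[0.9, 0.8],
                     [0.8, 0.7]])
beta_ij, zeta_ij = effective_parameters(alpha_ij)
print("beta_ij:\n", beta_ij)
print("zeta_ij:\n", zeta_ij)
```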

  12. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.;

    2005-01-01

    .15 and for several methane compositions. Although very far from real petroleum fluids, these mixtures are interesting in order to study the potential of extending various models to the simulation of complex fluids with asymmetrical components (light/heavy hydrocarbon). These data (575 data points) have been...... discussed in the framework of recent representative models (hard sphere scheme, friction theory, and free volume model) and with mixing laws and two empirical models (particularly the LBC model which is commonly used in petroleum engineering, and the self-referencing model). This comparative study shows...

  13. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

    Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) in YADE, an open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained are then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and the simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
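
    Since the DEM contact law is calibrated against DSR measurements of the complex shear modulus, it may help to recall how a Burgers element (a Maxwell unit in series with a Kelvin-Voigt unit) converts frequency-domain data into its four parameters. The sketch below uses made-up parameter values, not the calibrated ones from the paper.

```python
import numpy as np

def burgers_complex_modulus(omega, Em, eta_m, Ek, eta_k):
    """Complex shear modulus G*(omega) of a Burgers model: a Maxwell unit
    (Em, eta_m) in series with a Kelvin-Voigt unit (Ek, eta_k).
    Compliances of units in series add, and G* = 1 / J*."""
    J = 1.0 / Em + 1.0 / (1j * omega * eta_m) + 1.0 / (Ek + 1j * omega * eta_k)
    return 1.0 / J

# Hypothetical parameters (Pa, Pa*s) for pitch at 150 degrees C -- illustrative only.
omega = 2.0 * np.pi * 60.0                    # 60 Hz test frequency
G = burgers_complex_modulus(omega, Em=2.0e3, eta_m=1.5e2, Ek=5.0e3, eta_k=1.0e2)
print("|G*| =", abs(G), "Pa")
print("storage modulus:", G.real, "Pa, loss modulus:", G.imag, "Pa")
print("phase angle:", np.degrees(np.angle(G)), "deg")
```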

  14. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  15. Highlighting pitfalls in the Maxwell-Stefan modeling of water-alcohol mixture permeation across pervaporation membranes

    NARCIS (Netherlands)

    Krishna, R.; van Baten, J.M.

    2010-01-01

    The Maxwell-Stefan (M-S) equations are widely used for modeling permeation of water-alcohol mixtures across microporous membranes in pervaporation and dehydration process applications. For binary mixtures, for example, the following set of assumptions is commonly invoked, either explicitly or
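
    For context, the general Maxwell-Stefan balance, of which the adsorbed-phase membrane-permeation formulation discussed in the paper is a special case, is commonly quoted in the following form; the exact set of assumptions the authors scrutinize is not reproduced here, and the notation is a generic choice.

```latex
% Maxwell-Stefan relations for an n-component mixture:
% the chemical-potential gradient of species i is balanced by pairwise friction terms
% (x_i: mole fractions, N_i: molar fluxes, c_t: total concentration, D_ij: M-S diffusivities)
-\frac{x_i}{RT}\,\nabla \mu_i = \sum_{j \neq i}^{n} \frac{x_j N_i - x_i N_j}{c_t D_{ij}},
\qquad i = 1,\dots,n-1
```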

  16. Dynamic form factor of two-component plasmas beyond the static local field approximation

    CERN Document Server

    Daligault, J

    2003-01-01

    The spectrum of ion density fluctuations in a strongly coupled plasma is described both within the static, G(k, 0), and the high-frequency, G(k, ∞), local field approximations. By a direct comparison with molecular dynamics data, we find that for large coupling, G(k, 0) is inadequate. Based on this result, we employ the Zwanzig-Mori memory function approach to model the Thomson scattering cross section, i.e. the electron dynamic form factor S_ee(k, ω) of a dense two-component plasma. We show the sensitivity of S_ee(k, ω) to three approximations for G(k, ω).

  17. Histidine phosphotransfer proteins in fungal two-component signal transduction pathways.

    Science.gov (United States)

    Fassler, Jan S; West, Ann H

    2013-08-01

    The histidine phosphotransfer (HPt) protein Ypd1 is an important participant in the Saccharomyces cerevisiae multistep two-component signal transduction pathway and, unlike the expanded histidine kinase gene family, is encoded by a single gene in nearly all model and pathogenic fungi. Ypd1 is essential for viability in both S. cerevisiae and in Cryptococcus neoformans. These and other aspects of Ypd1 biology, combined with the availability of structural and mutational data in S. cerevisiae, suggest that the essential interactions between Ypd1 and response regulator domains would be a good target for antifungal drug development. The goal of this minireview is to summarize the wealth of data on S. cerevisiae Ypd1 and to consider the potential benefits of conducting related studies in pathogenic fungi.

  18. Cross-talk and specificity in two-component signal transduction pathways.

    Science.gov (United States)

    Agrawal, Ruchi; Sahoo, Bikash Kumar; Saini, Deepak Kumar

    2016-05-01

    Two-component signaling systems (TCSs) are composed of two proteins, sensor kinases and response regulators, which can cross-talk and integrate information between them by virtue of high sequence conservation and their modular nature, to generate concerted and diversified responses. However, TCSs have been shown to be insulated, to facilitate linear signal transmission and response generation. Here, we discuss various mechanisms that confer specificity or cross-talk among TCSs. The presented models are supported with evidence that indicates the physiological significance of the observed TCS signaling architecture. Overall, we propose that the signaling topology of any TCS cannot be predicted using obvious sequence or structural rules, as TCS signaling is regulated by multiple factors, including the spatial and temporal distribution of the participating proteins.

  19. Images and Spectral Properties of Two Component Advective Flows Around Black Holes: Effects of Photon Bending

    CERN Document Server

    Chatterjee, Arka; Ghosh, Himadri

    2016-01-01

    Two component advective flow (TCAF) successfully explains the spectral and timing properties of black hole candidates. We study the nature of photon trajectories in the vicinity of a Schwarzschild black hole and incorporate this in predicting images of TCAF with a black hole at the centre. We also compute the emitted spectra. We employ a Monte-Carlo simulation technique to achieve our goal. For accurate prediction of the image and the spectra, null trajectories are generated without constraining the motion to any specific plane. Redshift, bolometric flux and the corresponding temperature have been calculated with appropriate relativistic considerations. The centrifugal barrier dominated boundary layer (CENBOL) near the inner region of the disk, which acts as the Compton cloud, is appropriately modelled as a thick accretion disk in Schwarzschild geometry for the purpose of imaging and computing spectra. The variations of the spectra and image with physical parameters such as the accretion rate ($\dot{m}_d$) and inclination...

  20. Microstructural Analysis and Rheological Modeling of Asphalt Mixtures Containing Recycled Asphalt Materials

    Directory of Open Access Journals (Sweden)

    Augusto Cannone Falchetto

    2014-09-01

    Full Text Available The use of recycled materials in pavement construction has seen, over the years, a significant increase closely associated with substantial economic and environmental benefits. During the past decades, many transportation agencies have evaluated the effect of adding Reclaimed Asphalt Pavement (RAP) and, more recently, Recycled Asphalt Shingles (RAS) on the performance of asphalt pavement, while limits have been proposed on the amount of recycled materials that can be used. In this paper, the effect of adding RAP and RAS on the microstructural and low-temperature properties of asphalt mixtures is investigated using digital image processing (DIP) and modeling of rheological data obtained with the Bending Beam Rheometer (BBR). Detailed information on the internal microstructure of asphalt mixtures is acquired from digital images of small beam specimens and numerical estimates of spatial correlation functions. It is found that RAP increases the autocorrelation length (ACL) of the spatial distribution of the aggregate, asphalt mastic, and air void phases, while an opposite trend is observed when RAS is included. Analogical and semi-empirical models are used to back-calculate binder creep stiffness from mixture experimental data. Differences between the back-calculated results and the experimental data suggest limited or partial blending between new and aged binder.
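
    The microstructural metric used above, the autocorrelation length of a phase distribution, can be estimated from a segmented image with an FFT-based spatial correlation function. The sketch below is a generic illustration on a random binary image, not the paper's DIP workflow, and the 1/e cut-off used to read off the correlation length is an assumed convention.

```python
import numpy as np

def autocorrelation_length(phase_mask, threshold=np.exp(-1.0)):
    """Estimate the autocorrelation length (in pixels) of a binary phase image:
    compute the normalized two-point autocorrelation via FFT, radially average it,
    and return the first lag at which it drops below the threshold."""
    f = phase_mask.astype(float)
    f -= f.mean()
    # Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum.
    corr = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
    corr = np.fft.fftshift(corr) / corr.max()
    cy, cx = np.array(corr.shape) // 2
    yy, xx = np.indices(corr.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=corr.ravel()) / np.bincount(r.ravel())
    below = np.nonzero(radial < threshold)[0]
    return int(below[0]) if below.size else len(radial)

# Random binary "aggregate" mask as a stand-in for a segmented beam image.
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) < 0.3
print("ACL estimate (pixels):", autocorrelation_length(mask))
```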