WorldWideScience

Sample records for two-component mixture models

  1. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop that is widely used in food and non-food products such as cookies, vegetable oil, cosmetics and household products. It is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years, which encourages illegal logging and destroys natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, the paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
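
    The record above describes fitting a two-component normal mixture by maximum likelihood. The sketch below illustrates the same idea on simulated data, using scipy's BFGS optimiser on the mixture log-likelihood rather than the authors' Newton-Raphson implementation; the data and starting values are invented for illustration.

    ```python
    # Hedged sketch: maximum likelihood for a two-component normal mixture.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)
    # Hypothetical returns-like series standing in for the palm-oil/exchange-rate data.
    x = np.concatenate([rng.normal(-0.5, 0.4, 300), rng.normal(0.8, 0.9, 700)])

    def neg_log_lik(theta, x):
        # theta = (logit of mixing weight, mu1, log sd1, mu2, log sd2)
        w = 1.0 / (1.0 + np.exp(-theta[0]))
        mu1, s1, mu2, s2 = theta[1], np.exp(theta[2]), theta[3], np.exp(theta[4])
        dens = w * stats.norm.pdf(x, mu1, s1) + (1.0 - w) * stats.norm.pdf(x, mu2, s2)
        return -np.sum(np.log(dens))

    start = np.array([0.0, -1.0, np.log(0.5), 1.0, np.log(1.0)])
    fit = optimize.minimize(neg_log_lik, start, args=(x,), method="BFGS")
    w_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
    print("weight:", w_hat, "means:", fit.x[1], fit.x[3],
          "sds:", np.exp(fit.x[2]), np.exp(fit.x[4]))
    ```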

  2. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analyses of medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. Existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm that maximizes a penalized likelihood. For inferential purposes, we apply the Louis formula to obtain point-wise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns. © 2011, The International Biometric Society.
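
    For readers unfamiliar with the mixture cure structure referred to above, the following sketch evaluates the population survival function S_pop(t|x) = (1 - pi(x)) + pi(x) * S_u(t|x), where pi(x) is the noncured (susceptible) probability. A logistic pi and a Weibull S_u are used as simple parametric stand-ins; the paper itself estimates both components nonparametrically with penalized splines and an EM algorithm.

    ```python
    # Hedged sketch of the two-component mixture cure structure (parametric stand-ins).
    import numpy as np

    def noncured_prob(x, beta0=-0.5, beta1=1.2):
        """Logistic model for the probability of being susceptible (noncured)."""
        return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

    def susceptible_survival(t, shape=1.5, scale=2.0):
        """Weibull survival for the susceptible subpopulation."""
        return np.exp(-(t / scale) ** shape)

    def population_survival(t, x):
        p = noncured_prob(x)
        return (1.0 - p) + p * susceptible_survival(t)

    t = np.linspace(0.0, 10.0, 6)
    print(population_survival(t, x=0.8))  # plateaus at 1 - pi(x): the cured fraction
    ```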

  3. Measuring two-phase and two-component mixtures by radiometric technique

    International Nuclear Information System (INIS)

    Mackuliak, D.; Rajniak, I.

    1984-01-01

    The possibility of applying the radiometric method to the measurement of the water content of steam was tested. The experiments were carried out under model conditions in which steam was replaced with a two-component mixture of water and air. The beta radiation source was the isotope ²⁰⁴Tl (E_max = 0.765 MeV) with an activity of 19.35 MBq. Measurements were carried out within the range of the surface density of the mixture from 0.119 kg·m⁻² to 0.130 kg·m⁻². The mixture speed was 5.1 m·s⁻¹ to 7.1 m·s⁻¹. The observed dependence of relative pulse frequency on the specific water content in the mixture was approximated by a linear regression. (B.S.)

  4. Two-component scattering model and the electron density spectrum

    Science.gov (United States)

    Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.

    2010-02-01

    In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in the pulsars PSR B2111+46 and PSR B0136+57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsars PSR B2111+46 and PSR B0136+57.
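
    The quantity compared with observations in this record is the flux-density structure function, D(tau) = <[F(t + tau) - F(t)]^2>. The sketch below computes it for an evenly sampled synthetic series; the real pulsar data are irregularly sampled and are not reproduced here.

    ```python
    # Hedged sketch: first-order flux-density structure function and its log slope.
    import numpy as np

    def structure_function(flux, max_lag):
        """Return lags (in samples) and D(lag) for an evenly sampled series."""
        lags = np.arange(1, max_lag + 1)
        d = np.array([np.mean((flux[lag:] - flux[:-lag]) ** 2) for lag in lags])
        return lags, d

    rng = np.random.default_rng(1)
    # Toy red-noise-like flux series (illustrative only).
    flux = 1.0 + 0.1 * np.cumsum(rng.normal(size=500)) / np.sqrt(500)
    lags, d = structure_function(flux, max_lag=100)
    # The logarithmic slope of D(lag) is the quantity compared with the models.
    slope = np.polyfit(np.log(lags), np.log(d), 1)[0]
    print("structure-function slope:", slope)
    ```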

  5. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  6. Exploring a minimal two-component p53 model

    International Nuclear Information System (INIS)

    Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei

    2010-01-01

    The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses the essential feedback loops was constructed and explored. Diverse bifurcation properties, such as bistability and oscillation, emerge when the feedback strength is manipulated. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified an irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are the most sensitive processes. We also found that the large variations in the amplitude of p53 pulses observed in experiments can be attributed to the overall parameter sensitivity of the pulse amplitude. The combined approach of bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates a fertile ground for understanding the regulatory patterns of other biological networks.
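
    As a point of reference for the two-component structure discussed above, the sketch below integrates a generic minimal p53-MDM2 negative-feedback loop (p53 induces MDM2, MDM2 degrades p53). The equations and parameter values are illustrative stand-ins, not the specific model analysed in the paper.

    ```python
    # Hedged sketch: generic minimal two-component p53-MDM2 feedback loop (assumed form).
    import numpy as np
    from scipy.integrate import solve_ivp

    def p53_mdm2(t, y, k_p=1.0, d_p=0.1, k_deg=2.0, k_m=1.0, K=1.0, n=4, d_m=0.8):
        p, m = y
        dp = k_p - d_p * p - k_deg * m * p          # basal production, decay, MDM2-mediated degradation
        dm = k_m * p**n / (K**n + p**n) - d_m * m   # p53-mediated MDM2 induction, decay
        return [dp, dm]

    sol = solve_ivp(p53_mdm2, (0.0, 100.0), [0.1, 0.1], dense_output=True, max_step=0.1)
    t = np.linspace(0.0, 100.0, 1000)
    p, m = sol.sol(t)
    # Depending on the feedback strengths, the trajectory relaxes or shows pulses;
    # the bifurcation structure is what the paper maps out.
    print("p53 range:", p.min(), p.max(), "final MDM2:", m[-1])
    ```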

  7. Multiparticle production in a two-component dual parton model

    International Nuclear Information System (INIS)

    Aurenche, P.; Bopp, F.W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.

    1992-01-01

    The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data

  8. Superfluid drag in the two-component Bose-Hubbard model

    Science.gov (United States)

    Sellin, Karl; Babaev, Egor

    2018-03-01

    In multicomponent superfluids and superconductors, co- and counterflows of components have, in general, different properties. A. F. Andreev and E. P. Bashkin [Sov. Phys. JETP 42, 164 (1975)] discussed, in the context of He3/He4 superfluid mixtures, that interparticle interactions produce a dissipationless drag. The drag can be understood as a superflow of one component induced by phase gradients of the other component. Importantly, the drag can be both positive (entrainment) and negative (counterflow). The effect is known to have crucial importance for many properties of diverse physical systems ranging from the dynamics of neutron stars and rotational responses of Bose mixtures of ultracold atoms to magnetic responses of multicomponent superconductors. Although substantial literature exists that includes the drag interaction phenomenologically, only a few regimes are covered by quantitative studies of the microscopic origin of the drag and its dependence on microscopic parameters. Here we study the microscopic origin and strength of the drag interaction in a quantum system of two-component bosons on a lattice with short-range interaction. By performing quantum Monte Carlo simulations of a two-component Bose-Hubbard model we obtain dependencies of the drag strength on the boson-boson interactions and properties of the optical lattice. Of particular interest are the strongly correlated regimes where the ratio of coflow and counterflow superfluid stiffnesses can diverge, corresponding to the case of saturated drag.

  9. Two-component network model in voice identification technologies

    Directory of Open Access Journals (Sweden)

    Edita K. Kuular

    2018-03-01

    Among the most important parameters of biometric systems with voice modalities that determine their effectiveness, along with reliability and noise immunity, is the speed of identification and verification of a person. This parameter is especially critical when processing large-scale voice databases in real time. Many research studies in this area are aimed at developing new, and improving existing, algorithms for the representation and processing of voice records to ensure high performance of voice biometric systems. A promising direction is the modern approach based on the complex-network platform for solving massive problems with a large number of elements while taking their interrelationships into account. For example, some works on the analysis and recognition of faces from photographs transform the images into complex networks for subsequent processing by standard techniques. One of the first applications of complex networks to the analysis of sound series (musical and speech) is the description of frequency characteristics by constructing network models, i.e. converting the series into networks. On the network-ontology platform, a previously proposed technique of audio information representation aimed at automatic analysis and speaker recognition has been developed. It involves converting the information into an associative semantic (cognitive) network structure with both amplitude and frequency components. Two speaker exemplars were recorded and transformed into the corresponding networks, and their topological metrics were compared. The set of topological metrics for each of the network models (amplitude and frequency) is a vector, and together they form a matrix, a digital "network" voiceprint. The proposed network approach, with its sensitivity to personal conditions (physiological, psychological, emotional), might be useful not only for person identification

  10. Stability equation and two-component Eigenmode for domain walls in scalar potential model

    International Nuclear Information System (INIS)

    Dias, G.S.; Graca, E.L.; Rodrigues, R. de Lima

    2002-08-01

    Supersymmetric quantum mechanics involving a two-component representation and two-component eigenfunctions is applied to obtain the stability equation associated with a potential model formulated in terms of two coupled real scalar fields. We investigate the question of stability by introducing an operator technique for the Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS states on two domain walls in a scalar potential model with minimal N = 1 supersymmetry. (author)

  11. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    International Nuclear Information System (INIS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.1 arcmin resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.1 arcmin FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales.
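
    The model referred to above is a sum of two modified blackbodies, I_nu = f1 (nu/nu0)^beta1 B_nu(T1) + f2 (nu/nu0)^beta2 B_nu(T2). The sketch below evaluates such a spectrum at the Planck HFI bands; the amplitudes, spectral indices and temperatures are illustrative assumptions, not the Finkbeiner et al. or Planck best-fit values.

    ```python
    # Hedged sketch: evaluating a two-component modified-blackbody (MBB) dust spectrum.
    import numpy as np

    H = 6.62607015e-34   # Planck constant, J s
    K_B = 1.380649e-23   # Boltzmann constant, J / K
    C = 2.99792458e8     # speed of light, m / s

    def planck_bnu(nu_hz, temp_k):
        """Planck function B_nu in SI units (W m^-2 Hz^-1 sr^-1)."""
        x = H * nu_hz / (K_B * temp_k)
        return 2.0 * H * nu_hz**3 / C**2 / np.expm1(x)

    def two_mbb(nu_hz, f1, beta1, t1, f2, beta2, t2, nu0=353e9):
        """Sum of two modified blackbodies, amplitudes defined at reference frequency nu0."""
        return (f1 * (nu_hz / nu0) ** beta1 * planck_bnu(nu_hz, t1)
                + f2 * (nu_hz / nu0) ** beta2 * planck_bnu(nu_hz, t2))

    nu = np.array([100e9, 143e9, 217e9, 353e9, 545e9, 857e9])   # Planck HFI bands
    print(two_mbb(nu, f1=1.0, beta1=1.7, t1=9.0, f2=0.05, beta2=2.7, t2=16.0))
    ```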

  12. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.

  13. Isolation of EPR spectra and estimation of spin-states in two-component mixtures of paramagnets.

    Science.gov (United States)

    Chabbra, Sonia; Smith, David M; Bode, Bela E

    2018-04-26

    The presence of multiple paramagnetic species can lead to overlapping electron paramagnetic resonance (EPR) signals. This complication can be a critical obstacle for the use of EPR to unravel mechanisms and aid the understanding of earth abundant metal catalysis. Furthermore, redox or spin-crossover processes can result in the simultaneous presence of metal centres in different oxidation or spin states. In this contribution, pulse EPR experiments on model systems containing discrete mixtures of Cr(I) and Cr(III) or Cu(II) and Mn(II) complexes demonstrate the feasibility of the separation of the EPR spectra of these species by inversion recovery filters and the identification of the relevant spin states by transient nutation experiments. We demonstrate the isolation of component spectra and identification of spin states in a mixture of catalyst precursors. The usefulness of the approach is emphasised by monitoring the fate of the chromium species upon activation of an industrially used precatalyst system.

  14. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    Theoretical framework. In the present work, the dark matter candidate has two components S and S′ both of ... The scalar sector potential (for the Higgs and two real singlet scalars) in this framework can then be written .... In this work we obtain the allowed values of the model parameters (δ_2, δ′_2, M_S and M′_S) using three direct ...

  15. Correlation inequalities for two-component hypercubic φ⁴ models. Pt. 2

    International Nuclear Information System (INIS)

    Soria, J.L.; Instituto Tecnologico de Tijuana

    1990-01-01

    We continue the program started in the first paper (J. Stat. Phys. 52 (1988) 711-726). We find new and already known correlation inequalities for a family of two-component hypercubic φ⁴ models, using techniques of rotated correlation inequalities and random walk representation. (orig.)

  16. New methods for the characterization of pyrocarbon; The two component model of pyrocarbon

    Energy Technology Data Exchange (ETDEWEB)

    Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.

    1972-04-19

    In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma oxidation, wet oxidation, ultrasonic method) are presented to expose the carbon-black-like component in pyrocarbon deposited in fluidized beds. In the second part, a two-component model of pyrocarbon is proposed and illustrated by examples.

  17. Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar

    Energy Technology Data Exchange (ETDEWEB)

    Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)

    2017-10-15

    We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1)_DM and interacts with the pseudo scalar via Yukawa interaction while a Z_2 symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z_2 symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)

  18. Correlation inequalities for two-component hypercubic φ⁴ models

    International Nuclear Information System (INIS)

    Soria, J.L.

    1988-01-01

    A collection of new and already known correlation inequalities is found for a family of two-component hypercubic φ⁴ models, using techniques of duplicated variables, rotated correlation inequalities, and random walk representation. Among the interesting new inequalities are: the rotated very special Dunlop-Newman inequality ⟨φ_{1z}²; φ_{1z}² + φ_{2z}²⟩ ≥ 0, the rotated Griffiths I inequality ⟨φ_{1z}² − φ_{2z}²⟩ ≥ 0, and the anti-Lebowitz inequality u_4^{1111} ≥ 0.

  19. A two component model describing nucleon structure functions in the low-x region

    Energy Technology Data Exchange (ETDEWEB)

    Bugaev, E.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 7a, 60th October Anniversary prospect, Moscow 117312 (Russian Federation); Mangazeev, B.V. [Irkutsk State University, 1, Karl Marx Street, Irkutsk 664003 (Russian Federation)

    2009-12-15

    A two-component model describing the electromagnetic nucleon structure functions in the low-x region, based on the generalized vector dominance and color dipole approaches, is briefly described. The model operates with the mesons of the ρ family, having a mass spectrum of the form m_n² = m_ρ²(1+2n), and takes into account the nondiagonal transitions in meson-nucleon scattering. Special cut-off factors are introduced in the model to exclude the γ-qq̄-V transitions in the case of narrow qq̄ pairs. For the color dipole part of the model the well-known FKS parameterization is used.

  20. Level shift two-components autoregressive conditional heteroscedasticity modelling for WTI crude oil market

    Science.gov (United States)

    Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow

    2017-04-01

    This study investigates crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with the inclusion of an abrupt jump feature. The model is able to capture abrupt jumps, news impact, volatility clustering, long-persistence volatility and heavy-tailed errors, which are commonly observed in crude oil time series. For the empirical study, we have selected the WTI crude oil index from 2000 to 2016. The results show that including multiple abrupt jumps in the ARCH model yields significant improvements in the estimation evaluations compared with the standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
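
    To make the ingredients listed above concrete, the sketch below simulates a volatility process with a short-run GARCH-type component, abrupt shifts in the long-run variance level and heavy-tailed innovations. The dynamics and parameter values are illustrative assumptions, not the specification estimated in the paper.

    ```python
    # Hedged sketch: level shifts in the long-run variance plus a short-run GARCH component.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000

    def draw_z(rng, df=6):
        # Unit-variance Student-t innovation (heavy tails).
        return rng.standard_t(df) * np.sqrt((df - 2) / df)

    omega = np.empty(n)
    levels = [0.02, 0.10, 0.04]        # long-run variance levels (abrupt level shifts)
    breaks = [0, 800, 1400, n]         # regime boundaries
    for lvl, a, b in zip(levels, breaks[:-1], breaks[1:]):
        omega[a:b] = lvl

    alpha, beta = 0.08, 0.88           # short-run ARCH/GARCH parameters
    h = np.empty(n)                    # conditional variance
    r = np.empty(n)                    # simulated returns
    h[0] = omega[0] / (1.0 - alpha - beta)
    r[0] = np.sqrt(h[0]) * draw_z(rng)
    for t in range(1, n):
        h[t] = omega[t] + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * draw_z(rng)

    kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2
    print("sample kurtosis of simulated returns:", kurt)   # > 3, i.e. heavy-tailed
    ```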

  1. Discrete kink dynamics in hydrogen-bonded chains: The two-component model

    DEFF Research Database (Denmark)

    Karpan, V.M.; Zolotaryuk, Yaroslav; Christiansen, Peter Leth

    2004-01-01

    We study discrete topological solitary waves (kinks and antikinks) in two nonlinear diatomic chain models that describe the collective dynamics of proton transfers in one-dimensional hydrogen-bonded networks. The essential ingredients of the models are (i) a realistic (anharmonic) ion-proton interaction ... chain subject to a substrate with two optical bands), both providing a bistability of the hydrogen-bonded proton. Exact two-component (kink and antikink) discrete solutions for these models are found numerically. We compare the soliton solutions and their properties in both the one- (when the heavy ions ...) ... principal differences, like a significant difference in the stability switchings behavior for the kinks and the antikinks. Water-filled carbon nanotubes are briefly discussed as possible realistic systems where topological discrete (anti)kink states might exist.

  2. A comparison of two-component and quadratic models to assess survival of irradiated stage-7 oocytes of Drosophila melanogaster

    International Nuclear Information System (INIS)

    Peres, C.A.; Koo, J.O.

    1981-01-01

    In this paper, the quadratic model for analysing data of this kind, i.e. S/S₀ = exp(−αD − βD²), where S and S₀ are defined as before, is proposed. It is shown that the same biological interpretation can be given to the parameters α and A and to the parameters β and B. Furthermore, it is shown that the quadratic model involves one probabilistic stage more than the two-component model, and therefore the quadratic model would perhaps be more appropriate as a dose-response model for survival of irradiated stage-7 oocytes of Drosophila melanogaster. In order to apply these results, the data presented by Sankaranarayanan and by Sankaranarayanan and Volkers are reanalysed using the quadratic model. It is shown that the quadratic model fits the data better than the two-component model in most situations. (orig./AJ)
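
    The quadratic model quoted in the abstract, S/S₀ = exp(−αD − βD²), can be fitted to dose-survival data by nonlinear least squares, as in the sketch below. The dose and survival values are invented for illustration, not the Drosophila oocyte data reanalysed in the paper.

    ```python
    # Hedged sketch: least-squares fit of the quadratic dose-response model.
    import numpy as np
    from scipy.optimize import curve_fit

    def surviving_fraction(dose, alpha, beta):
        return np.exp(-alpha * dose - beta * dose ** 2)

    dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # dose, hypothetical units
    frac = np.array([1.00, 0.78, 0.55, 0.24, 0.085, 0.025])  # S/S0, hypothetical values

    (alpha_hat, beta_hat), cov = curve_fit(surviving_fraction, dose, frac, p0=(0.2, 0.01))
    print("alpha =", alpha_hat, "beta =", beta_hat)
    ```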

  3. Viscous Growth in Spinodal Decomposition of the Two-component Lennard-Jones Model in Two Dimensions

    DEFF Research Database (Denmark)

    Laradji, M.; Toxvaerd, S.; Mouritsen, Ole G.

    1997-01-01

    The dynamics of phase separation of a two-component Lennard-Jones model in three dimensions is investigated by means of large scale molecular dynamics simulation. A systematic study over a wide range of quench temperatures within the coexistence region shows that the binary system reaches...

  4. Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model

    International Nuclear Information System (INIS)

    Baker, P.L.; Burton, W.B.

    1975-01-01

    The high-resolution 21-cm hydrogen line observations at low galactic latitude of Burton and Verschuur have been analyzed to determine the large-scale distribution of galactic hydrogen. The distribution parameters are found by model fitting. Optical depth effects have been computed using a two-component gas model. Analysis shows that a multiphase description of the medium is essential to the interpretation of low-latitude emission observations. Where possible, the number of free parameters in the gas model has been reduced. Calculations were performed for a one-component, uniform spin temperature, gas model in order to show the systematic departures between this model and the data caused by the incorrect treatment of the optical depth effect. In the two-component gas, radiative transfer is treated by a Monte Carlo calculation since the opacity of the gas arises in a randomly distributed, cold, optically thick, low velocity-dispersion, cloud medium. The emission arises in both the cloud medium and a smoothly distributed, optically thin, high velocity-dispersion, intercloud medium. The synthetic profiles computed from the two-component model reproduce both the large-scale trends of the observed emission profiles and the magnitude of the small-scale emission irregularities. The analysis permits the determination of values for the thickness of the galactic disk between half-density points, the total observed neutral hydrogen mass of the Galaxy, and the central number density of the intercloud atoms. In addition, the analysis is sensitive to the size of clouds contributing to the observations. Computations also show that synthetic emission profiles based on the two-component model display both the zero-velocity and high-velocity ridges, indicative of optical thinness on a large scale, in spite of the presence of optically thick gas.

  5. Photoproduction within the two-component Dual Parton Model: amplitudes and cross sections

    International Nuclear Information System (INIS)

    Engel, R.; Siegen Univ.

    1995-01-01

    In the framework of the Dual Parton Model an approximation scheme to describe high energy photoproduction processes is presented. Based on the distinction between direct, resolved soft, and resolved hard interaction processes we construct effective impact parameter amplitudes. In order to treat low mass diffraction within the eikonal formalism in a consistent way a phenomenological ansatz is proposed. The free parameters of the model are determined by fits to high energy hadro- and photoproduction cross sections. We calculate the partial photoproduction cross sections and discuss predictions of the model at HERA energies. Using hadro- and photoproduction data together, the uncertainties of the model predictions are strongly reduced. (orig.)

  6. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    A two-component dark matter model with real singlet scalars confronting GeV γ-ray excess from galactic centre and Fermi bubble. Debasish Majumdar, Kamakshya Prasad Modak, Subhendu Rakshit. Special: Cosmology, Volume 86, Issue ...

  7. Continuum model of the two-component Becker-Döring equations

    Directory of Open Access Journals (Sweden)

    Ali Reza Soheili

    2004-01-01

    The process of collision between particles is a subject of interest in many fields of physics, astronomy, polymer physics, atmospheric physics, and colloid chemistry. If two types of particles are allowed to participate in the cluster coalescence, then the time evolution of the cluster distribution is described by an infinite system of ordinary differential equations. In this paper, we describe the model with a second-order two-dimensional partial differential equation, as a continuum model.

  8. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of results of an environmental monitoring program are often based on the evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method for re-evaluation of the errors reported in a large set of results of total alpha measurements in an environmental sample is presented. The results show that the mean values calculated taking into account the new errors are higher than those obtained with the original errors, indicating that the analytical errors reported previously were underestimated in the region of lower concentrations. (author)
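
    The Rocke-Lorenzato model referred to above writes a measurement at true concentration mu as x = mu * exp(eta) + eps, with an additive error eps ~ N(0, sigma_add^2) that dominates near zero and a multiplicative error eta ~ N(0, sigma_mult^2) that dominates at high concentration. The sketch below checks the implied variance formula against simulation; the parameter values are illustrative assumptions.

    ```python
    # Hedged sketch: two-component (additive + multiplicative) measurement-error model.
    import numpy as np

    def rl_variance(mu, sigma_add, sigma_mult):
        """Variance of a measurement at true concentration mu under the model."""
        s2 = sigma_mult ** 2
        return mu ** 2 * np.exp(s2) * (np.exp(s2) - 1.0) + sigma_add ** 2

    def rl_simulate(mu, sigma_add, sigma_mult, size, rng):
        return mu * np.exp(rng.normal(0.0, sigma_mult, size)) + rng.normal(0.0, sigma_add, size)

    rng = np.random.default_rng(7)
    for mu in (0.0, 0.5, 5.0, 50.0):
        sample = rl_simulate(mu, sigma_add=0.2, sigma_mult=0.1, size=100_000, rng=rng)
        # Empirical variance should match the model variance at each concentration.
        print(mu, sample.var(), rl_variance(mu, 0.2, 0.1))
    ```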

  9. Continuum model of the two-component Becker-Döring equations

    OpenAIRE

    Soheili, Ali Reza

    2004-01-01

    The process of collision between particles is a subject of interest in many fields of physics, astronomy, polymer physics, atmospheric physics, and colloid chemistry. If two types of particles are allowed to participate in the cluster coalescence, then the time evolution of the cluster distribution is described by an infinite system of ordinary differential equations. In this paper, we describe the model with a second-order two-dimensional partial differential equation, as a continuum model.

  10. Field-theoretic model of Harari's two component phenomenological theory of high energy hadron scattering

    International Nuclear Information System (INIS)

    Dymski, T.C.

    1976-01-01

    For high energy scattering of pseudoscalar particles on spin-1/2 particles, the transition amplitude (for a given signature) is constructed as an infinite sum over spin of boson-exchange graphs of the Feynman type, each of which has impact parameters up to some value R completely removed. This amplitude is advanced as a field-theoretic realization of the nondiffractive component of Harari's dual absorption model. Comparing with π±p → π±p and π⁻p → π⁰n data shows that the imaginary parts of both helicity amplitudes are excellent, for either signature.

  11. A two component model for thermal emission from organic grains in Comet Halley

    Science.gov (United States)

    Chyba, Christopher; Sagan, Carl

    1988-01-01

    Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometers, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residue of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate) and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 groups will emit at 3.4 micrometers under suitable conditions. Therefore tentative identifications must rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum, but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low-occupancy methane ice clathrate, it is argued that the lab synthesis of the organic residue well simulates the radiation processing experienced by Comet Halley.

  12. The two-component spin-fermion model for high-Tc cuprates: its applications in neutron scattering and ARPES experiments

    International Nuclear Information System (INIS)

    Bang, Yunkyu

    2012-01-01

    Motivated by neutron scattering experiments in high-Tc cuprates, we propose the two-component spin-fermion model as a minimal phenomenological model, which has both local spins and itinerant fermions as independent degrees of freedom (d.o.f.). Our calculations of the dynamic spin correlation function provide a successful description of the puzzling neutron experiment data and show that: (i) the upward dispersion branch of magnetic excitations is mostly due to local spin excitations; (ii) the downward dispersion branch is from collective particle-hole excitations of fermions; and (iii) the resonance mode is a mixture of both d.o.f. Using the same model with the same set of parameters, we calculated the renormalized quasiparticle (q.p.) dispersion and successfully reproduced one of the key features of the angle-resolved photoemission spectroscopy (ARPES) experiments, namely the high-energy kink structure in the fermion q.p. dispersion, thus supporting the two-component spin-fermion phenomenology. (paper)

  13. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model.

    Science.gov (United States)

    Butlitsky, M A; Zelener, B B; Zelener, B V

    2014-07-14

    A two-component plasma model, which we call a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, in which zero-size charges interact via the classical Coulomb law, with an important difference in the interaction of opposite charges: electrons and protons interact via the Coulomb law at large interparticle distances, while the interaction potential is cut off at small distances. The cut-off distance is defined by an arbitrary parameter ε, which depends on the system temperature. All the thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_BT, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem works in this model. All the calculations were carried out over a wide range of the dimensionless parameters ε and γ in order to find the phase transition region, critical point, spinodal and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point being in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume v* = 1/γ³ and the reduced temperature T* = ε⁻¹.
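
    A quick numerical check of the reduced variables quoted above (T* = 1/ε and v* = 1/γ³ at the reported critical point):

    ```python
    # Check of the reduced variables at the quoted critical point.
    eps_crit, gamma_crit = 13.0, 1.8
    print("T*_crit ~", 1.0 / eps_crit)          # ~0.077, matching T*_crit ~ 0.076
    print("v*_crit ~", 1.0 / gamma_crit ** 3)   # ~0.171, matching v*_crit ~ 0.17
    ```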

  14. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and webtool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data that are commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and stratified sampling (the two approaches of superpopulation and a finite population are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly-parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  15. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
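
    The sketch below runs EM for a plain two-component Poisson mixture on simulated counts, to illustrate the clustering-into-risk-components idea. It deliberately omits the regression and concomitant-variable parts of the models compared in the paper.

    ```python
    # Hedged sketch: EM for a two-component Poisson mixture (no covariates).
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    y = np.concatenate([rng.poisson(1.0, 400), rng.poisson(6.0, 200)])   # low/high-risk counts

    w, lam1, lam2 = 0.5, 0.5, 5.0            # initial mixing weight and component rates
    for _ in range(200):
        # E-step: posterior probability that each observation belongs to component 1.
        p1 = w * poisson.pmf(y, lam1)
        p2 = (1.0 - w) * poisson.pmf(y, lam2)
        resp = p1 / (p1 + p2)
        # M-step: update the weight and the component rates.
        w = resp.mean()
        lam1 = np.sum(resp * y) / np.sum(resp)
        lam2 = np.sum((1.0 - resp) * y) / np.sum(1.0 - resp)

    print("weight:", w, "rates:", lam1, lam2)
    ```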

  16. Influence of electron-phonon interaction on soliton mediated spin-charge conversion effects in two-component polymer model

    International Nuclear Information System (INIS)

    Sergeenkov, S.; Moraes, F.; Furtado, C.; Araujo-Moreira, F.M.

    2010-01-01

    By mapping a Hubbard-like model describing a two-component polymer in the presence of strong enough electron-phonon interactions (κ) onto the system of two coupled nonlinear Schroedinger equations with U(2) symmetry group, some nontrivial correlations between the soliton-mediated charge Q and spin S degrees of freedom are obtained. Namely, in addition to charge fractionalization and reentrant-like behavior of both Q(κ) and S(κ), the model also predicts a decrease of soliton velocity with κ as well as spin-charge conversion effects which manifest themselves through an explicit S(Q,Ω) dependence (with Ω being a mixing angle between spin-up and spin-down electron amplitudes). A possibility to observe the predicted effects in low-dimensional systems with charge and spin soliton carriers is discussed.

  17. Simple Analytical Forms of the Perpendicular Diffusion Coefficient for Two-component Turbulence. III. Damping Model of Dynamical Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Gammon, M.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada)

    2017-10-01

    In several astrophysical applications one needs analytical forms of cosmic-ray diffusion parameters. Some examples are studies of diffusive shock acceleration and solar modulation. In the current article we explore perpendicular diffusion based on the unified nonlinear transport theory. While we focused on magnetostatic turbulence in Paper I, we included the effect of dynamical turbulence in Paper II of the series. In the latter paper we assumed that the temporal correlation time does not depend on the wavenumber. More realistic models have been proposed in the past, such as the so-called damping model of dynamical turbulence. In the present paper we derive analytical forms for the perpendicular diffusion coefficient of energetic particles in two-component turbulence for this type of time-dependent turbulence. We present new formulas for the perpendicular diffusion coefficient and we derive a condition for which the magnetostatic result is recovered.

  18. Three-body recombination of two-component cold atomic gases into deep dimers in an optical model

    DEFF Research Database (Denmark)

    Mikkelsen, Mathias; Jensen, A. S.; Fedorov, D. V.

    2015-01-01

    We consider three-body recombination into deep dimers in a mass-imbalanced two-component atomic gas. We use an optical model where a phenomenological imaginary potential is added to the lowest adiabatic hyper-spherical potential. The consequent imaginary part of the energy eigenvalue corresponds to the decay rate or recombination probability of the three-body system. The method is formulated in detail and the relevant qualitative features are discussed as functions of scattering lengths and masses. We use a zero-range model in analyses of recent recombination data. The dominating scattering length is usually related to the non-equal two-body systems. We account for temperature smearing, which tends to wipe out the higher-lying Efimov peaks. The range and the strength of the imaginary potential determine positions and shapes of the Efimov peaks as well as the absolute value of the recombination rate.

  19. Evaluation of the H-point standard additions method (HPSAM) and the generalized H-point standard additions method (GHPSAM) for the UV-analysis of two-component mixtures.

    Science.gov (United States)

    Hund, E; Massart, D L; Smeyers-Verbeke, J

    1999-10-01

    The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.

  20. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative relationship between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.

  1. Three-body recombination of two-component cold atomic gases into deep dimers in an optical model

    International Nuclear Information System (INIS)

    Mikkelsen, M; Jensen, A S; Fedorov, D V; Zinner, N T

    2015-01-01

    We consider three-body recombination into deep dimers in a mass-imbalanced two-component atomic gas. We use an optical model where a phenomenological imaginary potential is added to the lowest adiabatic hyper-spherical potential. The consequent imaginary part of the energy eigenvalue corresponds to the decay rate or recombination probability of the three-body system. The method is formulated in detail and the relevant qualitative features are discussed as functions of scattering lengths and masses. We use a zero-range model in analyses of recent recombination data. The dominating scattering length is usually related to the non-equal two-body systems. We account for temperature smearing, which tends to wipe out the higher-lying Efimov peaks. The range and the strength of the imaginary potential determine positions and shapes of the Efimov peaks as well as the absolute value of the recombination rate. The Efimov scaling between recombination peaks is calculated and shown to depend on both scattering lengths. Recombination is predicted to be largest for heavy–heavy–light systems. Universal properties of the optical parameters are indicated. We compare to available experiments and find in general very satisfactory agreement. (paper)

  2. Remote sensing of particle dynamics: a two-component unmixing model in a western UK shelf sea.

    Science.gov (United States)

    Mitchell, Catherine; Cunningham, Alex

    2014-05-01

    The relationship between the backscattering and absorption coefficients, in particular the backscattering to absorption ratio, is mediated by the type of particles present in the water column. By considering the optical signals to be driven by phytoplankton and suspended minerals, with a relatively constant influence from CDOM, radiative transfer modelling is used to propose a method for retrieving the optical contribution of phytoplankton and suspended minerals to the total absorption coefficient with mean percentage errors of below 5% for both components. These contributions can be converted to constituent concentrations if the appropriate specific inherent optical properties are known or can be determined from the maximum and minimum backscattering to absorption ratios of the data. Remotely sensed absorption and backscattering coefficients from eight years of MODIS data for the Irish Sea reveal maximum backscattering to absorption coefficient ratios over the winter (with an average for the region of 0.27), which then decrease to a minimum over the summer months (with an average of 0.06) before increasing again through to winter, indicating a change in the particles present in the water column. Application of the two-component unmixing model to this data showed seasonal cycles of both phytoplankton and suspended mineral concentrations which vary in both amplitude and periodicity depending on their location. For example, in the Bristol Channel the amplitude of the suspended mineral concentration throughout one cycle is approximately 75% greater than a yearly cycle in the eastern Irish Sea. These seasonal cycles give an insight into the complex dynamics of particles in the water column, indicating the suspension of sediment throughout the winter months and the loss of sediments from the surface layer over the summer during stratification. The relationship between the timing of the phytoplankton spring bloom and changes in the availability of light in the water
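
    The unmixing idea described above can be illustrated with a two-endmember linear split of the particulate absorption, a = a_ph + a_min with b_b = r_ph*a_ph + r_min*a_min, where r_ph and r_min are the backscattering-to-absorption ratios of the two endmembers. The form of the split and the test values below are illustrative; only the endmember ratios (0.06 and 0.27) are taken from the seasonal extremes quoted in the abstract.

    ```python
    # Hedged sketch: two-endmember unmixing of total absorption into
    # phytoplankton and suspended-mineral contributions.
    def unmix(a_total, bb_total, r_ph=0.06, r_min=0.27):
        """Solve a = a_ph + a_min and b_b = r_ph*a_ph + r_min*a_min."""
        a_min = (bb_total - r_ph * a_total) / (r_min - r_ph)
        a_ph = a_total - a_min
        return a_ph, a_min

    a_ph, a_min = unmix(a_total=0.20, bb_total=0.030)   # hypothetical test values (m^-1)
    print("phytoplankton absorption:", a_ph, "mineral absorption:", a_min)
    ```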

  3. A two-component model of host–parasitoid interactions: determination of the size of inundative releases of parasitoids in biological pest control

    NARCIS (Netherlands)

    Grasman, J.; Herwaarden, van O.A.; Hemerik, L.; Lenteren, van J.C.

    2001-01-01

    A two-component differential equation model is formulated for a host–parasitoid interaction. Transient dynamics and population crashes of this system are analysed using differential inequalities. Two different cases can be distinguished: either the intrinsic growth rate of the host population is

  4. I. A model for the magnetic equation of state of liquid 3He. II. An induced interaction model for a two-component Fermi liquid

    International Nuclear Information System (INIS)

    Sanchez-Castro, C.R.

    1988-01-01

    This dissertation is divided into six chapters. Chapter 1 is an introduction to the rest of the dissertation. In it, the author presents the different models for the magnetic equation of state of liquid ³He, a derivation of the induced interaction equations for a one-component Fermi liquid, and a discussion of the basic Hamiltonian describing the heavy fermion compounds. In Chapters 2 and 3, he presents a complete discussion of the thermodynamics and Landau theory of a spin-polarized Fermi liquid. A phenomenological model is then developed to predict the polarization dependence of the longitudinal Landau parameters in liquid ³He. This model predicts a new magnetic equation of state and the possibility of liquid ³He being 'nearly metamagnetic' at high pressures. Chapter 4 contains a microscopic calculation of the magnetic field dependence of the Landau parameters in a strongly correlated Fermi system using the induced interaction model. The system studied consists of a single-component Fermi liquid with parabolic energy bands and a large on-site repulsive interaction. In Chapter 5, he presents a complete discussion of the Landau theory of a two-component Fermi liquid. He then generalizes the induced interaction equations to calculate Landau parameters and scattering amplitudes for an arbitrary, spin-polarized, two-component Fermi liquid. The resulting equations are used to study a model for the heavy fermion Fermi liquid state: a two-band electronic system with an antiferromagnetic interaction between the two bands. Chapter 6 contains the concluding remarks of the dissertation.

  5. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  6. Modeling the monthly mean soil-water balance with a statistical-dynamical ecohydrology model as coupled to a two-component canopy model

    Directory of Open Access Journals (Sweden)

    J. P. Kochendorfer

    2010-10-01

    The statistical-dynamical annual water balance model of Eagleson (1978) is a pioneering work in the analysis of climate, soil and vegetation interactions. This paper describes several enhancements and modifications to the model that improve its physical realism at the expense of its mathematical elegance and analytical tractability. In particular, the analytical solutions for the root zone fluxes are re-derived using separate potential rates of transpiration and bare-soil evaporation. Those potential rates, along with the rate of evaporation from canopy interception, are calculated using the two-component Shuttleworth-Wallace (1985) canopy model. In addition, the soil column is divided into two layers, with the upper layer representing the dynamic root zone. The resulting ability to account for changes in root-zone water storage allows for implementation at the monthly timescale. This new version of the Eagleson model is coined the Statistical-Dynamical Ecohydrology Model (SDEM). The ability of the SDEM to capture the seasonal dynamics of the local-scale soil-water balance is demonstrated for two grassland sites in the US Great Plains. Sensitivity of the results to variations in peak green leaf area index (LAI) suggests that the mean peak green LAI is determined by some minimum in root zone soil moisture during the growing season. That minimum appears to be close to the soil matric potential at which the dominant grass species begins to experience water stress and well above the wilting point, thereby suggesting an ecological optimality hypothesis in which the need to avoid water-stress-induced leaf abscission is balanced by the maximization of carbon assimilation (and associated transpiration). Finally, analysis of the sensitivity of model-determined peak green LAI to soil texture shows that the coupled model is able to reproduce the so-called "inverse texture effect", which consists of the observation that natural vegetation in dry climates tends

  7. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard mixture models with four equations (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, allowing in particular both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)

  8. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.

  9. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  10. Exploring the Validity Range of the Polarimetric Two-Scale Two-Component Model for Soil Moisture Retrieval by Using AGRISAR Data

    Science.gov (United States)

    Di Martino, Gerardo; Iodice, Antonio; Natale, Antonio; Riccio, Daniele; Ruello, Giuseppe

    2015-04-01

    The recently proposed polarimetric two-scale two-component model (PTSTCM) in principle allows us to obtain a reasonable estimate of the soil moisture even in moderately vegetated areas, where the volumetric scattering contribution is non-negligible, provided that the surface component is dominant and the double-bounce component is negligible. Here we test the PTSTCM validity range by applying it to polarimetric SAR data acquired over areas for which, at the same times as the SAR acquisitions, ground measurements of soil moisture were performed. In particular, we employ the AGRISAR'06 database, which includes data from several fields covering a period that spans all the phases of vegetation growth.

  11. Replenishment policy for Entropic Order Quantity (EnOQ) model with two component demand and partial back-logging under inflation

    Directory of Open Access Journals (Sweden)

    Bhanupriya Dash

    2017-09-01

    Full Text Available Background: The replenishment policy for an entropic order quantity model with two-component demand and partial backlogging under inflation is an important subject in stock management. Methods: In this paper an inventory model was developed for non-instantaneous deteriorating items with a stock-dependent consumption rate and partial backlogging, taking into account the effects of inflation and the time value of money on the replacement policy with zero lead time. A profit maximization model is formulated by considering the effects of partial backlogging under inflation with cash discounts. A numerical example is then presented to evaluate the relative performance of the entropic order quantity and EOQ models separately, to demonstrate the developed model and to illustrate the procedure. Lingo 13.0 software was used to derive the optimal order quantity and the total cost of inventory. Finally, a sensitivity analysis of the optimal solution with respect to different parameters of the system was carried out. Results and conclusions: The obtained inventory model is very useful in retail business. This model can be extended to total backordering.

  12. A GENERALIZED TWO-COMPONENT MODEL OF SOLAR WIND TURBULENCE AND AB INITIO DIFFUSION MEAN-FREE PATHS AND DRIFT LENGTHSCALES OF COSMIC RAYS

    Energy Technology Data Exchange (ETDEWEB)

    Wiengarten, T.; Fichtner, H.; Kleimann, J.; Scherer, K. [Institut für Theoretische Physik IV, Ruhr-Universität Bochum (Germany); Oughton, S. [Department of Mathematics, University of Waikato, Hamilton 3240 (New Zealand); Engelbrecht, N. E. [Center for Space Research, North-West University, Potchefstroom 2520 (South Africa)

    2016-12-10

    We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations by, for example, velocity shear and pickup ions is included. Numerical solutions to the new model are obtained using the Cronos framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab initio diffusion mean-free paths and drift lengthscales for the transport of cosmic rays in the turbulent solar wind.

  13. A GENERALIZED TWO-COMPONENT MODEL OF SOLAR WIND TURBULENCE AND AB INITIO DIFFUSION MEAN-FREE PATHS AND DRIFT LENGTHSCALES OF COSMIC RAYS

    International Nuclear Information System (INIS)

    Wiengarten, T.; Fichtner, H.; Kleimann, J.; Scherer, K.; Oughton, S.; Engelbrecht, N. E.

    2016-01-01

    We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations by, for example, velocity shear and pickup ions is included. Numerical solutions to the new model are obtained using the Cronos framework, and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab initio diffusion mean-free paths and drift lengthscales for the transport of cosmic rays in the turbulent solar wind.

  14. Two-component dressed-bag model for NN interaction: deuteron structure and phase shifts up to 1 GeV

    International Nuclear Information System (INIS)

    Kukulin, V.I.; Obukhovsky, I.T.; Pomerantsev, V.N.; Faessler, A.

    2002-01-01

    A two-component model is developed for the intermediate-range NN interaction based on a new mechanism with an intermediate symmetric six-quark bag 'dressed' by σ and other fields. To calculate the transition amplitude, the microscopic six-quark shell model in combination with the 3P0 quark-pion production mechanism is used. As a result, an effective energy-dependent NN interaction is constructed. The new quark-meson model for the NN interaction has been demonstrated to result in a new type of NN tensor force at intermediate ranges, which is crucially important for the treatment of tensor mixing at intermediate energies. The suggested model is able to describe NN phase shifts in a broad energy range from low energy up to 1 GeV, and the deuteron structure. The generalization of the model results in new spin-orbit 2N and 3N forces and new meson-exchange currents induced by intermediate dressed bag components, and also in the enhancement of a collective σ-field in nuclei. (author)

  15. Modeling L2,3-Edge X-ray Absorption Spectroscopy with Real-Time Exact Two-Component Relativistic Time-Dependent Density Functional Theory.

    Science.gov (United States)

    Kasper, Joseph M; Lestrange, Patrick J; Stetina, Torin F; Li, Xiaosong

    2018-04-10

    X-ray absorption spectroscopy is a powerful technique to probe local electronic and nuclear structure. There has been extensive theoretical work modeling K-edge spectra from first principles. However, modeling L-edge spectra directly with density functional theory poses a unique challenge requiring further study. Spin-orbit coupling must be included in the model, and a noncollinear density functional theory is required. Using the real-time exact two-component method, we are able to variationally include one-electron spin-orbit coupling terms when calculating the absorption spectrum. The abilities of different basis sets and density functionals to model spectra for both closed- and open-shell systems are investigated using SiCl4 and three transition metal complexes, TiCl4, CrO2Cl2, and [FeCl6]3-. Although we are working in the real-time framework, individual molecular orbital transitions can still be recovered by projecting the density onto the ground state molecular orbital space and separating contributions to the time evolving dipole moment.

  16. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  17. The weathervane model, a functional and structural organization of the two-component alkanesulfonate oxidoreductase SsuD from Xanthomonas citri

    International Nuclear Information System (INIS)

    Pegos, V.R.; Oliveira, P.S.L.; Balan, A.

    2012-01-01

    Full text: In Xanthomonas citri, the phytopathogen responsible for the citrus canker disease, we identified in the ssuABCDE operon the genes encoding the alkanesulfonate ABC transporter as well as the two enzymes responsible for the oxidoreduction of the respective substrates. The SsuD and SsuE proteins represent a two-component system that can be assigned to the group of FMNH2-dependent monooxygenases. However, despite the biochemical information about SsuD and SsuE orthologs from Escherichia coli, there is no structural information on how the two proteins work together. In this work, we used ultracentrifugation, SAXS data and molecular modeling to construct a structural/functional model, which consists of eight molecules organized in a weathervane shape. Through this model, the SsuD ligand-binding site for the NADPH2 and FMN substrates is clearly exposed, in a way that might allow protein-protein interactions with SsuE. Moreover, based on molecular dynamics simulations of SsuD in the apo state, docked with NADPH2, FMN or both substrates, we characterized the residues of the pocket, the mechanism of substrate interaction and the transfer of electrons from NADPH2 to FMN. This is the first report that links functional and biochemical data with structural analyses. (author)

  18. Two-component mantle melting-mixing model for the generation of mid-ocean ridge basalts: Implications for the volatile content of the Pacific upper mantle

    Science.gov (United States)

    Shimizu, Kei; Saal, Alberto E.; Myers, Corinne E.; Nagle, Ashley N.; Hauri, Erik H.; Forsyth, Donald W.; Kamenetsky, Vadim S.; Niu, Yaoling

    2016-03-01

    We report major, trace, and volatile element (CO2, H2O, F, Cl, S) contents and Sr, Nd, and Pb isotopes of mid-ocean ridge basalt (MORB) glasses from the Northern East Pacific Rise (NEPR) off-axis seamounts, the Quebrada-Discovery-GoFar (QDG) transform fault system, and Macquarie Island. The incompatible trace element (ITE) contents of the samples range from highly depleted (DMORB, Th/La ⩽ 0.035) to enriched (EMORB, Th/La ⩾ 0.07), and the isotopic composition spans the entire range observed in EPR MORB. Our data suggest that at the time of melt generation, the source that generated the EMORB was essentially peridotitic, and that the composition of NMORB might not represent melting of a single upper mantle source (DMM), but rather mixing of melts from a two-component mantle (depleted and enriched DMM, or D-DMM and E-DMM, respectively). After filtering the volatile element data for secondary processes (degassing, sulfide saturation, assimilation of a seawater-derived component, and fractional crystallization), we use the volatile to ITE ratios of our samples and a two-component mantle melting-mixing model to estimate the volatile content of the D-DMM (CO2 = 22 ppm, H2O = 59 ppm, F = 8 ppm, Cl = 0.4 ppm, and S = 100 ppm) and the E-DMM (CO2 = 990 ppm, H2O = 660 ppm, F = 31 ppm, Cl = 22 ppm, and S = 165 ppm). Our two-component mantle melting-mixing model reproduces the kernel density estimates (KDE) of Th/La and 143Nd/144Nd ratios for our samples and for EPR axial MORB compiled from the literature. This model suggests that: (1) 78% of the Pacific upper mantle is highly depleted (D-DMM) while 22% is enriched (E-DMM) in volatile and refractory ITE, (2) the melts produced during variable degrees of melting of the E-DMM control most of the MORB geochemical variation, and (3) a fraction (∼65% to 80%) of the low degree EMORB melts (produced by ∼1.3% melting) may escape melt aggregation by freezing at the base of the oceanic lithosphere, significantly enriching it in

  19. Equation-free analysis of two-component system signalling model reveals the emergence of co-existing phenotypes in the absence of multistationarity.

    Directory of Open Access Journals (Sweden)

    Rebecca B Hoyle

    Full Text Available Phenotypic differences of genetically identical cells under the same environmental conditions have been attributed to the inherent stochasticity of biochemical processes. Various mechanisms have been suggested, including the existence of alternative steady states in regulatory networks that are reached by means of stochastic fluctuations, long transient excursions from a stable state to an unstable excited state, and the switching on and off of a reaction network according to the availability of a constituent chemical species. Here we analyse a detailed stochastic kinetic model of two-component system signalling in bacteria, and show that alternative phenotypes emerge in the absence of these features. We perform a bifurcation analysis of deterministic reaction rate equations derived from the model, and find that they cannot reproduce the whole range of qualitative responses to external signals demonstrated by direct stochastic simulations. In particular, the mixed mode, where stochastic switching and a graded response are seen simultaneously, is absent. However, probabilistic and equation-free analyses of the stochastic model that calculate stationary states for the mean of an ensemble of stochastic trajectories reveal that slow transcription of either response regulator or histidine kinase leads to the coexistence of an approximate basal solution and a graded response that combine to produce the mixed mode, thus establishing its essential stochastic nature. The same techniques also show that stochasticity results in the observation of an all-or-none bistable response over a much wider range of external signals than would be expected on deterministic grounds. Thus we demonstrate the application of numerical equation-free methods to a detailed biochemical reaction network model, and show that it can provide new insight into the role of stochasticity in the emergence of phenotypic diversity.
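
    As a minimal illustration of the kind of direct stochastic simulation referred to above, the sketch below runs a Gillespie algorithm on a toy two-component phosphotransfer motif (kinase autophosphorylation, phosphotransfer to a response regulator, dephosphorylation). The species, reactions, rate constants, and copy numbers are illustrative assumptions, not the detailed model analysed in the paper.

```python
import numpy as np

def gillespie_two_component(hk=50, rr=100, k_auto=0.5, k_transfer=0.005,
                            k_dephos=0.1, t_end=200.0, seed=0):
    """Gillespie simulation of a toy two-component signalling motif.

    Species: HK, HKp (phosphorylated kinase), RR, RRp (phosphorylated regulator).
    Reactions: HK -> HKp;  HKp + RR -> HK + RRp;  RRp -> RR.
    All parameter values are placeholders, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    state = np.array([hk, 0, rr, 0], dtype=int)        # [HK, HKp, RR, RRp]
    # stoichiometry of each reaction on [HK, HKp, RR, RRp]
    stoich = np.array([[-1, +1,  0,  0],
                       [+1, -1, -1, +1],
                       [ 0,  0, +1, -1]])
    t, times, rrp = 0.0, [0.0], [0]
    while t < t_end:
        # propensities of the three reactions in the current state
        a = np.array([k_auto * state[0],
                      k_transfer * state[1] * state[2],
                      k_dephos * state[3]])
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)                 # waiting time to next event
        state = state + stoich[rng.choice(3, p=a / a0)]
        times.append(t)
        rrp.append(state[3])
    return np.array(times), np.array(rrp)

times, rrp = gillespie_two_component()
print("final phosphorylated response regulator count:", rrp[-1])
```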

  20. On the characterization of dynamic supramolecular systems: a general mathematical association model for linear supramolecular copolymers and application on a complex two-component hydrogen-bonding system.

    Science.gov (United States)

    Odille, Fabrice G J; Jónsson, Stefán; Stjernqvist, Susann; Rydén, Tobias; Wärnmark, Kenneth

    2007-01-01

    A general mathematical model for the characterization of the dynamic (kinetically labile) association of supramolecular assemblies in solution is presented. It is an extension of the equal K (EK) model by the stringent use of linear algebra to allow for the simultaneous presence of an unlimited number of different units in the resulting assemblies. It allows for the analysis of highly complex dynamic equilibrium systems in solution, including both supramolecular homo- and copolymers, without recourse to extensive approximations, in a field in which other analytical methods are difficult. The derived mathematical methodology makes it possible to analyze dynamic systems such as supramolecular copolymers regarding, for instance, the degree of polymerization, the distribution of a given monomer in different copolymers, as well as its position in an aggregate. It is to date the only general means to characterize weak supramolecular systems. The model was fitted to NMR dilution titration data by using the program Matlab, and a detailed algorithm for the optimization of the different parameters has been developed. The methodology is applied to a case study, a hydrogen-bonded supramolecular system, salen 4 + porphyrin 5. The system is formally a two-component system but in reality a three-component system. This results in a complex dynamic system in which all monomers are associated with each other by hydrogen bonding with different association constants, resulting in homo- and copolymers 4n5m as well as cyclic structures 6 and 7, in addition to free 4 and 5. The system was analyzed by extensive NMR dilution titrations at variable temperatures. All chemical shifts observed at different temperatures were used in the fitting to obtain the ΔH° and ΔS° values producing the best global fit. From the derived general mathematical expressions, system 4+5 could be characterized with respect to the above-mentioned parameters.

  1. Two-way and three-way approaches to ultra high performance liquid chromatography-photodiode array dataset for the quantitative resolution of a two-component mixture containing ciprofloxacin and ornidazole.

    Science.gov (United States)

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2016-09-01

    Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations which place restrictions on the possible material response during the motion and phase change. In modeling the structured multiphase mixtures, the modeling of intrinsic motion of grains or particles is accomplished by adjoining to the basic set of field equations the additional balance equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of the additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures and are usually derived through very special assumptions of the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations to model the structural characteristics of a mixture. Utilizing a volume averaging procedure it is possible not only to derive the basic set of field equation discussed above, but also a very general set of additional balance equations for modeling of structural properties of the mixture

  3. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  4. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and the antioxidant response were determined. Metal mixture treatment induces an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with the generation of reactive oxygen species. Cotreatment with N-acetyl-cysteine affects the transforming capacity: a diminution was found in the initiation phase, while a total block of the transforming capacity was observed in the promotion phase. Our results suggest that the oxidative stress generated by the metal mixture plays an important role only in the promotion phase, where it promotes transforming capacity.

  5. Probabilistic mixture-based image modelling

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 482-500 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:CESNET(CZ) 387/2010; GA MŠk(CZ) 2C06019; GA ČR(CZ) GA103/11/0335 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF texture modelling * discrete distribution mixtures * Bernoulli mixture * Gaussian mixture * multi-spectral texture modelling Subject RIV: BD - Theory of Information Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/haindl-0360244.pdf

  6. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  7. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit the model to our real data. Second, we present its application in risk analysis, where we use it to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), since it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
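
    A minimal sketch of the risk-measure step: once a two-component normal mixture has been fitted to the returns, VaR is the relevant quantile of the mixture and CVaR is the expected return beyond that quantile. The weights, means, and standard deviations below are hypothetical placeholders, not estimates for the FBMKLCI.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# hypothetical fitted two-component normal mixture for monthly returns
w = np.array([0.8, 0.2])           # mixing weights
mu = np.array([0.01, -0.03])       # component means
sd = np.array([0.04, 0.10])        # component standard deviations

def mixture_cdf(x):
    """CDF of the two-component normal mixture at a scalar x."""
    return np.sum(w * norm.cdf(x, loc=mu, scale=sd))

def var_cvar(alpha=0.05, n_mc=1_000_000, seed=0):
    """VaR = alpha-quantile of the return mixture; CVaR = mean return below it."""
    # quantile by root finding on the mixture CDF
    q = brentq(lambda x: mixture_cdf(x) - alpha, -5.0, 5.0)
    # CVaR by Monte Carlo sampling from the fitted mixture
    rng = np.random.default_rng(seed)
    comp = rng.choice(2, size=n_mc, p=w)
    sample = rng.normal(mu[comp], sd[comp])
    cvar = sample[sample <= q].mean()
    return q, cvar

var5, cvar5 = var_cvar(alpha=0.05)
print(f"5% VaR (return quantile): {var5:.4f}, 5% CVaR: {cvar5:.4f}")
```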

  8. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  9. Itinerant Ferromagnetism in a Polarized Two-Component Fermi Gas

    DEFF Research Database (Denmark)

    Massignan, Pietro; Yu, Zhenhua; Bruun, Georg

    2013-01-01

    We analyze when a repulsively interacting two-component Fermi gas becomes thermodynamically unstable against phase separation. We focus on the strongly polarized limit, where the free energy of the homogeneous mixture can be calculated accurately in terms of well-defined quasiparticles.

  10. The Umov effect in application to an optically thin two-component cloud of cosmic dust

    Science.gov (United States)

    Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy

    2018-04-01

    The Umov effect is an inverse correlation between the linear polarization of sunlight scattered by an object and its geometric albedo. The Umov effect has been observed in particulate surfaces, such as planetary regoliths, and recently it was also found in single-scattering small dust particles. Using numerical modeling, we study the Umov effect in a two-component mixture of small irregularly shaped particles. Such a complex chemical composition is suggested in cometary comae and other types of optically thin clouds of cosmic dust. We find that two-component mixtures of small particles also reveal the Umov effect regardless of the chemical composition of their end-member components. The interrelation between log(Pmax) and log(A) in a two-component mixture of small irregularly shaped particles appears either in a straight linear form or in a slightly curved form. This curvature tends to decrease as the index n in a power-law size distribution r-n grows; at n > 2.5, the log(Pmax)-log(A) diagrams are almost straight linear in appearance. The curvature also noticeably decreases with the packing density of the constituent material in the irregularly shaped particles forming the mixture. That such a relation exists suggests that the Umov effect may also be observed in more complex mixtures.

  11. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  12. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  13. Evaluation of solution stability for two-component polydisperse systems by small-angle scattering

    Science.gov (United States)

    Kryukova, A. E.; Konarev, P. V.; Volkov, V. V.

    2017-12-01

    The article is devoted to the modelling of small-angle scattering data using the program MIXTURE, designed for the study of polydisperse multicomponent mixtures. In this work we present the results of solution stability studies for theoretical small-angle scattering data sets from two-component models. It was demonstrated that the addition of noise to the data influences the stability range of the restored structural parameters. Recommendations are given for optimal minimization schemes that permit the restoration of volume size distributions for polydisperse systems.

  14. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
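
    For intuition only, the sketch below estimates an importance ratio by fitting a separate GMM to the numerator and denominator samples and taking the ratio of the two densities. This is the naive two-density approach, not the GM-KLIEP procedure itself, and the synthetic data and component counts are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic 2-D samples from two different distributions
x_nu = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(2000, 2))   # "test" density p_nu
x_de = rng.normal(loc=[0.5, 0.5], scale=1.5, size=(2000, 2))   # "training" density p_de

# fit a separate GMM (full covariances) to each sample
gmm_nu = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(x_nu)
gmm_de = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(x_de)

def importance(x):
    """Naive importance estimate w(x) = p_nu(x) / p_de(x) from the two fitted GMMs."""
    log_w = gmm_nu.score_samples(x) - gmm_de.score_samples(x)   # difference of log-densities
    return np.exp(log_w)

print(importance(x_de[:5]))   # importance weights at a few denominator points
```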

  15. Text document classification based on mixture models

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Roč. 40, č. 3 (2004), s. 293-304 ISSN 0023-5954 R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : text classification * text categorization * multinomial mixture model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.224, year: 2004

  16. Primordial two-component maximally symmetric inflation

    Science.gov (United States)

    Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.

    1985-12-01

    We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ≅φc. Inflation ends at the global minimum and leads to a reheating temperature TR≅(10^15-10^16) GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on proton lifetime. The mass of the gravitinos is m3/2≅10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φc

  17. Conductivity of two-component systems

    Energy Technology Data Exchange (ETDEWEB)

    Kuijper, A. de; Hofman, J.P.; Waal, J.A. de [Shell Research BV, Rijswijk (Netherlands). Koninklijke/Shell Exploratie en Productie Lab.; Sandor, R.K.J. [Shell International Petroleum Maatschappij, The Hague (Netherlands)

    1996-01-01

    The authors present measurements and computer simulation results on the electrical conductivity of nonconducting grains embedded in a conductive brine host. The shapes of the grains ranged from prolate-ellipsoidal (with an axis ratio of 5:1) through spherical to oblate-ellipsoidal (with an axis ratio of 1:5). The conductivity was studied as a function of porosity and packing, and Archie's cementation exponent was found to depend on porosity. They used spatially regular and random configurations with aligned and nonaligned packings. The experimental results agree well with the computer simulation data. This data set will enable extensive tests of models for calculating the anisotropic conductivity of two-component systems.

  18. A turbulence model in mixtures. First part: Statistical description of mixture

    International Nuclear Information System (INIS)

    Besnard, D.

    1987-03-01

    The classical theory of mixtures gives a model for molecular mixtures. This kind of model is based on a small-gradient approximation for concentration, temperature, and pressure. We present here a mixture model allowing for large gradients in the flow. We also show that, with a local balance assumption between material diffusion and the evolution of flow gradients, we obtain a model similar to those mentioned above [fr]

  19. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
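
    A minimal sketch of the modelling idea, using synthetic RR-interval data as a stand-in for a real HRV recording: a three-component one-dimensional Gaussian mixture is fitted with scikit-learn and its parameters inspected. The simulated intervals and their generating parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# synthetic RR intervals (seconds) standing in for a real HRV recording
rng = np.random.default_rng(42)
rr = np.concatenate([rng.normal(0.80, 0.02, 3000),    # baseline rhythm
                     rng.normal(0.90, 0.04, 1000),    # slower epochs
                     rng.normal(0.70, 0.03, 1000)])   # faster epochs
X = rr.reshape(-1, 1)

# fit a three-component Gaussian mixture, the number suggested by the paper's finding
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m*1000:.0f} ms  sd={np.sqrt(v)*1000:.1f} ms")
```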

  20. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the newly proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  1. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  2. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    International Nuclear Information System (INIS)

    Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.

    2007-01-01

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate radiation regime in a natural canopy with spatially varying optical properties, that is, canopy, which exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities, direct input to the remote sensing/climate applications: mean radiation fluxes over whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for the three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.) Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model

  3. Two component plasma vortex approach to fusion

    International Nuclear Information System (INIS)

    Ikuta, Kazunari.

    1978-09-01

    Two component operation of the field reversed theta pinch plasma by injection of the energetic ion beam with energy of the order of 1 MeV is considered. A possible trapping scheme of the ion beam in the plasma is discussed in detail. (author)

  4. mixtools: An R Package for Analyzing Mixture Models

    Directory of Open Access Journals (Sweden)

    Tatiana Benaglia

    2009-10-01

    Full Text Available The mixtools package for R provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.

  5. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  6. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In particular, it is consistent as the sample size increases to infinity, which makes maximum likelihood estimation asymptotically unbiased. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results indicate a negative relationship between rubber price and exchange rate for all selected countries.
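
    A minimal sketch of maximum likelihood fitting of a two-component univariate normal mixture via EM, run on synthetic data; it does not reproduce the rubber price / exchange rate analysis, and the starting values and data are illustrative assumptions.

```python
import numpy as np

def fit_two_component_normal_mixture(x, n_iter=500):
    """Maximum likelihood fit of a two-component univariate normal mixture via EM."""
    x = np.asarray(x, dtype=float)
    # heuristic initialisation: split the sample at the median
    med = np.median(x)
    mu = np.array([x[x <= med].mean(), x[x > med].mean()])
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibilities of the two components for each observation
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted moments
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)

    # log-likelihood at the final parameter values
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    loglik = np.log(dens.sum(axis=1)).sum()
    return pi, mu, sd, loglik

# illustrative synthetic data drawn from a known two-component mixture
rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 0.5, 300)])
pi, mu, sd, ll = fit_two_component_normal_mixture(data)
print("weights:", pi, "means:", mu, "sds:", sd, "log-likelihood:", round(ll, 2))
```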

  7. Modeling the effects of binary mixtures on survival in time.

    NARCIS (Netherlands)

    Baas, J.; van Houte, B.P.P.; van Gestel, C.A.M.; Kooijman, S.A.L.M.

    2007-01-01

    In general, effects of mixtures are difficult to describe, and most of the models in use are descriptive in nature and lack a strong mechanistic basis. The aim of this experiment was to develop a process-based model for the interpretation of mixture toxicity measurements, with effects of binary

  8. Modeling of Multicomponent Mixture Separation Processes Using Hollow fiber Membrane

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sin-Ah; Kim, Jin-Kuk; Lee, Young Moo; Yeo, Yeong-Koo [Hanyang University, Seoul (Korea, Republic of)

    2015-02-15

    So far, most research on the modeling of membrane separation processes has focused on binary feed mixtures. In actual separation operations, however, a binary feed is hard to find, and most separation processes involve a multicomponent feed mixture. In this work, models for membrane separation processes treating multicomponent feed mixtures are developed. Various model types are investigated, and the validity of the proposed models is analysed based on experimental data obtained using hollow-fiber membranes. The proposed separation models show quick convergence and exhibit good tracking performance.

  9. Modelling interactions in grass-clover mixtures

    NARCIS (Netherlands)

    Nassiri Mahallati, M.

    1998-01-01

    The study described in this thesis focuses on a quantitative understanding of the complex interactions in binary mixtures of perennial ryegrass (Lolium perenne L.) and white clover (Trifolium repens L.) under cutting. The first part of the study describes the dynamics of growth, production

  10. Thermodynamic modeling of CO2 mixtures

    DEFF Research Database (Denmark)

    Bjørner, Martin Gamel

    Knowledge of the thermodynamic properties and phase equilibria of mixtures containing carbon dioxide (CO2) is important in several industrial processes such as enhanced oil recovery, carbon capture and storage, and supercritical extractions, where CO2 is used as a solvent. Despite this importance...

  11. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    Science.gov (United States)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  12. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
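
    A rough sketch of the nonparametric idea using scikit-learn's truncated stick-breaking approximation to a Dirichlet process mixture, applied to simulated one-dimensional intensity values rather than real MR data. The intensity values, truncation level, and concentration prior are illustrative assumptions, and this is not the inference scheme used in the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# simulated voxel intensities roughly mimicking three tissue classes
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(30, 5, 4000),    # e.g. CSF-like
                              rng.normal(80, 8, 6000),    # e.g. grey-matter-like
                              rng.normal(120, 6, 5000)])  # e.g. white-matter-like
X = intensities.reshape(-1, 1)

# Dirichlet-process-like mixture: the truncation level (10) is only an upper bound,
# and the concentration prior lets unneeded components shrink to near-zero weight
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,
    max_iter=500,
    random_state=0,
).fit(X)

labels = dpmm.predict(X)                     # tissue label per simulated voxel
used = dpmm.weights_ > 0.01
print("effective number of components:", used.sum())
print("component means:", dpmm.means_.ravel()[used].round(1))
```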

  13. Two Component Signal Transduction in Desulfovibrio Species

    Energy Technology Data Exchange (ETDEWEB)

    Luning, Eric; Rajeev, Lara; Ray, Jayashree; Mukhopadhyay, Aindrila

    2010-05-17

    The environmentally relevant Desulfovibrio species are sulfate-reducing bacteria that are of interest in the bioremediation of heavy-metal-contaminated water. Among these, the genome of D. vulgaris Hildenborough encodes a large number of two-component systems consisting of 72 putative response regulators (RR) and 64 putative histidine kinases (HK), the majority of which are uncharacterized. We classified the D. vulgaris Hildenborough RRs based on their output domains and compared the distribution of RRs in other sequenced Desulfovibrio species. We have successfully purified most RRs and several HKs as His-tagged proteins. We performed phospho-transfer experiments to verify relationships between cognate pairs of HK and RR, and we have also mapped a few non-cognate HK-RR pairs. Presented here are our discoveries from the Desulfovibrio RR categorization and results from the in vitro studies using purified His-tagged D. vulgaris HKs and RRs.

  14. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar alcohols, but is unable to predict intensity for

  15. Approximation of the breast height diameter distribution of two-cohort stands by mixture models III Kernel density estimators vs mixture models

    Science.gov (United States)

    Rafal Podlaski; Francis A. Roesch

    2014-01-01

    Two-component mixtures of either the Weibull distribution or the gamma distribution and the kernel density estimator were used for describing the diameter at breast height (dbh) empirical distributions of two-cohort stands. The data consisted of study plots from the Świętokrzyski National Park (central Poland) and areas close to and including the North Carolina section...

  16. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  17. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

    Potential applications of ILs require the knowledge of the physicochemical properties of ionic liquid (IL) mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model only contains one new characteristic parameter, which can be determined using one experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified by a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied in a wider range of applications.

  18. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liqu...

  19. Two-component gravitational instability in spiral galaxies

    Science.gov (United States)

    Marchuk, A. A.; Sotnikova, N. Y.

    2018-04-01

    We applied a criterion of gravitational instability, valid for two-component and infinitesimally thin discs, to observational data along the major axis for seven spiral galaxies of early types. Unlike most papers, the dispersion equation corresponding to the criterion was solved directly, without using any approximation. The velocity dispersion of stars in the radial direction σR was constrained to lie within a range of possible values instead of being fixed at a single value. For all galaxies, the outer regions of the disc were analysed up to R ≤ 130 arcsec. The maximal and sub-maximal disc models were used to translate surface brightness into surface density. The largest destabilizing disturbance that stars can exert on a gaseous disc was estimated. It was shown that the two-component criterion differs little from the one-fluid criterion for galaxies with a large surface gas density, but it makes it possible to explain large-scale star formation in those regions where the gaseous disc is stable. In the galaxy NGC 1167, star formation is entirely driven by the self-gravity of the stars. A comparison is made with the conventional approximations, which also include the thickness effect, and with models for different sound speeds cg. It is shown that the values of the effective Toomre parameter correspond to the instability criterion of a two-component disc, Qeff < 1.5-2.5. This result is consistent with previous theoretical and observational studies.
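
    As an illustration of what evaluating a two-component instability criterion amounts to, the sketch below uses one standard fluid-fluid form of the condition for razor-thin discs (often attributed to Jog & Solomon 1984): the disc is unstable where the maximum over dimensionless wavenumber of the combined stellar and gaseous response exceeds unity. This is not necessarily the exact dispersion relation solved in the paper, and all numerical inputs are placeholders rather than measurements for the galaxies discussed above.

```python
import numpy as np

def two_fluid_unstable(sigma_R, c_g, Sigma_star, Sigma_gas, kappa,
                       G=4.301e-3):  # G in pc Msun^-1 (km/s)^2
    """Two-fluid (stars + gas) gravitational instability check for a thin disc.

    Fluid-fluid form of the criterion: unstable if
    max_q [ 2/Q_s * q/(1+q^2) + 2/Q_g * q*s/(1+(q*s)^2) ] > 1,
    with q = k*sigma_R/kappa and s = c_g/sigma_R.  All inputs are placeholders.
    """
    Q_s = kappa * sigma_R / (np.pi * G * Sigma_star)   # fluid Toomre parameter, stars
    Q_g = kappa * c_g / (np.pi * G * Sigma_gas)        # Toomre parameter, gas
    s = c_g / sigma_R
    q = np.linspace(1e-3, 20, 20000)                   # dimensionless wavenumber grid
    F = 2.0 / Q_s * q / (1 + q**2) + 2.0 / Q_g * q * s / (1 + (q * s)**2)
    return F.max() > 1.0, 1.0 / F.max()                # (unstable?, effective Q)

# illustrative numbers: dispersions in km/s, surface densities in Msun/pc^2,
# epicyclic frequency in km/s/pc
unstable, Q_eff = two_fluid_unstable(sigma_R=30.0, c_g=8.0,
                                     Sigma_star=40.0, Sigma_gas=10.0,
                                     kappa=0.05)
print("unstable:", unstable, " Q_eff ~", round(Q_eff, 2))
```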

  20. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    association between prices that can be attributed, among others, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled in both a global and a local....... The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that submarkets are recovered by the housing prices data compared to submarkets imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo...

  1. Use of finite mixture distribution models in the analysis of wind energy in the Canarian Archipelago

    International Nuclear Information System (INIS)

    Carta, Jose Antonio; Ramirez, Penelope

    2007-01-01

    The statistical characteristics of hourly mean wind speed data recorded at 16 weather stations located in the Canarian Archipelago are analyzed in this paper. As a result of this analysis we see that the typical two-parameter Weibull wind speed distribution (W-pdf) does not accurately represent all wind regimes observed in that region. However, a Singly Truncated from below Normal Weibull mixture distribution (TNW-pdf) and a two-component Weibull mixture distribution (WW-pdf) developed here do provide very good fits for both unimodal and bimodal wind speed frequency distributions observed in that region and offer smaller relative errors in determining the annual mean wind power density. The parameters of the distributions are estimated using the least squares method, which is solved in this paper using the Levenberg-Marquardt algorithm. The suitability of the distributions is judged from the probability plot correlation coefficient R2, adjusted for degrees of freedom. Based on the results obtained, we conclude that the two mixture distributions proposed here provide very flexible models for wind speed studies and can be applied in a widespread manner to represent the wind regimes in the Canarian archipelago and in other regions with similar characteristics. The TNW-pdf takes into account the frequency of null winds, whereas the WW-pdf and W-pdf do not. It can, therefore, better represent wind regimes with high percentages of null wind speeds. However, calculation of the TNW-pdf is markedly slower
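
    A rough sketch of the WW-pdf fitting idea: a two-component Weibull mixture density is fitted to a binned wind-speed sample by least squares using SciPy's Levenberg-Marquardt routine, in the spirit of the approach described above. The synthetic wind speeds, bin count, and starting values are illustrative assumptions, and the singly truncated normal component of the TNW-pdf is not included.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_pdf(v, k, c):
    """Two-parameter Weibull density with shape k and scale c."""
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def ww_pdf(v, w, k1, c1, k2, c2):
    """Two-component Weibull mixture density (the WW-pdf)."""
    return w * weibull_pdf(v, k1, c1) + (1 - w) * weibull_pdf(v, k2, c2)

# synthetic bimodal wind-speed sample (m/s) standing in for station records
rng = np.random.default_rng(0)
v = np.concatenate([3.0 * rng.weibull(2.2, 6000),    # calm regime
                    9.0 * rng.weibull(3.0, 4000)])   # trade-wind regime

# empirical density from a histogram, then least-squares fit of the mixture;
# method="lm" selects the Levenberg-Marquardt algorithm
freq, edges = np.histogram(v, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
p0 = [0.5, 2.0, 3.0, 3.0, 8.0]                       # initial guess: w, k1, c1, k2, c2
params, _ = curve_fit(ww_pdf, centres, freq, p0=p0, method="lm")
w, k1, c1, k2, c2 = params
print(f"w={w:.2f}, k1={k1:.2f}, c1={c1:.2f}, k2={k2:.2f}, c2={c2:.2f}")
```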

  2. A two-component copula with links to insurance

    Directory of Open Access Journals (Sweden)

    Ismail S.

    2017-12-01

    Full Text Available This paper presents a new copula to model dependencies between insurance entities, by considering how insurance entities are affected by both macro and micro factors. The model used to build the copula assumes that the insurance losses of two companies or lines of business are related through a random common loss factor which is then multiplied by an individual random company factor to get the total loss amounts. The new two-component copula is not Archimedean and it extends the toolkit of copulas for the insurance industry.
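
    A small Monte Carlo sketch of the dependence mechanism described above: a common (macro) loss factor multiplied by an individual (micro) factor for each company. The distributional choices (lognormal common factor, gamma individual factors) are illustrative assumptions; the paper's actual construction and the resulting copula are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 100_000

# common "macro" loss factor shared by both lines of business (assumed lognormal)
common = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# independent "micro" company factors (assumed gamma with unit mean)
micro1 = rng.gamma(shape=4.0, scale=0.25, size=n)
micro2 = rng.gamma(shape=4.0, scale=0.25, size=n)

# total losses: the common factor multiplied by each individual factor
loss1 = common * micro1
loss2 = common * micro2

# dependence induced between the two loss amounts (rank correlation)
rho, _ = spearmanr(loss1, loss2)
print(f"Spearman rank correlation between the two lines: {rho:.3f}")
```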

  3. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model, both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
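
    For readers unfamiliar with the Plackett-Luce model used above, the sketch below draws complete rankings from a single component by sequential sampling without replacement, with probabilities proportional to item "worth" parameters. The worth values are assumptions chosen for illustration; a finite mixture would simply draw a component label first and then rank using that component's worths.

        # Minimal sketch: sampling rankings from one Plackett-Luce component.
        import numpy as np

        def sample_plackett_luce(worth, rng):
            """Draw one complete ranking: items are picked sequentially with
            probability proportional to their worth among the remaining items."""
            remaining = list(range(len(worth)))
            ranking = []
            while remaining:
                w = np.array([worth[i] for i in remaining], dtype=float)
                pick = rng.choice(len(remaining), p=w / w.sum())
                ranking.append(remaining.pop(pick))
            return ranking

        rng = np.random.default_rng(42)
        worth = [4.0, 2.0, 1.0, 0.5]                 # item supports (assumed)
        rankings = [sample_plackett_luce(worth, rng) for _ in range(5)]
        print(rankings)   # items with larger worth tend to appear earlier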

  4. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  5. Investigating Individual Differences in Toddler Search with Mixture Models

    Science.gov (United States)

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…

  6. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
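
    EMMIX-GENE ranks genes by the likelihood-ratio statistic for one versus two mixture components fitted to each gene. The sketch below reproduces only that ranking idea using Gaussian mixtures from scikit-learn as a simplified stand-in for the t mixtures used by the software; the simulated expression matrix and the number of "informative" genes are assumptions for illustration.

        # Simplified stand-in for the gene-ranking step described above.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def lr_statistic(expr, seed=0):
            """-2 log LR for 1 vs 2 components fitted to one gene's expression values."""
            x = expr.reshape(-1, 1)
            ll1 = GaussianMixture(1, random_state=seed).fit(x).score(x) * len(x)
            ll2 = GaussianMixture(2, n_init=5, random_state=seed).fit(x).score(x) * len(x)
            return 2.0 * (ll2 - ll1)

        rng = np.random.default_rng(0)
        n_tissues, n_genes = 60, 200
        data = rng.normal(size=(n_tissues, n_genes))      # "uninformative" genes
        data[:30, :5] += 3.0                              # 5 genes separating two tissue groups

        scores = np.array([lr_statistic(data[:, g]) for g in range(n_genes)])
        top = np.argsort(scores)[::-1][:10]
        print("genes ranked most bimodal:", top)          # the first 5 genes should dominate

    A threshold on this statistic, combined with a minimum cluster size, plays the role of the gene-selection filter described in the abstract.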

  7. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
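
    A rough analogue of the idea above can be sketched by running EM on combined spatial and radial-velocity features, so that velocity information helps separate two spatially overlapping groups. The synthetic cluster positions and velocities below are assumptions; this is not the authors' exact model, only an illustration with scikit-learn's EM-based GaussianMixture.

        # Minimal analogue: EM clustering on (x, y, v_r) features.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        n = 500
        # Two clusters that overlap on the sky but differ in radial velocity.
        pos_a = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
        pos_b = rng.normal([0.5, 0.0], 1.0, size=(n, 2))
        vr_a = rng.normal(-15.0, 3.0, size=(n, 1))   # km/s
        vr_b = rng.normal(+15.0, 3.0, size=(n, 1))

        features = np.vstack([np.hstack([pos_a, vr_a]), np.hstack([pos_b, vr_b])])
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(features)
        labels = gmm.predict(features)
        print("estimated cluster fractions:", np.round(gmm.weights_, 2))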

  8. Models for the computation of opacity of mixtures

    International Nuclear Information System (INIS)

    Klapisch, Marcel; Busquet, Michel

    2013-01-01

    We compare four models for the partial densities of the components of mixtures. These models yield different opacities as shown on polystyrene, acrylic and polyimide in local thermodynamical equilibrium (LTE). Two of these models, the ‘whole volume partial pressure’ model (M1) and its modification (M2) are not thermodynamically consistent (TC). The other two models are TC and minimize free energy. M3, the ‘partial volume equal pressure’ model, uses equality of chemical potential. M4 uses commonality of free electron density. The latter two give essentially identical results in LTE, but M4’s convergence is slower. M4 is easily generalized to non-LTE conditions. Non-LTE effects are shown by the variation of the Planck mean opacity of the mixtures with temperature and density. (paper)

  9. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  10. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek

    2012-04-01

    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems, where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we equipped the package also with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model-fitting in all modeling variants. The package can also be applied to select the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  11. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  12. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

    Two association models, the CPA and sPC-SAFT equations of state, are applied to binary mixtures containing alkanolamines and hydrocarbons or water. CPA is applied to mixtures of MEA and DEA, while sPC-SAFT is applied to MEA–n-heptane liquid–liquid equilibria and MEA–water vapor–liquid equilibria. T...

  13. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...... normal and efficient estimator....

  14. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth mover's distance and the normalized L2 distance for this application. Although...
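
    The Kullback-Leibler divergence between two Gaussian mixture models has no closed form, so it is commonly estimated by Monte Carlo sampling from one of the models. The sketch below shows that estimate with scikit-learn; the random 13-dimensional vectors stand in for MFCC frames and are an assumption made purely to keep the example self-contained.

        # Monte Carlo estimate of KL(p || q) between two fitted Gaussian mixtures.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mc_kl(gmm_p, gmm_q, n_samples=20000):
            """KL(p || q) ~ mean of log p(x) - log q(x) over samples x ~ p."""
            x, _ = gmm_p.sample(n_samples)
            return np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x))

        rng = np.random.default_rng(0)
        frames_a = rng.normal(0.0, 1.0, size=(2000, 13))
        frames_b = rng.normal(0.3, 1.2, size=(2000, 13))   # a slightly different "song"

        gmm_a = GaussianMixture(4, covariance_type="diag", random_state=0).fit(frames_a)
        gmm_b = GaussianMixture(4, covariance_type="diag", random_state=0).fit(frames_b)

        # Symmetrised version often used in similarity tasks.
        d = mc_kl(gmm_a, gmm_b) + mc_kl(gmm_b, gmm_a)
        print(f"symmetrised KL estimate: {d:.2f}")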

  15. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  16. Detecting Math Anxiety with a Mixture Partial Credit Model

    Science.gov (United States)

    Ölmez, Ibrahim Burak; Cohen, Allan S.

    2017-01-01

    The purpose of this study was to investigate a new methodology for detection of differences in middle grades students' math anxiety. A mixture partial credit model analysis revealed two distinct latent classes based on homogeneities in response patterns within each latent class. Students in Class 1 had less anxiety about apprehension of math…

  17. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM within an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic and real networks has shown that the proposed EMM model is superior to state-of-the-art methods.

  18. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    Science.gov (United States)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling, in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having their own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP U-238–Pb-206 zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic U-238–Pb-206 age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.

  19. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  20. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
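
    The tail argument above can be illustrated with a tiny simulation: a two-component Poisson mixture with the same mean as a single Poisson has a heavier upper tail. The rates, mixing weight and threshold below are illustrative assumptions, not values from the paper.

        # Tail comparison: single Poisson vs. Poisson mixture with equal means.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        w, lam_lo, lam_hi = 0.9, 8.0, 28.0
        mean_mix = w * lam_lo + (1 - w) * lam_hi            # = 10.0

        single = rng.poisson(mean_mix, size=n)
        comp = rng.random(n) < w
        mixture = np.where(comp, rng.poisson(lam_lo, size=n), rng.poisson(lam_hi, size=n))

        thresh = 25
        print("P(X > 25), single Poisson :", np.mean(single > thresh))
        print("P(X > 25), Poisson mixture:", np.mean(mixture > thresh))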

  1. Chemical evolution of two-component galaxies. II

    International Nuclear Information System (INIS)

    Caimmi, R.

    1978-01-01

    In order to confirm and refine the results obtained in a previous paper, the chemical evolution of two-component (spheroid + disk) galaxies is derived by rejecting the instantaneous recycling approximation, by means of numerical computations, accounting for (i) the collapse phase of the gas, assumed to be uniform in density and composition, and (ii) a stellar birth-rate function. Computations are performed relative to the solar neighbourhood and to model galaxies which closely resemble the real morphological sequence: in both cases, numerical results are compared with analytical ones. The numerical models of this paper constitute a first-order approximation, while higher-order approximations could be made by rejecting the hypothesis of uniform density and composition and making use of detailed dynamical models. (Auth.)

  2. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  3. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
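
    The mixture cure models discussed above decompose the population survival as a cured fraction plus the survival of uncured patients, S(t) = pi + (1 - pi) * S_u(t). The sketch below evaluates that decomposition numerically under an assumed exponential survival for the uncured group; the paper itself models the uncured survival nonparametrically with proportional hazards, so this is only a toy illustration.

        # Toy numeric sketch of the mixture cure survival function.
        import numpy as np

        def population_survival(t, cure_prob, hazard_uncured):
            """S(t) = pi + (1 - pi) * exp(-lambda * t), exponential S_u assumed."""
            return cure_prob + (1.0 - cure_prob) * np.exp(-hazard_uncured * t)

        t = np.linspace(0.0, 10.0, 6)
        print(np.round(population_survival(t, cure_prob=0.3, hazard_uncured=0.5), 3))
        # The curve plateaus at the cure probability (0.3) instead of decaying to zero.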

  4. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible to use the same equation of state to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict multicomponent adsorption equilibria on the basis of single-component adsorption data. We compare the MPTA and IAST models to a large set of experimental data, obtaining reasonably good agreement with the experimental data and a high degree of predictability. Some limitations of both models are also discussed.

  5. Hydrogenic ionization model for mixtures in non-LTE plasmas

    International Nuclear Information System (INIS)

    Djaoui, A.

    1999-01-01

    The Hydrogenic Ionization Model for Mixtures (HIMM) is a non-Local Thermodynamic Equilibrium (non-LTE), time-dependent ionization model for laser-produced plasmas containing mixtures of elements (species). In this version, both collisional and radiative rates are taken into account. An ionization distribution for each species which is consistent with the ambient electron density is obtained by use of an iterative procedure in a single calculation for all species. Energy levels for each shell having a given principal quantum number and for each ion stage of each species in the mixture are calculated using screening constants. Steady-state non-LTE as well as LTE solutions are also provided. The non-LTE rate equations converge to the LTE solution at sufficiently high densities or as the radiation temperature approaches the electron temperature. The model is particularly useful at low temperatures where convergence problems are usually encountered in our previous models. We apply our model to typical situations in x-ray laser research, laser-produced plasmas and inertial confinement fusion. Our results compare well with previously published results for a selenium plasma. (author)

  6. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  7. Efficient implementation of one- and two-component analytical energy gradients in exact two-component theory

    Science.gov (United States)

    Franzke, Yannick J.; Middendorf, Nils; Weigend, Florian

    2018-03-01

    We present an efficient algorithm for one- and two-component analytical energy gradients with respect to nuclear displacements in the exact two-component decoupling approach to the one-electron Dirac equation (X2C). Our approach is a generalization of the spin-free ansatz by Cheng and Gauss [J. Chem. Phys. 135, 084114 (2011)], where the perturbed one-electron Hamiltonian is calculated by solving a first-order response equation. Computational costs are drastically reduced by applying the diagonal local approximation to the unitary decoupling transformation (DLU) [D. Peng and M. Reiher, J. Chem. Phys. 136, 244108 (2012)] to the X2C Hamiltonian. The introduced error is found to be almost negligible as the mean absolute error of the optimized structures amounts to only 0.01 pm. Our implementation in TURBOMOLE is also available within the finite nucleus model based on a Gaussian charge distribution. For a X2C/DLU gradient calculation, computational effort scales cubically with the molecular size, while storage increases quadratically. The efficiency is demonstrated in calculations of large silver clusters and organometallic iridium complexes.

  8. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  9. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time...... varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  10. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  11. Anisotropic properties of phase separation in two-component dipolar Bose-Einstein condensates

    Science.gov (United States)

    Wang, Wei; Li, Jinbin

    2018-03-01

    Using the Crank-Nicolson method, we calculate ground-state wave functions of two-component dipolar Bose-Einstein condensates (BECs) and show that, due to the dipole-dipole interaction (DDI), the condensate mixture displays anisotropic phase separation. The effects of the DDI, inter-component s-wave scattering, trap potential strength and particle numbers on the density profiles are investigated. Three types of two-component profiles appear: first, a cigar along the z-axis surrounded by a concentric torus; second, a pancake (or blood-cell) shape in the xy-plane with two non-uniform ellipsoids separated by the pancake; and third, two dumbbell shapes.

  12. Phosphatase activity tunes two-component system sensor detection threshold.

    Science.gov (United States)

    Landry, Brian P; Palanki, Rohan; Dyulgyarov, Nikola; Hartsough, Lucas A; Tabor, Jeffrey J

    2018-04-12

    Two-component systems (TCSs) are the largest family of multi-step signal transduction pathways in biology, and a major source of sensors for biotechnology. However, the input concentrations to which biosensors respond are often mismatched with application requirements. Here, we utilize a mathematical model to show that TCS detection thresholds increase with the phosphatase activity of the sensor histidine kinase. We experimentally validate this result in engineered Bacillus subtilis nitrate and E. coli aspartate TCS sensors by tuning their detection threshold up to two orders of magnitude. We go on to apply our TCS tuning method to recently described tetrathionate and thiosulfate sensors by mutating a widely conserved residue previously shown to impact phosphatase activity. Finally, we apply TCS tuning to engineer B. subtilis to sense and report a wide range of fertilizer concentrations in soil. This work will enable the engineering of tailor-made biosensors for diverse synthetic biology applications.

  13. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the

  14. Two-component multistep direct reactions: A microscopic approach

    International Nuclear Information System (INIS)

    Koning, A.J.; Chadwick, M.B.

    1998-03-01

    The authors present two principal advances in multistep direct theory: (1) A two-component formulation of multistep direct reactions, where neutron and proton excitations are explicitly accounted for in the evolution of the reaction, for all orders of scattering. While this may at first seem to be a formidable task, especially for multistep processes where the number of possible reaction pathways becomes large in a two-component formalism, the authors show that this is not so -- a rather simple generalization of the FKK convolution expression [1] automatically generates these pathways. Such considerations are particularly relevant when simultaneously analyzing both neutron and proton emission spectra, which is always important since these processes represent competing decay channels. (2) A new, and fully microscopic, method for calculating MSD cross sections which does not make use of particle-hole state densities but instead directly calculates cross sections for all possible particle-hole excitations (again including an exact book-keeping of the neutron/proton type of the particle and hole at all stages of the reaction) determined from a simple non-interacting shell model. This is in contrast to all previous numerical approaches which sample only a small number of such states to estimate the DWBA strength, and utilize simple analytical formulae for the partial state density, based on the equidistant spacing model. The new approach has been applied, along with theories for multistep compound, compound, and collective reactions, to analyze experimental emission spectra for a range of targets and energies. The authors show that the theory correctly accounts for double-differential nucleon spectra.

  15. KONVERGENSI ESTIMATOR DALAM MODEL MIXTURE BERBASIS MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available Mixture models can estimate the proportion of patients who are cured and the survival function of those who are not cured. In this study, a mixture model is developed for cure rate analysis based on missing data. Several methods are available for analysing missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values through four steps: (1) choose an initial set of parameters for a model, (2) determine the expected values of the missing data, (3) derive new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat from step 2 using the new model. The study shows that, in the EM algorithm, the log-likelihood for the missing data increases after every iteration; hence, under the EM algorithm, the likelihood sequence converges whenever the likelihood is bounded from below.
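
    The convergence property stated above, that the observed-data log-likelihood never decreases across EM iterations, can be demonstrated with a minimal EM implementation. A two-component Gaussian mixture is used below as a simple stand-in for the cure-rate mixture discussed in the abstract; the data and starting values are made up purely for illustration.

        # EM for a two-component Gaussian mixture; asserts monotone log-likelihood.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])

        w, mu, sig = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        prev_ll = -np.inf
        for it in range(50):
            # E-step: responsibilities of each component for each observation.
            dens = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
            joint = dens * np.array([w, 1 - w])
            ll = np.sum(np.log(joint.sum(axis=1)))
            assert ll >= prev_ll - 1e-6, "log-likelihood must not decrease"
            prev_ll = ll
            resp = joint / joint.sum(axis=1, keepdims=True)
            # M-step: update weight, means and standard deviations.
            nk = resp.sum(axis=0)
            w = nk[0] / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        print(f"final log-likelihood {prev_ll:.2f}, weight {w:.2f}, means {np.round(mu, 2)}")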

  16. Molecular Orientation in Two Component Vapor-Deposited Glasses: Effect of Substrate Temperature and Molecular Shape

    Science.gov (United States)

    Powell, Charles; Jiang, Jing; Walters, Diane; Ediger, Mark

    Vapor-deposited glasses are widely investigated for use in organic electronics including the emitting layers of OLED devices. These materials, while macroscopically homogeneous, have anisotropic packing and molecular orientation. By controlling this orientation, outcoupling efficiency can be increased by aligning the transition dipole moment of the light-emitting molecules parallel to the substrate. Light-emitting molecules are typically dispersed in a host matrix; as such, it is imperative to understand molecular orientation in two-component systems. In this study we examine two-component vapor-deposited films and the orientations of the constituent molecules using spectroscopic ellipsometry, UV-vis and IR spectroscopy. The role of temperature, composition and molecular shape as it affects molecular orientation is examined for mixtures of DSA-Ph in Alq3 and in TPD. Deposition temperature relative to the glass transition temperature of the two-component mixture is the primary controlling factor for molecular orientation. In mixtures of DSA-Ph in Alq3, the linear DSA-Ph has a horizontal orientation at low temperatures and a slight vertical orientation maximized at 0.96Tg,mixture, analogous to one-component films.

  17. Determining of migraine prognosis using latent growth mixture models.

    Science.gov (United States)

    Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan

    2011-04-01

    This paper presents a retrospective study to classify patients into subtypes of the treatment according to baseline and longitudinally observed values, considering heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, as it accounts for population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study data were based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described for the severity, frequency, and duration of headache separately, and the probabilities of each subgroup were estimated by using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in the patients with nausea, vomiting, photophobia and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Nausea, vomiting and photophobia were the most significant factors to identify developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  18. Spatially adaptive mixture modeling for analysis of FMRI time series.

    Science.gov (United States)

    Vincent, Thomas; Risser, Laurent; Ciuciu, Philippe

    2010-04-01

    Within-subject analysis in fMRI essentially addresses two problems: the detection of brain regions eliciting evoked activity and the estimation of the underlying dynamics. In Makni et al., 2005 and Makni et al., 2008, a detection-estimation framework has been proposed to tackle these problems jointly, since they are connected to one another. In the Bayesian formalism, detection is achieved by modeling activating and nonactivating voxels through independent mixture models (IMM) within each region while hemodynamic response estimation is performed at a regional scale in a nonparametric way. Instead of IMMs, in this paper we take advantage of spatial mixture models (SMM) for their nonlinear spatial regularizing properties. The proposed method is unsupervised and spatially adaptive in the sense that the amount of spatial correlation is automatically tuned from the data and this setting automatically varies across brain regions. In addition, the level of regularization is specific to each experimental condition since both the signal-to-noise ratio and the activation pattern may vary across stimulus types in a given brain region. These aspects require the precise estimation of multiple partition functions of underlying Ising fields. This is addressed efficiently by first using path sampling for a small subset of fields and then a recently developed fast extrapolation technique for the large remaining set. Simulation results emphasize that detection relying on supervised SMM outperforms its IMM counterpart and that unsupervised spatial mixture models achieve similar results without any hand-tuning of the correlation parameter. On real datasets, the gain is illustrated in a localizer fMRI experiment: brain activations appear more spatially resolved using SMM in comparison with classical general linear model (GLM)-based approaches, while estimating a specific parcel-based HRF shape. Our approach therefore validates the treatment of unsmoothed fMRI data without fixed GLM

  19. Shear viscosity of liquid mixtures: Mass dependence

    International Nuclear Information System (INIS)

    Kaushal, Rohan; Tankeshwar, K.

    2002-06-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. It has been found that the theoretical result is in good agreement with the computer simulation result for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios the shear viscosity of the mixture is not explained by any of the empirical models. (author)

  20. Shear viscosity of liquid mixtures Mass dependence

    CERN Document Server

    Kaushal, R

    2002-01-01

    Expressions for the zeroth, second, and fourth sum rules of the transverse stress autocorrelation function of a two-component fluid have been derived. These sum rules and Mori's memory function formalism have been used to study the shear viscosity of Ar-Kr and isotopic mixtures. It has been found that the theoretical result is in good agreement with the computer simulation result for the Ar-Kr mixture. The mass dependence of the shear viscosity for different mole fractions shows that deviation from the ideal linear model arises even from the mass difference between the two species of the fluid mixture. At higher mass ratios the shear viscosity of the mixture is not explained by any of the empirical models.

  1. Implementation of two-component advective flow solution in XSPEC

    Science.gov (United States)

    Debnath, Dipak; Chakrabarti, Sandip K.; Mondal, Santanu

    2014-05-01

    Spectral and temporal properties of black hole candidates can be explained reasonably well using Chakrabarti-Titarchuk solution of two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disc accretion rate and the halo accretion rate, the latter being composed of a sub-Keplerian, low-angular-momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo (which creates the Compton cloud region) rate with respect to the Keplerian disc rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time, we made it user-friendly by implementing it into XSPEC software of Goddard Space Flight Center (GSFC)/NASA. This enables any user to extract physical parameters of the accretion flows, such as two accretion rates, the shock location, the shock strength, etc., for any black hole candidate. We provide some examples of fitting a few cases using this model. Most importantly, unlike any other model, we show that TCAF is capable of predicting timing properties from the spectral fits, since in TCAF, a shock is responsible for deciding spectral slopes as well as quasi-periodic oscillation frequencies.

  2. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds (water, methanol, and glycols) are modeled assuming the presence or absence of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary and three quaternary mixtures are considered. It is shown that overall excellent correlation for binary mixtures and satisfactory prediction results for multicomponent systems are obtained. There are significant differences between the various modeling approaches, and the best results are obtained when...

  3. Droplet size and velocity at the exit of a nozzle with two-component near critical and critical flow

    International Nuclear Information System (INIS)

    Lemonnier, H.; Camelo-Cavalcanti, E.S.

    1993-01-01

    Two-component critical flow modelling is an important issue for safety studies of various hazardous industrial activities. When the flow quality is high, the critical flow rate prediction is sensitive to the modelling of the gas-droplet mixture interfacial area. In order to improve the description of these flows, experiments were conducted with air-water flows in converging nozzles. The pressure was 2 and 4 bar and the gas mass quality ranged between 20% and 100%. The droplet size and velocity were measured close to the outlet section of a nozzle with a 10 mm diameter throat. Subcritical and critical conditions were observed. These data are compared with the predictions of a critical flow model which includes an interfacial area model based on the classical ideas of Hinze and Kolmogorov. (authors). 9 figs., 12 refs

  4. Effective dielectric mixture model for characterization of diesel contaminated soil

    International Nuclear Information System (INIS)

    Al-Mattarneh, H.M.A.

    2007-01-01

    Human exposure to soil contaminated by diesel isomers can have serious health consequences such as neurological diseases or cancer. The potential of dielectric measuring techniques for electromagnetic characterization of contaminated soils was investigated in this paper. The purpose of the research was to develop an empirical dielectric mixture model for soil hydrocarbon contamination applications. The paper described the basic theory and elaborated on dielectric mixture theory. The analytical and empirical models were explained in simple algebraic formulas. The experimental study was then described with reference to materials, properties and experimental results. The results of the analytical models were also mathematically explained. The proposed semi-empirical model was also presented. According to the results for the electromagnetic properties of dry soil contaminated with diesel, the presence of diesel had no significant effect on the electromagnetic properties of dry soil. It was concluded that diesel made no contribution to the soil electrical conductivity, which confirmed its nonconductive character. The results for diesel-contaminated soil at saturation indicated that both the dielectric constant and loss factor of the soil decreased with increasing diesel content. 15 refs., 2 tabs., 9 figs

  5. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  6. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
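
    The Chao estimator mentioned above is a lower bound for the population size computed from the frequency-of-frequency counts of a single list, N_hat = n + f1^2 / (2 f2), where f1 and f2 are the numbers of individuals seen exactly once and exactly twice. The sketch below applies that formula to simulated counts; the gamma-distributed detection rates are an illustrative assumption standing in for the heterogeneity handled by the Poisson mixture model.

        # Chao lower-bound estimator from simulated single-list count data.
        import numpy as np

        rng = np.random.default_rng(0)
        true_N = 1000
        rates = rng.gamma(shape=1.0, scale=0.8, size=true_N)   # heterogeneous Poisson rates
        counts = rng.poisson(rates)
        observed = counts[counts > 0]                          # individuals appearing at least once

        n_obs = observed.size
        f1 = np.sum(observed == 1)        # seen exactly once
        f2 = np.sum(observed == 2)        # seen exactly twice
        chao = n_obs + f1 ** 2 / (2.0 * f2)
        print(f"observed {n_obs}, Chao lower bound {chao:.0f}, true size {true_N}")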

  7. A mixture model for robust registration in Kinect sensor

    Science.gov (United States)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from the drawback of low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision by a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  8. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  9. Spin-excited oscillations in two-component fermion condensates

    International Nuclear Information System (INIS)

    Maruyama, Tomoyuki; Bertsch, George F.

    2006-01-01

    We investigate collective spin excitations in two-component fermion condensates with special consideration of unequal populations of the two components. The frequencies of monopole and dipole modes are calculated using Thomas-Fermi theory and the scaling approximation. As the fermion-fermion coupling is varied, the system shows various phases of the spin configuration. We demonstrate that spin oscillations have more sensitivity to the spin phase structures than the density oscillations

  10. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  11. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients’ proportions sum to one. The thesis begins by introducing mixture

  12. Tractography segmentation using a hierarchical Dirichlet processes mixture model.

    Science.gov (United States)

    Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik

    2011-01-01

    In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Clustering disaggregated load profiles using a Dirichlet process mixture model

    International Nuclear Information System (INIS)

    Granell, Ramon; Axon, Colin J.; Wallom, David C.H.

    2015-01-01

    Highlights: • We show that the Dirichlet process mixture model is scalable. • Our model does not require the number of clusters as an input. • Our model creates clusters based only on the features of the demand profiles. • We have used both residential and commercial data sets. - Abstract: The increasing availability of substantial quantities of power-use data in both the residential and commercial sectors raises the possibility of mining the data to the advantage of both consumers and network operations. We present a Bayesian non-parametric model to cluster load profiles from households and business premises. Evaluations show that our model performs as well as other popular clustering methods, but unlike most other methods it does not require the number of clusters to be predetermined by the user. We used the so-called ‘Chinese restaurant process’ method to solve the model, making use of the Dirichlet-multinomial distribution. The number of clusters grew logarithmically with the quantity of data, making the technique suitable for scaling to large data sets. We were able to show that the model could distinguish features such as the nationality, household size, and type of dwelling between the cluster memberships
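
    The logarithmic growth in the number of clusters noted above is a generic property of the Chinese restaurant process prior. The sketch below simulates that prior directly; the concentration parameter alpha and the sample sizes are illustrative assumptions, and no actual load-profile features are modelled.

        # Simulation of the Chinese restaurant process prior: the expected number of
        # clusters (tables) grows roughly like alpha * log(n).
        import numpy as np

        def crp_table_count(n_customers, alpha, rng):
            tables = []                                   # customers per table
            for _ in range(n_customers):
                probs = np.array(tables + [alpha], dtype=float)
                choice = rng.choice(len(probs), p=probs / probs.sum())
                if choice == len(tables):
                    tables.append(1)                      # open a new table (cluster)
                else:
                    tables[choice] += 1
            return len(tables)

        rng = np.random.default_rng(0)
        for n in (100, 1000, 10000):
            k = np.mean([crp_table_count(n, alpha=2.0, rng=rng) for _ in range(5)])
            print(f"n = {n:5d}  ->  average clusters = {k:.1f}")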

  14. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  15. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  16. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, and hence it was difficult to reach any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of
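
    The two mixture concepts compared in this study can be illustrated numerically. The sketch below computes concentration-addition (CA) and response-addition (RA) predictions for a hypothetical binary mixture, assuming log-logistic concentration-response curves whose parameters (EC50 and slope for compounds A and B) are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Log-logistic concentration-response: fractional effect in [0, 1)."""
    return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-12)) ** slope)

def ecx(x, ec50, slope):
    """Concentration giving fractional effect x (inverse of `effect`)."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

# hypothetical single-compound curves (e.g. a TPO and a NIS inhibitor)
A = dict(ec50=1.0, slope=2.0)
B = dict(ec50=10.0, slope=1.5)

def ca_effect(c_total, frac_a):
    """Concentration addition: solve sum_i c_i / ECx_i = 1 for the effect x."""
    g = lambda x: (frac_a * c_total / ecx(x, **A)
                   + (1 - frac_a) * c_total / ecx(x, **B) - 1.0)
    return brentq(g, 1e-9, 1 - 1e-9)

def ra_effect(c_total, frac_a):
    """Response addition (independent action)."""
    ea = effect(frac_a * c_total, **A)
    eb = effect((1 - frac_a) * c_total, **B)
    return 1.0 - (1.0 - ea) * (1.0 - eb)

for c in (0.5, 2.0, 8.0):
    print(c, round(ca_effect(c, 0.5), 3), round(ra_effect(c, 0.5), 3))
```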

  17. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    International Nuclear Information System (INIS)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio

    2013-01-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, and hence it was difficult to reach any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of

  18. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying models' generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting for each window length the predictive...... together whereas short windows are more unstable and influenced by noise and we find that our heuristic correctly identifies an adequate level of complexity. On single subject resting state fMRI data we find that dynamic models generally outperform static models and using the proposed heuristic points...

  19. The two-component afterglow of Swift GRB 050802

    Science.gov (United States)

    Oates, S. R.; de Pasquale, M.; Page, M. J.; Blustin, A. J.; Zane, S.; McGowan, K.; Mason, K. O.; Poole, T. S.; Schady, P.; Roming, P. W. A.; Page, K. L.; Falcone, A.; Gehrels, N.

    2007-09-01

    This paper investigates GRB 050802, one of the best examples of a Swift gamma-ray burst afterglow that shows a break in the X-ray light curve, while the optical counterpart decays as a single power law. This burst has an optically bright afterglow of 16.5 mag, detected throughout the 170-650 nm spectral range of the Ultraviolet and Optical Telescope (UVOT) onboard Swift. Observations began with the X-ray Telescope and UVOT telescopes 286 s after the initial trigger and continued for 1.2 × 10^6 s. The X-ray light curve consists of three power-law segments: a rise until 420 s, followed by a slow decay with α = 0.63 ± 0.03 until 5000 s, after which the light curve decays faster with a slope of α_3 = 1.59 ± 0.03. The optical light curve decays as a single power law with α_O = 0.82 ± 0.03 throughout the observation. The X-ray data on their own are consistent with the break at 5000 s being due to the end of energy injection. Modelling the optical to X-ray spectral energy distribution, we find that the optical afterglow cannot be produced by the same component as the X-ray emission at late times, ruling out a single-component afterglow. We therefore considered two-component jet models and find that the X-ray and optical emission is best reproduced by a model in which both components are energy injected for the duration of the observed afterglow and the X-ray break at 5000 s is due to a jet break in the narrow component. This bright, well-observed burst is likely a guide for interpreting the surprising finding of Swift that bursts seldom display achromatic jet breaks.
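
    To illustrate how such a temporal break is identified, the sketch below fits a single and a sharply broken power law to a synthetic X-ray light curve with a break near 5000 s. The light curve, decay slopes and fitting choices are invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_pl(t, norm, alpha):
    return norm * t ** (-alpha)

def broken_pl(t, norm, alpha1, alpha2, t_break):
    """Sharply broken power law, continuous at t_break."""
    f = norm * t ** (-alpha1)
    late = t > t_break
    f[late] = norm * t_break ** (alpha2 - alpha1) * t[late] ** (-alpha2)
    return f

# synthetic light curve: decay slope 0.6 steepening to 1.6 at 5000 s, 10% scatter
rng = np.random.default_rng(3)
t = np.logspace(2.6, 6, 60)
flux = broken_pl(t, 1e3, 0.6, 1.6, 5000.0) * rng.normal(1.0, 0.1, t.size)

p_single, _ = curve_fit(single_pl, t, flux, p0=[1e3, 1.0])
p_broken, _ = curve_fit(broken_pl, t, flux, p0=[1e3, 0.5, 1.5, 3000.0])
print("single power law:", p_single.round(2))
print("broken power law:", p_broken.round(2), "(last value: break time in s)")
```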

  20. Analysis of water hammer in two-component two-phase flows

    International Nuclear Information System (INIS)

    Warde, H.; Marzouk, E.; Ibrahim, S.

    1989-01-01

    The water hammer phenomena caused by a sudden valve closure in air-water two-phase flows must be clarified for the safety analysis of LOCA in reactors and further for the safety of boilers, chemical plants, and pipe transport of fluids such as petroleum and natural gas. In the present work water hammer phenomena caused by sudden valve closure in two-component two-phase flows are investigated theoretically and experimentally. The phenomena are more complicated than in single-phase flows due to the presence of a compressible component. Basic partial differential equations based on a one-dimensional homogeneous flow model are solved by the method of characteristics. The analysis is extended to include friction in a two-phase mixture depending on the local flow pattern. The profiles of the pressure transients, the propagation velocity of pressure waves and the effect of valve closure on the transient pressure are found. Different two-phase flow pattern and frictional pressure drop correlations were used, including the Baker, Chisholm, and Beggs and Brill correlations. The effect of the flow pattern on the characteristics of wave propagation is discussed, primarily to indicate the effect of void fraction on the velocity of wave propagation and on the attenuation of pressure waves. Transient pressures in the mixture were recorded at different air void fractions, rates of uniform valve closure and liquid flow velocities with the aid of pressure transducers and transient wave form recorders interfaced with an on-line PC. The results are compared with computation, and good agreement was obtained within experimental accuracy.

  1. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  2. A smooth mixture of Tobits model for healthcare expenditure.

    Science.gov (United States)

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Modeling of columnar and equiaxed solidification of binary mixtures

    International Nuclear Information System (INIS)

    Roux, P.

    2005-12-01

    This work deals with the modelling of dendritic solidification in binary mixtures. Large-scale phenomena are represented by volume averaging of the local conservation equations. This method allows one to rigorously derive the partial differential equations for the averaged fields and the closure problems associated with the deviations. Such problems can be solved numerically on periodic cells representative of dendritic structures, in order to give a precise evaluation of macroscopic transfer coefficients (drag coefficients, exchange coefficients, diffusion-dispersion tensors...). The method had already been applied to a model of a columnar dendritic mushy zone, and it is extended here to equiaxed dendritic solidification, where solid grains can move. The two-phase flow is modelled with an Eulerian-Eulerian approach, and the novelty is to account for the dispersion of the solid velocity through the kinetic agitation of the particles. A coupling of the two models is proposed thanks to an original adaptation of the columnar model allowing for undercooling calculation: a solid-liquid interfacial area density is introduced and calculated. Finally, direct numerical simulations of crystal growth are proposed with a diffuse interface method for a representation of local phenomena. (author)

  4. Microbial comparative pan-genomics using binomial mixture models

    DEFF Research Database (Denmark)

    Ussery, David; Snipen, L; Almøy, T

    2009-01-01

    The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. RESULTS: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection...... probabilities. Estimated pan-genome sizes range from small (around 2600 gene families) in Buchnera aphidicola to large (around 43000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely...

  5. Two component systems: physiological effect of a third component.

    Directory of Open Access Journals (Sweden)

    Baldiri Salvado

    Full Text Available Signal transduction systems mediate the response and adaptation of organisms to environmental changes. In prokaryotes, this signal transduction is often done through Two Component Systems (TCS). These TCS are phosphotransfer protein cascades, and in their prototypical form they are composed of a kinase that senses the environmental signals (SK) and of a response regulator (RR) that regulates the cellular response. This basic motif can be modified by the addition of a third protein that interacts either with the SK or the RR in a way that could change the dynamic response of the TCS module. In this work we aim at understanding the effect of such an additional protein (which we call "third component") on the functional properties of a prototypical TCS. To do so we build mathematical models of TCS with alternative designs for their interaction with that third component. These mathematical models are analyzed in order to identify the differences in dynamic behavior inherent to each design, with respect to functionally relevant properties such as sensitivity to changes in either the parameter values or the molecular concentrations, temporal responsiveness, possibility of multiple steady states, or stochastic fluctuations in the system. The differences are then correlated to the physiological requirements that impinge on the functioning of the TCS. This analysis sheds light on both the dynamic behavior of synthetically designed TCS and the conditions under which natural selection might favor each of the designs. We find that a third component that modulates SK activity increases the parameter space where a bistable response of the TCS module to signals is possible if SK is monofunctional, but decreases it when the SK is bifunctional. The presence of a third component that modulates RR activity decreases the parameter space where a bistable response of the TCS module to signals is possible.

  6. Two component micro injection moulding for moulded interconnect devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    2008-01-01

    Moulded interconnect devices (MIDs) contain huge possibilities for many applications in micro electro-mechanical-systems because of their capability of reducing the number of components, process steps and finally in miniaturization of the product. Among the available MID process chains, two component injection moulding is one of the most industrially adaptive processes. However, the use of two component injection moulding for MID fabrication, with circuit patterns in the sub-millimeter range, is still a big challenge at the present state of technology. The scope of the current Ph.D. project...... and a reasonable adhesion between them. • Selective metallization of the two component plastic part (coating one polymer with metal and leaving the other one uncoated) To overcome these two main issues in MID fabrication for micro applications, the current Ph.D. project explores the technical difficulties...

  7. Two-component feedback loops and deformed mechanics

    International Nuclear Information System (INIS)

    Tourigny, David S.

    2015-01-01

    It is shown that a general two-component feedback loop can be viewed as a deformed Hamiltonian system. Some of the implications of using ideas from theoretical physics to study biological processes are discussed. - Highlights: • Two-component molecular feedback loops are viewed as q-deformed Hamiltonian systems. • Deformations are reversed using Jackson derivatives to take advantage of working in the Hamiltonian limit. • New results are derived for the particular examples considered. • General deformations are suggested to be associated with a broader class of biological processes

  8. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    Full Text Available Abstract The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative endpoints (density and bubble points) using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  9. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  10. Induced polarization of clay-sand mixtures: experiments and modeling

    International Nuclear Information System (INIS)

    Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.

    2012-01-01

    were performed with a cylindrical four-electrode sample-holder (a PVC cylinder 30 cm in length and 19 cm in diameter) associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. These electrodes were installed at 10 cm from the base of the sample holder and regularly spaced (every 90 degrees). The results illustrate the strong impact of the Cationic Exchange Capacity (CEC) of the clay minerals upon the complex conductivity. The amplitude of the in-phase conductivity of the kaolinite-clay samples is strongly dependent on the saturating fluid salinity for all volumetric clay fractions, whereas the in-phase conductivity of the smectite-clay samples is quite independent of the salinity, except at low clay content (5% and 1% of clay in volume). This is due to the strong and constant surface conductivity of smectite associated with its very high CEC. The quadrature conductivity increases steadily with the CEC and the clay content. We observe that the frequency dependence of the quadrature conductivity of sand-kaolinite mixtures is stronger than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents (1% in volume of kaolinite-clay). This is due to the constant surface site density of Na counter-ions in the Stern layer of clay materials. At the lowest clay content (1%), the magnitude of the quadrature conductivity increases with the salinity, as expected for silica sands. In this case, the surface site density of Na counter-ions in the Stern layer increases with salinity. The experimental data show good agreement with predicted values given by our Spectral Induced Polarization (SIP) model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex

  11. The fractional virial potential energy in two-component systems

    Directory of Open Access Journals (Sweden)

    Caimmi R.

    2008-01-01

    Full Text Available Two-component systems are conceived as macrogases, and the related equation of state is expressed using the virial theorem for subsystems, under the restriction of homeoidally striated density profiles. Explicit calculations are performed for a useful reference case and a few cases of astrophysical interest, both with and without truncation radius. Shallower density profiles are found to yield an equation of state, φ = φ(y, m), characterized (for assigned values of the fractional mass, m = M_j/M_i) by the occurrence of two extremum points, a minimum and a maximum, as found in an earlier attempt. Steeper density profiles produce a similar equation of state, which implies that a special value of m is related to a critical curve where the above mentioned extremum points reduce to a single horizontal inflexion point, and curves below the critical one show no extremum points. The similarity of the isofractional mass curves to van der Waals' isothermal curves, suggests the possibility of a phase transition in a bell-shaped region of the (O, y, φ) plane, where the fractional truncation radius along a selected direction is y = R_j/R_i, and the fractional virial potential energy is φ = (E_ji)_vir/(E_ij)_vir. Further investigation is devoted to mass distributions described by Hernquist (1990) density profiles, for which an additional relation can be used to represent a sample of N = 16 elliptical galaxies (EGs) on the (O, y, φ) plane. Even if the evolution of elliptical galaxies and their hosting dark matter (DM) haloes, in the light of the model, has been characterized by equal fractional mass, m, and equal scaled truncation radius, or concentration, Ξ_u = R_u/r_u†, u = i, j, still it cannot be considered as strictly homologous, due to different values of fractional truncation radii, y, or fractional scaling radii, y† = r_j†/r_i†, deduced from sample objects.

  12. Modelling the effect of mixture components on permeation through skin.

    Science.gov (United States)

    Ghafourian, T; Samaras, E G; Brooks, J D; Riviere, J E

    2010-10-15

    A vehicle influences the concentration of penetrant within the membrane, affecting its diffusivity in the skin and rate of transport. Despite the huge amount of effort made for the understanding and modelling of the skin absorption of chemicals, a reliable estimation of the skin penetration potential from formulations remains a challenging objective. In this investigation, quantitative structure-activity relationship (QSAR) was employed to relate the skin permeation of compounds to the chemical properties of the mixture ingredients and the molecular structures of the penetrants. The skin permeability dataset consisted of permeability coefficients of 12 different penetrants each blended in 24 different solvent mixtures measured from finite-dose diffusion cell studies using porcine skin. Stepwise regression analysis resulted in a QSAR employing two penetrant descriptors and one solvent property. The penetrant descriptors were octanol/water partition coefficient, logP and the ninth order path molecular connectivity index, and the solvent property was the difference between boiling and melting points. The negative relationship between skin permeability coefficient and logP was attributed to the fact that most of the drugs in this particular dataset are extremely lipophilic in comparison with the compounds in the common skin permeability datasets used in QSAR. The findings show that compounds formulated in vehicles with small boiling and melting point gaps will be expected to have higher permeation through skin. The QSAR was validated internally, using a leave-many-out procedure, giving a mean absolute error of 0.396. The chemical space of the dataset was compared with that of the known skin permeability datasets and gaps were identified for future skin permeability measurements. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Toxicological risk assessment of complex mixtures through the Wtox model

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2015-01-01

    Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to the living species. In this work, the environmental risk was assessed addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from textile, metal-mechanic and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation of the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.

  14. Polymer mixtures in confined geometries: Model systems to explore ...

    Indian Academy of Sciences (India)

    to mean field behavior for very long chains, the critical behavior of mixtures confined into thin film geometry falls in the 2d Ising class irrespective of chain length. ..... AB interface does not approach the wall; (b) corresponds to a temperature .... Very recently, these theoretical studies have been extended to polymer mixtures.

  15. Competitive adsorption of a two-component gas on a deformable adsorbent

    International Nuclear Information System (INIS)

    Usenko, A S

    2014-01-01

    We investigate the competitive adsorption of a two-component gas on the surface of an adsorbent whose adsorption properties vary due to the adsorbent deformation. The essential difference of adsorption isotherms for a deformable adsorbent both from the classical Langmuir adsorption isotherms of a two-component gas and from the adsorption isotherms of a one-component gas is obtained, taking into account variations in the adsorption properties of the adsorbent in adsorption. We establish bistability and tristability of the system caused by variations in adsorption properties of the adsorbent in competitive adsorption of gas particles on it. We derive conditions under which adsorption isotherms of a binary gas mixture have two stable asymptotes. It is shown that the specific features of the behavior of the system under study can be described in terms of a potential of the known explicit form. (paper)
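
    For reference, the classical competitive Langmuir isotherm for a binary gas on a rigid (non-deformable) adsorbent, which is the baseline the paper generalizes, can be evaluated as in the sketch below. On a deformable adsorbent the equilibrium constants would themselves depend on coverage, which this sketch deliberately omits; all parameter values are illustrative.

```python
import numpy as np

def competitive_langmuir(p1, p2, K1, K2):
    """Fractional coverages of a binary gas on a rigid adsorbent."""
    denom = 1.0 + K1 * p1 + K2 * p2
    return K1 * p1 / denom, K2 * p2 / denom

# coverage of component 1 is suppressed as the partial pressure of component 2 grows
p1 = 1.0                      # fixed partial pressure of component 1 (arbitrary units)
for p2 in (0.0, 1.0, 5.0, 20.0):
    th1, th2 = competitive_langmuir(p1, p2, K1=2.0, K2=0.5)
    print(f"p2={p2:5.1f}  theta1={th1:.3f}  theta2={th2:.3f}")
```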

  16. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical approach used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties provide remarkable results; in addition, they show a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied by using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to an invalid result. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia. Lastly, the results showed a negative relationship between rubber price and stock market price for all selected countries.
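
    A minimal frequentist sketch of the component-selection step is given below: Gaussian mixtures with k = 1,...,5 components are fitted to hypothetical bivariate return data and the k minimizing the Bayesian Information Criterion is kept. The paper's own analysis is Bayesian and uses real price series, so this only illustrates information-criterion-based selection.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# hypothetical bivariate returns (e.g. rubber price vs. a stock index), two regimes
rng = np.random.default_rng(42)
X = np.vstack([
    rng.multivariate_normal([0.001, 0.002], [[1e-4, -4e-5], [-4e-5, 1e-4]], 400),
    rng.multivariate_normal([-0.003, -0.004], [[4e-4, -2e-4], [-2e-4, 4e-4]], 200),
])

bics = []
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics.append(gm.bic(X))          # lower BIC = better trade-off of fit vs. complexity
best_k = int(np.argmin(bics)) + 1
print("BIC per k:", np.round(bics, 1), "-> chosen k =", best_k)
```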

  17. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  18. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  19. Modelling of spark to ignition transition in gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Akram, M.

    1996-10-01

    This thesis pertains to the models for studying sparking in chemically inert gases. The processes taking place in a spark to flame transition can be segregated into physical and chemical processes, and this study is focused on physical processes. The plasma is regarded as a single-substance material. One and two-dimensional models are developed. The transfer of electrical energy into thermal energy of the gas and its redistribution in space and time along with the evolution of a plasma kernel is studied in the time domain ranging from 10 ns to 40 μs. In the case of ultra-fast sparks, the propagation of the shock and its reflection from a rigid wall is presented. The influence of electrode shape and the gap size on the flow structure development is found to be a dominating factor. It is observed that the flow structure that has developed in the early stage more or less prevails at later stages and strongly influences the shape and evolution of the hot kernel. The electrode geometry and configuration are responsible for the development of the flow structure. The strength of the vortices generated in the flow field is influenced by the power input to the gap and their location of emergence is dictated by the electrode shape and configuration. The heat transfer after 2 μs in the case of ultra-fast sparks is dominated by convection and diffusion. The strong mixing produced by hydrodynamic effects and the electrode geometry give the indication that the magnetic pinch effect might be negligible. Finally, a model for a multicomponent gas mixture is presented. The chemical kinetics mechanism for dissociation and ionization is introduced. 56 refs

  20. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
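
    The sketch below shows the basic computation behind a Poisson binomial N-mixture model: the latent abundance is summed out up to a finite bound and the resulting likelihood is maximized numerically. The simulated bird-count matrix, the bound K = 100 and the optimizer choice are illustrative assumptions, not the screening setup of the paper.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize
from scipy.special import expit

def nll(params, y, K=100):
    """Negative log-likelihood of a Poisson binomial N-mixture model.
    y: sites x visits count matrix; latent abundance summed out up to K."""
    lam = np.exp(params[0])          # site-level expected abundance
    p = expit(params[1])             # per-visit detection probability
    N = np.arange(K + 1)             # support of the latent abundance
    prior = poisson.pmf(N, lam)                            # (K+1,)
    # P(y_ij | N) for every site, visit and candidate abundance N
    like = binom.pmf(y[:, :, None], N[None, None, :], p)   # (sites, visits, K+1)
    site_like = (like.prod(axis=1) * prior).sum(axis=1)    # (sites,)
    return -np.log(site_like + 1e-300).sum()

# simulate 150 sites, 3 visits, true lambda = 4, detection p = 0.4
rng = np.random.default_rng(7)
N_true = rng.poisson(4.0, 150)
y = rng.binomial(N_true[:, None], 0.4, (150, 3))

fit = minimize(nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
print("lambda_hat =", np.exp(fit.x[0]).round(2), "p_hat =", expit(fit.x[1]).round(2))
```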

  1. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  2. A two-component NZRI metamaterial based rectangular cloak

    Directory of Open Access Journals (Sweden)

    Sikder Sunbeam Islam

    2015-10-01

    Full Text Available A new two-component, near zero refractive index (NZRI) metamaterial is presented for electromagnetic rectangular cloaking operation in the microwave range. In the basic design a pi-shaped metamaterial was developed and its characteristics were investigated for wave propagation through the material along the two major axes (x and z). For the z-axis wave propagation, it shows more than 2 GHz bandwidth and for the x-axis wave propagation, it exhibits more than 1 GHz bandwidth of the NZRI property. The metamaterial was then utilized in designing a rectangular cloak where a metal cylinder was cloaked perfectly in the C-band area of the microwave regime. Experimental results were provided for the metamaterial and the cloak, and these results were compared with the simulated results. This is a novel and promising design for its two-component NZRI characteristics and rectangular cloaking operation in the electromagnetic paradigm.

  3. Brazilian two-component TLD albedo neutron individual monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Martins, M.M., E-mail: marcelo@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Mauricio, C.L.P., E-mail: claudia@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Fonseca, E.S. da, E-mail: evaldo@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Silva, A.X. da, E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao em Engenharia, COPPE/PEN Caixa Postal 68509, CEP: 21941-972, Rio de Janeiro, RJ (Brazil)

    2010-12-15

    Since 1983, Instituto de Radioprotecao e Dosimetria, Brazil, uses a TLD one-component albedo neutron monitor, which has a different calibration factor for each installation type. In order to improve its energy response, a two-component albedo monitor was developed, which measures the thermal neutron component in addition to the albedo one. The two-component monitor has been calibrated in reference neutron fields: thermal, five accelerator-produced monoenergetic beams (70, 144, 565, 1200 and 5000 keV) and five radionuclide sources (252Cf, 252Cf(D2O), 241Am-Be, 241Am-B and 238Pu-Be) at several distances. Since January 2008, it has been used routinely, mainly by Brazilian workers who handle neutron sources at different distances and degrees of moderation, such as in well logging and calibration facilities.

  4. The Fractional Virial Potential Energy in Two-Component Systems

    Directory of Open Access Journals (Sweden)

    Caimmi, R.

    2008-12-01

    Full Text Available Two-component systems are conceived as macrogases, and the related equation of state is expressed using the virial theorem for subsystems, under the restriction of homeoidally striated density profiles. Explicit calculations are performed for a useful reference case and a few cases of astrophysical interest, both with and without truncation radius. Shallower density profiles are found to yield an equation of state, $\phi=\phi(y,m)$, characterized (for assigned values of the fractional mass, $m=M_j/M_i$) by the occurrence of two extremum points, a minimum and a maximum, as found in an earlier attempt. Steeper density profiles produce a similar equation of state, which implies that a special value of $m$ is related to a critical curve where the above mentioned extremum points reduce to a single horizontal inflexion point, and curves below the critical one show no extremum points. The similarity of the isofractional mass curves to van der Waals' isothermal curves, suggests the possibility of a phase transition in a bell-shaped region of the $({\sf O}y\phi)$ plane, where the fractional truncation radius along a selected direction is $y=R_j/R_i$, and the fractional virial potential energy is $\phi=(E_{ji})_\mathrm{vir}/(E_{ij})_\mathrm{vir}$. Further investigation is devoted to mass distributions described by Hernquist (1990) density profiles, for which an additional relation can be used to represent a sample of $N=16$ elliptical galaxies (EGs) on the $({\sf O}y\phi)$ plane. Even if the evolution of elliptical galaxies and their hosting dark matter (DM) haloes, in the light of the model, has been characterized by equal fractional mass, $m$, and equal scaled truncation radius, or concentration, $\Xi_u=R_u/r_u^\dagger$, $u=i,j$, still it cannot be considered as strictly homologous, due to different values of fractional truncation radii, $y$, or fractional scaling radii, $y^\dagger=r_j^\dagger/r_i^\dagger$, deduced from sample objects.

  5. On the Alexander polynominals of alternating two-component links

    Directory of Open Access Journals (Sweden)

    Mark E. Kidwell

    1979-01-01

    Full Text Available Let L be an alternating two-component link with Alexander polynomial Δ(x,y). Then the polynomials (1−x)Δ(x,y) and (1−y)Δ(x,y) are alternating. That is, (1−y)Δ(x,y) can be written as ∑_{i,j} c_{ij} x^i y^j in such a way that (−1)^{i+j} c_{ij} ≥ 0.

  6. Morphology-tunable and photoresponsive properties in a self-assembled two-component gel system.

    Science.gov (United States)

    Zhou, Yifeng; Xu, Miao; Yi, Tao; Xiao, Shuzhang; Zhou, Zhiguo; Li, Fuyou; Huang, Chunhui

    2007-01-02

    Photoresponsive C3-symmetrical trisurea self-assembling building blocks containing three azobenzene groups (LC10 and LC4) at the rim were designed and synthesized. By introducing a trisamide gelator (G18), which can self-aggregate through hydrogen bonds of the acylamino moieties to form a fibrous network, the mixture of LC10 (or LC4) and G18 forms an organogel with a coral-like supramolecular structure from 1,4-dioxane. The cooperation of hydrogen bonding and the hydrophobic diversity between these components are the main contributions to the specific superstructure. The two-component gel exhibits reversible trans-to-cis photoisomerization without breakage of the gel state.

  7. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a

  8. Real time tracking by LOPF algorithm with mixture model

    Science.gov (United States)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points, which serve as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can concentrate attention on the more important optimum candidates and avoid unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational cost. The threshold is also a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use the contour cue to select the particles and the colour cue to describe the targets in a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
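
    The edge-seeded initialization idea can be sketched as follows: compute a Sobel gradient magnitude, apply a simple adaptive threshold, and draw particle centres from the surviving edge pixels. The threshold rule, the toy frame and the function name seed_particles are assumptions for illustration; the LOPF weighting and the Bhattacharyya-based colour model are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def seed_particles(frame, n_particles=100, rng=None):
    """Pick particle centres from strong Sobel edges of a grayscale frame."""
    rng = rng or np.random.default_rng()
    gx = ndimage.sobel(frame.astype(float), axis=1)
    gy = ndimage.sobel(frame.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    thr = mag.mean() + mag.std()          # simple adaptive threshold
    ys, xs = np.nonzero(mag > thr)
    if len(xs) == 0:                      # fall back to uniform seeding
        ys = rng.integers(0, frame.shape[0], n_particles)
        xs = rng.integers(0, frame.shape[1], n_particles)
    idx = rng.choice(len(xs), size=n_particles, replace=len(xs) < n_particles)
    return np.column_stack([xs[idx], ys[idx]])   # (n_particles, 2) centres as (x, y)

# toy frame: a bright square on a dark background
frame = np.zeros((120, 160))
frame[40:80, 60:110] = 1.0
print(seed_particles(frame, n_particles=10, rng=np.random.default_rng(0))[:5])
```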

  9. Continuous Fractionation of a two-component mixture by zone electrophoresis

    NARCIS (Netherlands)

    Zalewski, D.R.; Gardeniers, Johannes G.E.

    2009-01-01

    Synchronized continuous-flow zone electrophoresis is a recently demonstrated tool for performing electrophoretic fractionation of a complex sample. The method resembles free flow electrophoresis, but unlike in that technique, no mechanical fluid pumping is required. Instead, fast electrokinetic flow

  10. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
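
    The distinction between marginal and conditional (individual) predicted values can be made concrete with a fitted Gaussian mixture, as in the sketch below: the marginal prediction is the mixture-weighted mean, identical for every case, while the individual prediction weights the component means by each case's posterior class probabilities. The simulated two-component data are illustrative, not the growth-mixture or survival-mixture settings of the article.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (300, 2)),
               rng.normal([3, 2], 0.7, (200, 2))])
gm = GaussianMixture(n_components=2, random_state=0).fit(X)

# marginal prediction: the same mixture-weighted mean for every case
marginal = gm.weights_ @ gm.means_

# conditional (individual) prediction: posterior-weighted component means
post = gm.predict_proba(X)              # responsibilities, shape (n, 2)
conditional = post @ gm.means_          # one prediction per case

print("marginal:", marginal.round(2))
print("first 3 individual predictions:\n", conditional[:3].round(2))
```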

  11. Background based Gaussian mixture model lesion segmentation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe [DEIB, Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan 20133 (Italy); De Bernardi, Elisabetta [Department of Medicine and Surgery, Tecnomed Foundation, University of Milano—Bicocca, Monza 20900 (Italy); Zito, Felicia; Castellani, Massimo [Nuclear Medicine Department, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, Milan 20122 (Italy)

    2016-05-15

    Purpose: Quantitative 18F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previous analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. Finally, robustness toward the user-dependent volume initialization was
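
    A stripped-down version of the key constraint is sketched below: background component parameters are estimated from a lesion-free region and then frozen, so EM only updates the remaining "lesion" components. The one-dimensional intensities, the choice of two background and two free components, and the absence of the spatial contiguity prior are all simplifications relative to the published algorithm.

```python
import numpy as np
from scipy.stats import norm

def constrained_gmm(x, bg_means, bg_stds, n_free=2, iters=100, seed=0):
    """EM for a GMM in which background components are frozen and only
    `n_free` lesion components (means/stds) are re-estimated."""
    rng = np.random.default_rng(seed)
    K_bg = len(bg_means)
    means = np.concatenate([bg_means, rng.uniform(x.min(), x.max(), n_free)])
    stds = np.concatenate([bg_stds, np.full(n_free, x.std())])
    w = np.full(K_bg + n_free, 1.0 / (K_bg + n_free))
    for _ in range(iters):
        r = w * norm.pdf(x[:, None], means, stds)       # E-step responsibilities
        r /= r.sum(axis=1, keepdims=True)
        w = r.mean(axis=0)                              # M-step: all weights updated
        nk = r[:, K_bg:].sum(axis=0) + 1e-12
        means[K_bg:] = (r[:, K_bg:] * x[:, None]).sum(axis=0) / nk
        stds[K_bg:] = np.sqrt((r[:, K_bg:] * (x[:, None] - means[K_bg:]) ** 2
                               ).sum(axis=0) / nk) + 1e-6
    return w, means, stds

# toy 1-D "intensities": broad background plus a small hot lesion
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.3, 5000), rng.normal(4.0, 0.4, 300)])
w, m, s = constrained_gmm(x, bg_means=np.array([0.8, 1.2]),
                          bg_stds=np.array([0.25, 0.25]), n_free=2)
print(np.round(m, 2), np.round(w, 3))
```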

  12. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain...... Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians......Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying...
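
    For completeness, the von Mises-Fisher log-density used by such models can be written down directly with a numerically stable Bessel-function term, as in the sketch below; the unit-normalized toy data and the fixed concentration value are illustrative assumptions.

```python
import numpy as np
from scipy.special import ive

def vmf_logpdf(X, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit hypersphere.
    X: (n, p) unit vectors, mu: (p,) unit mean direction, kappa > 0."""
    p = X.shape[1]
    # log normalizer, using the exponentially scaled Bessel function for stability
    log_c = ((p / 2 - 1) * np.log(kappa)
             - (p / 2) * np.log(2 * np.pi)
             - np.log(ive(p / 2 - 1, kappa)) - kappa)
    return log_c + kappa * (X @ mu)

# unit-normalized "time series" (rows on the hypersphere), as in standardized data
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)
mu = X.mean(axis=0)
mu /= np.linalg.norm(mu)
print(vmf_logpdf(X, mu, kappa=20.0).round(2))
```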

  13. Fluorescence lifetime selectivity in excitation-emission matrices for qualitative analysis of a two-component system

    International Nuclear Information System (INIS)

    Millican, D.W.; McGown, L.B.

    1989-01-01

    Steady-state fluorescence excitation-emission matrices (EEMs), and phase-resolved EEMs (PREEMs) collected at modulation frequencies of 6, 18, and 30 MHz, were used for qualitative analysis of mixtures of benzo[k]fluoranthene (τ = 8 ns) and benzo[b]fluoranthene (τ = 29 ns) in ethanol. The EEMs of the individual components were extracted from mixture EEMs by means of wavelength component vector-gram (WCV) analysis. Phase resolution was found to be superior to steady-state measurements for extraction of the component spectra, for mixtures in which the intensity contributions from the two components are unequal

  14. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  15. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  16. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...

  17. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
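
    As a rough numerical companion to these bounds (not the combinatorial bounds themselves), one common parametrization of the α-divergence, D_α(p:q) = (1 − ∫ p^α q^{1−α} dx) / (α(1−α)), can be estimated by Monte Carlo importance sampling from p; the mixtures below are arbitrary illustrative examples.

```python
# Hedged Monte Carlo estimate of one common alpha-divergence parametrization
# between two univariate Gaussian mixtures, using samples drawn from p.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gmm_pdf(x, weights, means, stds):
    return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, stds))

def gmm_sample(n, weights, means, stds):
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[idx], np.asarray(stds)[idx])

def alpha_divergence_mc(p, q, alpha, n=200_000):
    x = gmm_sample(n, *p)                       # sample from p
    ratio = gmm_pdf(x, *q) / gmm_pdf(x, *p)     # q/p at the samples
    hellinger_integral = np.mean(ratio ** (1.0 - alpha))
    return (1.0 - hellinger_integral) / (alpha * (1.0 - alpha))

p = ([0.5, 0.5], [-1.0, 2.0], [0.5, 1.0])   # illustrative mixture p
q = ([0.3, 0.7], [0.0, 2.5], [1.0, 0.8])    # illustrative mixture q
print(alpha_divergence_mc(p, q, alpha=0.5))
```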

  18. Nonlinear low frequency electrostatic structures in a magnetized two-component auroral plasma

    Energy Technology Data Exchange (ETDEWEB)

    Rufai, O. R., E-mail: rajirufai@gmail.com [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Scientific Computing, Memorial University of Newfoundland, St John's, Newfoundland and Labrador A1C 5S7 (Canada); Bharuthram, R., E-mail: rbharuthram@uwc.ac.za [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Singh, S. V., E-mail: satyavir@iigs.iigm.res.in; Lakhina, G. S., E-mail: lakhina@iigs.iigm.res.in [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Indian Institute of Geomagnetism, New Panvel (W), Navi Mumbai 410218 (India)

    2016-03-15

    Finite amplitude nonlinear ion-acoustic solitons, double layers, and supersolitons in a magnetized two-component plasma composed of an adiabatic warm ion fluid and energetic nonthermal electrons are studied by employing the Sagdeev pseudopotential technique and assuming the charge neutrality condition at equilibrium. The model generates supersoliton structures in the supersonic Mach number regime in addition to solitons and double layers, whereas in the unmagnetized two-component plasma case, only soliton and double layer solutions can be obtained. Further investigation revealed that wave obliqueness plays a critical role in the evolution of supersoliton structures in magnetized two-component plasmas. In addition, the effects of ion temperature and nonthermal energetic electrons tend to decrease the speed of oscillation of the nonlinear electrostatic structures. The present theoretical results are compared with Viking satellite observations.

  19. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Full Text Available Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
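
    The combination of an ODE model with a mixture likelihood can be sketched as follows, assuming a deliberately simple one-state kinetic model rather than the Erk1/2 pathway model of the paper; parameter names and the data layout are hypothetical.

```python
# Illustrative sketch of the ODE-constrained mixture idea: each subpopulation's
# mean response follows its own ODE, and snapshot data at every time point are
# modeled as a mixture of normals around those means.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def mean_trajectory(k_act, k_deg, t_eval):
    # dx/dt = k_act - k_deg * x, x(0) = 0  (hypothetical kinetics)
    sol = solve_ivp(lambda t, x: k_act - k_deg * x, (0, t_eval[-1]),
                    y0=[0.0], t_eval=t_eval)
    return sol.y[0]

def neg_log_likelihood(params, data, t_eval):
    # params: (w1, k_act1, k_deg1, k_act2, k_deg2, sigma); data[i] holds the
    # single-cell measurements observed at time t_eval[i]
    w1, ka1, kd1, ka2, kd2, sigma = params
    m1 = mean_trajectory(ka1, kd1, t_eval)
    m2 = mean_trajectory(ka2, kd2, t_eval)
    nll = 0.0
    for i, y in enumerate(data):
        mix = w1 * norm.pdf(y, m1[i], sigma) + (1 - w1) * norm.pdf(y, m2[i], sigma)
        nll -= np.sum(np.log(mix + 1e-300))
    return nll
```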

  20. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  1. Vortex formation in a rotating two-component Fermi gas

    Energy Technology Data Exchange (ETDEWEB)

    Warringa, Harmen J.; Sedrakian, Armen [Institut fuer Theoretische Physik, Goethe-Universitaet Frankfurt am Main, Max-von-Laue-Strasse 1, D-60438 Frankfurt am Main (Germany)

    2011-08-15

    A two-component Fermi gas with attractive s-wave interactions forms a superfluid at low temperatures. When this gas is confined in a rotating trap, fermions can unpair at the edges of the gas and vortices can arise beyond certain critical rotation frequencies. We compute these critical rotation frequencies and construct the phase diagram in the plane of scattering length and rotation frequency for different total numbers of particles. We work at zero temperature and consider a cylindrically symmetric harmonic trapping potential. The calculations are performed in the Hartree-Fock-Bogoliubov approximation which implies that our results are quantitatively reliable for weak interactions.

  2. Two component micro injection molding for MID fabrication

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2009-01-01

    Molded Interconnect Devices (MIDs) are plastic substrates with electrical infrastructure. The fabrication of MIDs is usually based on injection molding and different process chains may be identified from this starting point. The use of MIDs has been driven primarily by the automotive sector, but recently the medical sector seems more and more interested. In particular the possibility of miniaturization of 3D components with electrical infrastructure is attractive. The paper describes possible manufacturing routes and challenges of miniaturized MIDs based on two component micro injection molding...

  3. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  4. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    Science.gov (United States)

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  5. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into

  6. Gassmann Modeling of Acoustic Properties of Sand-clay Mixtures

    Science.gov (United States)

    Gurevich, B.; Carcione, J. M.

    The feasibility of modeling elastic properties of a fluid-saturated sand-clay mixture rock is analyzed by assuming that the rock is composed of macroscopic regions of sand and clay. The elastic properties of such a composite rock are computed using two alternative schemes. The first scheme, which we call the composite Gassmann (CG) scheme, uses Gassmann equations to compute elastic moduli of the saturated sand and clay from their respective dry moduli. The effective elastic moduli of the fluid-saturated composite rock are then computed by applying one of the mixing laws commonly used to estimate elastic properties of composite materials. In the second scheme, which we call the Berryman-Milton (BM) scheme, the elastic moduli of the dry composite rock matrix are computed from the moduli of dry sand and clay matrices using the same composite mixing law used in the first scheme. Next, the saturated composite rock moduli are computed using the equations of Brown and Korringa, which, together with the expressions for the coefficients derived by Berryman and Milton, provide an extension of Gassmann equations to rocks with a heterogeneous solid matrix. For both schemes, the moduli of the dry homogeneous sand and clay matrices are assumed to obey Krief's velocity-porosity relationship. As a mixing law we use the self-consistent coherent potential approximation proposed by Berryman. The calculated dependence of compressional and shear velocities on porosity and clay content for a given set of parameters using the two schemes depends on the distribution of total porosity between the sand and clay regions. If the distribution of total porosity between sand and clay is relatively uniform, the predictions of the two schemes in the porosity range up to 0.3 are very similar to each other. For higher porosities and medium-to-large clay content the elastic moduli predicted by the CG scheme are significantly higher than those predicted by the BM scheme. This difference is explained by the fact
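
    The fluid-substitution step that both schemes rely on is the standard Gassmann equation; a short sketch is given below (textbook form, not the authors' full composite workflow), with illustrative placeholder moduli in GPa.

```python
# Sketch of the Gassmann fluid-substitution step that both schemes build on;
# the mineral and fluid moduli below are illustrative placeholders in GPa.
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Saturated bulk modulus from the dry-rock bulk modulus; the shear
    modulus is unchanged by the pore fluid in Gassmann's theory."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral ** 2
    return k_dry + num / den

# Example: a dry sand frame saturated with brine (placeholder values)
print(gassmann_ksat(k_dry=12.0, k_mineral=36.0, k_fluid=2.25, phi=0.25))
```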

  7. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  8. Diffusion models for mixtures using a stiff dissipative hyperbolic formalism

    OpenAIRE

    Boudin , Laurent; Grec , Bérénice; Pavan , Vincent

    2018-01-01

    In this article, we are interested in a system of fluid equations for mixtures with a stiff relaxation term of Maxwell-Stefan diffusion type. We use the formalism developed by Chen, Levermore, Liu in [4] to obtain a limit system of Fick type where the species velocities tend to align to a bulk velocity when the relaxation parameter remains small.

  9. Sound speed models for a noncondensible gas-steam-water mixture

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1984-01-01

    An analytical expression is derived for the homogeneous equilibrium speed of sound in a mixture of noncondensible gas, steam, and water. The expression is based on the Gibbs free energy interphase equilibrium condition for a Gibbs-Dalton mixture in contact with a pure liquid phase. Several simplified models are discussed including the homogeneous frozen model. These idealized models can be used as a reference for data comparison and also serve as a basis for empirically corrected nonhomogeneous and nonequilibrium models
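
    As a point of reference for the idealized models mentioned above, a Wood-type homogeneous frozen sound speed (no interphase heat or mass transfer) can be computed directly from the phase volume fractions, densities, and single-phase sound speeds; this is a simplified sketch, not the Gibbs free energy equilibrium expression derived in the report.

```python
# Hedged sketch of a Wood-type homogeneous *frozen* mixture sound speed for a
# gas/steam/water mixture; it neglects interphase mass and heat transfer.
import math

def frozen_mixture_sound_speed(alphas, rhos, speeds):
    """alphas: volume fractions (sum to 1); rhos: phase densities [kg/m^3];
    speeds: single-phase sound speeds [m/s]."""
    rho_mix = sum(a * r for a, r in zip(alphas, rhos))
    compressibility = sum(a / (r * c ** 2) for a, r, c in zip(alphas, rhos, speeds))
    return 1.0 / math.sqrt(rho_mix * compressibility)

# Example: 1% gas-steam voids in water (illustrative values)
print(frozen_mixture_sound_speed([0.01, 0.99], [1.0, 1000.0], [450.0, 1500.0]))
```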

  10. Two-component thermosensitive hydrogels : Phase separation affecting rheological behavior

    NARCIS (Netherlands)

    Abbadessa, Anna; Landín, Mariana; Oude Blenke, Erik; Hennink, Wim E.; Vermonden, Tina

    2017-01-01

    Extracellular matrices are mainly composed of a mixture of different biopolymers and therefore the use of two or more building blocks for the development of tissue-mimicking hydrogels is nowadays an attractive strategy in tissue-engineering. Multi-component hydrogel systems may undergo phase

  11. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  12. Model-based experimental design for assessing effects of mixtures of chemicals

    International Nuclear Information System (INIS)

    Baas, Jan; Stefanowicz, Anna M.; Klimek, Beata; Laskowski, Ryszard; Kooijman, Sebastiaan A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, extrapolations to other mixtures, other points in time, or in a wider perspective to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  13. On Two Mixture-Based Clustering Approaches Used in Modeling an Insurance Portfolio

    Directory of Open Access Journals (Sweden)

    Tatjana Miljkovic

    2018-05-01

    Full Text Available We review two complementary mixture-based clustering approaches for modeling unobserved heterogeneity in an insurance portfolio: the generalized linear mixed cluster-weighted model (CWM) and mixture-based clustering for an ordered stereotype model (OSM). The latter is for modeling of ordinal variables, and the former is for modeling losses as a function of mixed-type covariates. The article extends the idea of mixture modeling to a multivariate classification for the purpose of testing unobserved heterogeneity in an insurance portfolio. The application of both methods is illustrated on a well-known French automobile portfolio, in which the model fitting is performed using the expectation-maximization (EM) algorithm. Our findings show that these mixture-based clustering methods can be used to further test unobserved heterogeneity in an insurance portfolio and as such may be considered in insurance pricing, underwriting, and risk management.

  14. Model-based experimental design for assessing effects of mixtures of chemicals

    NARCIS (Netherlands)

    Baas, J.; Stefanowicz, A.M.; Klimek, B.; Laskowski, R.; Kooijman, S.A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for

  15. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.

  16. Zero-range approximation for two-component boson systems

    International Nuclear Information System (INIS)

    Sogo, T.; Fedorov, D.V.; Jensen, A.S.

    2005-01-01

    The hyperspherical adiabatic expansion method is combined with the zero-range approximation to derive angular Faddeev-like equations for two-component boson systems. The angular eigenvalues are solutions to a transcendental equation obtained as a vanishing determinant of a 3 x 3 matrix. The eigenfunctions are linear combinations of Jacobi functions of argument proportional to the distance between pairs of particles. We investigate numerically the influence of two-body correlations on the eigenvalue spectrum, the eigenfunctions and the effective hyperradial potential. Correlations decrease or increase the distance between pairs for effectively attractive or repulsive interactions, respectively. New structures appear for non-identical components. Fingerprints can be found in the nodal structure of the density distributions of the condensates. (author)

  17. How insects overcome two-component plant chemical defence

    DEFF Research Database (Denmark)

    Pentzold, Stefan; Zagrobelny, Mika; Rook, Frederik

    2014-01-01

    Insect herbivory is often restricted by glucosylated plant chemical defence compounds that are activated by plant β-glucosidases to release toxic aglucones upon plant tissue damage. Such two-component plant defences are widespread in the plant kingdom and examples of these classes of compounds are alkaloid, benzoxazinoid, cyanogenic and iridoid glucosides as well as glucosinolates and salicinoids. Conversely, many insects have evolved a diversity of counteradaptations to overcome this type of constitutive chemical defence. Here we discuss that such counter-adaptations occur at different time points, before and during feeding as well as during digestion, and at several levels such as the insects’ feeding behaviour, physiology and metabolism. Insect adaptations frequently circumvent or counteract the activity of the plant β-glucosidases, bioactivating enzymes that are a key element in the plant’s two...

  18. Bond strength of two component injection moulded MID

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2006-01-01

    Most products of the future will require industrially adapted, cost effective production processes and on this issue two-component (2K) injection moulding is a potential candidate for MID manufacturing. MID based on 2k injection moulded plastic part with selectively metallised circuit tracks allows the integration of electrical and mechanical functionalities in a real 3D structure. If 2k injection moulding is applied with two polymers, of which one is plateable and the other is not, it will be possible to make 3D electrical structures directly on the component. To be applicable in the real engineering field, the two different plastic materials in the MID structure require good bonding between them. This paper finds suitable combinations of materials for MIDs from both bond strength and metallisation view-point. Plastic parts were made by two-shot injection moulding and the effects of some important process...

  19. Two Component Injection Moulding for Moulded Interconnect Devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    Two component (2k) injection moulding is one of the most industrially adaptive processes. However, the use of two component injection moulding for MID fabrication, with circuit patterns in the sub-millimeter range, is still a big challenge. This book searches for the technical difficulties associated with the process and makes attempts to overcome those challenges. In search of suitable polymer materials for MID applications, potential materials are characterized in terms of polymer-polymer bond strength, polymer-polymer interface quality and selective metallization. The experimental results find the factors which can effectively control the quality of 2k moulded parts and metallized MIDs. This book presents documented knowledge about MID process chains, 2k moulding and selective metallization which can be a valuable source of information for both academic and industrial users.

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR⎪HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  2. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo ePark

    2016-02-01

    Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effect on item parameters and examinee ability.

  3. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    Directory of Open Access Journals (Sweden)

    Orlov Alexey

    2016-01-01

    Full Text Available This article presents the results of the development of a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for the separation of multicomponent isotope mixtures. This model was used to calculate the parameters of a gas centrifuge cascade for the separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge cascades for the separation of multicomponent isotope mixtures.

  4. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. Ice cream model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. A specific diameter (12.7 mm) of ultrasound probe tip has been used for ultrasound treatment that lasted 5 minutes at 100 percent amplitude. Rheological parameters have been determined using a rotational rheometer and expressed as flow index, consistency coefficient and apparent viscosity. From the results it can be concluded that all model mixtures have non-newtonian, dilatant type behavior. The highest viscosities have been observed for model mixtures that were homogenized with mechanical mixing, and significantly lower values of viscosity have been observed for ultrasound treated ones. Foaming properties are expressed as percentage of increase in foam volume, foam stability index and minimal viscosity. It has been determined that ice cream model mixtures treated only with ultrasound had a minimal increase in foam volume, while the highest increase in foam volume has been observed for the ice cream mixture treated with a combination of mechanical and ultrasound treatment. Also, ice cream mixtures having a higher amount of proteins in their composition showed higher foam stability. It has been determined that the optimal treatment time is 10 minutes.
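
    The flow index and consistency coefficient mentioned above are typically obtained by fitting the power-law (Ostwald-de Waele) model τ = K·γ̇ⁿ to rotational-rheometer data; the sketch below uses made-up values, not measurements from this study.

```python
# Sketch of a power-law (Ostwald-de Waele) fit giving the consistency
# coefficient K and flow index n; the shear-rate/shear-stress pairs below
# are illustrative values only.
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    # Linear least squares in log space: log(tau) = log(K) + n*log(gamma_dot)
    n, log_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return np.exp(log_k), n

gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # shear rate, 1/s
tau = 0.8 * gamma_dot ** 1.3                          # dilatant: n > 1
K, n = fit_power_law(gamma_dot, tau)
print(f"K = {K:.2f} Pa.s^n, n = {n:.2f}  (n > 1 indicates dilatant behaviour)")
```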

  5. Global existence and blow-up phenomena for two-component Degasperis-Procesi system and two-component b-family system

    OpenAIRE

    Liu, Jingjing; Yin, Zhaoyang

    2014-01-01

    This paper is concerned with global existence and blow-up phenomena for two-component Degasperis-Procesi system and two-component b-family system. The strategy relies on our observation on new conservative quantities of these systems. Several new global existence results and a new blowup result of strong solutions to the two-component Degasperis- Procesi system and the two-component b-family system are presented by using these new conservative quantities.

  6. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank; Sun, Ke

    2017-01-01

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified

  7. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...

  8. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Modeling Hydrodynamic State of Oil and Gas Condensate Mixture in a Pipeline

    Directory of Open Access Journals (Sweden)

    Dudin Sergey

    2016-01-01

    Based on the developed model, a calculation method was obtained that is used to analyze the hydrodynamic state and composition of the hydrocarbon mixture in each i-th section of the pipeline when temperature-pressure and hydraulic conditions change.

  10. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

    Full Text Available This study shows the development of a predictive natural gas mixture combustion model for conventional combustion (ignition) engines. The model was based on resolving two areas: one containing the unburned combustion mixture and another containing the combustion products. Energy and matter conservation equations were solved for each crankshaft turn angle for each area. Nonlinear differential equations for each phase's energy (considering compression, combustion and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enabled studying different natural gas components' composition and evaluating combustion in the presence of dry and humid air. Validation results are shown with experimental data, demonstrating the software's precision and accuracy in the results so produced. The results showed cylinder pressure, unburned and burned mixture temperature, burned mass fraction and combustion reaction heat for the engine being modelled using a natural gas mixture.
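
    The abstract's reference to a fourth-order Runge-Kutta solution of the zone energy equations corresponds to a standard RK4 step applied per crank-angle increment; the sketch below uses a placeholder right-hand side, not the published two-zone combustion model.

```python
# Generic fourth-order Runge-Kutta step advancing a state vector (e.g., zone
# temperatures) by one crank-angle increment; the right-hand side here is a
# simple relaxation placeholder, not the two-zone combustion equations.
import numpy as np

def rk4_step(f, theta, y, dtheta):
    k1 = f(theta, y)
    k2 = f(theta + dtheta / 2, y + dtheta / 2 * k1)
    k3 = f(theta + dtheta / 2, y + dtheta / 2 * k2)
    k4 = f(theta + dtheta, y + dtheta * k3)
    return y + dtheta / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Placeholder right-hand side: two zone states relaxing toward targets
f = lambda theta, y: np.array([-0.1 * (y[0] - 300.0), -0.05 * (y[1] - 2000.0)])
y = np.array([300.0, 800.0])
for step in range(720):                 # one crank-angle degree per step
    y = rk4_step(f, step * 1.0, y, 1.0)
print(y)
```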

  11. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

    Roč. 37, č. 24 (2013), s. 9970-9984 ISSN 0307-904X R&D Projects: GA TA ČR TA01030123 Institutional support: RVO:67985556 Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy Subject RIV: BC - Control Systems Theory Impact factor: 2.158, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  12. Parameter studies for a two-component fusion experiment

    International Nuclear Information System (INIS)

    Towner, H.H.

    1975-01-01

    The sensitivity of the energy multiplication of a two-component fusion experiment is examined relative to the following parameters: energy confinement time (τ_E), particle confinement time (τ_p), effective Z of the plasma (Z_eff), injection rate (j_I) and injection energy (E_I). The Energy Research and Development Administration recently approved funding for such a fusion device (the Toroidal Fusion Test Reactor or TFTR) which will be built at the Princeton Plasma Physics Laboratory. Hence, such a parameter study seems both timely and necessary. This work also serves as an independent check on the design values proposed for the TFTR to enable it to achieve energy breakeven (F = 1). Using the nominal TFTR design parameters and a self-consistent ion-electron power balance, the maximum F-value is found to be approximately 1.2, which occurs at an injection energy of approximately 210 keV. The injector operation, i.e., its current and energy capability, is shown to be a very critical factor in the TFTR performance. However, if the injectors meet the design objectives, there appears to be sufficient latitude in the other parameters to offer reasonable assurance that energy breakeven can be achieved. (U.S.)

  13. Composite fermion basis for two-component Bose gases

    Science.gov (United States)

    Meyer, Marius; Liabotro, Ola

    The composite fermion (CF) construction is known to produce wave functions that are not necessarily orthogonal, or even linearly independent, after projection. While usually not a practical issue in the quantum Hall regime, we have previously shown that it presents a technical challenge for rotating Bose gases with low angular momentum. These are systems where the CF approach yield surprisingly good approximations to the exact eigenstates of weak short-range interactions, and so solving the problem of linearly dependent wave functions is of interest. It can also be useful for studying CF excitations for fermions. Here we present several ways of constructing a basis for the space of ``simple CF states'' for two-component rotating Bose gases in the lowest Landau level, and prove that they all give a basis. Using the basis, we study the structure of the lowest-lying state using so-called restricted wave functions. We also examine the scaling of the overlap between the exact and CF wave functions at the maximal possible angular momentum for simple states. This work was financially supported by the Research Council of Norway.

  14. Cold component flow in a two-component mirror machine

    International Nuclear Information System (INIS)

    Rognlien, T.D.

    1975-12-01

    Steady-state solutions are given for the flow characteristics along the magnetic field of the cold plasma component in a two-component mirror machine. The hot plasma component is represented by a fixed density profile. The fluid equations are used to describe the cold plasma, which is assumed to be generated in a localized region at one end of the machine. The ion flow speed, v_i, is required to satisfy the Bohm sheath condition at the end walls, i.e., v_i ≥ c_s, where c_s is the ion-acoustic speed. For the case when the cold plasma density, n_c, is much less than the hot plasma density, n_h, the cold plasma is stagnant and does not penetrate through the machine in the zero temperature case. The effect of a finite temperature is to allow for the penetration of a small amount of cold plasma through the machine. For the density range n_c ≈ n_h, the flow solutions are asymmetric about the midplane and have v_i = c_s near the midplane. Finally, for n_c >> n_h, the solutions become symmetric about the midplane and approach the Lee-McNamara type solutions with v_i = c_s near the mirror throats

  15. Fast-wave heating of a two-component plasma

    International Nuclear Information System (INIS)

    Stix, T.H.

    1975-02-01

    The use of the compressional hydromagnetic mode (also called the magnetosonic or, simply, the fast wave) is examined in some detail with respect to the heating of a tritium plasma containing a few percent deuterium. Efficient absorption of wave energy by the deuteron component is found when ω = ω_c (deuterons), with Q_wave ≥ 100. The dominant behavior of the high-energy deuteron distribution function is found to be f(v) ≈ exp[(3/2) ∫^v dv ⟨Δv⟩/⟨(Δv_⊥)²⟩], where ⟨Δv⟩ is the Chandrasekhar-Spitzer drag coefficient, and ⟨(Δv_⊥)²⟩ is the Kennel-Englemann quasilinear diffusion coefficient for wave-particle interaction at the deuteron cyclotron frequency. An analytic solution to the one-dimensional Fokker-Planck equation, with rf-induced diffusion, is developed, and using this solution together with Duane's fit to the D-T fusion cross-section, it is found that the nuclear fusion power output from an rf-produced two-component plasma can significantly exceed the incremental (radiofrequency) power input. (auth)

  16. Light-front QCD. II. Two-component theory

    International Nuclear Information System (INIS)

    Zhang, W.; Harindranath, A.

    1993-01-01

    The light-front gauge A_a^+ = 0 is known to be a convenient gauge in practical QCD calculations for short-distance behavior, but there are persistent concerns about its use because of its "singular" nature. The study of nonperturbative field theory quantizing on a light-front plane for hadronic bound states requires one to gain a priori systematic control of such gauge singularities. In the second paper of this series we study the two-component old-fashioned perturbation theory and various severe infrared divergences occurring in old-fashioned light-front Hamiltonian calculations for QCD. We also analyze the ultraviolet divergences associated with a large transverse momentum and examine three currently used regulators: an explicit transverse cutoff, transverse dimensional regularization, and a global cutoff. We discuss possible difficulties caused by the light-front gauge singularity in the applications of light-front QCD to both old-fashioned perturbative calculations for short-distance physics and upcoming nonperturbative investigations for hadronic bound states

  17. A General Nonlinear Fluid Model for Reacting Plasma-Neutral Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Meier, E T; Shumlak, U

    2012-04-06

    A generalized, computationally tractable fluid model for capturing the effects of neutral particles in plasmas is derived. The model derivation begins with Boltzmann equations for singly charged ions, electrons, and a single neutral species. Electron-impact ionization, radiative recombination, and resonant charge exchange reactions are included. Moments of the reaction collision terms are detailed. Moments of the Boltzmann equations for electron, ion, and neutral species are combined to yield a two-component plasma-neutral fluid model. Separate density, momentum, and energy equations, each including reaction transfer terms, are produced for the plasma and neutral equations. The required closures for the plasma-neutral model are discussed.

  18. Determination of two-dimensional correlation lengths in an anisotropic two-component flow

    International Nuclear Information System (INIS)

    Thomson, O.

    1994-05-01

    Former studies have shown that correlation methods can be used for determination of various two-component flow parameters, among these the correlation length. In cases where the flow can be described as a mixture, in which the minority component forms spatially limited perturbations within the majority component, this parameter gives a good indication of the maximum extension of these perturbations. In the former studies, spherical symmetry of the perturbations has been assumed, and the correlation length has been measured in the direction of the flow (axially) only. However, if the flow structure is anisotropic, the correlation length will be different in different directions. In the present study, the method has been developed further, allowing also measurements perpendicular to the flow direction (radially). The measurements were carried out using laser beams and the two-component flows consisted of either glass beads and air or air and water. In order to make local measurements of both the axial and radial correlation length simultaneously, it is necessary to use 3 laser beams and to form the triple cross-covariance. This led to some unforeseen complications, due to the character of this function. The experimental results are generally positive and size determinations with an accuracy of better than 10% have been achieved in most cases. Less accurate results appeared only for difficult conditions (symmetrical signals), when 3 beams were used. 5 refs, 13 figs, 3 tabs

  19. Theoretical calculation of cryogenic distillation for two-component hydrogen isotope system

    International Nuclear Information System (INIS)

    Xia Xiulong; Luo Yangming; Wang Heyi; Fu Zhonghua; Liu Jun; Han Jun; Gu Mei

    2005-10-01

    A cryogenic distillation model for a single column was built to simulate the hydrogen isotope separation system. Three two-component systems, H2/HD, H2/HT and D2/DT, were studied. Both temperature and concentration distributions were obtained, and the results show clear separation characteristics. H2/HT has the best separation performance, while D2/DT was the most difficult to separate. (authors)

  20. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
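
    The mixture-versus-single-normal likelihood ratio behind the LOD score can be written compactly; the sketch below assumes a backcross with two QTL genotypes and takes the genotype probabilities and fitted parameters as given (it does not include the EM maximization or the modified mixture proposed in the paper).

```python
# Minimal sketch of the standard interval-mapping LOD computation for a
# backcross: mixture likelihood over QTL genotypes versus a single normal.
import numpy as np
from scipy.stats import norm

def lod_score(y, geno_prob, mu, sigma):
    """y: phenotypes (n,); geno_prob: (n, 2) P(QTL genotype | markers);
    mu: (2,) genotype means; sigma: residual SD under the mixture model."""
    mix_lik = (geno_prob * norm.pdf(y[:, None], mu, sigma)).sum(axis=1)
    null_lik = norm.pdf(y, y.mean(), y.std(ddof=0))
    return np.log10(mix_lik).sum() - np.log10(null_lik).sum()

# Tiny illustration with simulated, fully informative markers
rng = np.random.default_rng(1)
g = rng.integers(0, 2, 200)
y = g.astype(float) + rng.normal(0, 1, 200)
geno_prob = np.column_stack([1 - g, g]).astype(float)
print(lod_score(y, geno_prob, mu=np.array([0.0, 1.0]), sigma=1.0))
```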

  1. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
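
    The "no fixed number of components" property comes from the Dirichlet process prior; a draw from its Chinese restaurant process representation (prior over partitions only, not the full generalized-Dirichlet Gibbs sampler of the paper) illustrates how new clusters appear as data accumulate.

```python
# Draw a random partition from the Chinese restaurant process: each new point
# joins an existing cluster with probability proportional to its size, or
# opens a new cluster with probability proportional to alpha.
import numpy as np

def crp_partition(n_points, alpha, rng):
    assignments = [0]                  # first customer sits at the first table
    counts = [1]
    for _ in range(1, n_points):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):       # open a new cluster
            counts.append(0)
        counts[table] += 1
        assignments.append(table)
    return assignments, counts

rng = np.random.default_rng(42)
_, counts = crp_partition(200, alpha=2.0, rng=rng)
print(len(counts), counts)            # number of clusters is not fixed a priori
```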

  2. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling of shock wave loading of mixtures with the SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the medium. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with the experimental data derived by various authors. The behavior of the mixture containing components with a phase transition under high dynamic loads is described

  3. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  4. Complexation in two-component chlortetracycline-melanin solutions

    Science.gov (United States)

    Lapina, V. A.; Pershukevich, P. P.; Dontsov, A. E.; Bel'Kov, M. V.

    2008-01-01

    The spectra and kinetics of fluorescence of two-component solutions of the chlortetracycline (CHTC)-DOPA-melanin (melanin or ME) system in water have been investigated. The data obtained have been compared to similar data for solutions of CHTC-melanosome from bull eye (MB), which contains natural melanin, in K-phosphate buffer at pH 7.4. The overall results indicate the occurrence of complexation between molecules of CHTC and ME as they are being excited. The studies of complexation in the solution of CHTC-MB in the buffer are complicated by the formation of a CHTC-buffer complex. The effect of optical radiation in the range 330-750 nm on the CHTC-ME complex shows selectivity: the greatest change in the spectrum occurs when the wavelength of the exciting radiation coincides with the long-wavelength band maximum of the fluorescence excitation spectrum of the CHTC-ME complex in aqueous solution. In this range, CHTC and especially ME show high photochemical stability. The nature of the radiation effect on the studied compounds in the hard UV range (λ < 330 nm) differs greatly from that in the range 330-750 nm. It is apparently accompanied by significant photochemical transmutations of all system components. By comparing the characteristics of the CHTC-ME systems with those of the related drug doxycycline (DC-ME), the conclusion has been made that the chlorine atom plays a vital role in formation of the short-wavelength band in the fluorescence spectrum of the CHTC-ME complex.

  5. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of the constitutive asphalt mixture model with viscoelastic properties ... modulus. Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted ...

  6. Two-component fluid membranes near repulsive walls: Linearized hydrodynamics of equilibrium and nonequilibrium states.

    Science.gov (United States)

    Sankararaman, Sumithra; Menon, Gautam I; Sunil Kumar, P B

    2002-09-01

    We study the linearized hydrodynamics of a two-component fluid membrane near a repulsive wall, using a model that incorporates curvature-concentration coupling as well as hydrodynamic interactions. This model is a simplified version of a recently proposed one [J.-B. Manneville et al., Phys. Rev. E 64, 021908 (2001)] for nonequilibrium force centers embedded in fluid membranes, such as light-activated bacteriorhodopsin pumps incorporated in phospholipid egg phosphatidyl choline (EPC) bilayers. The pump-membrane system is modeled as an impermeable, two-component bilayer fluid membrane in the presence of an ambient solvent, in which one component, representing active pumps, is described in terms of force dipoles displaced with respect to the bilayer midpoint. We first discuss the case in which such pumps are rendered inactive, computing the mode structure in the bulk as well as the modification of hydrodynamic properties by the presence of a nearby wall. These results should apply, more generally, to equilibrium fluid membranes comprised of two components, in which the effects of curvature-concentration coupling are significant, above the threshold for phase separation. We then discuss the fluctuations and mode structure in the steady state of active two-component membranes near a repulsive wall. We find that proximity to the wall smoothens membrane height fluctuations in the stable regime, resulting in a logarithmic scaling of the roughness even for initially tensionless membranes. This explicitly nonequilibrium result is a consequence of the incorporation of curvature-concentration coupling in our hydrodynamic treatment. This result also indicates that earlier scaling arguments which obtained an increase in the roughness of active membranes near repulsive walls upon neglecting the role played by such couplings may need to be reevaluated.

  7. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two and three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.
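
    A rough sketch of the block-modeling idea is given below, assuming grayscale blocks flattened to one-dimensional intensity samples and a two-component mixture; the EM routine and the pooled-variance Mahalanobis-style block distance are illustrative assumptions, not the authors' exact implementation.

        import numpy as np

        def em_two_gaussians(x, n_iter=50):
            """Fit a two-component 1-D Gaussian mixture to samples x with EM."""
            x = np.asarray(x, dtype=float)
            mu = np.array([x.min(), x.max()])        # spread the initial means apart
            var = np.full(2, x.var() + 1e-6)
            w = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibility of each component for each sample
                dens = np.stack([w[k] / np.sqrt(2 * np.pi * var[k])
                                 * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) for k in range(2)])
                resp = dens / (dens.sum(axis=0, keepdims=True) + 1e-300)
                # M-step: update weights, means and variances
                nk = resp.sum(axis=1)
                w = nk / x.size
                mu = (resp @ x) / nk
                var = np.array([(resp[k] * (x - mu[k]) ** 2).sum() / nk[k]
                                for k in range(2)]) + 1e-6
            return w, mu, var

        def block_distance(block_a, block_b):
            """Mahalanobis-style distance between two blocks via their mixture parameters."""
            wa, mua, vara = em_two_gaussians(block_a.ravel())
            wb, mub, varb = em_two_gaussians(block_b.ravel())
            ia, ib = np.argsort(mua), np.argsort(mub)    # match components by mean
            d2 = ((mua[ia] - mub[ib]) ** 2 / (0.5 * (vara[ia] + varb[ib]))).sum()
            return float(np.sqrt(d2))

        # toy usage: two 8x8 blocks with similar bimodal intensity content
        rng = np.random.default_rng(1)
        a = np.concatenate([rng.normal(50, 5, 32), rng.normal(200, 8, 32)]).reshape(8, 8)
        b = np.concatenate([rng.normal(55, 5, 32), rng.normal(195, 8, 32)]).reshape(8, 8)
        print(block_distance(a, b))

    In a full block-matching search, the candidate block in the search window minimizing such a distance would be selected as the match.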

  8. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol, molten metals, and so forth. A detailed formalism of these methods, based on non-equilibrium thermodynamics, algebraic correlations, or principles of artificial neural networks, is presented. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multicomponent mixtures. An exhaustive discussion of these methods gives a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  9. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    Full Text Available APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersed conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed using the grey correlation method, and an APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have larger ratios of air-dry to immersed rutting depth, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air voids, and immersion conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy, and the correlation coefficient between the predicted and the measured rutting depths is 96.3%.
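
    As a rough illustration of the grey correlation step, the sketch below computes grey relational grades of several influencing-factor series against a reference rutting-depth series; the toy series, the min-max normalization, and the distinguishing coefficient rho = 0.5 are assumptions and not data from the study.

        import numpy as np

        def grey_relational_grade(reference, factors, rho=0.5):
            """Grey relational grades of each factor series w.r.t. a reference series.

            reference: (n,) array, e.g. measured rutting depths over n tests
            factors:   (m, n) array, one row per influencing factor
            """
            ref = np.asarray(reference, dtype=float)
            fac = np.atleast_2d(np.asarray(factors, dtype=float))

            def norm(x):
                # scale each series to [0, 1] so units do not dominate
                return (x - x.min()) / (x.max() - x.min() + 1e-12)

            ref_n = norm(ref)
            fac_n = np.apply_along_axis(norm, 1, fac)
            diff = np.abs(fac_n - ref_n)                        # absolute difference sequences
            dmin, dmax = diff.min(), diff.max()
            coeff = (dmin + rho * dmax) / (diff + rho * dmax)   # grey relational coefficients
            return coeff.mean(axis=1)                           # grade = mean coefficient per factor

        # toy usage: rutting depth vs. temperature, load and air voids over 6 tests
        depth = [3.1, 4.0, 5.2, 6.8, 7.5, 9.0]
        temp  = [45, 50, 55, 60, 64, 70]
        load  = [0.5, 0.6, 0.7, 0.7, 0.8, 0.9]
        voids = [4.1, 4.0, 4.3, 4.5, 4.2, 4.6]
        print(grey_relational_grade(depth, [temp, load, voids]))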

  10. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties … and the reliability of which has been validated. The dynamic modulus of asphalt mixtures was predicted by conducting Discrete Element simulation under dynamic strain control loading. In order to reduce the calculation time, a method based on the frequency–temperature superposition principle has been implemented … The ball density effect on the internal stress distribution of the asphalt mixture model has been studied when using this method. Furthermore, the internal stresses under dynamic loading have been studied. The agreement between the predicted and the laboratory test results of the complex modulus shows …

  11. The role of the Kubo number in two-component turbulence

    International Nuclear Information System (INIS)

    Qin, G.; Shalchi, A.

    2013-01-01

    We explore the random walk of magnetic field lines in two-component turbulence by using computer simulations. It is often assumed that the two-component model provides a good approximation for solar wind turbulence. We explore the dependence of the field line diffusion coefficient on the Kubo number which is a fundamental and characteristic quantity in the theory of turbulence. We show that there are two transport regimes. One is the well-known quasilinear regime in which the diffusion coefficient is proportional to the Kubo number squared, and the second one is a nonlinear regime in which the diffusion coefficient is directly proportional to the Kubo number. The so-called percolative transport regime which is often discussed in the literature cannot be found. The numerical results obtained in the present paper confirm analytical theories for random walking field lines developed in the past

  12. Excess Properties of Aqueous Mixtures of Methanol: Simple Models Versus Experiment

    Czech Academy of Sciences Publication Activity Database

    Vlček, Lukáš; Nezbeda, Ivo

    roč. 131-132, - (2007), s. 158-162 ISSN 0167-7322. [International Conference on Solution Chemistry /29./. Portorož, 21.08.2005-25.08.2005] R&D Projects: GA AV ČR(CZ) IAA4072303; GA AV ČR(CZ) 1ET400720409 Institutional research plan: CEZ:AV0Z40720504 Keywords : aqueous mixtures * primitive models * water-alcohol mixtures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.982, year: 2007

  13. Modelling and parameter estimation in reactive continuous mixtures: the catalytic cracking of alkanes - part II

    Directory of Open Access Journals (Sweden)

    F. C. PEIXOTO

    1999-09-01

    Full Text Available Fragmentation kinetics is employed to model a continuous reactive mixture of alkanes under catalytic cracking conditions. Standard moment analysis techniques are employed, and a dynamic system for the time evolution of the moments of the mixture's dimensionless concentration distribution function (DCDF) is obtained. The time behavior of the DCDF is recovered by successive estimation of scaled gamma distributions from the time data of the moments.
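
    A minimal sketch of the moment-matching idea, assuming the DCDF at each time step is approximated by a gamma distribution whose shape and scale are recovered from the first two moments; the variable names and test values are illustrative assumptions.

        import numpy as np
        from scipy.stats import gamma

        def gamma_from_moments(m1, m2):
            """Recover gamma shape k and scale theta from the first two raw moments.

            m1 = E[X], m2 = E[X^2]; variance = m2 - m1^2, k = m1^2/var, theta = var/m1.
            """
            var = m2 - m1 ** 2
            k = m1 ** 2 / var
            theta = var / m1
            return k, theta

        # toy usage: the moments of a known gamma(k=3, theta=0.5) should be recovered
        k_true, th_true = 3.0, 0.5
        m1 = k_true * th_true                      # mean
        m2 = k_true * th_true ** 2 + m1 ** 2       # second raw moment (variance + mean^2)
        k, th = gamma_from_moments(m1, m2)
        x = np.linspace(0, 5, 6)
        print(k, th, gamma.pdf(x, a=k, scale=th))  # reconstructed distribution on a grid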

  14. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  15. A globally accurate theory for a class of binary mixture models

    Science.gov (United States)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  16. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach of reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between the descriptors of products and reactants. This reaction representation does not need an explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that the use of the fragment control applicability domain approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
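
    A toy sketch of the mixture-based reaction encoding, assuming each side of the reaction has already been summarized as a fragment-count vector over a common descriptor vocabulary; the fragment keys and counts below are invented for illustration and are not SiRMS output.

        import numpy as np

        def reaction_vectors(reactant_counts, product_counts, vocabulary):
            """Build 'difference' and 'concatenated' reaction descriptors.

            reactant_counts / product_counts: dicts mapping fragment -> count,
            already summed over all species in the corresponding mixture.
            """
            r = np.array([reactant_counts.get(f, 0) for f in vocabulary], dtype=float)
            p = np.array([product_counts.get(f, 0) for f in vocabulary], dtype=float)
            return p - r, np.concatenate([r, p])

        # toy usage with an invented fragment vocabulary
        vocab = ["C-C", "C=C", "C-Br", "O-H"]
        reactants = {"C-C": 3, "C-Br": 1, "O-H": 1}   # e.g. alkyl bromide + base side
        products = {"C-C": 2, "C=C": 1, "O-H": 1}     # e.g. alkene + leaving group side
        diff_vec, concat_vec = reaction_vectors(reactants, products, vocab)
        print(diff_vec)      # [-1.  1. -1.  0.]
        print(concat_vec)

    The difference vector implicitly reflects which fragments appear and disappear across the reaction, which is why no explicit reaction-center labeling is needed.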

  17. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
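
    The sketch below illustrates only the core idea of input-dependent mixture weights: two hypothetical simulator outputs are combined with a logistic weight that varies with the input. The simulators, the weight parameterization, and the parameter values are assumptions for illustration; the actual method infers the weights and calibration parameters in a fully Bayesian framework.

        import numpy as np

        # two hypothetical computer models of the same physical quantity
        def model_a(x):
            return np.sin(x)

        def model_b(x):
            return 0.9 * x - 0.1 * x ** 2

        def mixture_prediction(x, w_params):
            """Combine model outputs with input-dependent (logistic) mixture weights."""
            a, b = w_params                   # assumed parameters of the weight function
            logit = a + b * x                 # weight of model_a changes with the input
            w_a = 1.0 / (1.0 + np.exp(-logit))
            w_b = 1.0 - w_a
            return w_a * model_a(x) + w_b * model_b(x)

        x = np.linspace(0.0, 3.0, 7)
        print(mixture_prediction(x, w_params=(2.0, -1.5)))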

  18. Structural properties of dendrimer-colloid mixtures

    International Nuclear Information System (INIS)

    Lenz, Dominic A; Blaak, Ronald; Likos, Christos N

    2012-01-01

    We consider binary mixtures of colloidal particles and amphiphilic dendrimers of the second generation by means of Monte Carlo simulations. By using the effective interactions between monomer-resolved dendrimers and colloids, we compare the results of simulations of mixtures stemming from a full monomer-resolved description with the effective two-component description at different densities, composition ratios, colloid diameters and interaction strengths. Additionally, we map the two-component system onto an effective one-component model for the colloids in the presence of the dendrimers. Simulations based on the resulting depletion potentials allow us to extend the comparison to yet another level of coarse graining and to examine under which conditions this two-step approach is valid. In addition, a preliminary outlook into the phase behavior of this system is given. (paper)

  19. Effect of Differential Diffusion in Two-Component Media

    Science.gov (United States)

    Ingel', L. Kh.

    2017-03-01

    Examples are presented of an exact solution of a nonstationary problem on the development of convection in a binary mixture (seawater) near an infinite vertical surface, in which the buoyancy disturbances are determined both by the temperature and by disturbances of the impurity (salt) concentration. Consideration is given to the development of convection in a homogeneous medium near an infinite vertical surface at whose boundary constant (after "switching on" at the initial moment) fluxes of heat and impurity, or variations of these quantities, are specified; i.e., problems with boundary conditions of the 1st and 2nd kind are considered. The obtained analytical solutions demonstrate the possibility of a nontrivial effect associated with the difference in the transfer coefficients of the two substances: inflows of positive buoyancy may, contrary to intuitive notions, give rise to descending rather than ascending motion of the medium. The physical meaning of such effects, which can be substantial, for example, in the melting of sea ice, is clarified.

  20. Monitoring and modeling of ultrasonic wave propagation in crystallizing mixtures

    Science.gov (United States)

    Marshall, T.; Challis, R. E.; Tebbutt, J. S.

    2002-05-01

    The utility of ultrasonic compression wave techniques for monitoring crystallization processes is investigated in a study of the seeded crystallization of copper(II) sulfate pentahydrate from aqueous solution. Simple models are applied to predict crystal yield, crystal size distribution and the changing nature of the continuous phase. A scattering model is used to predict the ultrasonic attenuation as crystallization proceeds. Experiments confirm that the modeled attenuation is in agreement with measured results.

  1. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  2. Synthesis and Characterization of Two Component Alloy Nanoparticles

    Science.gov (United States)

    Tabatabaei, Salomeh

    Alloying is an old trick used to produce new materials by synergistically combining at least two components. New developments in nanoscience have enabled new degrees of freedom, such as size, solubility and concentration of the alloying element, to be utilized in the design of the physical properties of alloy nanoparticles (ANPs). ANPs as multi-functional materials have applications in catalysis, biomedical technologies and electronics. Phase diagrams of ANPs are little known and may not resemble those of the bulk; furthermore, ANPs with different crystallite orientations and compositions can remain far from equilibrium. Here, we studied the synthesis and stability of Au-Sn and Ag-Ni ANPs by the chemical reduction method at room temperature. Due to the large difference in the redox potentials of Au and Sn, co-reduction is not a reproducible method. However, a two-step successive reduction was found to be more reliable for generating Au-Sn ANPs, which consists of forming clusters in the first step (either without capping agent or with weakly coordinated surfactant molecules) and then undergoing a second reduction step in the presence of another metal salt. Our observations also showed that capping agents (cetrimonium bromide (CTAB) and polyacrylic acid (PAA)) play a key role in the alloying process, and the shorter capping agent (PAA) may facilitate the diffusion of individual components and thus enable better alloying. Different molar ratios of Sn and Au precursors were used to study the effect of alloying elements on the melting point, and the crystalline structures and melting points were determined by various microscopy and spectroscopy techniques and differential scanning calorimetry (DSC). A significant depression (up to 150°C) in the melting transition was observed for the Au-Sn ANPs compared to the bulk eutectic point (Tm ≈ 280°C) due to size and shape effects. Au-Sn ANPs offer a unique set of advantages as a lead-free solder material which can…

  3. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  4. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  5. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    International Nuclear Information System (INIS)

    Teng, S.; Tebby, C.; Barcellini-Couget, S.; De Sousa, G.; Brochot, C.; Rahmani, R.; Pery, A.R.R.

    2016-01-01

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  6. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    Energy Technology Data Exchange (ETDEWEB)

    Teng, S.; Tebby, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Barcellini-Couget, S. [ODESIA Neosciences, Sophia Antipolis, 400 route des chappes, 06903 Sophia Antipolis (France); De Sousa, G. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Brochot, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Rahmani, R. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Pery, A.R.R., E-mail: alexandre.pery@agroparistech.fr [AgroParisTech, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France); INRA, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France)

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  7. Modelling time course gene expression data with finite mixtures of linear additive models.

    Science.gov (United States)

    Grün, Bettina; Scharl, Theresa; Leisch, Friedrich

    2012-01-15

    A model class of finite mixtures of linear additive models is presented. The component-specific parameters in the regression models are estimated using regularized likelihood methods. The advantages of the regularization are that (i) the pre-specified maximum degrees of freedom for the splines are less crucial than for unregularized estimation and that (ii) a suitable degree of freedom is selected automatically for each component individually. The performance is evaluated in a simulation study with artificial data as well as on a yeast cell cycle dataset of gene expression levels over time. The latest release version of the R package flexmix is available from CRAN (http://cran.r-project.org/).
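
    For readers who prefer a language-neutral view of the underlying machinery, here is a stripped-down Python analogue of a finite mixture of linear regression components fitted by EM; it omits the additive spline terms and the regularization discussed in the abstract, and all settings, initializations and variable names are assumptions.

        import numpy as np

        def em_mixture_of_regressions(X, y, K=2, n_iter=200, seed=0):
            """EM for a K-component mixture of linear regressions with Gaussian noise."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            beta = np.zeros((K, p))
            for k in range(K):                 # fit each component to a small random subset
                idx = rng.choice(n, size=max(p + 3, 5), replace=False)
                beta[k] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
            sigma2 = np.full(K, y.var() / K + 1e-6)
            pi = np.full(K, 1.0 / K)
            for _ in range(n_iter):
                # E-step: responsibilities from component-wise Gaussian likelihoods
                resid = y[None, :] - beta @ X.T                       # (K, n)
                dens = pi[:, None] * np.exp(-0.5 * resid ** 2 / sigma2[:, None]) \
                       / np.sqrt(2 * np.pi * sigma2[:, None])
                resp = dens / (dens.sum(axis=0, keepdims=True) + 1e-300)
                # M-step: weighted least squares per component
                for k in range(K):
                    w = resp[k]
                    WX = X * w[:, None]
                    beta[k] = np.linalg.solve(WX.T @ X + 1e-8 * np.eye(p), WX.T @ y)
                    sigma2[k] = (w * (y - X @ beta[k]) ** 2).sum() / w.sum() + 1e-8
                pi = resp.sum(axis=1) / n
            return pi, beta, sigma2

        # toy usage: observations alternating between two parallel regression regimes
        t = np.linspace(0, 1, 200)
        X = np.column_stack([np.ones_like(t), t])
        comp = np.arange(200) % 2
        y = np.where(comp == 0, 1.0 + 2.0 * t, 4.0 + 2.0 * t) \
            + 0.1 * np.random.default_rng(1).normal(size=200)
        pi, beta, sigma2 = em_mixture_of_regressions(X, y)
        print(pi.round(2))
        print(beta.round(2))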

  8. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavily tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  9. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  10. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    Science.gov (United States)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  11. The phase behavior of a hard sphere chain model of a binary n-alkane mixture

    International Nuclear Information System (INIS)

    Malanoski, A. P.; Monson, P. A.

    2000-01-01

    Monte Carlo computer simulations have been used to study the solid and fluid phase properties as well as phase equilibrium in a flexible, united atom, hard sphere chain model of n-heptane/n-octane mixtures. We describe a methodology for calculating the chemical potentials for the components in the mixture based on a technique used previously for atomic mixtures. The mixture was found to conform accurately to ideal solution behavior in the fluid phase. However, much greater nonidealities were seen in the solid phase. Phase equilibrium calculations indicate a phase diagram with solid-fluid phase equilibrium and a eutectic point. The components are only miscible in the solid phase for dilute solutions of the shorter chains in the longer chains. (c) 2000 American Institute of Physics

  12. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verifying the possibility of differentiating between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb and 50 ppb v/v) of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and for a limited number of basic mixture components.
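
    A small sketch of the PCA step on a matrix of sensor responses, assuming rows are measured mixtures and columns are the six sensor signals; the synthetic response matrix and the centering-only preprocessing are assumptions for illustration.

        import numpy as np

        def pca_scores(responses, n_components=2):
            """Project mean-centered sensor responses onto the leading principal components."""
            X = np.asarray(responses, dtype=float)
            Xc = X - X.mean(axis=0)                    # center each sensor channel
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            scores = Xc @ Vt[:n_components].T          # sample coordinates in PC space
            explained = (S ** 2 / (S ** 2).sum())[:n_components]
            return scores, explained

        # toy usage: 8 mixtures x 6 sensors (synthetic responses)
        rng = np.random.default_rng(0)
        marker_level = np.repeat([0.0, 1.0], 4)        # e.g. low vs. high COPD-marker level
        X = rng.normal(size=(8, 6)) + marker_level[:, None] * np.array([2, 1.5, 0, 0.5, 1, 0])
        scores, explained = pca_scores(X)
        print(scores.round(2))
        print(explained.round(2))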

  13. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids improve the performance of JT cycles by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). The design of heat exchangers in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied to a fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.

  14. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  15. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

    Gravel-sand-clay mixture model is proposed particularly for unconsolidated sediments to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle and considers the rest of the volume as that of the large particle. This simple approach has been commonly accepted and has been validated by many studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows an impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model and it suggests the necessity of considering the gravel-sand-clay mixture model. In the proposed model, I consider the three volume fractions of each component instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently from the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated by a weighted average of the two cases using the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation
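
    As a reference point for the final step of the abstract, here is a minimal sketch of a Kozeny-Carman-type permeability estimate from an effective porosity and an effective grain size; the constant 180, the harmonic averaging of grain sizes, and the toy volume fractions are common textbook assumptions rather than values from the study.

        def kozeny_carman_permeability(phi_eff, d_eff):
            """Kozeny-Carman-type permeability (m^2) for roughly spherical grains.

            phi_eff : effective porosity (fraction)
            d_eff   : effective grain diameter (m)
            """
            return d_eff ** 2 * phi_eff ** 3 / (180.0 * (1.0 - phi_eff) ** 2)

        def effective_grain_size(fractions, sizes):
            """Harmonic (specific-surface weighted) average grain size from volume fractions."""
            return 1.0 / sum(f / d for f, d in zip(fractions, sizes))

        # toy usage: gravel-sand-clay volume fractions and representative diameters (m)
        fractions = [0.05, 0.75, 0.20]
        sizes = [5e-3, 3e-4, 2e-6]
        d_eff = effective_grain_size(fractions, sizes)
        k = kozeny_carman_permeability(phi_eff=0.25, d_eff=d_eff)
        print(d_eff, k)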

  16. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated by suspended sediment motion in steady open channel flows, both in equilibrium and non-equilibrium states, and in oscillatory flows as well. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography reaction model are coupled. The bed load transport rate and suspended load entrainment rate are both determined by the sea bed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope against the wave propagation direction, while deposition dominated on the slope towards wave propagation, indicating an onshore migration tendency. The computation results also show that …

  17. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    Science.gov (United States)

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly on the additive and potential synergistic actions of environmental mixtures. We investigated the 24h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and 96h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models with different assumptions in the toxic action mode in toxicodynamic processes through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
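
    A compact sketch of how the two reference models generate mixture predictions from single-chemical concentration-response curves; the log-logistic parameters below are invented for illustration and are not the fitted zebrafish values.

        import numpy as np

        def effect(conc, ec50, slope):
            """Single-chemical effect (fraction affected) from a log-logistic curve."""
            return 1.0 / (1.0 + (ec50 / np.maximum(conc, 1e-12)) ** slope)

        def ca_effect(concs, ec50s, slopes, grid=None):
            """Concentration addition: find the effect E such that sum_i c_i / EC_E,i = 1."""
            grid = np.linspace(1e-4, 1 - 1e-4, 100000) if grid is None else grid
            # inverse of the log-logistic: concentration of chemical i producing effect E
            ec_e = [ec50 * (grid / (1 - grid)) ** (1.0 / s) for ec50, s in zip(ec50s, slopes)]
            toxic_units = sum(c / ece for c, ece in zip(concs, ec_e))
            return grid[np.argmin(np.abs(toxic_units - 1.0))]

        def ia_effect(concs, ec50s, slopes):
            """Independent action: E = 1 - prod_i (1 - E_i(c_i))."""
            surv = [1.0 - effect(c, e, s) for c, e, s in zip(concs, ec50s, slopes)]
            return 1.0 - np.prod(surv)

        # toy binary mixture (e.g. Cu and Cd) with hypothetical parameters (mg/L)
        concs = [0.05, 0.30]
        ec50s = [0.08, 0.50]
        slopes = [2.0, 1.5]
        print("CA predicted effect:", round(float(ca_effect(concs, ec50s, slopes)), 3))
        print("IA predicted effect:", round(float(ia_effect(concs, ec50s, slopes)), 3))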

  18. Modeling of nanoscale liquid mixture transport by density functional hydrodynamics

    Science.gov (United States)

    Dinariev, Oleg Yu.; Evseev, Nikolay V.

    2017-06-01

    Modeling of multiphase compositional hydrodynamics at nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is the method based on density functional theory and continuum mechanics. This method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at nanoscale. The new DFH extension is based on the introduction of external potentials for chemical components. These potentials are localized in the vicinity of solid surfaces and take account of the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate modeling capabilities.

  19. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
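
    As a generic illustration of the copula machinery (not the paper's exact Frank mixture construction), the sketch below evaluates a bivariate Frank copula and a simple convex mixture of it with the independence copula; the parameter values are assumptions.

        import numpy as np

        def frank_copula_cdf(u, v, theta):
            """Bivariate Frank copula C_theta(u, v) for theta != 0."""
            num = np.expm1(-theta * u) * np.expm1(-theta * v)
            return -np.log1p(num / np.expm1(-theta)) / theta

        def mixture_copula_cdf(u, v, theta, w):
            """Convex mixture of a Frank copula and the independence copula Pi(u, v) = u*v."""
            return w * frank_copula_cdf(u, v, theta) + (1.0 - w) * u * v

        # toy usage: joint CDF of two spike counts mapped through their marginal CDFs
        u, v = 0.6, 0.3
        print(frank_copula_cdf(u, v, theta=4.0))
        print(mixture_copula_cdf(u, v, theta=4.0, w=0.7))

    A convex combination of copulas is itself a copula, which is what makes such mixture constructions a convenient way to add dependence parameters.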

  20. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    Science.gov (United States)

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where the parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text]. In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of the binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.

  1. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only f...

  2. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  3. Growth Kinetics and Modeling of Direct Oxynitride Growth with NO-O2 Gas Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Everist, Sarah; Nelson, Jerry; Sharangpani, Rahul; Smith, Paul Martin; Tay, Sing-Pin; Thakur, Randhir

    1999-05-03

    We have modeled the growth kinetics of oxynitrides grown in NO-O2 gas mixtures from first principles using modified Deal-Grove equations. Retardation of oxygen diffusion through the nitrided dielectric was assumed to be the dominant growth-limiting step. The model was validated against experimentally obtained curves with good agreement. Excellent uniformity, which exceeded expected values, was observed.

  4. Differential expression among tissues in morbidly obese individuals using a finite mixture model under BLUP approach

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Trabzuni, Daniah; Bonder, Marc Jan

    effects of the interactions between tissues and probes using BLUP (Best Linear Unbiased Prediction) linear models correcting for gender, which were subsequently used in a finite mixture model to detect DE genes in each tissue. This approach evades the multiple-testing problem and is able to detect...

  5. Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by

  6. Rheology of petrolatum-paraffin oil mixtures : Applications to analogue modelling of geological processes

    NARCIS (Netherlands)

    Duarte, João C.; Schellart, Wouter P.; Cruden, Alexander R.

    2014-01-01

    Paraffins have been widely used in analogue modelling of geological processes. Petrolatum and paraffin oil are commonly used to lubricate model boundaries and to simulate weak layers. In this paper, we present rheological tests of petrolatum, paraffin oil and several homogeneous mixtures of the two.

  7. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
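
    A simplified sketch of the labeling idea, assuming a grayscale image, K Gaussian intensity classes, and per-pixel mixing weights updated from locally averaged responsibilities; the neighborhood averaging and all settings are assumptions and do not reproduce the authors' exact update equations.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def svfmm_labels(img, K=2, n_iter=30):
            """Maximum-likelihood pixel labels from a simplified spatially variant finite mixture."""
            x = np.asarray(img, dtype=float)
            mu = np.quantile(x, np.linspace(0.2, 0.8, K))    # spread the initial class means
            var = np.full(K, x.var() / K + 1e-6)
            pi = np.full((K,) + x.shape, 1.0 / K)            # per-pixel mixing weights
            for _ in range(n_iter):
                dens = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                                 * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) for k in range(K)])
                resp = dens / (dens.sum(axis=0, keepdims=True) + 1e-300)
                # spatially variant weights: locally averaged responsibilities
                pi = np.stack([uniform_filter(resp[k], size=3) for k in range(K)])
                pi /= pi.sum(axis=0, keepdims=True)
                nk = resp.reshape(K, -1).sum(axis=1)
                mu = (resp * x).reshape(K, -1).sum(axis=1) / nk
                var = (resp * (x - mu.reshape(K, 1, 1)) ** 2).reshape(K, -1).sum(axis=1) / nk + 1e-6
            return resp.argmax(axis=0)                       # unambiguous per-pixel labels

        # toy usage: noisy two-region image
        rng = np.random.default_rng(1)
        truth = np.zeros((32, 32))
        truth[:, 16:] = 1.0
        img = 60 + 100 * truth + rng.normal(0, 10, truth.shape)
        labels = svfmm_labels(img, K=2)
        print((labels == truth).mean())    # agreement with the true regions (up to label swap)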

  8. Modelling of phase equilibria for associating mixtures using an equation of state

    International Nuclear Information System (INIS)

    Ferreira, Olga; Brignole, Esteban A.; Macedo, Eugenia A.

    2004-01-01

    In the present work, the group contribution with association equation of state (GCA-EoS) is extended to represent phase equilibria in mixtures containing acids, esters, and ketones, with water, alcohols, and any number of inert components. Association effects are represented by a group-contribution approach. Self- and cross-association between the associating groups present in these mixtures are considered. The GCA-EoS model is compared to the group-contribution method MHV2, which does not explicitly take association effects into account. The results obtained with the GCA-EoS model are, in general, more accurate than those achieved by the MHV2 equation, with a smaller number of parameters. Model predictions are presented for binary self- and cross-associating mixtures.

  9. HTCC - a heat transfer model for gas-steam mixtures

    International Nuclear Information System (INIS)

    Papadimitriou, P.

    1983-01-01

    The mathematical model HTCC (Heat Transfer Coefficient in Containment) has been developed for RALOC after a loss-of-coolant accident in order to determine the local heat transfer coefficients for transfer between the containment atmosphere and the walls of the reactor building. The model considers the current values of room and wall temperature, the concentration of steam and non-condensible gases, geometry data and fluid dynamics data together with thermodynamic parameters, and from these determines the heat transfer mechanisms due to convection, radiation and condensation. The HTCC is implemented in the RALOC program. Comparative analyses of computed temperature profiles, for HEDL Standard problems A and B on hydrogen distribution, and of computed temperature profiles determined during the heat-up phase in the CSE-A5 experiment show good agreement with experimental data. (orig.) [de]

  10. Factoring variations in natural images with deep Gaussian mixture models

    OpenAIRE

    van den Oord, Aäron; Schrauwen, Benjamin

    2014-01-01

    Generative models can be seen as the Swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods is that they are often harder to scale. As ...

  11. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  12. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one that represents healthy and another non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
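
    As background for the nonparametric ingredients mentioned above, here is a tiny sketch of sampling cluster (table) assignments from a Chinese restaurant process with an optional cap on the number of tables; the concentration value and the simple truncation are assumptions for illustration, not the CRFC construction used in the paper.

        import numpy as np

        def crp_assignments(n_items, alpha=1.0, max_tables=None, seed=0):
            """Sequentially sample Chinese-restaurant-process cluster assignments."""
            rng = np.random.default_rng(seed)
            counts = []                        # customers per table
            labels = []
            for i in range(n_items):
                new_allowed = max_tables is None or len(counts) < max_tables
                probs = counts + ([alpha] if new_allowed else [])
                probs = np.array(probs, dtype=float) / (i + alpha if new_allowed else i)
                probs = probs / probs.sum()    # renormalize if the new-table option is blocked
                k = rng.choice(len(probs), p=probs)
                if k == len(counts):           # open a new table
                    counts.append(0)
                counts[k] += 1
                labels.append(k)
            return labels, counts

        labels, counts = crp_assignments(20, alpha=1.5, max_tables=5)
        print(labels)
        print(counts)    # cluster sizes; the rich-get-richer behavior is visible here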

  13. A Mixture Innovation Heterogeneous Autoregressive Model for Structural Breaks and Long Memory

    DEFF Research Database (Denmark)

    Nonejad, Nima

    We propose a flexible model to describe nonlinearities and long-range dependence in time series dynamics. Our model is an extension of the heterogeneous autoregressive model. Structural breaks occur through mixture distributions in state innovations of linear Gaussian state space models. Monte Carlo simulations evaluate the properties of the estimation procedures. Results show that the proposed model is viable and flexible for purposes of forecasting volatility. Model uncertainty is accounted for by employing Bayesian model averaging. Bayesian model averaging provides very competitive forecasts compared to any single model specification. It provides further improvements when we average over nonlinear specifications.

  14. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    International Nuclear Information System (INIS)

    Almasi, Mohammad

    2014-01-01

    Graphical abstract: Experimental and calculated partial molar volumes (V̄_m,1) of MIK with (♦) 2-PrOH, (♢) 2-BuOH, (●) 2-PenOH at T = 298.15 K; (—) PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between the experimental data and the values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with the polar solvents 2-propanol, 2-butanol and 2-pentanol were measured at 7 temperatures (293.15–323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.

  15. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    Energy Technology Data Exchange (ETDEWEB)

    Almasi, Mohammad, E-mail: m.almasi@khouzestan.srbiau.ac.ir

    2014-09-10

    Graphical abstract: Experimental and calculated partial molar volumes (V̄_m,1) of MIK with (♦) 2-PrOH, (♢) 2-BuOH, (●) 2-PenOH at T = 298.15 K; (—) PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between the experimental data and the values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with the polar solvents 2-propanol, 2-butanol and 2-pentanol were measured at 7 temperatures (293.15–323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.

  16. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro.

    Directory of Open Access Journals (Sweden)

    Niels Hadrup

    Full Text Available Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA, independent action (IA and generalized concentration addition (GCA models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot
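    For readers unfamiliar with the prediction rules named above, the following Python sketch illustrates the concentration addition (CA) and independent action (IA) calculations for a two-chemical mixture, assuming hypothetical Hill-type dose-response parameters for each single chemical; it is not the study's GCA implementation and uses none of its data.

        # Concentration addition (CA) and independent action (IA) mixture predictions
        # from single-chemical Hill dose-response curves (all parameters hypothetical).
        import numpy as np
        from scipy.optimize import brentq

        def hill(c, ec50, n, top):
            """Fractional effect of a single chemical at concentration c."""
            return top * c**n / (ec50**n + c**n)

        def ec_x(x, ec50, n, top):
            """Concentration of a single chemical producing fractional effect x."""
            return np.inf if x >= top else ec50 * (x / (top - x)) ** (1.0 / n)

        chems = {"A": (1.0, 1.5, 1.0), "B": (10.0, 1.0, 0.6)}   # (EC50, slope, max effect)
        mix = {"A": 0.5, "B": 5.0}                              # mixture concentrations

        # IA: combined effect of independently acting chemicals
        e_ia = 1.0 - np.prod([1.0 - hill(mix[k], *chems[k]) for k in mix])

        # CA: effect level x solving sum_i c_i / EC_x,i = 1
        def residual(x):
            return sum(mix[k] / ec_x(x, *chems[k]) for k in mix) - 1.0

        upper = min(top for _, _, top in chems.values()) - 1e-9
        e_ca = brentq(residual, 1e-9, upper)

        print(f"IA prediction: {e_ia:.3f}, CA prediction: {e_ca:.3f}")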

  17. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples...... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(·; X_t1, …, X_tn) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methodology is particularly...

  18. Application of fuzzy logic to determine the odour intensity of model gas mixtures using electronic nose

    Science.gov (United States)

    Szulczyński, Bartosz; Gębicki, Jacek; Namieśnik, Jacek

    2018-01-01

    The paper presents the possibility of applying fuzzy logic to determine the odour intensity of model ternary gas mixtures (α-pinene, toluene and triethylamine) using an electronic nose prototype. The results obtained using fuzzy logic algorithms were compared with the values obtained using a multiple linear regression (MLR) model and sensory analysis. As a result of the studies, it was found that the electronic nose prototype, along with the fuzzy logic pattern recognition system, can be successfully used to estimate the odour intensity of the tested gas mixtures. The correctness of the results obtained using fuzzy logic was equal to 68%.

  19. A Mixture Model of Consumers' Intended Purchase Decisions for Genetically Modified Foods

    OpenAIRE

    Kristine M. Grimsrud; Robert P. Berrens; Ron C. Mittelhammer

    2006-01-01

    A finite probability mixture model is used to analyze the existence of multiple market segments for a pre-market good. The approach has at least two principal benefits. First, the model is capable of identifying likely market segments and their differentiating characteristics. Second, the model can be used to estimate the discount different consumer groups require to purchase the good. The model is illustrated using stated preference survey data collected on consumer responses to the potentia...

  20. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    Science.gov (United States)

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where the concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic (BK/TD) models to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Ion swarm data for electrical discharge modeling in air and flue gas mixtures

    International Nuclear Information System (INIS)

    Nelson, D.; Benhenni, M.; Eichwald, O.; Yousfi, M.

    2003-01-01

    The first step of this work is the determination of the elastic and inelastic ion-molecule collision cross sections for the main ions (N2+, O2+, CO2+, H2O+ and O−) usually present either in air or in flue gas discharges. The obtained cross section sets, given for ion kinetic energies not exceeding 100 eV, correspond to the interactions of each ion with its parent molecule (symmetric case) or a nonparent molecule (asymmetric case). By using these different cross section sets, it is then possible to obtain the ion swarm data for the different gas mixtures involving N2, CO2, H2O and O2 molecules, whatever their relative proportions. These ion swarm data are obtained from an optimized Monte Carlo method well adapted for ion transport in gas mixtures. This also allows us to clearly show that the classical linear approximations usually applied to ion swarm data in mixtures, such as Blanc's law, are far from being valid. The ion swarm data are then given for three gas mixtures: dry air (80% N2, 20% O2), a ternary gas mixture (82% N2, 12% CO2, 6% O2) and a typical flue gas (76% N2, 12% CO2, 6% O2, 6% H2O). From these reliable ion swarm data, electrical discharge modeling for a wire-to-plane electrode configuration has been carried out in these three mixtures at atmospheric pressure for different applied voltages. Under the same discharge conditions, large discrepancies in the streamer formation and propagation have been observed between these three mixture cases. They are due to the deviations existing not only between the different effective electron-molecule ionization rates but also between the ion transport properties, mainly because of the presence of a highly polar molecule such as H2O. This emphasizes the necessity to properly consider the ion transport in the discharge modeling
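    As a point of reference for the linear approximation criticized above, Blanc's law estimates an ion's mobility in a mixture from its mobilities in the pure gases via 1/K_mix = Σ_j x_j/K_j. The short sketch below only illustrates that rule; the mobility values are invented, and the abstract's point is precisely that such estimates can deviate strongly from the Monte Carlo results.

        # Blanc's law: reciprocal mobilities add with mole-fraction weights (illustrative only)
        def blanc_mobility(fractions, mobilities):
            """fractions: mole fractions summing to 1; mobilities: pure-gas mobilities K_j."""
            assert abs(sum(fractions) - 1.0) < 1e-9
            return 1.0 / sum(x / k for x, k in zip(fractions, mobilities))

        # e.g. an ion in a dry-air-like mixture (80% N2, 20% O2), hypothetical K values in cm^2/(V s)
        print(f"Blanc's-law estimate: {blanc_mobility([0.80, 0.20], [2.2, 2.0]):.2f} cm^2/(V s)")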

  2. Two component memory of Rotstein effect in nuclear emulsions

    International Nuclear Information System (INIS)

    Gushchin, E.M.; Lebedev, A.N.; Somov, S.V.; Timofeev, M.K.; Tipografshchik, G.I.

    1991-01-01

    Two sharply differing memory components, fast and slow, are simultaneously detected during an investigation of the controlled mode of fast charged particle detection in simple nuclear emulsions, with the emulsion trace sensitivities corresponding to these components differing by about a factor of 5. The memory time is T_m ≅ 40 μs for the fast component and T_m ≅ 3.5 ms for the slow one. The detection of two Rotstein effect memory components confirms the correctness of the trap model.

  3. Chemically reacting flow of a compressible thermally radiating two-component plasma

    International Nuclear Information System (INIS)

    Bestman, A.R.

    1990-12-01

    The paper studies the compressible flow of a hot two-component plasma in the presence of gravitation and chemical reaction in a vertical channel. For the optically thick gas approximation, closed form analytical solutions are possible. Asymptotic solutions are also obtained for the general differential approximation when the temperatures of the two bounding walls are the same. In the general case the problem is reduced to the solution of standard nonlinear integral equations which can be tackled by an iterative procedure. The results are discussed quantitatively. The problem may be applicable to the understanding of the explosive hydrogen-burning model of solar flares. (author). 6 refs, 4 figs

  4. Disorder-Induced Order in Two-Component Bose-Einstein Condensates

    International Nuclear Information System (INIS)

    Niederberger, A.; Schulte, T.; Wehr, J.; Lewenstein, M.; Sanchez-Palencia, L.; Sacha, K.

    2008-01-01

    We propose and analyze a general mechanism of disorder-induced order in two-component Bose-Einstein condensates, analogous to corresponding effects established for XY spin models. We show that a random Raman coupling induces a relative phase of π/2 between the two BECs and that the effect is robust. We demonstrate it in one, two, and three dimensions at T=0 and present evidence that it persists at small T>0. Applications to phase control in ultracold spinor condensates are discussed

  5. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
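    A minimal, non-hierarchical sketch of the censored geometric idea is given below: after each collected item a person quits with probability theta, and anyone reaching the maximum of M available items is censored. The parameterization and the toy counts are assumptions for illustration; the paper's hierarchical and latent-mixture extensions and its Bayesian estimation are not reproduced here.

        # Maximum likelihood for a censored geometric quitting model (toy data)
        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_log_lik(theta, counts, M):
            counts = np.asarray(counts)
            quit_voluntarily = counts < M
            ll = np.sum((counts[quit_voluntarily] - 1) * np.log1p(-theta) + np.log(theta))
            ll += (~quit_voluntarily).sum() * (M - 1) * np.log1p(-theta)   # censored at M items
            return -ll

        counts, M = [1, 2, 2, 5, 10, 10, 3, 10], 10          # 10 = collection exhausted
        fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6),
                              args=(counts, M), method="bounded")
        print(f"estimated quitting probability: {fit.x:.3f}")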

  6. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  7. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model adapted on cause-specific hazard or subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    Science.gov (United States)

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
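    The same pattern-mixture strategy can be sketched outside SPSS; the fragment below (Python with statsmodels, hypothetical file and column names, dropout pattern coded simply as the number of completed visits) includes the missing-data pattern and its interaction with time as fixed effects in a linear mixed model, which is the core of the approach described above.

        # Pattern-mixture-style linear mixed model: the pattern (number of completed visits)
        # enters as a fixed effect and interacts with time. Column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        long = pd.read_csv("longitudinal_long_format.csv")   # one row per subject x visit
        long["pattern"] = long.groupby("subject_id")["outcome"].transform(
            lambda y: y.notna().sum())                        # simple dropout-pattern indicator

        model = smf.mixedlm("outcome ~ time * C(pattern)",
                            data=long.dropna(subset=["outcome"]),
                            groups="subject_id", re_formula="~time")
        print(model.fit().summary())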

  9. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    International Nuclear Information System (INIS)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar

    2016-01-01

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  10. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part V: Multicomponent mixtures containing CO2 and alcohols

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2015-01-01

    of CPA for ternary and multicomponent CO2 mixtures containing alcohols (methanol, ethanol or propanol), water and hydrocarbons. This work belongs to a series of studies aiming to arrive at a single "engineering approach" for applying CPA to acid gas mixtures, without introducing significant changes...... to the model. In this direction, CPA results were obtained using various approaches, i.e. different association schemes for pure CO2 (assuming that it is a non-associating compound, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling...... mixtures of CO2 with water and alcohols (only use of one interaction parameter kij or assuming cross-association interactions and obtaining the relevant parameters either via a combining rule or using an experimental value for the cross-association energy). It is concluded that CPA is a powerful model...

  11. Modeling Plasma-based CO2 and CH4 Conversion in Mixtures with N2, O2 and H2O: the Bigger Plasma Chemistry Picture

    KAUST Repository

    Wang, Weizong; Snoeckx, Ramses; Zhang, Xuming; Cha, Min; Bogaerts, Annemie

    2018-01-01

    performed regarding the single component gases, i.e. CO2 splitting and CH4 reforming, as well as for two component mixtures, i.e. dry reforming of methane (CO2/CH4), partial oxidation of methane (CH4/O2), artificial photosynthesis (CO2/H2O), CO2

  12. Large non-Gaussianity from two-component hybrid inflation

    International Nuclear Information System (INIS)

    Byrnes, Christian T.; Choi, Ki-Young; Hall, Lisa M.H.

    2009-01-01

    We study the generation of non-Gaussianity in models of hybrid inflation with two inflaton fields (2-brid inflation). We analyse the region in the parameter and initial condition space where a large non-Gaussianity may be generated during slow-roll inflation, which is generally characterised by a large f_NL and τ_NL and a small g_NL. For certain parameter values we can satisfy τ_NL >> f_NL^2. The bispectrum is of the local type but may have a significant scale dependence. We show that the loop corrections to the power spectrum and bispectrum are suppressed during inflation, if one assumes that the fields follow a classical background trajectory. We also include the effect of the waterfall field, which can lead to a significant change in the observables after the waterfall field is destabilised, depending on the couplings between the waterfall and inflaton fields

  13. Mathematical Modeling of Nonstationary Separation Processes in Gas Centrifuge Cascade for Separation of Multicomponent Isotope Mixtures

    OpenAIRE

    Orlov Alexey; Ushakov Anton; Sovach Victor

    2016-01-01

    This article presents results of the development of a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used to calculate the parameters of a gas centrifuge cascade for separation of germanium isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model is adequate to describe nonstationary separation processes in gas centrifuge casca...

  14. Mathematical model of nonstationary hydraulic processes in gas centrifuge cascade for separation of multicomponent isotope mixtures

    OpenAIRE

    Orlov, Aleksey Alekseevich; Ushakov, Anton; Sovach, Victor

    2017-01-01

    The article presents results of the development of a mathematical model of nonstationary hydraulic processes in gas centrifuge cascades for separation of multicomponent isotope mixtures. This model was used to calculate the parameters of a gas centrifuge cascade for separation of silicon isotopes. Comparison of the obtained values with the results of other authors revealed that the developed mathematical model is adequate to describe nonstationary hydraulic processes in gas centrifuge cascades for separation...

  15. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part IV. Applications to mixtures of CO2 with alkanes

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios

    2015-01-01

    The thermodynamic properties of pure gaseous, liquid or supercritical CO2 and CO2 mixtures with hydrocarbons and other compounds such as water, alcohols, and glycols are very important in many processes in the oil and gas industry. Design of such processes requires use of accurate thermodynamic models, capable of predicting the complex phase behavior of multicomponent mixtures as well as their volumetric properties. In this direction, over the last several years, the cubic-plus-association (CPA) thermodynamic model has been successfully used for describing volumetric properties and phase...

  16. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Science.gov (United States)

    O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D

    2015-01-01

    Detectability of individual animals is highly variable and nearly always less than one. We used binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
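    For orientation, the basic binomial (N-)mixture likelihood that underlies this class of models can be written down in a few lines: repeated counts at a site are Binomial(N, p) given a latent abundance N, which is itself Poisson(λ), and N is marginalized out up to a truncation bound. The sketch below is generic and does not include the temporary-emigration (availability) layer added in the paper; all numbers are illustrative.

        # Single-site binomial mixture (N-mixture) likelihood, marginalizing latent abundance N
        import numpy as np
        from scipy.stats import poisson, binom

        def site_likelihood(y, lam, p, K=200):
            n = np.arange(max(y), K + 1)                 # feasible abundances (N >= max count)
            prior = poisson.pmf(n, lam)                  # state model: N ~ Poisson(lam)
            detect = np.prod([binom.pmf(yt, n, p) for yt in y], axis=0)   # observation model
            return np.sum(prior * detect)

        counts = [12, 9, 15]                             # counts from three survey occasions
        print(site_likelihood(counts, lam=30, p=0.4))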

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    Science.gov (United States)

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  18. Soot modeling of counterflow diffusion flames of ethylene-based binary mixture fuels

    KAUST Repository

    Wang, Yu; Raj, Abhijeet Dhayal; Chung, Suk-Ho

    2015-01-01

    of ethylene and its binary mixtures with methane, ethane and propane based on the method of moments. The soot model has 36 soot nucleation reactions from 8 PAH molecules including pyrene and larger PAHs. Soot surface growth reactions were based on a modified

  19. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  20. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.

  1. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  2. Modelling and simulation of an energy transport phenomenon in a solid-fluid mixture

    International Nuclear Information System (INIS)

    Costa, M.L.M.; Sampaio, R.; Gama, R.M.S. da.

    1989-08-01

    In the present work a model for a local description of the energy transfer phenomenon in a binary (solid-fluid) saturated mixture is proposed. The heat transfer in a saturated flow (through a porous medium) between two parallel plates is simulated by using the Finite Volume Method. (author)

  3. Using the Mixture Rasch Model to Explore Knowledge Resources Students Invoke in Mathematic and Science Assessments

    Science.gov (United States)

    Zhang, Danhui; Orrill, Chandra; Campbell, Todd

    2015-01-01

    The purpose of this study was to investigate whether mixture Rasch models followed by qualitative item-by-item analysis of selected Programme for International Student Assessment (PISA) mathematics and science items offered insight into knowledge students invoke in mathematics and science separately and combined. The researchers administered an…

  4. Market segment derivation and profiling via a finite mixture model framework

    NARCIS (Netherlands)

    Wedel, M; Desarbo, WS

    The Marketing literature has shown how difficult it is to profile market segments derived with finite mixture models, especially using traditional descriptor variables (e.g., demographics). Such profiling is critical for the proper implementation of segmentation strategy. We propose a new finite

  5. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Conference presentation (ISRSE-37, Tshwane, South Africa, 10 May 2017) by M. Ritchie, M. Lück-Vogel, P. Debba and V. Goodall on finite mixture models for sub-pixel coastal land cover classification. Study area: the False Bay coast (Strand and Gordon's Bay), South Africa, using WorldView-2 imagery with classes including urban, herbaceous vegetation, shadow, sparse vegetation, water and woody vegetation. Classification approaches compared include Maximum Likelihood Classification (MLC), Gaussian Mixture Discriminant Analysis (GMDA) and t-distribution Mixture Discriminant...

  6. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    The model is applied to genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local......, analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While......

  7. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    Thermodynamic properties and phase equilibria of associating mixtures cannot often be satisfactorily modelled using conventional models such as cubic equations of state. CPA (cubic-plus-association) is an equation of state (EoS), which combines the SRK EoS with the association term of SAFT. For non......-alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... thermodynamic models especially those combining cubic EoS with local composition activity coefficient models are included. (C) 2007 Elsevier B.V. All rights reserved....

  8. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  9. Dynamic mean field theory for lattice gas models of fluid mixtures confined in mesoporous materials.

    Science.gov (United States)

    Edison, J R; Monson, P A

    2013-11-12

    We present the extension of dynamic mean field theory (DMFT) for fluids in porous materials (Monson, P. A. J. Chem. Phys. 2008, 128, 084701) to the case of mixtures. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable equilibrium states for fluids in pores after a change in the bulk pressure or composition. It is especially useful for studying systems where there are capillary condensation or evaporation transitions. Nucleation processes associated with these transitions are emergent features of the theory and can be visualized via the time dependence of the density distribution and composition distribution in the system. For mixtures an important component of the dynamics is relaxation of the composition distribution in the system, especially in the neighborhood of vapor-liquid interfaces. We consider two different types of mixtures, modeling hydrocarbon adsorption in carbon-like slit pores. We first present results on bulk phase equilibria of the mixtures and then the equilibrium (stable/metastable) behavior of these mixtures in a finite slit pore and an inkbottle pore. We then use DMFT to describe the evolution of the density and composition in the pore in the approach to equilibrium after changing the state of the bulk fluid via composition or pressure changes.

  10. Measurement and modelling of hydrogen bonding in 1-alkanol plus n-alkane binary mixtures

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Jensen, Lars; Kofod, Jonas L.

    2007-01-01

    Two equations of state (simplified PC-SAFT and CPA) are used to predict the monomer fraction of 1-alkanols in binary mixtures with n-alkanes. It is found that the choice of parameters and association schemes significantly affects the ability of a model to predict hydrogen bonding in mixtures, even though pure-component liquid densities and vapour pressures are predicted equally accurately for the associating compound. As was the case in the study of pure components, there exists some confusion in the literature about the correct interpretation and comparison of experimental data and theoretical studies, which is clarified in the present work. New hydrogen bonding data based on infrared spectroscopy are reported for seven binary mixtures of alcohols and alkanes. (C) 2007 Elsevier B.V. All rights reserved.

  11. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  12. Quantum particle-number fluctuations in a two-component Bose gas in a double-well potential

    International Nuclear Information System (INIS)

    Zin, Pawel; Oles, Bartlomiej; Sacha, Krzysztof

    2011-01-01

    A two-component Bose gas in a double-well potential with repulsive interactions may undergo a phase separation transition if the interspecies interactions outweigh the intraspecies ones. We analyze the transition in the strong interaction limit within the two-mode approximation. Numbers of particles in each potential well are equal and constant. However, at the transition point, the ground state of the system reveals huge fluctuations of numbers of particles belonging to the different gas components; that is, the probability for observation of any mixture of particles in each potential well becomes uniform.

  13. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
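    The fitting step described above (several regression lines through the origin, whose slopes are the isotope ratios, estimated by EM) can be illustrated with a small self-contained sketch on synthetic data. The fixed two-component setup, the shared error variance and the starting values below are simplifying assumptions, and the published algorithm's automatic dropping of small groups is omitted.

        # EM for a two-component mixture of regression lines through the origin (synthetic data)
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(1.0, 10.0, 200)
        true_slopes = np.array([0.0073, 0.0300])          # two isotope-ratio-like slopes
        z = rng.integers(0, 2, 200)
        y = true_slopes[z] * x + rng.normal(0.0, 0.005, 200)

        slopes, weights, sigma = np.array([0.005, 0.05]), np.array([0.5, 0.5]), 0.1
        for _ in range(200):
            # E-step: responsibility of each line for each point (shared sigma, constant cancels)
            dens = np.exp(-0.5 * ((y[:, None] - slopes * x[:, None]) / sigma) ** 2)
            resp = weights * dens
            resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-300)
            # M-step: weighted least-squares slope per line, shared sigma, updated weights
            slopes = (resp * x[:, None] * y[:, None]).sum(0) / (resp * x[:, None] ** 2).sum(0)
            sigma = np.sqrt((resp * (y[:, None] - slopes * x[:, None]) ** 2).sum() / len(y))
            weights = resp.mean(0)

        print("estimated ratios:", np.round(slopes, 4))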

  14. Numerical analysis of a non equilibrium two-component two-compressible flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2013-09-01

    We propose and analyze a finite volume scheme to simulate a non-equilibrium two-component (water and hydrogen) two-phase (liquid and gas) flow model. In this model, the assumption of local mass non-equilibrium is ensured and thus the velocity of the mass exchange between dissolved hydrogen and hydrogen in the gas phase is supposed finite. The proposed finite volume scheme is fully implicit in time together with a phase-by-phase upwind approach in space, and it discretizes the equations in their general form with gravity and capillary terms. We show that the proposed scheme satisfies the maximum principle for the saturation and the concentration of the dissolved hydrogen. We establish stability results on the velocity of each phase and on the discrete gradient of the concentration. We show the convergence of a subsequence to a weak solution of the continuous equations as the size of the discretization tends to zero. To our knowledge, this is the first convergence result for a finite volume scheme in the case of two-component two-phase compressible flow in several space dimensions.

  15. Method of estimating changes in vapor concentrations continuously generated from two-component organic solvents.

    Science.gov (United States)

    Hori, Hajime; Ishidao, Toru; Ishimatsu, Sumiyo

    2010-12-01

    We measured vapor concentrations continuously evaporated from two-component organic solvents in a reservoir and proposed a method to estimate and predict the evaporation rate or generated vapor concentrations. Two kinds of organic solvents were put into a small reservoir made of glass (3 cm in diameter and 3 cm high) that was installed in a cylindrical glass vessel (10 cm in diameter and 15 cm high). Air was introduced into the glass vessel at a flow rate of 150 ml/min, and the generated vapor concentrations were intermittently monitored for up to 5 hours with a gas chromatograph equipped with a flame ionization detector. The solvent systems tested in this study were the methanol-toluene system and the ethyl acetate-toluene system. The vapor concentrations of the more volatile component, that is, methanol in the methanol-toluene system and ethyl acetate in the ethyl acetate-toluene system, were high at first, and then decreased with time. On the other hand, the concentrations of the less volatile component were low at first, and then increased with time. A model for estimating multicomponent organic vapor concentrations was developed, based on a theory of vapor-liquid equilibria and a theory of the mass transfer rate, and estimated values were compared with experimental ones. The estimated vapor concentrations were in relatively good agreement with the experimental ones. The results suggest that changes in concentrations of two-component organic vapors continuously evaporating from a liquid reservoir can be estimated by the proposed model.
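    A rough, deliberately simplified sketch of this kind of model is shown below: Raoult's-law vapor-liquid equilibrium gives the partial pressures over the liquid, a linear mass-transfer term converts them into evaporation rates, and the swept headspace concentration follows from the air flow. The saturation pressures are approximate literature values, and the mass-transfer coefficients, liquid amounts and time stepping are assumptions; the paper's actual formulation may differ in detail.

        # Toy two-component evaporation model (methanol-toluene), ideal Raoult's law
        import numpy as np

        p_sat = np.array([16.9, 3.8])        # saturation pressures near 25 C, kPa (approximate)
        k_mt = np.array([2e-7, 2e-7])        # mass-transfer coefficients, mol/(s*kPa) (hypothetical)
        q_air = 0.150 / 60.0                 # sweep air flow, L/s (150 ml/min as in the experiment)
        n_liq = np.array([0.30, 0.30])       # initial liquid amounts, mol (hypothetical)

        dt, t_end = 1.0, 5 * 3600            # 1 s steps over 5 h
        for step in range(int(t_end / dt) + 1):
            if n_liq.sum() < 1e-9:           # stop if the liquid is exhausted
                break
            x_liq = n_liq / n_liq.sum()      # liquid-phase mole fractions
            evap = k_mt * x_liq * p_sat      # evaporation rates, mol/s (Raoult's law * mass transfer)
            if step % 3600 == 0:
                c_air = evap / q_air         # quasi-steady outlet concentrations, mol/L
                print(f"t = {step / 3600:.0f} h, vapor conc (mol/L): {np.round(c_air, 7)}")
            n_liq = np.maximum(n_liq - evap * dt, 0.0)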

  16. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-06-01

    Full Text Available Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  17. A Mixture Model and a Hidden Markov Model to Simultaneously Detect Recombination Breakpoints and Reconstruct Phylogenies

    Directory of Open Access Journals (Sweden)

    Bastien Boussau

    2009-01-01

    Full Text Available Homologous recombination is a pervasive biological process that affects sequences in all living organisms and viruses. In the presence of recombination, the evolutionary history of an alignment of homologous sequences cannot be properly depicted by a single bifurcating tree: some sites have evolved along a specific phylogenetic tree, others have followed another path. Methods available to analyse recombination in sequences usually involve an analysis of the alignment through sliding-windows, or are particularly demanding in computational resources, and are often limited to nucleotide sequences. In this article, we propose and implement a Mixture Model on trees and a phylogenetic Hidden Markov Model to reveal recombination breakpoints while searching for the various evolutionary histories that are present in an alignment known to have undergone homologous recombination. These models are sufficiently efficient to be applied to dozens of sequences on a single desktop computer, and can handle equivalently nucleotide or protein sequences. We estimate their accuracy on simulated sequences and test them on real data.

  18. Deposition behaviour of model biofuel ash in mixtures with quartz sand. Part 1: Experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Mischa Theis; Christian Mueller; Bengt-Johan Skrifvars; Mikko Hupa; Honghi Tran [Aabo Akademi Process Chemistry Centre, Aabo (Finland). Combustion and Materials Chemistry

    2006-10-15

    Model biofuel ash of well-defined size and melting properties was fed into an entrained flow reactor (EFR) to simulate the deposition behaviour of commercially applied biofuel mixtures in large-scale boilers. The aim was to obtain consistent experimental data that can be used for validation of computational fluid dynamics (CFD)-based deposition models. The results showed that while up to 80 wt% of the feed was lost to the EFR wall, the composition of the model ash particles collected at the reactor exit did not change. When model ashes were fed into the reactor individually, the ash particles were found to be sticky when they contained more than 15 wt% molten phase. When model ashes were fed in mixtures with silica sand, it was found that only a small amount of sand particles was captured in the deposits; the majority rebounded upon impact. The presence of sand in the feed mixture reduced the deposit buildup by more than could be expected from linear interpolation between the model ash and the sand. The results suggested that sand addition to model ash may prevent deposit buildup through erosion. 22 refs., 6 figs., 3 tabs.

  19. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    Full Text Available In cities' policy-making, grasping the determinants of carbon dioxide emissions in Chinese cities is a hot issue. The common method is to use the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emissions. However, less work discusses estimation accuracy, especially in the framework of non-normal distributions and heterogeneity among cities' emissions. To improve the estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of asymmetric Laplace distributions (ALDs) to approximate the true distribution of the error term. Meanwhile, a specially designed two-layer EM algorithm is used to obtain the estimators. We test the robustness via a comparison of the results of five different models. We find that the ALDs mixture model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.
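    The underlying STIRPAT specification is a log-log regression, ln I = a + b ln P + c ln A + d ln T + e. As a baseline for comparison with the mixture-of-asymmetric-Laplace approach described above, the fragment below fits that specification by ordinary least squares in Python; the file and column names are hypothetical and this is not the paper's estimator.

        # Ordinary least squares fit of the log-log STIRPAT model (baseline, not the ALD mixture)
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        cities = pd.read_csv("city_panel.csv")   # columns: co2, population, affluence, technology
        ols = smf.ols("np.log(co2) ~ np.log(population) + np.log(affluence) + np.log(technology)",
                      data=cities).fit()
        print(ols.params)                        # b, c, d are the elasticities of interest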

  20. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria and standard optimality results for single variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé's quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and multi-response models. Robust mixture designs and mixture designs in blocks have also been reviewed. Moreover, some applications of mixture desig...

  1. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Full Text Available Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, that is, about 4–5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can be easily overlooked by professionals. In this special symposium on Estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM; Van de Schoot, 2015b) to estimate PTSD trajectories across 4 years that followed a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  2. Modulational instability, solitons and periodic waves in a model of quantum degenerate boson-fermion mixtures

    International Nuclear Information System (INIS)

    Belmonte-Beitia, Juan; Perez-Garcia, Victor M.; Vekslerchik, Vadym

    2007-01-01

    In this paper, we study a system of coupled nonlinear Schroedinger equations modelling a quantum degenerate mixture of bosons and fermions. We analyze the stability of plane waves, give precise conditions for the existence of solitons and write explicit solutions in the form of periodic waves. We also check that the solitons observed previously in numerical simulations of the model correspond exactly to our explicit solutions and see how plane waves destabilize to form periodic waves

  3. On Partial Defaults in Portfolio Credit Risk : A Poisson Mixture Model Approach

    OpenAIRE

    Weißbach, Rafael; von Lieres und Wilkau, Carsten

    2005-01-01

    Most credit portfolio models exclusively calculate the loss distribution for a portfolio of performing counterparts. Conservative default definitions cause considerable uncertainty about the loss for a long time after the default. We present three approaches to account for defaulted counterparts in the calculation of the economic capital. Two of the approaches are based on the Poisson mixture model CreditRisk+ and derive a loss distribution for an integrated portfolio. The third method treats ...
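    To make the "Poisson mixture" idea concrete, the toy simulation below draws default counts that are Poisson conditional on a gamma-distributed common risk factor (so the unconditional counts are negative binomial) and reads off a high loss quantile; the portfolio figures are invented and this is a generic sketch, not the paper's treatment of defaulted counterparts.

        # Gamma-mixed Poisson (CreditRisk+-style) toy loss distribution
        import numpy as np

        rng = np.random.default_rng(42)
        n_obligors, pd_mean, exposure = 1000, 0.02, 1.0   # homogeneous toy portfolio
        sigma2 = 0.5                                      # variance of the common risk factor

        factor = rng.gamma(shape=1.0 / sigma2, scale=sigma2, size=100_000)   # mean-1 gamma factor
        defaults = rng.poisson(n_obligors * pd_mean * factor)                # mixed Poisson counts
        losses = defaults * exposure

        print("mean loss:", losses.mean())
        print("99.9% quantile of the loss distribution:", np.quantile(losses, 0.999))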

  4. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    INTRODUCTION Emission sources of volatile organic compounds (VOCs) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are

  5. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes using a partial differential equation (PDE) method. Based on the tangent-intercept method for measuring partial molar volume, the original parameters of the corresponding formulas were replaced by perceptual measures. These substitutions made it possible to relate a mixture's odor intensity to each individual odorant's relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The results showed that the PDE model provides an easily interpretable way of relating individual components to their joint odor intensity. Both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. If the PDE model is combined with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be performed by instruments instead of odor assessors, avoiding many drawbacks (e.g., the expense of maintaining a fixed panel of odor assessors). Thus, the PDE model is expected to be helpful for the monitoring and management of odor pollution.

  6. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    Science.gov (United States)

    Fomin, P. A.

    2018-03-01

Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, or cyclohexane) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures in which the reaction products contain carbon molecules. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  7. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
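
    As an illustration only (not the authors' code), the sketch below spells out the marginal likelihood that a basic Poisson N-mixture model maximizes: latent site abundance N_i ~ Poisson(lambda) and repeated counts y_it ~ Binomial(N_i, p), with the latent abundances summed out up to a truncation bound K. The constant-lambda, constant-p setup, the bound K = 60, and all variable names are assumptions made for this sketch.

    ```python
    # Hedged sketch of a Poisson N-mixture fit; constant lambda and p are assumed,
    # and the truncation bound K is an illustration choice, not part of the record.
    import numpy as np
    from scipy.stats import poisson, binom
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    lam_true, p_true, n_sites, n_surveys = 5.0, 0.4, 100, 4
    N = rng.poisson(lam_true, n_sites)                            # latent abundances
    y = rng.binomial(N[:, None], p_true, (n_sites, n_surveys))    # repeated counts

    def neg_log_lik(params, y, K=60):
        lam = np.exp(params[0])                  # lambda on the log scale
        p = 1.0 / (1.0 + np.exp(-params[1]))     # p on the logit scale
        Ns = np.arange(K + 1)
        prior = poisson.pmf(Ns, lam)             # P(N = k)
        ll = 0.0
        for yi in y:                             # marginalize the latent N per site
            lik_given_N = np.prod(binom.pmf(yi[:, None], Ns, p), axis=0)
            ll += np.log(np.sum(prior * lik_given_N) + 1e-300)
        return -ll

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
    lam_hat = np.exp(fit.x[0])
    p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
    print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
    ```

    Letting p or lambda vary across sites in the simulation while still fitting the constant model reproduces the kind of bias from unmodeled heterogeneity discussed in the record.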

  8. Rotation and toroidal magnetic field effects on the stability of two-component jets

    Science.gov (United States)

    Millas, Dimitrios; Keppens, Rony; Meliani, Zakaria

    2017-09-01

    Several observations of astrophysical jets show evidence of a structure in the direction perpendicular to the jet axis, leading to the development of 'spine and sheath' models of jets. Most studies focus on a two-component jet consisting of a highly relativistic inner jet and a slower - but still relativistic - outer jet surrounded by an unmagnetized environment. These jets are believed to be susceptible to a relativistic Rayleigh-Taylor-type instability, depending on the effective inertia ratio of the two components. We extend previous studies by taking into account the presence of a non-zero toroidal magnetic field. Different values of magnetization are examined to detect possible differences in the evolution and stability of the jet. We find that the toroidal field, above a certain level of magnetization σ, roughly equal to 0.01, can stabilize the jet against the previously mentioned instabilities and that there is a clear trend in the behaviour of the average Lorentz factor and the effective radius of the jet when we continuously increase the magnetization. The simulations are performed using the relativistic MHD module from the open source, parallel, grid adaptive, mpi-amrvac code.

  9. Analytical energy gradient for the two-component normalized elimination of the small component method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)

    2015-06-07

The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  10. Analytical energy gradient for the two-component normalized elimination of the small component method

    Science.gov (United States)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2015-06-01

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  11. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    Emission sources of volatile organic compounds (VOCs*) are numerous and widespread in both indoor and outdoor environments. Concentrations of VOCs indoors typically exceed outdoor levels, and most people spend nearly 90% of their time indoors. Thus, indoor sources generally contribute the majority of VOC exposures for most people. VOC exposure has been associated with a wide range of acute and chronic health effects; for example, asthma, respiratory diseases, liver and kidney dysfunction, neurologic impairment, and cancer. Although exposures to most VOCs for most persons fall below health-based guidelines, and long-term trends show decreases in ambient emissions and concentrations, a subset of individuals experience much higher exposures that exceed guidelines. Thus, exposure to VOCs remains an important environmental health concern. The present understanding of VOC exposures is incomplete. With the exception of a few compounds, concentration and especially exposure data are limited; and like other environmental data, VOC exposure data can show multiple modes, low and high extreme values, and sometimes a large portion of data below method detection limits (MDLs). Field data also show considerable spatial or interpersonal variability, and although evidence is limited, temporal variability seems high. These characteristics can complicate modeling and other analyses aimed at risk assessment, policy actions, and exposure management. In addition to these analytic and statistical issues, exposure typically occurs as a mixture, and mixture components may interact or jointly contribute to adverse effects. However most pollutant regulations, guidelines, and studies remain focused on single compounds, and thus may underestimate cumulative exposures and risks arising from coexposures. In addition, the composition of VOC mixtures has not been thoroughly investigated, and mixture components show varying and complex dependencies. Finally, although many factors are known to

  12. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    Science.gov (United States)

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
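
    As a rough, self-contained illustration of the core idea (not the authors' implementation, and with invented parameter values), the sketch below fits a two-component Gaussian mixture to simulated log response times from a 'strong-knowledge' and a 'weak-knowledge' group and recovers the two groups without using group labels.

    ```python
    # Hedged sketch: mixture-of-Gaussians clustering of response times, with
    # simulated data and illustrative parameters only.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Log response times: a fast (strong-attitude) and a slow (weak-attitude) group.
    log_rt = np.concatenate([rng.normal(6.5, 0.25, 300),    # roughly 665 ms
                             rng.normal(7.2, 0.35, 200)])   # roughly 1340 ms

    X = log_rt.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    labels = gmm.predict(X)                                  # recovered group labels
    print("component means (ms):", np.exp(gmm.means_.ravel()).round())
    print("mixing weights:", gmm.weights_.round(2))
    ```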

  13. Theory of synergistic effects: Hill-type response surfaces as 'null-interaction' models for mixtures.

    Science.gov (United States)

    Schindler, Michael

    2017-08-02

The classification of effects caused by mixtures of agents as synergistic, antagonistic or additive depends critically on the reference model of 'null interaction'. Two main approaches are currently in use, the Additive Dose (ADM) or concentration addition (CA) and the Multiplicative Survival (MSM) or independent action (IA) models. We compare several response surface models to a newly developed Hill response surface, obtained by solving a logistic partial differential equation (PDE). Assuming that a mixture of chemicals with individual Hill-type dose-response curves can be described by an n-dimensional logistic function, Hill's differential equation for pure agents is replaced by a PDE for mixtures whose solution provides Hill surfaces as 'null-interaction' models and relies neither on Bliss independence nor on Loewe additivity, nor uses Chou's unified general theory. An n-dimensional logistic PDE describing the Hill-type response of n-component mixtures is solved. Appropriate boundary conditions ensure the correct asymptotic behaviour. Mathematica 11 (Wolfram, Mathematica Version 11.0, 2016) is used for the mathematics and graphics presented in this article. The Hill response surface ansatz can be applied to mixtures of compounds with arbitrary Hill parameters. Restrictions that are required when deriving analytical expressions for response surfaces from other principles are unnecessary. Many approaches based on Loewe additivity turn out to be special cases of the Hill approach, whose increased flexibility permits a better description of 'null-effect' responses. Missing sham-compliance of Bliss IA, known as Colby's model in agrochemistry, leads to incompatibility with the Hill surface ansatz. Examples of binary and ternary mixtures illustrate the differences between the approaches. For Hill slopes close to one and doses below the half-maximum effect doses, MSM (Colby, Bliss, Finney, Abbott) predicts synergistic effects where the Hill model indicates 'null
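
    For orientation only, the single-agent Hill dose-response curve that such surfaces generalize can be written as the solution of a logistic equation in log dose; the notation below is generic, and the n-dimensional PDE of the record is not reproduced here.

    ```latex
    \frac{dE}{d\ln d} = n\,\frac{E\,(E_{\max}-E)}{E_{\max}},
    \qquad
    E(d) = \frac{E_{\max}\,d^{\,n}}{d_{50}^{\,n} + d^{\,n}}
    ```

    Here n is the Hill slope and d_50 the dose producing the half-maximum effect; the PDE approach extends this one-dimensional logistic structure to an n-component dose space.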

  14. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inheritably for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multi-steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.

  15. Data Requirements and Modeling for Gas Hydrate-Related Mixtures and a Comparison of Two Association Models

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Aloupis, Georgios; Kontogeorgis, Georgios M.

    2017-01-01

the performance of the CPA and sPC-SAFT EOS for modeling the fluid-phase equilibria of gas hydrate-related systems and will try to explore how the models can help in suggesting experimental measurements. These systems contain water, hydrocarbon (alkane or aromatic), and either methanol or monoethylene glycol...... parameter sets have been chosen for the sPC-SAFT EOS for a fair comparison. The comparisons are made for pure fluid properties, vapor-liquid equilibria, and liquid-liquid equilibria of binary and ternary mixtures, as well as vapor-liquid-liquid equilibria of quaternary mixtures. The results show, from...

  16. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However there are few researchers that consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand Cr(VI) scarcely exists alone in wastewaters, it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  17. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    International Nuclear Information System (INIS)

    Liu, Chang; Fiol, Núria; Villaescusa, Isabel; Poch, Jordi

    2016-01-01

In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However there are few researchers that consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand Cr(VI) scarcely exists alone in wastewaters, it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption. • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model. • Model validation by checking it against independent sets of data.

  18. Separation of a multicomponent mixture by gaseous diffusion: modelization of the enrichment in a capillary - application to a pilot cascade

    International Nuclear Information System (INIS)

    Doneddu, F.

    1982-01-01

Starting from the modelization of gaseous flow in a porous medium (flow in a capillary), we generalize the law of enrichment in an infinite cylindrical capillary, established for an isotopic linear mixture, to a multicomponent mixture. The notions of separation yield and characteristic pressure, classically used for separations of isotopic linear mixtures, are generalized. We present formulas for diagonalizing the diffusion operator, the modelization of a multistage gaseous diffusion cascade, and a comparison with the experimental results of a drain cascade (N2-SF6-UF6 mixture).

  19. Linearmycins Activate a Two-Component Signaling System Involved in Bacterial Competition and Biofilm Morphology

    Science.gov (United States)

    2017-01-01

    ABSTRACT Bacteria use two-component signaling systems to adapt and respond to their competitors and changing environments. For instance, competitor bacteria may produce antibiotics and other bioactive metabolites and sequester nutrients. To survive, some species of bacteria escape competition through antibiotic production, biofilm formation, or motility. Specialized metabolite production and biofilm formation are relatively well understood for bacterial species in isolation. How bacteria control these functions when competitors are present is not well studied. To address fundamental questions relating to the competitive mechanisms of different species, we have developed a model system using two species of soil bacteria, Bacillus subtilis and Streptomyces sp. strain Mg1. Using this model, we previously found that linearmycins produced by Streptomyces sp. strain Mg1 cause lysis of B. subtilis cells and degradation of colony matrix. We identified strains of B. subtilis with mutations in the two-component signaling system yfiJK operon that confer dual phenotypes of specific linearmycin resistance and biofilm morphology. We determined that expression of the ATP-binding cassette (ABC) transporter yfiLMN operon, particularly yfiM and yfiN, is necessary for biofilm morphology. Using transposon mutagenesis, we identified genes that are required for YfiLMN-mediated biofilm morphology, including several chaperones. Using transcriptional fusions, we found that YfiJ signaling is activated by linearmycins and other polyene metabolites. Finally, using a truncated YfiJ, we show that YfiJ requires its transmembrane domain to activate downstream signaling. Taken together, these results suggest coordinated dual antibiotic resistance and biofilm morphology by a single multifunctional ABC transporter promotes competitive fitness of B. subtilis. IMPORTANCE DNA sequencing approaches have revealed hitherto unexplored diversity of bacterial species in a wide variety of environments that

  20. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

Bayesian mixture modeling requires identifying the most appropriate number of mixture components, so that the resulting mixture model fits the data according to a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with the Markov Chain Monte Carlo (MCMC) concept and has been used by several researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge for components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the case under study. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to correctly identify the number of mixture components in the Bayesian normal mixture model, even though the number of mixture components for the Indonesian microarray data is not known in advance.

  2. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.

  3. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
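
    For context, a common single-solute form of the Szyszkowski-Langmuir relation mentioned above is shown below with generic symbols; the weighted multicomponent and salt-containing versions discussed in the record (Henning et al., 2005; the Tuckermann extension) are not reproduced here.

    ```latex
    \sigma = \sigma_{\mathrm{w}} - R\,T\,\Gamma_{\max}\,\ln\!\left(1 + \frac{C}{a}\right)
    ```

    Here sigma_w is the surface tension of the solvent (water, or the salt solution in method 2), Gamma_max the maximum surface excess, C the organic concentration, and a a fitted parameter.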

  4. Partitioning detectability components in populations subject to within-season temporary emigration using binomial mixture models.

    Directory of Open Access Journals (Sweden)

    Katherine M O'Donnell

Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e., availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e., probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
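
    A schematic of the hierarchical structure described above, in generic notation rather than the paper's: abundance, availability (temporary emigration), and conditional detection enter as nested Poisson/binomial layers.

    ```latex
    N_i \sim \mathrm{Poisson}(\lambda_i), \qquad
    A_{it} \mid N_i \sim \mathrm{Binomial}(N_i,\ \theta_{it}), \qquad
    y_{it} \mid A_{it} \sim \mathrm{Binomial}(A_{it},\ p_{it})
    ```

    The effective detection probability thus factors into availability theta (e.g., salamanders active on the surface) and conditional capture probability p, which is what allows the two components to be estimated separately.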

  5. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  6. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
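
    To make the two-component scale mixture concrete, the sketch below fits pi_0 N(0, sigma_0^2) + pi_1 N(0, sigma_1^2) to simulated z-scores with a plain maximum-likelihood EM pass. This is a generic illustration: the record fits the model by minimizing discrepancies against resampling-based replication estimates, not by EM, and all names and numbers here are invented.

    ```python
    # Hedged sketch: EM for a two-component scale mixture of zero-mean normals,
    # fitted to simulated GWAS-like test statistics (not the paper's procedure).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    z = np.concatenate([rng.normal(0, 1.0, 90_000),     # null statistics
                        rng.normal(0, 2.5, 10_000)])    # non-null, inflated variance

    pi1, s0, s1 = 0.5, 1.5, 3.0                         # crude starting values
    for _ in range(200):                                # EM iterations
        f0 = (1 - pi1) * norm.pdf(z, 0, s0)
        f1 = pi1 * norm.pdf(z, 0, s1)
        w = f1 / (f0 + f1)                              # E-step: P(non-null | z)
        pi1 = w.mean()                                  # M-step updates
        s0 = np.sqrt(np.sum((1 - w) * z**2) / np.sum(1 - w))
        s1 = np.sqrt(np.sum(w * z**2) / np.sum(w))

    print(f"pi_nonnull = {pi1:.3f}, sigma0 = {s0:.2f}, sigma1 = {s1:.2f}")
    # Per-statistic local false discovery rate under the fitted mixture:
    local_fdr = ((1 - pi1) * norm.pdf(z, 0, s0)) / ((1 - pi1) * norm.pdf(z, 0, s0) + pi1 * norm.pdf(z, 0, s1))
    ```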

  7. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    Directory of Open Access Journals (Sweden)

    M. F. Gayol

    2017-06-01

Full Text Available A methodology for predicting the thermodynamic and transport properties of a multi-component oily mixture, in which the different mixture components are grouped into a small number of pseudo components, is shown. This prediction of properties is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of Transport Phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors less than 10% with respect to the experimental data in a temperature range in which it is possible to apply the proposed method.

  8. Methodology for predicting oily mixture properties in the mathematical modeling of molecular distillation

    International Nuclear Information System (INIS)

    Gayol, M.F.; Pramparo, M.C.; Miró Erdmann, S.M.

    2017-01-01

A methodology for predicting the thermodynamic and transport properties of a multi-component oily mixture, in which the different mixture components are grouped into a small number of pseudo components, is shown. This prediction of properties is used in the mathematical modeling of molecular distillation, which consists of a system of partial differential equations formulated according to the principles of Transport Phenomena and solved by an implicit finite difference method using a computer code. The mathematical model was validated with experimental data, specifically the molecular distillation of a deodorizer distillate (DD) of sunflower oil. The results obtained were satisfactory, with errors less than 10% with respect to the experimental data in a temperature range in which it is possible to apply the proposed method.

  9. Modeling the flow of activated H2 + CH4 mixture by deposition of diamond nanostructures

    Directory of Open Access Journals (Sweden)

    Plotnikov Mikhail

    2017-01-01

Full Text Available An algorithm of the direct simulation Monte Carlo method for the flow of a hydrogen and methane mixture in a cylindrical channel is developed. Heterogeneous reactions on the tungsten channel surfaces are included in the model, and their effects on the flow are analyzed. A one-dimensional approach based on the solution of equilibrium chemical kinetics equations is used to analyze gas-phase methane decomposition. The obtained results may be useful for the optimization of gas-dynamic sources of activated gas for diamond synthesis.

  10. Catalytically stabilized combustion of lean methane-air-mixtures: a numerical model

    Energy Technology Data Exchange (ETDEWEB)

    Dogwiler, U; Benz, P; Mantharas, I [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

The catalytically stabilized combustion of lean methane/air mixtures has been studied numerically under conditions closely resembling those prevailing in technical devices. A detailed numerical model has been developed for a laminar, stationary, 2-D channel flow with full heterogeneous and homogeneous reaction mechanisms. The computations provide direct information on the coupling between heterogeneous and homogeneous combustion and, in particular, on the means of homogeneous ignition and stabilization. (author) 4 figs., 3 refs.

  11. C-Vine copula mixture model for clustering of residential electrical load pattern data

    OpenAIRE

    Sun, M; Konstantelos, I; Strbac, G

    2016-01-01

    The ongoing deployment of residential smart meters in numerous jurisdictions has led to an influx of electricity consumption data. This information presents a valuable opportunity to suppliers for better understanding their customer base and designing more effective tariff structures. In the past, various clustering methods have been proposed for meaningful customer partitioning. This paper presents a novel finite mixture modeling framework based on C-vine copulas (CVMM) for carrying out cons...

  12. Two-component air heating system. Final report. Zweikomponenten-Luftheizungs-System. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Radtke, W; Thiel, D

    1986-01-01

    The two-component heating system consists of a combination of air-based floor heating and direct air heating, with ventilation and extraction and heat recovery. The direct airflow consists exclusively of heated outside air, the amount corresponding to the building's external air intake requirement. The control system comprises a two-step sequential control of the air throughput of the direct air heating system and of the air distribution for the floor heating airflow. A special heating switch makes it possible to switch off the direct air heating system separately, and to select rapid warm-up. The way in which the new heating system works has been tested in a pilot set-up and proven by comprehensive measurements. In addition, a simulation model was produced which gave substantial confirmation of the measurements. (orig.) With 9 refs., 37 tabs., 63 figs.

  13. Dynamic viscosity modeling of methane plus n-decane and methane plus toluene mixtures: Comparative study of some representative models

    DEFF Research Database (Denmark)

    Baylaucq, A.; Boned, C.; Canet, X.

    2005-01-01

Viscosity measurements of well-defined mixtures are useful in order to evaluate existing viscosity models. Recently, an extensive experimental study of the viscosity at pressures up to 140 MPa has been carried out for the binary systems methane + n-decane and methane + toluene, between 293.15 and 3...

  14. Modeling plant interspecific interactions from experiments with perennial crop mixtures to predict optimal combinations.

    Science.gov (United States)

    Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo

    2017-12-01

The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from the observed biomass of monocultures and bicultures, respectively, measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the polyculture experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified which are the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
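
    A minimal sketch of the kind of generalized Lotka-Volterra simulation described above; the growth rates, carrying capacities, and interaction coefficients are illustrative placeholders, not the values estimated from the Iowa experiment.

    ```python
    # Hedged sketch of a generalized Lotka-Volterra community model with made-up
    # parameters; interactions can be competitive, facilitative, or asymmetric.
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([1.0, 0.8, 1.2])             # intrinsic growth rates
    K = np.array([6.0, 4.0, 5.0])             # carrying capacities (biomass units)
    A = np.array([[-1.0, -0.4,  0.2],         # a_ij: effect of species j on i
                  [-0.6, -1.0, -0.3],         # (negative = competition,
                  [ 0.1, -0.5, -1.0]])        #  positive = facilitation)

    def glv(t, x):
        # dx_i/dt = r_i * x_i * (1 + sum_j a_ij * x_j / K_i), with a_ii = -1
        return r * x * (1.0 + (A @ x) / K)

    sol = solve_ivp(glv, (0.0, 100.0), y0=[0.5, 0.5, 0.5], rtol=1e-8)
    print("long-run biomass per species:", sol.y[:, -1].round(2))
    ```

    Simulating every sown and unsown subset in this way and ranking the resulting biomass is one way to screen candidate mixtures, which is the spirit of the analysis in the record.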

  15. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
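
    Schematically, and in generic notation rather than the paper's, the observed-data likelihood maximized by the EM algorithm mixes the GLM density over the unknown exposure class using the predicted class-membership probabilities:

    ```latex
    L(\beta) \;=\; \prod_{i=1}^{n}\ \sum_{c} \hat{\pi}_{ic}\, f\!\left(y_i \mid x_i,\, c;\ \beta\right)
    ```

    where pi-hat_ic is the predicted probability that record i belongs to exposure class c (for example, screening versus diagnostic colonoscopy) and f is the GLM density for the outcome.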

  16. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distribution is performed using finite mixture models. • Two methods to reconstruct output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA impose high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep computational cost as low as possible, there has been a recent shift toward developing metamodels (model of model), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values as, for example, the finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
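
    As an illustration of reconstructing a possibly multimodal output distribution with a finite Gaussian mixture (a sketch with synthetic numbers, not RELAP-5 output), the code below uses scikit-learn's EM-based GaussianMixture; choosing the number of components by BIC is an addition for this sketch rather than part of the record.

    ```python
    # Hedged sketch: fit finite Gaussian mixtures to a sample of a code output
    # variable and pick the number of components by BIC. Data are synthetic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    samples = np.concatenate([rng.normal(900, 25, 70),    # nominal-behaviour mode
                              rng.normal(1050, 40, 30)])  # degraded-behaviour mode
    X = samples.reshape(-1, 1)

    fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
            for k in (1, 2, 3)}
    best_k = min(fits, key=lambda k: fits[k].bic(X))
    best = fits[best_k]
    print("components selected by BIC:", best_k)
    print("means:", best.means_.ravel().round(1), "weights:", best.weights_.round(2))
    ```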

  17. Generalization of two-phase model with topology microstructure of mixture to Lagrange-Euler methodology

    International Nuclear Information System (INIS)

    Vladimir V Chudanov; Alexei A Leonov

    2005-01-01

Full text of publication follows: One of the mathematical models (of hyperbolic type) for describing the evolution of compressible two-phase mixtures was offered in [1] to deal with the following applications: interfaces between compressible materials; shock waves in multiphase mixtures; evolution of homogeneous two-phase flows; cavitation in liquids. The basic difficulty of this model was connected with the discretization of the non-conservative equation terms. As a result, the class of problems concerning the passage of shock waves through fields with a discontinuous volume-fraction profile could not be described by means of this model. A class of schemes that are able to converge to the correct solution of such problems was obtained in [2] thanks to a deeper analysis of the two-phase model. The technique offered in [2] was implemented on an Eulerian grid via the Godunov scheme. In the present paper, an additional analysis of the two-phase model, taking into account the microstructure of the mixture topology, is carried out in Lagrangian mass coordinates. As a result, the equations averaged over the set of all possible realizations of the two-phase mixture are obtained. The numerical solution is carried out with the PPM method [3] in two steps: first, the equations averaged over the mass variable are solved; second, the solution found in the previous step is remapped onto a fixed Eulerian grid. This approach allows the proposed technique to be extended to the two-dimensional (three-dimensional) case, since in the Lagrangian variables the Euler system of equations splits into two (three) identical subsystems, each of which describes the evolution of the considered medium in a given direction. The accuracy and robustness of the described procedure are demonstrated on a sequence of numerical problems. References: (1). R. Saurel, R. Abgrall, A multiphase Godunov method for compressible multi-fluid and multiphase flows, J. Comput. Phys. 150 (1999) 425-467; (2). R. Saurel, R. Abgrall, Discrete equations for physical and

  18. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    Directory of Open Access Journals (Sweden)

    Behzad Majidi

    2016-05-01

Full Text Available Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger’s model is developed using the discrete element method (DEM) on the YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained is then used to estimate the Burger’s model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates of the size range of 0.297–0.595 mm (−30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.

  19. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures.

    Science.gov (United States)

    Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P; Alamdari, Houshang

    2016-05-04

    Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger's model is developed using the discrete element method (DEM) on the YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained is then used to estimate the Burger's model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model. Very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates of the size range of 0.297-0.595 mm (-30 + 50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch.
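
    For reference, the Burger's model underlying the cohesive viscoelastic contact law is a Maxwell element (E1, eta1) in series with a Kelvin-Voigt element (E2, eta2); its creep compliance takes the standard form below, written here with generic symbols rather than the calibrated values from the record.

    ```latex
    J(t) \;=\; \frac{1}{E_1} \;+\; \frac{t}{\eta_1} \;+\; \frac{1}{E_2}\left(1 - e^{-E_2 t/\eta_2}\right)
    ```

    The DSR measurements of the complex shear modulus at 150 °C are what fix these four parameters before they are transferred to the DEM contact model.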

  20. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  1. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  2. Phase behavior of mixtures of oppositely charged nanoparticles: Heterogeneous Poisson-Boltzmann cell model applied to lysozyme and succinylated lysozyme

    NARCIS (Netherlands)

    Biesheuvel, P.M.; Lindhoud, S.; Vries, de R.J.; Stuart, M.A.C.

    2006-01-01

    We study the phase behavior of mixtures of oppositely charged nanoparticles, both theoretically and experimentally. As an experimental model system we consider mixtures of lysozyme and lysozyme that has been chemically modified in such a way that its charge is nearly equal in magnitude but opposite

  3. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.
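    The record does not reproduce its semi-nonparametric formulation; as a minimal, fully parametric stand-in, the sketch below fits a two-component Gaussian mixture to per-protein scores with EM, takes the posterior probability of the "consistent" component, and selects proteins subject to an estimated false discovery rate bound. All data and thresholds are synthetic and illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-protein scores: an "inconsistent" bulk near 0 and a "consistent" group shifted right.
scores = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(3.0, 1.0, 200)]).reshape(-1, 1)

# Two-component mixture fitted by EM (a Gaussian stand-in for the semi-nonparametric component).
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
consistent = int(np.argmax(gmm.means_.ravel()))          # component with the larger mean
post = gmm.predict_proba(scores)[:, consistent]          # posterior probability of consistency

# Declare proteins "consistent" while keeping the estimated FDR below 5%:
# among the selected set, the expected fraction of nulls is mean(1 - posterior).
order = np.argsort(-post)
cum_fdr = np.cumsum(1.0 - post[order]) / np.arange(1, len(post) + 1)
n_selected = int(np.sum(cum_fdr <= 0.05))
print(f"selected {n_selected} proteins at estimated FDR <= 5%")
```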

  4. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.; Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing works done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  5. Mixtures of endocrine disrupting contaminants modelled on human high end exposures

    DEFF Research Database (Denmark)

    Christiansen, Sofie; Kortenkamp, A.; Petersen, Marta Axelstad

    2012-01-01

    exceeding 1 is expected to lead to effects in the rat, a total dose more than 62 times higher than human exposures should lead to responses. Considering the high uncertainty of this estimate, experience on lowest‐observed‐adverse‐effect‐level (LOAEL)/NOAEL ratios and statistical power of rat studies, we...... expected that combined doses 150 times higher than high end human intake estimates should give no, or only borderline effects, whereas doses 450 times higher should produce significant responses. Experiments indeed showed clear developmental toxicity of the 450‐fold dose in terms of increased nipple...... though each individual chemical is present at low, ineffective doses, but the effects of mixtures modelled based on human intakes have not previously been investigated. To address this issue for the first time, we selected 13 chemicals for a developmental mixture toxicity study in rats where data about...

  6. Phase equilibria for mixtures containing nonionic surfactant systems: Modeling and experiments

    International Nuclear Information System (INIS)

    Shin, Moon Sam; Kim, Hwayong

    2008-01-01

    Surfactants are important materials with numerous applications in the cosmetic, pharmaceutical, and food industries due to inter-associating and intra-associating bonds. We present a lattice fluid equation-of-state that combines the quasi-chemical nonrandom lattice fluid model with Veytsman statistics for (intra + inter) molecular association to calculate phase behavior for mixtures containing nonionic surfactants. We also measured binary (vapor + liquid) equilibrium data for {2-butoxyethanol (C4E1) + n-hexane} and {2-butoxyethanol (C4E1) + n-heptane} systems at temperatures ranging from (303.15 to 323.15) K. A static apparatus was used in this study. The presented equation-of-state correlated well with the measured and published data for mixtures containing nonionic surfactant systems.

  7. A Class of Two-Component Adler—Bobenko—Suris Lattice Equations

    International Nuclear Information System (INIS)

    Fu Wei; Zhang Da-Jun; Zhou Ru-Guang

    2014-01-01

    We study a class of two-component forms of the famous list of the Adler—Bobenko—Suris lattice equations. The obtained two-component lattice equations are still consistent around the cube and they admit solutions with ‘jumping properties’ between two levels. (general)

  8. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  9. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM (a linear and a quadratic model) by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
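    A minimal sketch of the generative idea described above: model the spike-triggered and non-spike-triggered stimulus distributions with separate Gaussian mixtures and obtain the spike probability by Bayes' rule. The mixture orders, dimensions, and data below are arbitrary assumptions, and scikit-learn's EM fit stands in for whatever estimator the authors actually use.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
dim = 20
# Synthetic stimuli: spike-triggered stimuli drawn from a shifted distribution.
x_nospike = rng.normal(0.0, 1.0, size=(5000, dim))
x_spike = rng.normal(0.5, 1.0, size=(1000, dim))

# Fit a Gaussian mixture to each conditional stimulus distribution.
gmm_spike = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(x_spike)
gmm_nospike = GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(x_nospike)
prior_spike = len(x_spike) / (len(x_spike) + len(x_nospike))

def p_spike_given_x(x):
    """Bayes' rule on the two mixture densities gives the spike probability."""
    log_ps = gmm_spike.score_samples(x) + np.log(prior_spike)
    log_pn = gmm_nospike.score_samples(x) + np.log(1.0 - prior_spike)
    return 1.0 / (1.0 + np.exp(log_pn - log_ps))

x_test = rng.normal(0.3, 1.0, size=(5, dim))
print(p_spike_given_x(x_test))
```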

  10. A thermodynamically consistent model for granular-fluid mixtures considering pore pressure evolution and hypoplastic behavior

    Science.gov (United States)

    Hess, Julian; Wang, Yongqi

    2016-11-01

    A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, hereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. Results presented give insight in the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.

  11. An Odor Interaction Model of Binary Odorant Mixtures by a Partial Differential Equation Method

    Directory of Open Access Journals (Sweden)

    Luchun Yan

    2014-07-01

    Full Text Available A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, original parameters of the corresponding formulas were reasonably replaced by perceptual measures. By these substitutions, it was possible to relate a mixture’s odor intensity to the individual odorant’s relative odor activity value (OAV). Several binary mixtures of benzene and substituted benzenes were respectively tested to establish the PDE models. The obtained results showed that the PDE model provided an easily interpretable method relating individual components to their joint odor intensity. In addition, both the predictive performance and the feasibility of the PDE model were verified through a series of odor intensity matching tests. By combining the PDE model with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors, and many disadvantages (e.g., the expense of a fixed number of odor assessors) will also be avoided. Thus, the PDE model is expected to be helpful for the monitoring and management of odor pollution.

  12. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    Science.gov (United States)

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.
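    For readers unfamiliar with N-mixture models, the sketch below writes down the basic closed-population binomial N-mixture likelihood (repeated counts, abundance marginalized over a Poisson prior) and maximizes it on simulated data. It is a simplified stand-in, not the integrated known-fate/open N-mixture model developed in the record, and all values are synthetic.

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

# Simulated repeated counts at several sites (rows = sites, columns = visits).
rng = np.random.default_rng(2)
true_lambda, true_p, n_sites, n_visits = 8.0, 0.4, 50, 4
N = rng.poisson(true_lambda, n_sites)
counts = rng.binomial(N[:, None], true_p, size=(n_sites, n_visits))

def neg_log_lik(params, counts, n_max=100):
    """Closed-population N-mixture likelihood: sum over possible abundances N at each site."""
    lam, p = np.exp(params[0]), 1.0 / (1.0 + np.exp(-params[1]))  # log / logit transforms
    n_grid = np.arange(n_max + 1)
    log_prior = poisson.logpmf(n_grid, lam)                       # P(N = n)
    ll = 0.0
    for y in counts:
        if y.max() > n_max:
            return np.inf
        log_obs = binom.logpmf(y[:, None], n_grid[None, :], p).sum(axis=0)  # P(y | N = n)
        ll += np.logaddexp.reduce(log_prior + log_obs)            # marginalize over N
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(5.0), 0.0], args=(counts,), method="Nelder-Mead")
print("lambda_hat =", np.exp(fit.x[0]), "p_hat =", 1 / (1 + np.exp(-fit.x[1])))
```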

  13. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    International Nuclear Information System (INIS)

    Ellefsen, Karl J.; Smith, David B.

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples. - Highlights: • We evaluate a clustering procedure by applying it to geochemical data. • The procedure generates a hierarchy of clusters. • Different levels of the hierarchy show geochemical processes at different spatial scales. • The clustering method is Bayesian finite mixture modeling. • Model parameters are estimated with Hamiltonian Monte Carlo sampling.
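    The recursive two-way partitioning can be sketched as follows. Note that the record's clusters come from a Bayesian finite mixture estimated with Hamiltonian Monte Carlo and checked against independent geologic knowledge; this illustration substitutes a plain EM-fitted two-component Gaussian mixture and synthetic data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split(samples, depth, max_depth=2, path="root"):
    """Recursively partition field samples into two clusters, mirroring the manual hierarchy."""
    if depth >= max_depth or len(samples) < 20:
        return
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(samples)
    labels = gmm.predict(samples)
    for k in (0, 1):
        part = samples[labels == k]
        print(f"{path}/{k}: n = {len(part)}, mean = {np.round(part.mean(axis=0), 2)}")
        split(part, depth + 1, max_depth, path=f"{path}/{k}")

rng = np.random.default_rng(8)
# Synthetic stand-in for multi-element geochemical concentrations (log-transformed).
data = np.vstack([rng.normal(m, 0.3, size=(150, 3)) for m in (0.0, 1.5, 3.0)])
split(data, depth=0)
```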

  14. Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures

    OpenAIRE

    Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P.; Alamdari, Houshang

    2016-01-01

    Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on Burger’s model is developed using the discrete element method (DEM) on the YADE, the open-source DEM software. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data obtained is then use...

  15. Whole-Volume Clustering of Time Series Data from Zebrafish Brain Calcium Images via Mixture Modeling.

    Science.gov (United States)

    Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L

    2018-02-01

    Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.

  16. Modified Baryonic Dynamics: two-component cosmological simulations with light sterile neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Angus, G.W.; Gentile, G. [Department of Physics and Astrophysics, Vrije Universiteit Brussel, Pleinlaan 2, Brussels, 1050 Belgium (Belgium); Diaferio, A. [Dipartimento di Fisica, Università di Torino, Via P. Giuria 1, Torino, I-10125 Italy (Italy); Famaey, B. [Observatoire astronomique de Strasbourg, CNRS UMR 7550, Université de Strasbourg, 11 rue de l' Université, Strasbourg, F-67000 France (France); Heyden, K.J. van der, E-mail: garry.angus@vub.ac.be, E-mail: diaferio@ph.unito.it, E-mail: benoit.famaey@astro.unistra.fr, E-mail: gianfranco.gentile@ugent.be, E-mail: heyden@ast.uct.ac.za [Astrophysics, Cosmology and Gravity Centre, Dept. of Astronomy, University of Cape Town, Private Bag X3, Rondebosch, 7701 South Africa (South Africa)

    2014-10-01

    In this article we continue to test cosmological models centred on Modified Newtonian Dynamics (MOND) with light sterile neutrinos, which could in principle be a way to solve the fine-tuning problems of the standard model on galaxy scales while preserving successful predictions on larger scales. Due to previous failures of the simple MOND cosmological model, here we test a speculative model where the modified gravitational field is produced only by the baryons and the sterile neutrinos produce a purely Newtonian field (hence Modified Baryonic Dynamics). We use two-component cosmological simulations to separate the baryonic N-body particles from the sterile neutrino ones. The premise is to attenuate the over-production of massive galaxy cluster halos which were prevalent in the original MOND plus light sterile neutrinos scenario. Theoretical issues with such a formulation notwithstanding, the Modified Baryonic Dynamics model fails to produce the correct amplitude for the galaxy cluster mass function for any reasonable value of the primordial power spectrum normalisation.

  17. Nonlinear light scattering in a two component medium: optical limiting application

    International Nuclear Information System (INIS)

    Joudrier, Valerie

    1998-01-01

    Scattering is a fundamental manifestation of the interaction between matter and radiation, resulting from inhomogeneities in the refractive index, which decrease transmission. This phenomenon is therefore especially attractive for protecting sensors from laser light by optical limiting. One way to induce scattering at high incident energy is to exploit the Kerr effect, whereby the index of refraction becomes intensity dependent. The idea is thus to use a two component medium with good index matching between the two components at low intensity, which leaves the medium transparent, and to break that matching at high intensity through the nonlinearity of one component, making the medium highly scattering. Experimental and theoretical investigations of a new material (a cell containing a liquid with small silica particles as inclusions) are presented in the visible domain (λ = 532 nm) for the nanosecond protection regime, beginning with the chemical synthesis of the sample. The experimental results on the optical limiting process show that nonlinear scattering is clearly the dominant mechanism compared with other potential nonlinear effects. Several complementary experiments are then performed to complete the nonlinear scattering characterization, involving measurement of the angular distribution of scattered energy and integrating-sphere measurements. Further information is gained by studying the time response of the nonlinearities with a dual-beam (pulsed-pump, cw probe) technique. The experimental data are also analyzed with simple theoretical models to evaluate the nonlinearity of the material from the optical limiting, angular scattering, and total scattered energy measurements. The good agreement between these analyses makes it possible to delineate the physical mechanisms responsible for the nonlinear scattering effect and to support the final conclusion. (author) [fr]

  18. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    International Nuclear Information System (INIS)

    Finne, E.F.; Cooper, G.A.; Koop, B.F.; Hylland, K.; Tollefsen, K.E.

    2007-01-01

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as well as

  19. Toxicogenomic responses in rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals and a synthetic mixture

    Energy Technology Data Exchange (ETDEWEB)

    Finne, E.F. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway) and University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway)]. E-mail: eivind.finne@niva.no; Cooper, G.A. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Koop, B.F. [Centre for Biomedical Research, University of Victoria, BC V8P5C2 (Canada); Hylland, K. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway); University of Oslo, Department of Biology, P.O. Box 1066, Blindern, N-0316 Oslo (Norway); Tollefsen, K.E. [Norwegian Institute for Water Research, Gaustadalleen 21, N-0349 Oslo (Norway)

    2007-03-10

    As more salmon gene expression data has become available, the cDNA microarray platform has emerged as an appealing alternative in ecotoxicological screening of single chemicals and environmental samples relevant to the aquatic environment. This study was performed to validate biomarker gene responses of in vitro cultured rainbow trout (Oncorhynchus mykiss) hepatocytes exposed to model chemicals, and to investigate effects of mixture toxicity in a synthetic mixture. Chemicals used for 24 h single chemical- and mixture exposures were 10 nM 17α-ethinylestradiol (EE2), 0.75 nM 2,3,7,8-tetrachloro-di-benzodioxin (TCDD), 100 μM paraquat (PQ) and 0.75 μM 4-nitroquinoline-1-oxide (NQO). RNA was isolated from exposed cells, DNAse treated and quality controlled before cDNA synthesis, fluorescent labelling and hybridisation to a 16k salmonid microarray. The salmonid 16k cDNA array identified differential gene expression predictive of exposure, which could be verified by quantitative real time PCR. More precisely, the responses of biomarker genes such as cytochrome p4501A and UDP-glucuronosyl transferase to TCDD exposure, glutathione reductase and gammaglutamyl cysteine synthetase to paraquat exposure, as well as vitellogenin and vitelline envelope protein to EE2 exposure validated the use of microarray applied to RNA extracted from in vitro exposed hepatocytes. The mutagenic compound NQO did not result in any change in gene expression. Results from exposure to a synthetic mixture of the same four chemicals, using identical concentrations as for single chemical exposures, revealed combined effects that were not predicted by results for individual chemicals alone. In general, the response of exposure to this mixture led to an average loss of approximately 60% of the transcriptomic signature found for single chemical exposure. The present findings show that microarray analyses may contribute to our mechanistic understanding of single contaminant mode of action as

  20. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    Science.gov (United States)

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of
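    The evaluation step, scoring a captured sequence by its mean log-likelihood under a reference mixture model, can be sketched independently of the network architecture. In the sketch below a plain Gaussian mixture stands in for the recurrent mixture density network and autoencoder described above, and all motion features are synthetic placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
# Stand-in for dimensionality-reduced motion frames of the physiotherapist's reference exercise.
reference = rng.normal(0.0, 1.0, size=(2000, 6))
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(reference)

def consistency_score(sequence):
    """Mean log-likelihood of a captured motion sequence under the reference mixture model."""
    return float(gmm.score(sequence))   # sklearn's score() averages the log-likelihood per frame

good_rep = rng.normal(0.0, 1.0, size=(300, 6))      # close to the reference motions
poor_rep = rng.normal(1.5, 1.5, size=(300, 6))      # deviates from the reference
print("good repetition:", consistency_score(good_rep), " poor repetition:", consistency_score(poor_rep))
```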

  1. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk; k=>(1,…,K>), and help identify which model is most supported by the observed data, Y>˜=>(y˜1,…,y˜n>). Here, we introduce a new and robust estimator of the model evidence, p>(Y>˜|Mk>), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p>(Y>˜|Mk>) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p>(Y>˜|Mk>) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.

  2. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    Science.gov (United States)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared regarding performance in calculating phase equilibria of CO2-impurity binary mixtures and with the collected literature data. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.

  3. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
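    A compressed sketch of the NPB-plus-GMM idea: bootstrap a small set of characteristic ratios, fit a Gaussian mixture to the bootstrap replicates, and report a point estimate with a bootstrap confidence interval. The ratio values below are placeholders, and BIC replaces the paper's K-means choice of mixture order purely for brevity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Hypothetical systolic characteristic ratios from a small reference group (placeholder values).
cr_sbp = rng.normal(0.55, 0.05, size=30)

# Nonparametric bootstrap: resample the small CR set to approximate its sampling distribution.
boot = rng.choice(cr_sbp, size=(2000, cr_sbp.size), replace=True).mean(axis=1).reshape(-1, 1)

# Fit a Gaussian mixture to the bootstrapped ratios; the paper sets the mixture order with
# K-means clustering, whereas BIC is used here instead purely for brevity.
fits = [GaussianMixture(n_components=k, random_state=0).fit(boot) for k in (1, 2, 3)]
gmm = min(fits, key=lambda m: m.bic(boot))

cr_hat = float(gmm.means_.ravel() @ gmm.weights_)       # mixture mean as the working CR
ci = np.percentile(boot, [2.5, 97.5])                   # 95% bootstrap confidence interval
print(f"CR_SBP = {cr_hat:.3f}, order = {gmm.n_components}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```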

  4. Examining the cost efficiency of Chinese hydroelectric companies using a finite mixture model

    International Nuclear Information System (INIS)

    Barros, Carlos Pestana; Chen, Zhongfei; Managi, Shunsuke; Antunes, Olinda Sequeira

    2013-01-01

    This paper evaluates the operational activities of Chinese hydroelectric power companies over the period 2000–2010 using a finite mixture model that controls for unobserved heterogeneity. In so doing, a stochastic frontier latent class model, which allows for the existence of different technologies, is adopted to estimate cost frontiers. This procedure not only enables us to identify different groups among the hydro-power companies analysed, but also permits the analysis of their cost efficiency. The main result is that three groups are identified in the sample, each equipped with different technologies, suggesting that distinct business strategies need to be adapted to the characteristics of China's hydro-power companies. Some managerial implications are developed. - Highlights: ► This paper evaluates the operational activities of Chinese hydroelectric power companies. ► This study uses data from 2000 to 2010 using a finite mixture model. ► The model procedure identifies different groups of Chinese hydro-power companies analysed. ► Three groups are identified in the sample, each equipped with completely different “technologies”. ► This suggests that distinct business strategies need to be adapted to the characteristics of the hydro-power companies

  5. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.

  6. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    Science.gov (United States)

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.

  7. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  8. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    Science.gov (United States)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic-based fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, some generating points are initialized randomly on the image and the image domain is divided into many sub-regions using a Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of a pixel is assumed to follow a Gamma mixture model whose parameters correspond to the cluster to which the pixel belongs. The negative logarithm of this probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation is obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results for simulated and real SAR images.
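    The regional dissimilarity measure lends itself to a compact sketch: sum the negative log Gamma likelihoods of the pixels in a Voronoi sub-region under each candidate cluster's parameters and assign the region to the closest cluster. For brevity a single Gamma per cluster replaces the Gamma mixture, the MRF term is omitted, and all parameter values are invented.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)
# Intensities of the pixels inside one Voronoi sub-region (synthetic SAR-like data).
region_pixels = rng.gamma(shape=4.0, scale=20.0, size=400)

# Hypothetical Gamma parameters (shape, scale) for two candidate clusters.
clusters = {"water": (2.0, 15.0), "urban": (4.0, 20.0)}

def regional_dissimilarity(pixels, shape, scale):
    """Sum of per-pixel negative log Gamma likelihoods, i.e. the regional dissimilarity measure."""
    return -np.sum(gamma.logpdf(pixels, a=shape, scale=scale))

scores = {name: regional_dissimilarity(region_pixels, *params) for name, params in clusters.items()}
label = min(scores, key=scores.get)
print(scores, "-> assign region to", label)
```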

  9. Flame acceleration of hydrogen - air - diluent mixtures at middle scale using ENACCEF: experiments and modelling

    International Nuclear Information System (INIS)

    Fabrice Malet; Nathalie Lamoureux; Nabiha Djebaili-Chaumeix; Claude-Etienne Paillard; Pierre Pailhories; Jean-Pierre L'heriteau; Bernard Chaumont; Ahmed Bentaib

    2005-01-01

    Full text of publication follows: In the case of a hypothetical severe accident in a light water nuclear reactor, hydrogen would be produced during reactor core degradation and released into the reactor building, where it could subsequently raise a combustion hazard. A local ignition of the combustible mixture would initially produce a slow flame which can be accelerated by turbulence. Depending on the geometry and the composition of the premixed combustible mixture, the flame can accelerate and, under some conditions, transit to detonation or be quenched after a certain distance. Flame acceleration is responsible for the generation of high pressure loads that could damage the reactor building. Moreover, the geometrical configuration is a major factor leading to flame acceleration. Thus, experimental data, notably from mid-size installations, are required to validate the numerical simulations before modelling realistic scales. The ENACCEF vertical facility is a 6 meter high acceleration tube intended to represent a steam generator room leading to the containment dome. This setup can be equipped with obstacles of different blockage ratios and shapes in order to obtain an acceleration of the flame. Depending on the geometrical characteristics of these obstacles, different regimes of flame propagation can be achieved. The influence of the mixture composition on flame velocity and acceleration has been investigated. Using a diluent that mimics steam (40% He - 60% CO2), the influence of dilution on flame speed and acceleration has also been investigated. The flame front has also been recorded with ultra-fast shadowgraph (ombroscopy) visualization, both in the tube and at the entrance of the dome. The flame propagation is computed using the TONUS code. Based on a structured finite-volume solver of the Euler equations, it includes the CREBCOM flame model and simulates the hydrogen/air turbulent flame propagation, taking into account complex 3D geometry and reactant concentration gradients. Since

  10. Three-dimensional modeling and simulation of asphalt concrete mixtures based on X-ray CT microstructure images

    Directory of Open Access Journals (Sweden)

    Hainian Wang

    2014-02-01

    Full Text Available X-ray CT (computed tomography) was used to scan an asphalt mixture specimen to obtain high resolution continuous cross-section images and the meso-structure. According to the theory of three-dimensional (3D) reconstruction, the 3D reconstruction algorithm was investigated in this paper. The key to the reconstruction technique is the acquisition of the voxel positions and the relationship between the pixel elements and nodes. A three-dimensional numerical model of the asphalt mixture specimen was created by a self-developed program. A splitting test was conducted to predict the stress distributions of the asphalt mixture and verify the rationality of the 3D model.

  11. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    Full Text Available A mixed traffic flow feature is present on urban arterials in China due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow view. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.

  12. Multiphase flow modeling of molten material-vapor-liquid mixtures in thermal nonequilibrium

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Park, Goon Cherl; Bang, Kwang Hyun

    2000-01-01

    This paper presents a numerical model of multiphase flow of the mixtures of molten material-liquid-vapor, particularly in thermal nonequilibrium. It is a two-dimensional, transient, three-fluid model in Eulerian coordinates. The equations are solved numerically using the finite difference method that implicitly couples the rates of phase changes, momentum, and energy exchange to determine the pressure, density, and velocity fields. To examine the model's ability to predict experimental data, calculations have been performed for tests of pouring hot particles and molten material into a water pool. The predictions show good agreement with the experimental data. It appears, however, that the interfacial heat transfer and breakup of molten material need improved models that can be applied to such high temperature, high pressure, multiphase flow conditions.

  13. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…

  14. Methods of producing epoxides from alkenes using a two-component catalyst system

    Science.gov (United States)

    Kung, Mayfair C.; Kung, Harold H.; Jiang, Jian

    2013-07-09

    Methods for the epoxidation of alkenes are provided. The methods include the steps of exposing the alkene to a two-component catalyst system in an aqueous solution in the presence of carbon monoxide and molecular oxygen under conditions in which the alkene is epoxidized. The two-component catalyst system comprises a first catalyst that generates peroxides or peroxy intermediates during oxidation of CO with molecular oxygen and a second catalyst that catalyzes the epoxidation of the alkene using the peroxides or peroxy intermediates. A catalyst system composed of particles of suspended gold and titanium silicalite is one example of a suitable two-component catalyst system.

  15. Onsager Vortex Formation in Two-component Bose-Einstein Condensates

    Science.gov (United States)

    Han, Junsik; Tsubota, Makoto

    2018-06-01

    We numerically study the dynamics of quantized vortices in two-dimensional two-component Bose-Einstein condensates (BECs) trapped by a box potential. For one-component BECs in a box potential, it is known that quantized vortices form Onsager vortices, which are clusters of same-sign vortices. We confirm that the vortices of the two components spatially separate from each other — even for miscible two-component BECs — suppressing the formation of Onsager vortices. This phenomenon is caused by the repulsive interaction between vortices belonging to different components, hence, suggesting a new possibility for vortex phase separation.

  16. Weak nonlinear matter waves in a trapped two-component Bose-Einstein condensates

    International Nuclear Information System (INIS)

    Yong Wenmei; Xue Jukui

    2008-01-01

    The dynamics of the weak nonlinear matter solitary waves in two-component Bose-Einstein condensates (BEC) with a cigar-shaped external potential are investigated analytically by a perturbation method. In the small amplitude limit, the two components can be decoupled and the dynamics of solitary waves are governed by a variable-coefficient Korteweg-de Vries (KdV) equation. The reduction to the KdV equation may be useful to understand the dynamics of nonlinear matter waves in two-component BEC. The analytical expressions for the evolution of the soliton, emitted radiation profiles, and soliton oscillation frequency are also obtained.
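    The record does not reproduce the reduced equation; for orientation only, a variable-coefficient KdV equation generically takes the form below, where the coefficient functions are set by the slowly varying cigar-shaped trap and are not specified here.

```latex
% Generic variable-coefficient KdV form; \alpha(t) and \beta(t) depend on the trap profile
% and are assumptions of this sketch, not the coefficients derived in the paper.
\frac{\partial u}{\partial t}
  + \alpha(t)\, u\, \frac{\partial u}{\partial x}
  + \beta(t)\, \frac{\partial^{3} u}{\partial x^{3}} = 0
```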

  17. Counterbalancing Regulation in Response Memory of a Positively Autoregulated Two-Component System.

    Science.gov (United States)

    Gao, Rong; Godfrey, Katherine A; Sufian, Mahir A; Stock, Ann M

    2017-09-15

    Fluctuations in nutrient availability often result in recurrent exposures to the same stimulus conditions. The ability to memorize the past event and use the "memory" to make adjustments to current behaviors can lead to a more efficient adaptation to the recurring stimulus. A short-term phenotypic memory can be conferred via carryover of the response proteins to facilitate the recurrent response, but the additional accumulation of response proteins can lead to a deviation from response homeostasis. We used the Escherichia coli PhoB/PhoR two-component system (TCS) as a model system to study how cells cope with the recurrence of environmental phosphate (Pi) starvation conditions. We discovered that "memory" of prior Pi starvation can exert distinct effects through two regulatory pathways, the TCS signaling pathway and the stress response pathway. Although carryover of TCS proteins can lead to higher initial levels of transcription factor PhoB and a faster initial response in prestarved cells than in cells not starved, the response enhancement can be overcome by an earlier and greater repression of promoter activity in prestarved cells due to the memory of the stress response. The repression counterbalances the carryover of the response proteins, leading to a homeostatic response whether or not cells are prestimulated. A computational model based on sigma factor competition was developed to understand the memory of stress response and to predict the homeostasis of other PhoB-regulated response proteins. Our insight into the history-dependent PhoBR response may provide a general understanding of how TCSs respond to recurring stimuli and adapt to fluctuating environmental conditions. IMPORTANCE Bacterial cells in their natural environments experience scenarios that are far more complex than are typically replicated in laboratory experiments. The architectures of signaling systems and the integration of multiple adaptive pathways have evolved to deal with such complexity

  18. Counterbalancing Regulation in Response Memory of a Positively Autoregulated Two-Component System

    Science.gov (United States)

    Gao, Rong; Godfrey, Katherine A.; Sufian, Mahir A.

    2017-01-01

    ABSTRACT Fluctuations in nutrient availability often result in recurrent exposures to the same stimulus conditions. The ability to memorize the past event and use the “memory” to make adjustments to current behaviors can lead to a more efficient adaptation to the recurring stimulus. A short-term phenotypic memory can be conferred via carryover of the response proteins to facilitate the recurrent response, but the additional accumulation of response proteins can lead to a deviation from response homeostasis. We used the Escherichia coli PhoB/PhoR two-component system (TCS) as a model system to study how cells cope with the recurrence of environmental phosphate (Pi) starvation conditions. We discovered that “memory” of prior Pi starvation can exert distinct effects through two regulatory pathways, the TCS signaling pathway and the stress response pathway. Although carryover of TCS proteins can lead to higher initial levels of transcription factor PhoB and a faster initial response in prestarved cells than in cells not starved, the response enhancement can be overcome by an earlier and greater repression of promoter activity in prestarved cells due to the memory of the stress response. The repression counterbalances the carryover of the response proteins, leading to a homeostatic response whether or not cells are prestimulated. A computational model based on sigma factor competition was developed to understand the memory of stress response and to predict the homeostasis of other PhoB-regulated response proteins. Our insight into the history-dependent PhoBR response may provide a general understanding of how TCSs respond to recurring stimuli and adapt to fluctuating environmental conditions. IMPORTANCE Bacterial cells in their natural environments experience scenarios that are far more complex than are typically replicated in laboratory experiments. The architectures of signaling systems and the integration of multiple adaptive pathways have evolved to deal

  19. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
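    The sparsity step can be sketched with scikit-learn's l1-penalized logistic regression; the fuzzy Gaussian mixture classification stage is omitted, and the "randomness features" below are synthetic stand-ins rather than real traffic statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Synthetic stand-in for randomness test statistics of traffic blocks: only a few of the
# 40 features actually separate encrypted from plaintext streams.
X = rng.normal(size=(600, 40))
y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + 0.3 * rng.normal(size=600) > 0).astype(int)

# l1-regularized logistic regression gives a sparse weighting of the randomness features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
kept = np.flatnonzero(clf.coef_.ravel())
print("non-zero features:", kept, " training accuracy:", clf.score(X, y))
```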

  20. The potential energy landscape in the Lennard-Jones binary mixture model

    International Nuclear Information System (INIS)

    Sampoli, M; Benassi, P; Eramo, R; Angelani, L; Ruocco, G

    2003-01-01

    The potential energy landscape in the Kob-Andersen Lennard-Jones binary mixture model has been studied carefully from the liquid down to the supercooled regime, from T = 2 down to 0.46. One thousand independent configurations along the time evolution locus have been examined at each temperature investigated. From the starting configuration, we searched for the nearest saddle (or quasi-saddle) and minimum of the potential energy. The vibrational densities of states for the starting and the two derived configurations have been evaluated. Besides the number of negative eigenvalues of the saddles, other quantities show some signature of the approach of the dynamical arrest temperature.
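    For reference, the Kob-Andersen binary mixture is built on the standard Lennard-Jones pair parameters sketched below (reduced units, 80:20 A:B); the landscape analysis itself (saddle and minimum searches, densities of states) is beyond this snippet, and the pair at the minimum is just a worked check.

```python
import numpy as np

# Standard Kob-Andersen binary Lennard-Jones parameters (80:20 A:B mixture), in reduced units.
EPS = {("A", "A"): 1.0, ("A", "B"): 1.5, ("B", "B"): 0.5}
SIG = {("A", "A"): 1.0, ("A", "B"): 0.8, ("B", "B"): 0.88}

def lj_pair_energy(r, si, sj):
    """Lennard-Jones pair energy between species si and sj at separation r (reduced units)."""
    key = tuple(sorted((si, sj)))
    eps, sig = EPS[key], SIG[key]
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Energy of an A-B pair at the potential minimum, r_min = 2^(1/6) * sigma_AB.
r_min = 2.0 ** (1.0 / 6.0) * SIG[("A", "B")]
print(lj_pair_energy(r_min, "A", "B"))   # approximately -eps_AB = -1.5
```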

  1. Mixtures of beta distributions in models of the duration of a project affected by risk

    Science.gov (United States)

    Gładysz, Barbara; Kuchta, Dorota

    2017-07-01

    This article presents a method for timetabling a project affected by risk. The times required to carry out tasks are modelled using mixtures of beta distributions. The parameters of these beta distributions are given by experts: one distribution corresponds to the duration of a task in stable conditions, with no risks materializing, and the other to the duration of a task when risks do occur. Finally, a case study is presented and analysed: the project of constructing a shopping mall in Poland.
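
    A small sketch of how such a two-component beta mixture could be sampled is given below; the mixing weight, beta shape parameters, and duration bounds are invented placeholders standing in for the expert-elicited values, not figures from the article.

        # Sketch: task duration drawn from a two-component beta mixture (stable vs. risky).
        import numpy as np

        rng = np.random.default_rng(1)
        p_risk = 0.3                 # assumed probability that the risk materializes
        a0, b0 = 2.0, 5.0            # assumed beta shape parameters, stable conditions
        a1, b1 = 5.0, 2.0            # assumed beta shape parameters, risk materializes
        lo, hi = 10.0, 30.0          # assumed task duration bounds in days

        def sample_duration(n):
            """Draw n task durations (in days) from the mixture."""
            risky = rng.random(n) < p_risk
            u = np.where(risky, rng.beta(a1, b1, n), rng.beta(a0, b0, n))
            return lo + (hi - lo) * u

        d = sample_duration(10_000)
        print("mean:", d.mean(), "95th percentile:", np.percentile(d, 95))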

  2. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    Science.gov (United States)

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  3. Permeability of EVOH Barrier Material Used in Automotive Applications: Metrology Development for Model Fuel Mixtures

    Directory of Open Access Journals (Sweden)

    Zhao Jing

    2015-02-01

    EVOH (Ethylene-Vinyl Alcohol) materials are widely used in automotive applications in multi-layer fuel lines and tanks owing to their excellent barrier properties to aromatic and aliphatic hydrocarbons. These barrier materials are essential to limit environmental fuel emissions and comply with the challenging requirements of fast-changing international regulations. Nevertheless, the measurement of EVOH permeability to model fuel mixtures or to their individual components is particularly difficult due to the complexity of these systems and their very low permeability, which can vary by several orders of magnitude depending on the permeating species and their relative concentrations. This paper describes the development of a new automated permeameter capable of taking up the challenge of measuring partial fluxes as low as 1 mg/(m2.day) for model fuel mixtures containing ethanol, i-octane and toluene at 50°C. The permeability results are discussed as a function of the model fuel composition, and the importance of EVOH preconditioning is emphasized for accurate permeability measurements. The last part focuses on the influence of EVOH conditioning on its mechanical properties and its microstructure, and further illustrates the specific behavior of EVOH in the presence of ethanol oxygenated fuels. The new metrology developed in this work offers new insight into the permeability properties of a leading barrier material and will help prevent the consequences of (bio)ethanol addition in fuels on environmental emissions through fuel lines and tanks.

  4. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling, to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. (c) 2015 APA, all rights reserved.
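
    To illustrate the time-varying-coefficient idea underlying TVEM (without the mixture part), the sketch below fits beta(t) through a simple polynomial basis on simulated data; it is not the authors' MixTVEM SAS macro or R function, and in practice a penalized spline basis would replace the plain polynomials.

        # Sketch of a single-class time-varying effect model: y = b0 + beta(t) * x + noise,
        # with beta(t) expanded in a polynomial basis (a stand-in for penalized splines).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 500
        t = np.sort(rng.uniform(0.0, 1.0, n))             # observation times in [0, 1]
        x = rng.normal(size=n)                            # time-varying covariate
        beta_true = np.sin(2 * np.pi * t)                 # true time-varying effect
        y = 1.0 + beta_true * x + rng.normal(scale=0.3, size=n)

        B = np.column_stack([t**p for p in range(7)])     # basis functions 1, t, ..., t^6
        design = np.column_stack([np.ones(n), B * x[:, None]])
        coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
        beta_hat = B @ coefs[1:]                          # estimated beta(t) at observed times
        print("max abs error in beta(t):", np.abs(beta_hat - beta_true).max())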

  5. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    International Nuclear Information System (INIS)

    Stroev, N E; Iosilevskiy, I L

    2016-01-01

    The non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of a non-ideal electronic Fermi gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for the Coulomb corrections to the equations of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions, 0 < X < 1. A strong “distillation” effect was found for the NCPT in the present BIM(∼) model. A similar distillation was obtained in the variant of the NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications. (paper)

  6. Features of non-congruent phase transition in modified Coulomb model of the binary ionic mixture

    Science.gov (United States)

    Stroev, N. E.; Iosilevskiy, I. L.

    2016-11-01

    The non-congruent gas-liquid phase transition (NCPT) has been studied previously in a modified Coulomb model of a binary ionic mixture C(+6) + O(+8) on a uniformly compressible ideal electronic background /BIM(∼)/. The features of the NCPT in an improved version of the BIM(∼) model for the same mixture on a background of a non-ideal electronic Fermi gas, and its comparison with the previous calculations, are the subject of the present study. Analytical fits for the Coulomb corrections to the equations of state of the electronic and ionic subsystems were used in the present calculations within the Gibbs-Guggenheim conditions of non-congruent phase equilibrium. Parameters of the critical point-line were calculated over the entire range of proportions of mixed ions, 0 < X < 1. A strong “distillation” effect was found for the NCPT in the present BIM(∼) model. A similar distillation was obtained in the variant of the NCPT in dense nuclear matter. The absence of azeotropic compositions was revealed in the studied variants of BIM(∼), in contrast to the explicit existence of azeotropic compositions for the NCPT in chemically reacting plasmas and in astrophysical applications.

  7. Assessing clustering strategies for Gaussian mixture filtering a subsurface contaminant model

    KAUST Repository

    Liu, Bo

    2016-02-03

    An ensemble-based Gaussian mixture (GM) filtering framework is studied in this paper in terms of its dependence on the choice of the clustering method to construct the GM. In this approach, a number of particles sampled from the posterior distribution are first integrated forward with the dynamical model for forecasting. A GM representation of the forecast distribution is then constructed from the forecast particles. Once an observation becomes available, the forecast GM is updated according to Bayes’ rule. This leads to (i) a Kalman filter-like update of the particles, and (ii) a Particle filter-like update of their weights, generalizing the ensemble Kalman filter update to non-Gaussian distributions. We focus on investigating the impact of the clustering strategy on the behavior of the filter. Three different clustering methods for constructing the prior GM are considered: (i) a standard kernel density estimation, (ii) clustering with a specified mixture component size, and (iii) adaptive clustering (with a variable GM size). Numerical experiments are performed using a two-dimensional reactive contaminant transport model in which the contaminant concentration and the heterogeneous hydraulic conductivity fields are estimated within a confined aquifer using solute concentration data. The experimental results suggest that the performance of the GM filter is sensitive to the choice of the GM model. In particular, increasing the size of the GM does not necessarily result in improved performance. In this respect, the best results are obtained with the proposed adaptive clustering scheme.
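
    A toy version of the analysis step described above, for a scalar state with a linear observation, is sketched below: a Gaussian mixture is fitted to the forecast particles, each component receives a Kalman-like update, and the component weights receive a particle-filter-like reweighting. The numbers and the scalar setting are illustrative assumptions; the paper's contaminant-transport application is far higher dimensional.

        # Sketch of one ensemble Gaussian-mixture filter analysis step for a scalar state.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        forecast = rng.normal(loc=2.0, scale=1.0, size=500).reshape(-1, 1)   # forecast particles
        y_obs, r = 2.6, 0.2**2                                               # observation and its error variance

        gm = GaussianMixture(n_components=3, random_state=0).fit(forecast)
        means = gm.means_.ravel()
        variances = gm.covariances_.ravel()
        weights = gm.weights_.copy()

        # Kalman-like update of each mixture component (observation operator H = 1).
        gains = variances / (variances + r)
        post_means = means + gains * (y_obs - means)
        post_vars = (1.0 - gains) * variances

        # Particle-filter-like update of the component weights via the innovation likelihood.
        innov_var = variances + r
        lik = np.exp(-0.5 * (y_obs - means) ** 2 / innov_var) / np.sqrt(2 * np.pi * innov_var)
        post_weights = weights * lik
        post_weights /= post_weights.sum()

        print("posterior mean:", np.sum(post_weights * post_means))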

  8. Modeling Plasma-based CO2 and CH4 Conversion in Mixtures with N2, O2 and H2O: the Bigger Plasma Chemistry Picture

    KAUST Repository

    Wang, Weizong

    2018-01-18

    Due to the unique properties of plasma technology, its use in gas conversion applications is gaining significant interest around the globe. Plasma-based CO2 and CH4 conversion have become major research areas. Many investigations have already been performed regarding the single component gases, i.e. CO2 splitting and CH4 reforming, as well as for two component mixtures, i.e. dry reforming of methane (CO2/CH4), partial oxidation of methane (CH4/O2), artificial photosynthesis (CO2/H2O), CO2 hydrogenation (CO2/H2), and even first steps towards the influence of N2 impurities have been taken, i.e. CO2/N2 and CH4/N2. In this feature article we briefly discuss the advances made in literature for these different steps from a plasma chemistry modeling point of view. Subsequently, we present a comprehensive plasma chemistry set, combining the knowledge gathered in this field so far, and supported with extensive experimental data. This set can be used for chemical kinetics plasma modeling for all possible combinations of CO2, CH4, N2, O2 and H2O, to investigate the bigger picture of the underlying plasmachemical pathways for these mixtures in a dielectric barrier discharge plasma. This is extremely valuable for the optimization of existing plasma-based CO2 conversion and CH4 reforming processes, as well as for investigating the influence of N2, O2 and H2O on these processes, and even to support plasma-based multi-reforming processes.

  9. Improved predictive model for n-decane kinetics across species, as a component of hydrocarbon mixtures.

    Science.gov (United States)

    Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J

    2008-07-01

    n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel JP-8 or to industrial solvents in vapor, aerosol, and liquid forms all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PC), fitted to 4-h exposure kinetic data, were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver. Flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that the parameters describing fat uptake were, next to blood:air partitioning and pulmonary ventilation, the most critical in determining overall systemic circulation and uptake in other tissues. In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post-exposure.
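
    As a rough illustration of the flow-limited part of such a PBPK structure (the published model uses diffusion limitation for several tissues and measured coefficients), a skeleton with a blood/fat/rest-of-body split and entirely hypothetical parameter values might look as follows.

        # Sketch of a flow-limited PBPK skeleton (fat + rest-of-body) during an inhalation
        # exposure. All parameter values are hypothetical placeholders, and metabolism is
        # omitted; the published n-decane model is more detailed.
        import numpy as np
        from scipy.integrate import solve_ivp

        qp, qc = 5.0, 5.0            # alveolar ventilation and cardiac output (L/h), assumed
        q_fat, q_rest = 0.5, 4.5     # tissue blood flows (L/h), assumed
        v_fat, v_rest = 0.5, 10.0    # tissue volumes (L), assumed
        p_ba, p_fat, p_rest = 50.0, 300.0, 5.0   # blood:air and tissue:blood partition coefficients, assumed
        c_inh = 1.0                  # inhaled concentration (mg/L) during exposure, assumed

        def rhs(t, y):
            a_fat, a_rest = y
            cv_fat, cv_rest = a_fat / (v_fat * p_fat), a_rest / (v_rest * p_rest)
            cv = (q_fat * cv_fat + q_rest * cv_rest) / qc          # mixed venous blood
            c_exp = c_inh if t < 4.0 else 0.0                      # 4-h exposure, then clean air
            ca = (qc * cv + qp * c_exp) / (qc + qp / p_ba)         # arterial blood concentration
            return [q_fat * (ca - cv_fat), q_rest * (ca - cv_rest)]

        sol = solve_ivp(rhs, (0.0, 48.0), [0.0, 0.0], max_step=0.1)
        print("amount in fat at 48 h (mg):", sol.y[0, -1])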

  10. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  11. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Science.gov (United States)

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  12. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed
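
    The single-method building block of such a hierarchical abundance model is the standard N-mixture likelihood, sketched below for one site with hypothetical counts; the joint hair-trap/bear-rub model in the article adds a second detection component and covariates, which this sketch omits.

        # Sketch of the basic N-mixture likelihood at one site: latent abundance N ~ Poisson(lam),
        # repeated counts y_j ~ Binomial(N, p), with N summed out up to a truncation n_max.
        import numpy as np
        from scipy.stats import poisson, binom

        def site_log_likelihood(counts, lam, p, n_max=200):
            """Marginal log-likelihood of one site's repeated counts."""
            n_vals = np.arange(max(counts), n_max + 1)
            log_prior = poisson.logpmf(n_vals, lam)
            log_det = sum(binom.logpmf(y, n_vals, p) for y in counts)
            return np.logaddexp.reduce(log_prior + log_det)

        counts = [3, 5, 4]   # hypothetical repeat-survey counts at one site
        print(site_log_likelihood(counts, lam=6.0, p=0.6))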

  13. New Alcohol and Onyx Mixture for Embolization: Feasibility and Proof of Concept in Both In Vitro and In Vivo Models

    Energy Technology Data Exchange (ETDEWEB)

    Saeed Kilani, Mohammad, E-mail: msaeedkilani@gmail.com, E-mail: mohammadalikilani@yahoo.com [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Zehtabi, Fatemeh, E-mail: fatemeh.zehtabi@gmail.com; Lerouge, Sophie, E-mail: Sophie.Lerouge@etsmtl.ca [école de technologie supérieure (ETS) & CHUM Research center (CRCHUM), Department of Mechanical Engineering (Canada); Soulez, Gilles, E-mail: gilles.soulez.chum@ssss.gouv.qc.ca [Centre Hospitalier de l’Université de Montréal, Department of Radiology (Canada); Bartoli, Jean Michel, E-mail: jean-michel.bartoli@ap-hm.fr [University Hospital Timone, Department of Medical Imaging (France); Vidal, Vincent, E-mail: Vincent.VIDAL@ap-hm.fr [Centre Hospitalier Universitaire (CHRU) de Lille, Hôpital cardiologique (France); Badran, Mohammad F., E-mail: mfbadran@hotmail.com [King Faisal Specialist Hospital and Research Center, Radiology Department (Saudi Arabia)

    2017-05-15

    Introduction: Onyx and ethanol are well-known embolic and sclerotic agents that are frequently used in embolization. These agents present advantages and disadvantages regarding visibility, injection control and penetration depth. Mixing both products might yield a new product with different characteristics. The aim of this study is to evaluate the injectability, radiopacity, and mechanical and occlusive properties of different mixtures of Onyx 18 and ethanol in vitro and in vivo (in a swine model). Materials and Methods: Various Onyx 18 and ethanol formulations were prepared and tested in vitro for their injectability, solidification rate and shrinkage, cohesion and occlusive properties. In vivo tests were performed using 3 swine. Ease of injection, radiopacity, cohesiveness and penetration were analyzed using fluoroscopy and high-resolution CT. Results: All mixtures were easy to inject through a microcatheter with no resistance or blockage in vitro and in vivo. The 50%-ethanol mixture showed delayed copolymerization with fragmentation and proximal occlusion. The 75%-ethanol mixture showed poor radiopacity in vivo and was not tested in vitro. The 25%-ethanol mixture showed good occlusive properties and acceptable penetration and radiopacity. Conclusion: Mixing Onyx and ethanol is feasible. The mixture of 25% ethanol and 75% Onyx 18 could be a new sclero-embolic agent. Further research is needed to study the chemical changes of the mixture, to confirm the significance of the added sclerotic effect and to find out the ideal mixture percentages.

  14. Thermodynamics of two component gaseous and solid state plasmas at any degeneracy

    International Nuclear Information System (INIS)

    Kraeft, W.D.; Stolzmann, W.; Fromhold-Treu, I.; Rother, T.

    1988-10-01

    We give the results of thermodynamical calculations for two-component plasmas, which are of interest for dense hydrogen, noble gas and alkali plasmas, as well as for electron-hole plasmas in optically excited semiconductors. 25 refs, 4 figs

  15. Isomerization and dissociation in competition: the two-component dissociation rates of methyl acetate ions

    Science.gov (United States)

    Mazyar, Oleg A.; Mayer, Paul M.; Baer, Tomas

    1997-11-01

    Threshold photoelectron-photoion coincidence (TPEPICO) spectroscopy has been used to investigate the unimolecular chemistry of metastable methyl acetate ions, CH3COOCH3.+. The rate of molecular ion fragmentation with the loss of CH3O. and CH2OH radicals as a function of ion internal energy was obtained from the coincidence data and used in conjunction with Rice-Ramsperger-Kassel-Marcus and ab initio molecular orbital calculations to model the dissociation/isomerization mechanism of the methyl acetate ion (A). The data were found to be consistent with the mechanism involving a hydrogen-bridged complex CH3CO···H···OCH2.+ (E) as the direct precursor of the observed fragments CH3CO+ and CH2OH.. The two-component decay rates were modeled with a three-well-two-product potential energy surface including the distonic ion CH3C(OH)OCH2.+ (B) and enol isomer CH2C(OH)OCH3.+ (C), which are formed from the methyl acetate ion by two consecutive [1,4]-hydrogen shifts. The 0 K heats of formation of isomers B and C as well as transition states TSAB, TSBC, and TSBE (relative to isomer A) were calculated from Rice-Ramsperger-Kassel-Marcus (RRKM) theory.

  16. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.

  17. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    International Nuclear Information System (INIS)

    Liu, Yen; Vinokur, Marcel; Panesi, Marco; Sahai, Amal

    2015-01-01

    relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations

  18. Bayesian Non-Parametric Mixtures of GARCH(1,1) Models

    Directory of Open Access Journals (Sweden)

    John W. Lau

    2012-01-01

    Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. The process is a generalisation of the Dirichlet process typically used in nonparametric models for time-dependent data; it provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.
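
    For reference, the single-regime GARCH(1,1) recursion that the mixture generalizes can be simulated in a few lines; the parameter values below are illustrative and are not estimates for the S&P 500 series analysed in the article.

        # Sketch of a single GARCH(1,1) regime:
        #   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,  r_t = sigma_t * z_t.
        import numpy as np

        rng = np.random.default_rng(4)
        omega, alpha, beta = 0.05, 0.08, 0.90     # illustrative GARCH(1,1) parameters
        T = 1000
        r = np.zeros(T)
        sigma2 = np.zeros(T)
        sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance as the starting value

        for t in range(1, T):
            sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
            r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

        print("sample variance:", r.var(), "unconditional variance:", omega / (1 - alpha - beta))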

  19. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  20. Competitive Adsorption of a Two-Component Gas on a Deformable Adsorbent

    OpenAIRE

    Usenko, A. S.

    2013-01-01

    We investigate the competitive adsorption of a two-component gas on the surface of an adsorbent whose adsorption properties vary during adsorption due to the adsorbent deformation. The adsorption isotherms obtained for a deformable adsorbent differ essentially both from the classical Langmuir adsorption isotherms of a two-component gas and from the adsorption isotherms of a one-component gas that take into account variations in the adsorption properties of the adsorbent during adsorption. We establi...
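
    The classical rigid-adsorbent baseline that the article modifies is the competitive Langmuir isotherm for two components, written as the short function below; the deformable-adsorbent isotherms derived in the article differ from this form.

        # Classical competitive Langmuir isotherms for a two-component gas on a rigid adsorbent:
        #   theta_i = K_i * p_i / (1 + K_1 * p_1 + K_2 * p_2)
        def langmuir_coverages(p1, p2, k1, k2):
            """Fractional coverages (theta1, theta2) for partial pressures p1, p2."""
            denom = 1.0 + k1 * p1 + k2 * p2
            return k1 * p1 / denom, k2 * p2 / denom

        theta1, theta2 = langmuir_coverages(p1=0.5, p2=1.5, k1=2.0, k2=0.8)
        print(theta1, theta2, "empty fraction:", 1.0 - theta1 - theta2)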

  1. Mapping behavioral landscapes for animal movement: a finite mixture modeling approach

    Science.gov (United States)

    Tracey, Jeff A.; Zhu, Jun; Boydston, Erin E.; Lyren, Lisa M.; Fisher, Robert N.; Crooks, Kevin R.

    2013-01-01

    Because of its role in many ecological processes, movement of animals in response to landscape features is an important subject in ecology and conservation biology. In this paper, we develop models of animal movement in relation to objects or fields in a landscape. We take a finite mixture modeling approach in which the component densities are conceptually related to different choices for movement in response to a landscape feature, and the mixing proportions are related to the probability of selecting each response as a function of one or more covariates. We combine particle swarm optimization and an Expectation-Maximization (EM) algorithm to obtain maximum likelihood estimates of the model parameters. We use this approach to analyze data for movement of three bobcats in relation to urban areas in southern California, USA. A behavioral interpretation of the models revealed similarities and differences in bobcat movement response to urbanization. All three bobcats avoided urbanization by moving either parallel to urban boundaries or toward less urban areas as the proportion of urban land cover in the surrounding area increased. However, one bobcat, a male with a dispersal-like large-scale movement pattern, avoided urbanization at lower densities and responded strictly by moving parallel to the urban edge. The other two bobcats, which were both residents and occupied similar geographic areas, avoided urban areas using a combination of movements parallel to the urban edge and movement toward areas of less urbanization. However, the resident female appeared to exhibit greater repulsion at lower levels of urbanization than the resident male, consistent with empirical observations of bobcats in southern California. Using the parameterized finite mixture models, we mapped behavioral states to geographic space, creating a representation of a behavioral landscape. This approach can provide guidance for conservation planning based on analysis of animal movement data using
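
    The estimation machinery behind such a finite mixture is the familiar EM alternation; a generic two-component Gaussian example on simulated one-dimensional data is sketched below. The article's model uses movement-specific component densities, covariate-dependent mixing proportions, and particle swarm optimization for initialization, none of which appear in this sketch.

        # Generic EM for a two-component 1-D Gaussian mixture (simulated data).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

        w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: posterior responsibility of each component for each observation.
            dens = w * norm.pdf(x[:, None], mu, sd)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted updates of mixing proportions, means, and standard deviations.
            nk = resp.sum(axis=0)
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("weights:", w, "means:", mu, "sds:", sd)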

  2. Characterization of the pharmacokinetics of gasoline using PBPK modeling with a complex mixtures chemical lumping approach.

    Science.gov (United States)

    Dennison, James E; Andersen, Melvin E; Yang, Raymond S H

    2003-09-01

    Gasoline consists of a few toxicologically significant components and a large number of other hydrocarbons in a complex mixture. By using an integrated, physiologically based pharmacokinetic (PBPK) modeling and lumping approach, we have developed a method for characterizing the pharmacokinetics (PKs) of gasoline in rats. The PBPK model tracks selected target components (benzene, toluene, ethylbenzene, o-xylene [BTEX], and n-hexane) and a lumped chemical group representing all nontarget components, with competitive metabolic inhibition between all target compounds and the lumped chemical. PK data was acquired by performing gas uptake PK studies with male F344 rats in a closed chamber. Chamber air samples were analyzed every 10-20 min by gas chromatography/flame ionization detection and all nontarget chemicals were co-integrated. A four-compartment PBPK model with metabolic interactions was constructed using the BTEX, n-hexane, and lumped chemical data. Target chemical kinetic parameters were refined by studies with either the single chemical alone or with all five chemicals together. o-Xylene, at high concentrations, decreased alveolar ventilation, consistent with respiratory irritation. A six-chemical interaction model with the lumped chemical group was used to estimate lumped chemical partitioning and metabolic parameters for a winter blend of gasoline with methyl t-butyl ether and a summer blend without any oxygenate. Computer simulation results from this model matched well with experimental data from single chemical, five-chemical mixture, and the two blends of gasoline. The PBPK model analysis indicated that metabolism of individual components was inhibited up to 27% during the 6-h gas uptake experiments of gasoline exposures.

  3. Chloroplast two-component systems: evolution of the link between photosynthesis and gene expression.

    Science.gov (United States)

    Puthiyaveetil, Sujith; Allen, John F

    2009-06-22

    Two-component signal transduction, consisting of sensor kinases and response regulators, is the predominant signalling mechanism in bacteria. This signalling system originated in prokaryotes and has spread throughout the eukaryotic domain of life through endosymbiotic, lateral gene transfer from the bacterial ancestors and early evolutionary precursors of eukaryotic, cytoplasmic, bioenergetic organelles-chloroplasts and mitochondria. Until recently, it was thought that two-component systems inherited from an ancestral cyanobacterial symbiont are no longer present in chloroplasts. Recent research now shows that two-component systems have survived in chloroplasts as products of both chloroplast and nuclear genes. Comparative genomic analysis of photosynthetic eukaryotes shows a lineage-specific distribution of chloroplast two-component systems. The components and the systems they comprise have homologues in extant cyanobacterial lineages, indicating their ancient cyanobacterial origin. Sequence and functional characteristics of chloroplast two-component systems point to their fundamental role in linking photosynthesis with gene expression. We propose that two-component systems provide a coupling between photosynthesis and gene expression that serves to retain genes in chloroplasts, thus providing the basis of cytoplasmic, non-Mendelian inheritance of plastid-associated characters. We discuss the role of this coupling in the chronobiology of cells and in the dialogue between nuclear and cytoplasmic genetic systems.

  4. Comparative Analysis of Wolbachia Genomes Reveals Streamlining and Divergence of Minimalist Two-Component Systems

    Science.gov (United States)

    Christensen, Steen; Serbus, Laura Renee

    2015-01-01

    Two-component regulatory systems are commonly used by bacteria to coordinate intracellular responses with environmental cues. These systems are composed of functional protein pairs consisting of a sensor histidine kinase and cognate response regulator. In contrast to the well-studied Caulobacter crescentus system, which carries dozens of these pairs, the streamlined bacterial endosymbiont Wolbachia pipientis encodes only two pairs: CckA/CtrA and PleC/PleD. Here, we used bioinformatic tools to compare characterized two-component system relays from C. crescentus, the related Anaplasmataceae species Anaplasma phagocytophilum and Ehrlichia chaffeensis, and 12 sequenced Wolbachia strains. We found the core protein pairs and a subset of interacting partners to be highly conserved within Wolbachia and these other Anaplasmataceae. Genes involved in two-component signaling were positioned differently within the various Wolbachia genomes, whereas the local context of each gene was conserved. Unlike Anaplasma and Ehrlichia, Wolbachia two-component genes were more consistently found clustered with metabolic genes. The domain architecture and key functional residues standard for two-component system proteins were well-conserved in Wolbachia, although residues that specify cognate pairing diverged substantially from other Anaplasmataceae. These findings indicate that Wolbachia two-component signaling pairs share considerable functional overlap with other α-proteobacterial systems, whereas their divergence suggests the potential for regulatory differences and cross-talk. PMID:25809075

  5. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach outperforms a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach to Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an AP1000 Advanced Pressurized reactor.

  6. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    Science.gov (United States)

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is screening of differentially expressed genes as candidates for further investigation. Because of limited resources in this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates the differential and non-differential components and allows information borrowing across differential genes with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than parametric prior distributions, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational statistics and data analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression. Copyright © 2012 John Wiley & Sons, Ltd.

  7. Firing rate estimation using infinite mixture models and its application to neural decoding.

    Science.gov (United States)

    Shibue, Ryohei; Komaki, Fumiyasu

    2017-11-01

    Neural decoding is a framework for reconstructing external stimuli from spike trains recorded by various neural recordings. Kloosterman et al. proposed a new decoding method using marked point processes (Kloosterman F, Layton SP, Chen Z, Wilson MA. J Neurophysiol 111: 217-227, 2014). This method does not require spike sorting and thereby improves decoding accuracy dramatically. In this method, they used kernel density estimation to estimate intensity functions of marked point processes. However, the use of kernel density estimation causes problems such as low decoding accuracy and high computational costs. To overcome these problems, we propose a new decoding method using infinite mixture models to estimate intensity. The proposed method improves decoding performance in terms of accuracy and computational speed. We apply the proposed method to simulation and experimental data to verify its performance. NEW & NOTEWORTHY We propose a new neural decoding method using infinite mixture models and nonparametric Bayesian statistics. The proposed method improves decoding performance in terms of accuracy and computation speed. We have successfully applied the proposed method to position decoding from spike trains recorded in a rat hippocampus. Copyright © 2017 the American Physiological Society.
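
    A rough stand-in for the infinite-mixture intensity estimate is a truncated Dirichlet-process Gaussian mixture, available in scikit-learn as BayesianGaussianMixture; the sketch below fits it to hypothetical two-dimensional spike features and is not the authors' marked-point-process decoder.

        # Sketch: truncated Dirichlet-process Gaussian mixture fit to hypothetical spike
        # features (e.g., position at spike time and a waveform amplitude).
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(6)
        spikes = np.vstack([rng.normal([0, 0], 0.5, (300, 2)),
                            rng.normal([3, 2], 0.7, (200, 2))])

        dpgmm = BayesianGaussianMixture(
            n_components=20,                               # truncation level, not the final count
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(spikes)

        active = dpgmm.weights_ > 0.01                     # components with appreciable weight
        print("effective number of components:", active.sum())
        density_at_origin = np.exp(dpgmm.score_samples(np.array([[0.0, 0.0]])))
        print("estimated density at the origin:", density_at_origin[0])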

  8. Scattering and radiative properties of semi-external versus external mixtures of different aerosol types

    International Nuclear Information System (INIS)

    Mishchenko, Michael I.; Liu Li; Travis, Larry D.; Lacis, Andrew A.

    2004-01-01

    The superposition T-matrix method is used to compute the scattering of unpolarized light by semi-external aerosol mixtures in the form of polydisperse, randomly oriented two-particle clusters with touching components. The results are compared with those for composition-equivalent external aerosol mixtures, in which the components are widely separated and scatter light in isolation from each other. It is concluded that aggregation is likely to have a relatively weak effect on scattering and radiative properties of two-component tropospheric aerosols and can be replaced by the much simpler external-mixture model in remote sensing studies and atmospheric radiation balance computations

  9. Effects of non-uniform temperature gradients on surface tension driven two component magneto convection in a porous- fluid system

    Science.gov (United States)

    Manjunatha, N.; Sumithra, R.

    2018-04-01

    The problem of surface tension driven two-component magnetoconvection is investigated in a porous-fluid system, consisting of an incompressible, two-component, electrically conducting fluid-saturated porous layer above which lies a layer of the same fluid, in the presence of a uniform vertical magnetic field. The lower boundary of the porous layer is rigid and the upper boundary of the fluid layer is free, with surface-tension effects depending on both temperature and concentration; both these boundaries are insulating to heat and mass. At the interface, the velocity, shear and normal stress, heat and heat flux, and mass and mass flux are assumed to be continuous, as appropriate for the Darcy-Brinkman model. The eigenvalue problem is solved for linear, parabolic and inverted parabolic temperature profiles, and the corresponding thermal Marangoni number is obtained for different important physical parameters.

  10. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A. E.; Caco, Ana I.

    2003-01-01

    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane ...

  11. Measurement and Modeling of Surface Tensions of Asymmetric Systems: Heptane, Eicosane, Docosane, Tetracosane and their Mixtures

    DEFF Research Database (Denmark)

    Queimada, Antonio; Silva, Filipa A.E; Caco, Ana I.

    2003-01-01

    To extend the surface tension database for heavy or asymmetric n-alkane mixtures, measurements were performed using the Wilhelmy plate method. Measured systems included the binary mixtures heptane + eicosane, heptane + docosane and heptane + tetracosane and the ternary mixture heptane + eicosane...

  12. Quantum characteristics of occurrence scattering time in two-component non-ideal plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Woo-Pyo [Department of Electronics Engineering, Catholic University of Daegu, Hayang, 712-702 (Korea, Republic of); Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr [Department of Applied Physics and Department of Bionanotechnology, Hanyang University, Ansan, Kyunggi-Do 15588 (Korea, Republic of); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180-3590 (United States)

    2015-10-30

    The quantum diffraction and plasma screening effects on the occurrence time for the collision process are investigated in two-component non-ideal plasmas. The micropotential model taking into account the quantum diffraction and screening with the eikonal analysis is employed to derive the occurrence time as functions of the collision energy, density parameter, Debye length, de Broglie wavelength, and scattering angle. It is shown that the occurrence time for forward scattering directions decreases the tendency of time-advance with increasing scattering angle and de Broglie wavelength. However, it is found that the occurrence time shows the oscillatory time-advance and time-retarded behaviors with increasing scattering angle. It is found that the plasma screening effect enhances the tendency of time-advance on the occurrence time for forward scattering regions. It is also shown that the quantum diffraction effect suppresses the occurrence time advance for forward scattering angles. In addition, it is shown that the occurrence time advance decreases with an increase of the collision energy. - Highlights: • The quantum diffraction and screening effects on the occurrence scattering time are investigated in non-ideal plasmas. • It is shown that the quantum diffraction effect suppresses the occurrence time advance for forward scattering angles. • It is found that the plasma screening effect enhances the tendency of time-advance on the occurrence time.

  13. Temporal evolution of photon energy emitted from two-component advective flows: origin of time lag

    Science.gov (United States)

    Chatterjee, Arka; Chakrabarti, Sandip K.; Ghosh, Himadri

    2017-12-01

    X-ray time lag of black hole candidates contains important information regarding the emission geometry. Recently, study of time lags from observational data revealed very intriguing properties. To investigate the real cause of this lag behavior with energy and spectral states, we study photon paths inside a two-component advective flow (TCAF) which appears to be a satisfactory model to explain the spectral and timing properties. We employ the Monte Carlo simulation technique to carry out the Comptonization process. We use a relativistic thick disk in Schwarzschild geometry as the CENtrifugal pressure supported BOundary Layer (CENBOL) which is the Compton cloud. In TCAF, this is the post-shock region of the advective component. Keplerian disk on the equatorial plane which is truncated at the inner edge i.e. at the outer boundary of the CENBOL, acts as the soft photon source. Ray-tracing code is employed to track the photons to a distantly located observer. We compute the cumulative time taken by a photon during Comptonization, reflection and following the curved geometry on the way to the observer. Time lags between various hard and soft bands have been calculated. We study the variation of time lags with accretion rates, CENBOL size and inclination angle. Time lags for different energy channels are plotted for different inclination angles. The general trend of variation of time lag with QPO frequency and energy as observed in satellite data is reproduced.

  14. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-09

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  15. A modeling approach for heat conduction and radiation diffusion in plasma-photon mixture in temperature nonequilibrium

    International Nuclear Information System (INIS)

    Chang, Chong

    2016-01-01

    We present a simple approach for determining ion, electron, and radiation temperatures of heterogeneous plasma-photon mixtures, in which temperatures depend on both material type and morphology of the mixture. The solution technique is composed of solving ion, electron, and radiation energy equations for both mixed and pure phases of each material in zones containing random mixture and solving pure material energy equations in subdivided zones using interface reconstruction. Application of interface reconstruction is determined by the material configuration in the surrounding zones. In subdivided zones, subzonal inter-material energy exchanges are calculated by heat fluxes across the material interfaces. Inter-material energy exchange in zones with random mixtures is modeled using the length scale and contact surface area models. In those zones, inter-zonal heat flux in each material is determined using the volume fractions.

  16. Cosmological models described by a mixture of van der Waals fluid and dark energy

    International Nuclear Information System (INIS)

    Kremer, G.M.

    2003-01-01

    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton, (b) an accelerated period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays, (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure, and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid

  17. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by the Higher-order Statistics (HOS) (fourth-order cumulant) matching method. In order to derive the estimator of the Higher-order Statistics (HOS), the multivariate scale mixture of Gaussians (MSMG) model is applied to formulating the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method can work well for short time series.

  18. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. The speaker's voice is a robust biometric modality that has a strong impact in several application areas. In this study, a new combination learning scheme is proposed, based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning Vector Quantization (LVQ), for automatic text-independent speaker identification. Feature vectors, constituted by the Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used to train the models on the New England subset of the TIMIT database. The best results obtained are 90% for gender-independent speaker identification, 97% for male speakers, and 93% for female speakers on test data using 36 MFCC features.
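
    A minimal sketch of the front end and per-speaker modelling is given below, assuming librosa and scikit-learn are available; it trains one GMM per speaker on MFCCs from synthetic audio and omits the UBM adaptation and LVQ stages as well as the TIMIT data.

        # Sketch: MFCC features plus one Gaussian mixture per speaker, scored by average
        # log-likelihood (synthetic audio stands in for real recordings).
        import numpy as np
        import librosa
        from sklearn.mixture import GaussianMixture

        def mfcc_features(signal, sr=16000):
            # 13 MFCCs per frame, transposed to (frames, coefficients).
            return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

        rng = np.random.default_rng(7)
        sr = 16000
        train = {spk: mfcc_features(rng.normal(size=sr * 3).astype(np.float32), sr)
                 for spk in ("A", "B")}

        models = {spk: GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(f)
                  for spk, f in train.items()}

        def identify(signal, sr=16000):
            feats = mfcc_features(signal, sr)
            return max(models, key=lambda spk: models[spk].score(feats))

        print(identify(rng.normal(size=sr * 2).astype(np.float32)))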

  19. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under the dynamic systems framework. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transiting into the higher ability group.

  20. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  1. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    Science.gov (United States)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing the line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems including the spectral radiance distribution emitted from one-dimensional slabs and the radiative heat transfer in a truncated conical enclosure. It was shown that the results are accurate and physically reliable by comparing with available data. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and then the infrared signature emitted from an aircraft exhaust plume was predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. It was shown that the predicted radiative cooling for the combustion chamber is physically more accurate than other predictions, and is as accurate as that by the LBL calculations. It was found that the infrared signature of aircraft exhaust plume can also be obtained accurately, equivalent to the LBL calculations, by using the present narrow-band approach with a much improved numerical efficiency.

  2. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Directory of Open Access Journals (Sweden)

    Sanne eSchoenmakers

    2015-01-01

    Full Text Available Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.

  3. Pattern-mixture models for analyzing normal outcome data with proxy respondents.

    Science.gov (United States)

    Shardell, Michelle; Hicks, Gregory E; Miller, Ram R; Langenberg, Patricia; Magaziner, Jay

    2010-06-30

    Studies of older adults often involve interview questions regarding subjective constructs such as perceived disability. In some studies, when subjects are unable (e.g. due to cognitive impairment) or unwilling to respond to these questions, proxies (e.g. relatives or other care givers) are recruited to provide responses in place of the subject. Proxies are usually not approached to respond on behalf of subjects who respond for themselves; thus, for each subject, data from only one of the subject or proxy are available. Typically, proxy responses are simply substituted for missing subject responses, and standard complete-data analyses are performed. However, this approach may introduce measurement error and produce biased parameter estimates. In this paper, we propose using pattern-mixture models that relate non-identifiable parameters to identifiable parameters to analyze data with proxy respondents. We posit three interpretable pattern-mixture restrictions to be used with proxy data, and we propose estimation procedures using maximum likelihood and multiple imputation. The methods are applied to a cohort of elderly hip-fracture patients. (c) 2010 John Wiley & Sons, Ltd.

  4. Solvable Model of a Generic Trapped Mixture of Interacting Bosons: Many-Body and Mean-Field Properties

    Science.gov (United States)

    Klaiman, S.; Streltsov, A. I.; Alon, O. E.

    2018-04-01

    A solvable model of a generic trapped bosonic mixture, N1 bosons of mass m1 and N2 bosons of mass m2 trapped in a harmonic potential of frequency ω and interacting by harmonic inter-particle interactions of strengths λ1, λ2, and λ12, is discussed. It has recently been shown for the ground state [J. Phys. A 50, 295002 (2017)] that in the infinite-particle limit, when the interaction parameters λ1(N1 − 1), λ2(N2 − 1), λ12N1, λ12N2 are held fixed, each of the species is 100% condensed and its density per particle, as well as the total energy per particle, is given by the solution of the coupled Gross-Pitaevskii equations of the mixture. In the present work we investigate properties of the trapped generic mixture in the infinite-particle limit, and find differences between the many-body and mean-field descriptions of the mixture, despite each species being 100% condensed. We compute analytically and analyze, both for the mixture and for each species, the center-of-mass position and momentum variances, their uncertainty product, the angular-momentum variance, as well as the overlap of the exact and Gross-Pitaevskii wavefunctions of the mixture. The results obtained in this work can be considered a step forward in characterizing how important many-body effects are in a fully condensed trapped bosonic mixture in the infinite-particle limit.

  5. Gasification under CO2–Steam Mixture: Kinetic Model Study Based on Shared Active Sites

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2017-11-01

    Full Text Available In this work, char gasification of two coals (i.e., Shenfu bituminous coal and Zunyi anthracite) and a petroleum coke under a steam and CO2 mixture (steam/CO2 partial pressures, 0.025–0.075 MPa; total pressure, 0.100 MPa) and CO2/steam chemisorption of char samples were conducted in a Thermogravimetric Analyzer (TGA). Two conventional kinetic models exhibited difficulties in exactly fitting the experimental data of char–steam–CO2 gasification. Hence, a modified model, based on the Langmuir–Hinshelwood model and assuming that the char–CO2 and char–steam reactions partially share active sites, was proposed and showed high accuracy in estimating the interactions in the char–steam–CO2 reaction. Moreover, it was found that the two new model parameters (respectively characterized as the amount ratio of shared active sites to total active sites in the char–CO2 and char–steam reactions) in the modified model hardly varied with gasification conditions, and the results of chemisorption indicate that these two new model parameters mainly depended on the carbon active sites in the char samples.
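    For orientation only, a generic Langmuir–Hinshelwood-type pair of rates for the separate char–CO2 and char–steam reactions is sketched below; the rate constants k1–k6 are illustrative, and the paper's modified model additionally couples the two reactions through the fraction of shared active sites, which is not shown here.

```latex
% Illustrative Langmuir--Hinshelwood rates with competitive adsorption of the two
% gasifying agents (partial pressures P); not the paper's final shared-site model.
r_{\mathrm{CO_2}} = \frac{k_1\, P_{\mathrm{CO_2}}}{1 + k_2\, P_{\mathrm{CO_2}} + k_3\, P_{\mathrm{H_2O}}},
\qquad
r_{\mathrm{H_2O}} = \frac{k_4\, P_{\mathrm{H_2O}}}{1 + k_5\, P_{\mathrm{H_2O}} + k_6\, P_{\mathrm{CO_2}}}.
```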

  6. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly by a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, so the experimental data are often incomplete. As a result, some work that relies on IoT data cannot be carried out normally, which reduces the accuracy and reliability of the data analysis results. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. Then, we propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on Gaussian mixture model (MGI). Experimental results showed that the three models can improve the accuracy, reliability, and stability of missing value imputation greatly and effectively.
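    The abstract does not spell out the MGI algorithm; the following is a minimal sketch of one common way to impute missing entries with a Gaussian mixture model (fit on complete rows, then replace each missing entry by its conditional mean given the observed entries under the most responsible component). The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_impute(X, n_components=3, random_state=0):
    """Impute NaNs in X (n_samples x n_features) using a GMM fitted on complete rows."""
    X = np.asarray(X, dtype=float)
    complete = X[~np.isnan(X).any(axis=1)]        # assumes some complete rows exist
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=random_state).fit(complete)
    X_imp = X.copy()
    for i, row in enumerate(X_imp):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        # pick the component most responsible for the observed part of the row,
        # using the marginal Gaussian on the observed dimensions
        logp = []
        for k in range(gmm.n_components):
            mu, cov = gmm.means_[k][obs], gmm.covariances_[k][np.ix_(obs, obs)]
            diff = row[obs] - mu
            logdet = np.linalg.slogdet(cov)[1]
            logp.append(np.log(gmm.weights_[k])
                        - 0.5 * (diff @ np.linalg.solve(cov, diff) + logdet))
        k = int(np.argmax(logp))
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        # conditional mean of the missing dimensions given the observed ones
        X_imp[i, miss] = mu[miss] + cov[np.ix_(miss, obs)] @ np.linalg.solve(
            cov[np.ix_(obs, obs)], row[obs] - mu[obs])
    return X_imp
```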

  7. ADAPTIVE BACKGROUND WITH THE GAUSSIAN MIXTURE MODELS METHOD FOR REAL-TIME TRACKING

    Directory of Open Access Journals (Sweden)

    Silvia Rostianingsih

    2008-01-01

    Full Text Available Nowadays, motion tracking applications are widely used for many purposes, such as detecting traffic jams and counting how many people enter a supermarket or a mall. Motion tracking requires a method to separate the background from the tracked object. Developing such an application is not hard when tracking is performed against a static background, but it is difficult when the tracked object is in a place with a non-static background, because changing parts of the background can be recognized as the tracking area. To handle this problem, an application can be made to separate out the background in a way that adapts to the changes that occur. This application produces an adaptive background using Gaussian Mixture Models (GMM) as its method. The GMM method clusters the input pixel data based on pixel color values. After the clusters are formed, the dominant distributions are chosen as the background distributions. The application was implemented in Microsoft Visual C++ 6.0. The results of this research show that the GMM algorithm can build an adaptive background satisfactorily, as demonstrated by tests that succeeded under all given conditions. The application can be further developed so that the tracking process is integrated into the adaptive background maker process.
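    The paper implements its own GMM background model; purely as a present-day point of comparison, OpenCV ships a GMM-based background subtractor (MOG2) built on the same adaptive-background idea. A minimal sketch follows; the video file name is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")   # placeholder input video
# per-pixel mixture of Gaussians; the dominant Gaussians model the adaptive background
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # pixels not explained by the background model
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```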

  8. Transcriptome analysis of all two-component regulatory system mutants of Escherichia coli K-12.

    Science.gov (United States)

    Oshima, Taku; Aiba, Hirofumi; Masuda, Yasushi; Kanaya, Shigehiko; Sugiura, Masahito; Wanner, Barry L; Mori, Hirotada; Mizuno, Takeshi

    2002-10-01

    We have systematically examined the mRNA profiles of 36 two-component deletion mutants, which include all two-component regulatory systems of Escherichia coli, under a single growth condition. DNA microarray results revealed that the mutants belong to one of three groups based on their gene expression profiles in Luria-Bertani broth under aerobic conditions: (i) those with no or little change; (ii) those with significant changes; and (iii) those with drastic changes. Under these conditions, the anaeroresponsive ArcB/ArcA system, the osmoresponsive EnvZ/OmpR system and the response regulator UvrY showed the most drastic changes. Cellular functions such as flagellar synthesis and expression of the RpoS regulon were affected by multiple two-component systems. A high correlation coefficient of expression profile was found between several two-component mutants. Together, these results support the view that a network of functional interactions, such as cross-regulation, exists between different two-component systems. The compiled data are available at our website (http://ecoli.aist-nara.ac.jp/xp_analysis/2_components).

  9. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    Science.gov (United States)

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…

  10. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  11. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Directory of Open Access Journals (Sweden)

    Ian C McDowell

    2018-01-01

    Full Text Available Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  12. Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.

    Science.gov (United States)

    Petric, Ivan; Mustafić, Nesib

    2015-09-15

    Because the complex nature of the composting process is not well understood, there is a need for a tool that can help improve not only the prediction of process performance but also its optimization. Therefore, the main objective of this study is to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparing model outputs with experimental data obtained from the composting of a mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model predicts the process performance very well. According to the simulation results, the optimum values for air flow rate and ambient air temperature are 0.43 L min⁻¹ kg⁻¹ OM and 28 °C, respectively. On the basis of the sensitivity analysis, the maximum organic matter conversion is the most sensitive among the three objective functions. Among the twelve examined parameters, μmax,1 is the most influential parameter and X1 is the least influential parameter. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Two component injection moulding: an interface quality and bond strength dilemma

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2008-01-01

    on quality parameters of the two component parts. Most engineering applications of two component injection moulding call for high bond strength between the two polymers; on the other hand, a sharp and well-defined interface between the two polymers is required for applications like selective metallization...... of polymers, parts for micro applications and also for the aesthetic purpose of the final product. The investigation presented in this paper indicates a dilemma between obtaining reasonably good bond strength and at the same time keeping the interface quality suitable for applications. The required process...... conditions for a sharp and well-defined interface are exactly the opposite of what is congenial for higher bond strength. So in the production of two component injection moulded parts, there is a compromise to make between the interface quality and the bond strength of the two polymers. Also the injection...

  14. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with such fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
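    A minimal sketch of the two steps described above (Hu-moment features, then one GMM per ship class scored by likelihood) is given below; the training chips, class labels, and preprocessing are assumptions, not details from the paper.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def hu_features(gray_chip):
    """Log-scaled Hu moments of a grayscale (or binary) ship image chip."""
    hu = cv2.HuMoments(cv2.moments(gray_chip)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # common log scaling

def train_class_gmms(chips_by_class, n_components=2):
    """Fit one GMM on the Hu-moment features of each ship class."""
    gmms = {}
    for label, chips in chips_by_class.items():
        feats = np.vstack([hu_features(c) for c in chips])
        gmms[label] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag").fit(feats)
    return gmms

def recognize(chip, gmms):
    """Assign the chip to the class whose GMM gives the highest log-likelihood."""
    x = hu_features(chip).reshape(1, -1)
    return max(gmms, key=lambda label: gmms[label].score(x))
```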

  15. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    Science.gov (United States)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to overlapping spots and discordance in spot counts. As a solution to counting errors from overlapping spots, this study proposes a Gaussian Mixture Model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features of this classification method. Using a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image segmentation based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
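    A minimal sketch of the counting idea (fit GMMs with increasing numbers of components to the coordinates of bright pixels in a blob and keep the information-criterion-optimal count) is shown below using BIC; the thresholding scheme and the Random Forest stage of the paper are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_spots(blob, threshold=0.5, max_spots=4):
    """Estimate how many overlapping spots a 2D intensity blob contains via GMM + BIC."""
    ys, xs = np.nonzero(blob > threshold * blob.max())
    points = np.column_stack([xs, ys]).astype(float)
    best_k, best_bic = 1, np.inf
    for k in range(1, max_spots + 1):
        if len(points) <= k:              # too few pixels to support k components
            break
        gmm = GaussianMixture(n_components=k, random_state=0).fit(points)
        bic = gmm.bic(points)
        if bic < best_bic:                # lower BIC = better trade-off of fit vs. complexity
            best_k, best_bic = k, bic
    return best_k
```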

  16. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.

  17. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seeking an approximation to the true predictive distribution...

  18. Genomic outlier profile analysis: mixture models, null hypotheses, and nonparametric estimation.

    Science.gov (United States)

    Ghosh, Debashis; Chinnaiyan, Arul M

    2009-01-01

    In most analyses of large-scale genomic data sets, differential expression analysis is typically assessed by testing for differences in the mean of the distributions between 2 groups. A recent finding by Tomlins and others (2005) is of a different type of pattern of differential expression in which a fraction of samples in one group have overexpression relative to samples in the other group. In this work, we describe a general mixture model framework for the assessment of this type of expression, called outlier profile analysis. We start by considering the single-gene situation and establishing results on identifiability. We propose 2 nonparametric estimation procedures that have natural links to familiar multiple testing procedures. We then develop multivariate extensions of this methodology to handle genome-wide measurements. The proposed methodologies are compared using simulation studies as well as data from a prostate cancer gene expression study.

  19. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  20. Zero-truncated panel Poisson mixture models: Estimating the impact on tourism benefits in Fukushima Prefecture.

    Science.gov (United States)

    Narukawa, Masaki; Nohara, Katsuhito

    2018-04-01

    This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
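    For reference, the zero-truncated Poisson probability mass underlying this kind of model is the ordinary Poisson conditioned on a positive count; the mixing over unobserved individual heterogeneity described in the abstract is layered on top of this and is not shown.

```latex
% Zero-truncated Poisson pmf for a positive trip count y with rate \lambda:
P(Y = y \mid Y > 0) \;=\; \frac{e^{-\lambda}\,\lambda^{y}}{y!\,\bigl(1 - e^{-\lambda}\bigr)},
\qquad y = 1, 2, \dots
```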

  1. Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images

    Science.gov (United States)

    Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María

    2009-02-01

    Diabetic retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this ocular disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique that permits a prompt diagnosis of the disease. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database proposed by the Retinal Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.

  2. A Grasp-Pose Generation Method Based on Gaussian Mixture Models

    Directory of Open Access Journals (Sweden)

    Wenjia Wu

    2015-11-01

    Full Text Available A Gaussian Mixture Model (GMM)-based grasp-pose generation method is proposed in this paper. Through offline training, the GMM is set up and used to depict the distribution of the robot's reachable orientations. By dividing the robot's workspace into small 3D voxels and training the GMM for each voxel, a look-up table covering all the workspace is built with the x, y and z positions as the index and the GMM as the entry. Through the definition of Task Space Regions (TSR), an object's feasible grasp poses are expressed as a continuous region. With the GMM, grasp poses can be preferentially sampled from regions with high reachability probabilities in the online grasp-planning stage. The GMM can also be used as a preliminary judgement of a grasp pose's reachability. Experiments on both a simulated and a real robot show the superiority of our method over the existing method.
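    A minimal sketch of the look-up-table idea (one GMM over reachable gripper orientations per workspace voxel, indexed by the quantized position) is given below; the voxel size, the roll-pitch-yaw orientation parameterization, and the offline sample source are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

VOXEL = 0.05  # assumed voxel edge length in metres

def voxel_key(position):
    """Quantize an (x, y, z) position to its voxel index."""
    return tuple(np.floor(np.asarray(position, dtype=float) / VOXEL).astype(int))

def build_lookup(samples, n_components=3):
    """samples: iterable of (position, orientation_rpy) pairs of reachable poses
    gathered offline; returns a dict mapping voxel index -> GMM over orientations."""
    by_voxel = {}
    for pos, rpy in samples:
        by_voxel.setdefault(voxel_key(pos), []).append(rpy)
    table = {}
    for key, rpys in by_voxel.items():
        rpys = np.asarray(rpys, dtype=float)
        k = min(n_components, len(rpys))      # avoid over-fitting sparsely sampled voxels
        table[key] = GaussianMixture(n_components=k, random_state=0).fit(rpys)
    return table

def reachability_score(table, position, orientation_rpy):
    """Preliminary reachability judgement: GMM log-likelihood of the orientation."""
    gmm = table.get(voxel_key(position))
    if gmm is None:
        return -np.inf                        # voxel never reached in the training data
    x = np.asarray(orientation_rpy, dtype=float).reshape(1, -1)
    return float(gmm.score(x))
```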

  3. MATHEMATICAL MODEL OF FORECASTING FOR OUTCOMES IN VICTIMS OF METHANE-COAL MIXTURE EXPLOSION

    Directory of Open Access Journals (Sweden)

    E. Y. Fistal

    2016-01-01

    Full Text Available BACKGROUND. The severity of the victims' state in the early period after combined trauma (with the prevalence of a thermal injury) is associated with the development of numerous changes in all organs and systems, which make proper diagnosis of complications and estimation of the probability of a lethal outcome extremely difficult. MATERIAL AND METHODS. The article presents a mathematical model for predicting lethal outcomes in victims of methane-coal mixture explosion, based on the case histories of 220 miners who were treated at the Donetsk Burn Center in 1994–2012. RESULTS. It was revealed that the probability of a lethal outcome in victims of methane-coal mixture explosion was statistically significantly affected by the area of deep burns (p<0.001) and severe traumatic brain injury (p<0.001). The tactics of surgical treatment of burn wounds in the early hours after the injury was also statistically significant for the probability of a lethal outcome (p=0.003); it involves primary debridement of burn wounds during the period of burn shock with simultaneous closure of the affected surfaces with a temporary biological covering. CONCLUSION. These neural network models are easy to use in practice and may be created for the most common pathologic conditions frequently encountered in clinical practice.

  4. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Directory of Open Access Journals (Sweden)

    Arturo Medrano-Soto

    2004-12-01

    Full Text Available Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry to belong to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry to belong to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.

  5. A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

    Science.gov (United States)

    Shekofteh, Yasser; Jafari, Sajad; Sprott, Julien Clinton; Hashemi Golpayegani, S. Mohammad Reza; Almasganj, Farshad

    2015-02-01

    As we know, many biological systems such as neurons or the heart can exhibit chaotic behavior. Conventional methods for parameter estimation in models of these systems have some limitations caused by sensitivity to initial conditions. In this paper, a novel cost function is proposed to overcome those limitations by building a statistical model on the distribution of the real system attractor in state space. This cost function is defined by the use of a likelihood score in a Gaussian mixture model (GMM) which is fitted to the observed attractor generated by the real system. Using that learned GMM, a similarity score can be defined by the computed likelihood score of the model time series. We have applied the proposed method to the parameter estimation of two important biological systems, a neuron and a cardiac pacemaker, which show chaotic behavior. Some simulated experiments are given to verify the usefulness of the proposed approach in clean and noisy conditions. The results show the adequacy of the proposed cost function.
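    A minimal sketch of the cost-function idea as described (fit a GMM to delay-embedded points of the observed attractor, then score candidate-model trajectories by their likelihood under that GMM) is given below; the embedding dimension, lag, and number of components are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(x, dim=3, lag=1):
    """Delay-embed a scalar time series into points in R^dim."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

def fit_attractor_gmm(observed_series, n_components=10):
    """Learn a GMM describing the observed system's attractor in state space."""
    pts = delay_embed(observed_series)
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(pts)

def gmm_cost(candidate_series, attractor_gmm):
    """Cost = negative mean log-likelihood of a candidate model trajectory under the GMM."""
    pts = delay_embed(candidate_series)
    return -attractor_gmm.score(pts)
```

    Because the cost depends only on where the candidate trajectory lives in state space, not on matching it point by point in time, it sidesteps the sensitivity to initial conditions mentioned above.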

  6. Estimating the Prevalence of Atrial Fibrillation from A Three-Class Mixture Model for Repeated Diagnoses

    Science.gov (United States)

    Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H.

    2016-01-01

    Atrial fibrillation (AF) is an abnormal heart rhythm characterized by rapid and irregular heart beat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for the diagnosis of AF. Since AF often occurs as recurrent episodes of varying frequency and duration, and only the episodes that occur at the time of the ECG can be detected, AF is often underdiagnosed when a limited number of repeated ECGs are used. In studies evaluating the efficacy of AF ablation surgery, each patient undergoes multiple ECGs and the AF status at the time of each ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient's probability of having no AF, paroxysmal AF, or permanent AF is modeled by auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the EM algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters, and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. PMID:27983754

  7. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.

  8. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Calculation of radiative opacity of plasma mixtures using a relativistic screened hydrogenic model

    International Nuclear Information System (INIS)

    Mendoza, M.A.; Rubiano, J.G.; Gil, J.M.; Rodríguez, R.; Florido, R.; Espinosa, G.; Martel, P.; Mínguez, E.

    2014-01-01

    We present the code ATMED, based on an average atom model and conceived for fast computation of the population distribution and radiative properties of hot and dense single- and multicomponent plasmas under LTE conditions. A relativistic screened hydrogenic model (RSHM), built on a new set of universal constants considering j-splitting, is used to calculate the required atomic data. The opacity model includes radiative bound–bound, bound–free, free–free, and scattering processes. The bound–bound line-shape function has contributions from natural, Doppler and electron-impact broadenings. An additional dielectronic broadening to account for fluctuations in the average level populations has been included, which substantially improves the Rosseland mean opacity results. To illustrate the main features of the code and its capabilities, calculations of several fundamental quantities of one-component plasmas and mixtures are presented, and a comparison with previously published data is performed. Results compare satisfactorily with those predicted by more elaborate codes. - Highlights: • A new opacity code, ATMED, based on the average atom approximation is presented. • Atomic data are computed by means of a relativistic screened hydrogenic model. • An effective bound level degeneracy is included to account for pressure ionization. • A new dielectronic line broadening is included to improve the mean opacities. • ATMED can handle single-element and multicomponent plasmas

  10. Mixture Item Response Theory-MIMIC Model: Simultaneous Estimation of Differential Item Functioning for Manifest Groups and Latent Classes

    Science.gov (United States)

    Bilir, Mustafa Kuzey

    2009-01-01

    This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…

  11. Numerical analysis of mixing process of two component gases in vertical fluid layer

    International Nuclear Information System (INIS)

    Hatori, Hirofumi; Takeda, Tetsuaki; Funatani, Shumpei

    2015-01-01

    When a depressurization accident occurs in the Very-High-Temperature Reactor (VHTR), it is expected that air will enter the reactor core. Therefore, it is important to understand the mixing process of different kinds of gases in a stable or unstable stratified fluid layer. In particular, it is also important to examine the influence of localized natural convection and molecular diffusion on the mixing process from the viewpoint of safety. In order to investigate the mixing process of two component gases and the flow characteristics of the localized natural convection, we have carried out numerical analyses using a three-dimensional CFD code. The numerical model consisted of a storage tank and a reverse U-shaped vertical slot, separated by a partition plate. One side of the left vertical fluid layer was heated and the other side was cooled. The right vertical fluid layer was also cooled. The procedure of the numerical analysis is as follows. First, the storage tank was filled with heavy gas and the reverse U-shaped vertical slot was filled with light gas. In the left vertical fluid layer, localized natural convection was generated by the temperature difference between the vertical walls. The flow characteristics were obtained by a steady-state analysis. The unsteady-state analysis was started when the partition plate was opened. The gases were mixed by molecular diffusion and natural convection, and after some time natural circulation occurred. The results obtained in this numerical analysis are as follows. The temperature difference in the left vertical fluid layer was set to 100 K. The combination of mixed gases was nitrogen and argon. After 76 minutes had elapsed, natural circulation occurred. (author)

  12. A second order anti-diffusive Lagrange-remap scheme for two-component flows

    Directory of Open Access Journals (Sweden)

    Lagoutière Frédéric

    2011-11-01

    Full Text Available We build a non-dissipative second order algorithm for the approximate resolution of the one-dimensional Euler system of compressible gas dynamics with two components. The considered model is the five-equation model proposed and analyzed in [1]. The algorithm is based on [8], which deals with a non-dissipative first order resolution in Lagrange-remap formalism. In the present paper we describe, in the same framework, an algorithm that is second order accurate in time and space and that preserves sharp interfaces between the components. Numerical results reported at the end of the paper are very encouraging, showing the interest of the second order accuracy for genuinely non-linear waves.

  13. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Moges, Edom [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA]; Demissie, Yonas [Civil and Environmental Engineering Department, Washington State University, Richland Washington USA]; Li, Hong-Yi [Hydrology Group, Pacific Northwest National Laboratory, Richland Washington USA]

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evident in the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad catchment has a homogeneous response, making a single model adequate to capture it.

  14. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    Science.gov (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often

  15. Two-component wind fields over ocean waves using atmospheric lidar and motion estimation algorithms

    Science.gov (United States)

    Mayor, S. D.

    2016-02-01

    Numerical models, such as large eddy simulations, are capable of providing stunning visualizations of the air-sea interface. One reason for this is the inherent spatial nature of such models. As compute power grows, models are able to provide higher resolution visualizations over larger domains, revealing intricate details of the interactions of ocean waves and the airflow over them. Spatial observations, on the other hand, which are necessary to validate the simulations, appear to lag behind models. The rough ocean environment of the real world is an additional challenge. One method of providing spatial observations of fluid flow is that of particle image velocimetry (PIV). PIV has been successfully applied to many problems in engineering and the geosciences. This presentation will show recent research results that demonstrate that a PIV-style approach using pulsed-fiber atmospheric elastic backscatter lidar hardware and wavelet-based optical flow motion estimation software can reveal two-component wind fields over rough ocean surfaces. Namely, a recently-developed compact lidar was deployed for 10 days in March of 2015 in the Eureka, California area. It scanned over the ocean. The imagery reveals that breaking ocean waves provide copious amounts of particulate matter for the lidar to detect and for the motion estimation algorithms to retrieve wind vectors from. The image below shows two examples of results from the experiment. The left panel shows the elastic backscatter intensity (copper shades) under a field of vectors that was retrieved by the wavelet-based optical flow algorithm from two scans that took about 15 s each to acquire. The vectors, which reveal offshore flow toward the NW, were decimated for clarity. The bright aerosol features along the right edge of the sector scan were caused by ocean waves breaking on the beach. The right panel is the result of scanning over the ocean on a day when wave amplitudes ranged from 8-12 feet and whitecaps offshore beyond the

  16. Assessing the effect, on animal model, of mixture of food additives, on the water balance.

    Science.gov (United States)

    Friedrich, Mariola; Kuchlewska, Magdalena

    2013-01-01

    The purpose of this study was to determine, on the animal model, the effect of modification of diet composition and administration of selected food additives on water balance in the body. The study was conducted with 48 males and 48 females (separately for each sex) of Wistar strain rats divided into four groups. For drinking, the animals from groups I and III were receiving water, whereas the animals from groups II and IV were administered 5 ml of a solution of selected food additives (potassium nitrate - E 252, sodium nitrite - E 250, benzoic acid - E 210, sorbic acid - E 200, and monosodium glutamate - E 621). Doses of the administered food additives were computed taking into account the average intake by men, expressed per body mass unit. Having drunk the solution, the animals were provided water for drinking. The mixture of selected food additives applied in the experiment was found to facilitate water retention in the body both in the case of both male and female rats, and differences observed between the volume of ingested fluids and the volume of excreted urine were statistically significant in the animals fed the basal diet. The type of feed mixture provided to the animals affected the site of water retention - in the case of animals receiving the basal diet analyses demonstrated a significant increase in water content in the liver tissue, whereas in the animals fed the modified diet water was observed to accumulate in the vascular bed. Taking into account the fact of water retention in the vascular bed, the effects of food additives intake may be more adverse in the case of females.

  17. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    Science.gov (United States)

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-05-22

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. The cubic models were better at describing the cooking loss, color, and TPA parameters, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (54 and <62), and greater hardness, cohesiveness and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Augmenting Scheffe Linear Mixture Models With Squared and/or Crossproduct Terms

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Szychowski, Jeffrey M.; Loeppky, Jason L.

    2001-01-01

    A glass composition variation study (CVS) for high-level waste (HLW) stored at the Idaho National Engineering and Environmental Laboratory (INEEL) is being statistically designed and performed in phases over several years. The purpose of the CVS is to investigate and model how HLW-glass properties depend on glass composition within a glass composition region compatible with the expected range of INEEL HLW. The resulting glass property-composition models will be used to develop desirable glass formulations and for other purposes. Phases 1 and 2 of the CVS have been completed so far, and are briefly described. The main focus of this paper is the CVS Phase 3 experimental design (test matrix). The Phase 3 experimental design was chosen to augment the Phase 1 and 2 data with additional data points, as well as to account for additional glass components of interest not studied in Phases 1 and/or 2. The paper describes how these Phase 3 experimental design augmentation challenges were addressed using the previous data, preliminary property-composition models, and statistical mixture experiment and optimal experimental design methods and software. The resulting Phase 3 experimental design of 30 simulated HLW glasses is presented and discussed.
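    For context, the Scheffé linear mixture model and the augmentation named in the title take the following generic form for a property y of a q-component mixture with proportions summing to one; S and C denote the selected squared and crossproduct terms (adding every squared term on top of a full set of crossproduct terms would be collinear under the mixture constraint, which is why only selected terms are added).

```latex
% Scheffe linear mixture model augmented with selected squared and/or crossproduct terms:
E(y) \;=\; \sum_{i=1}^{q} \beta_i\, x_i
      \;+\; \sum_{i \in S} \beta_{ii}\, x_i^{2}
      \;+\; \sum_{(i,j) \in C} \beta_{ij}\, x_i x_j ,
\qquad \sum_{i=1}^{q} x_i = 1 .
```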

  19. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.
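    A generic form of the kind of mixture prior described (a point mass at zero plus several zero-mean normal components on each SNP effect) is sketched below; the number of non-null components K and their variances are not stated in the abstract and are shown only schematically.

```latex
% Mixture prior on the effect beta_j of SNP j: a null point mass plus K normal
% components of increasing variance, with mixture proportions pi_0, ..., pi_K.
\beta_j \;\sim\; \pi_0\,\delta_0 \;+\; \sum_{k=1}^{K} \pi_k\,\mathcal{N}\!\bigl(0,\ \sigma_k^{2}\bigr),
\qquad \pi_0 + \sum_{k=1}^{K} \pi_k = 1 .
```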

  20. A Dedicated Mixture Model for Clustering Smart Meter Data: Identification and Analysis of Electricity Consumption Behaviors

    Directory of Open Access Journals (Sweden)

    Fateh Nassim Melzi

    2017-09-01

    Full Text Available The large amount of data collected by smart meters is a valuable resource that can be used to better understand consumer behavior and optimize electricity consumption in cities. This paper presents an unsupervised classification approach for extracting typical consumption patterns from data generated by smart electric meters. The proposed approach is based on a constrained Gaussian mixture model whose parameters vary according to the day type (weekday, Saturday or Sunday). The proposed methodology is applied to a real dataset of Irish households collected by smart meters over one year. For each cluster, the model provides three consumption profiles that depend on the day type. In the first instance, the model is applied to the electricity consumption of users during one month to extract groups of consumers who exhibit similar consumption behaviors. The clustering results are then crossed with contextual variables available for the households to show the close links between electricity consumption and household socio-economic characteristics. In the second instance, the evolution of the consumer behavior from one month to another is assessed through variations of cluster sizes over time. The results show that the consumer behavior evolves over time depending on contextual variables such as temperature fluctuations and calendar events.
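    The constrained, day-type-dependent mixture of the paper is not reproduced here; as a simpler starting point, a plain Gaussian mixture already groups households by the shape of their daily load profile. A minimal sketch follows; the array layout and names are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_profiles(loads, n_clusters=6):
    """loads: array of shape (n_households, 48) of average half-hourly consumption.
    Returns a cluster label per household and the typical profile of each cluster."""
    loads = np.asarray(loads, dtype=float)
    # normalise each profile so clusters reflect shape rather than total volume
    shapes = loads / (loads.sum(axis=1, keepdims=True) + 1e-12)
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                          random_state=0).fit(shapes)
    labels = gmm.predict(shapes)      # cluster membership per household
    profiles = gmm.means_             # typical consumption shape per cluster
    return labels, profiles
```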