WorldWideScience

Sample records for skew-normal mixture model

  1. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon

    2011-08-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew-normal distributions. In particular, we describe the characteristic function of skew-normal, skew-t, and other related distributions. © 2011 Elsevier Inc.
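The stochastic relationship the abstract refers to can be sketched numerically: if Z is skew-normal and W is an independent positive mixing variable, then Y = Z / sqrt(W/ν) with W ~ χ²(ν) is a skew-t variate. A minimal illustration using SciPy (the function name and parameter values are ours, chosen for illustration, not taken from the paper):

```python
import numpy as np
from scipy import stats

def sample_skew_t(alpha, df, size, seed=None):
    """Draw skew-t variates via the scale-mixture representation
    Y = Z / sqrt(W/df), with Z skew-normal(alpha) and W ~ chi2(df),
    Z and W independent."""
    rng = np.random.default_rng(seed)
    z = stats.skewnorm.rvs(alpha, size=size, random_state=rng)
    w = stats.chi2.rvs(df, size=size, random_state=rng)
    return z / np.sqrt(w / df)

# Positive alpha yields a right-skewed, heavy-tailed sample.
y = sample_skew_t(alpha=5.0, df=4, size=10_000, seed=42)
```

The same construction with other mixing laws for W generates the other scale mixtures of skew-normal distributions the paper covers.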

  2. Characteristic functions of scale mixtures of multivariate skew-normal distributions

    KAUST Repository

    Kim, Hyoung-Moon; Genton, Marc G.

    2011-01-01

    We obtain the characteristic function of scale mixtures of skew-normal distributions both in the univariate and multivariate cases. The derivation uses the simple stochastic relationship between skew-normal distributions and scale mixtures of skew

  3. Mixtures of skewed Kalman filters

    KAUST Repository

    Kim, Hyoungmoon; Ryu, Duchwan; Mallick, Bani K.; Genton, Marc G.

    2014-01-01

    Normal state-space models are prevalent, but to increase the applicability of the Kalman filter, we propose mixtures of skewed, and extended skewed, Kalman filters. To do so, the closed skew-normal distribution is extended to a scale mixture class

  4. Mixtures of skewed Kalman filters

    KAUST Repository

    Kim, Hyoungmoon

    2014-01-01

    Normal state-space models are prevalent, but to increase the applicability of the Kalman filter, we propose mixtures of skewed, and extended skewed, Kalman filters. To do so, the closed skew-normal distribution is extended to a scale mixture class of closed skew-normal distributions. Some basic properties are derived and a class of closed skew-t distributions is obtained. Our suggested family of distributions is skewed and has heavy tails too, so it is appropriate for robust analysis. Our proposed special sequential Monte Carlo methods use a random mixture of the closed skew-normal distributions to approximate a target distribution. Hence it is possible to handle skewed and heavy-tailed data simultaneously. These methods are illustrated with numerical experiments. © 2013 Elsevier Inc.

  5. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.; Ferreira, Clécio S.; Genton, Marc G.

    2018-01-01

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down

  6. Scale and shape mixtures of multivariate skew-normal distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2018-02-26

    We introduce a broad and flexible class of multivariate distributions obtained by both scale and shape mixtures of multivariate skew-normal distributions. We present the probabilistic properties of this family of distributions in detail and lay down the theoretical foundations for subsequent inference with this model. In particular, we study linear transformations, marginal distributions, selection representations, stochastic representations and hierarchical representations. We also describe an EM-type algorithm for maximum likelihood estimation of the parameters of the model and demonstrate its implementation on a wind dataset. Our family of multivariate distributions unifies and extends many existing models of the literature that can be seen as submodels of our proposal.

  7. A probit-log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by occurrence of "excess" zeros that more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
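A two-part likelihood of this kind combines a point mass at zero with a log-skew-normal density for the positive part. Below is a simplified sketch that omits the random effects and covariates of the paper's full model; the function and parameter names are illustrative only:

```python
import numpy as np
from scipy import stats

def two_part_loglik(y, p_zero, xi, omega, alpha):
    """Log-likelihood of a simplified two-part model: an observation is
    zero with probability p_zero; positive values are log-skew-normal,
    i.e. log(y) ~ skew-normal(xi, omega, alpha)."""
    y = np.asarray(y, dtype=float)
    pos = y > 0
    ll_zero = np.count_nonzero(~pos) * np.log(p_zero)
    logy = np.log(y[pos])
    ll_pos = (np.log1p(-p_zero)
              + stats.skewnorm.logpdf(logy, alpha, loc=xi, scale=omega)
              - logy).sum()  # -logy is the Jacobian of the log transform
    return ll_zero + ll_pos

ll = two_part_loglik([0.0, 0.0, 0.4, 1.7], p_zero=0.5, xi=0.0, omega=1.0, alpha=2.0)
```

In the paper's probit formulation, p_zero would itself be modeled through a probit link on covariates and random effects rather than treated as a free constant.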

  8. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    Science.gov (United States)

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
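The key ingredient, introducing skewness handling through the Box-Cox transformation, can be illustrated on its own: transforming a right-skewed sample with a maximum-likelihood choice of λ sharply reduces the sample skewness. This sketch uses SciPy's `boxcox` and deliberately stops short of the paper's full t-mixture fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=5_000)  # strongly right-skewed sample

# Box-Cox with lambda chosen by maximum likelihood; for lognormal data
# the fitted lambda is near 0, which corresponds to a log transform.
x_bc, lam = stats.boxcox(x)

skew_before, skew_after = stats.skew(x), stats.skew(x_bc)
```

In the proposed model this transformation is applied per mixture component, with λ estimated jointly with the t-distribution parameters inside the EM iterations.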

  10. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  11. Differential models of twin correlations in skew for body-mass index (BMI).

    Science.gov (United States)

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.

  12. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    Full Text Available This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve parameter estimation as well as confidence interval estimation and hypothesis testing for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and the confidence interval and hypothesis test are then also developed. Furthermore, a real example and simulations are conducted to demonstrate the usefulness and superiority of our approach.

  13. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  14. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon; Maadooliat, Mehdi; Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2015-01-01

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  15. Skewed Normal Distribution Of Return Assets In Call European Option Pricing

    Directory of Open Access Journals (Sweden)

    Evy Sulistianingsih

    2011-12-01

    Full Text Available An option is one of the security derivatives. In a financial market, an option is a contract that gives its owner the right (not the obligation) to buy or sell a particular asset for a certain price at a certain time. An option can provide a guarantee against a risk that can be faced in a market. This paper studies the use of the Skewed Normal Distribution (SN) in European call option pricing. The SN provides a flexible framework that captures the skewness of log returns. We obtain a closed-form solution for European call option pricing when log returns follow the SN. Then, we compare the option prices obtained by the SN and the Black-Scholes model with the option prices of the market. Keywords: skewed normal distribution, log return, options.
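The paper derives a closed-form price; the same quantity can be cross-checked by Monte Carlo. The sketch below is our construction, not the paper's: the skew-normal location ξ is fixed via the skew-normal moment generating function, E[e^X] = 2·exp(ξ + ω²/2)·Φ(δω) with δ = α/√(1+α²), so that the expected discounted stock price is risk-neutral, and with α = 0 the model collapses to Black-Scholes:

```python
import numpy as np
from scipy import stats

def sn_call_price_mc(s0, k, r, t, omega, alpha, n=200_000, seed=0):
    """Monte Carlo European call price when the log return over [0, t]
    is skew-normal(xi, omega, alpha).  The location xi is chosen so
    that E[S_t] = s0 * exp(r * t), using the skew-normal MGF
    E[e^X] = 2 * exp(xi + omega**2 / 2) * Phi(delta * omega)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    xi = r * t - 0.5 * omega**2 - np.log(2.0 * stats.norm.cdf(delta * omega))
    rng = np.random.default_rng(seed)
    x = stats.skewnorm.rvs(alpha, loc=xi, scale=omega, size=n, random_state=rng)
    payoff = np.maximum(s0 * np.exp(x) - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# alpha = 0 reproduces the Black-Scholes price; alpha < 0 prices in
# negatively skewed log returns.
price_bs_limit = sn_call_price_mc(100.0, 100.0, r=0.01, t=1.0, omega=0.2, alpha=0.0)
price_skewed = sn_call_price_mc(100.0, 100.0, r=0.01, t=1.0, omega=0.2, alpha=-3.0)
```

Here ω is the scale of the log return over the whole horizon t, not an annualized volatility.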

  16. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  17. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge as the dependence structure implied by the model needs to be captured at different scales, while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions, and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.

  18. Objective Bayesian Analysis of Skew-t Distributions

    KAUST Repository

    BRANCO, MARCIA D'ELIA

    2012-02-27

    We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions. © 2012 Board of the Foundation of the Scandinavian Journal of Statistics.

  19. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR) with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model can fit the data well and can perform better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
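For a fitted normal mixture, VaR is the lower-tail quantile of the mixture cdf and CVaR follows from the normal partial expectation E[X·1{X ≤ a}] = μΦ(z) − σφ(z) with z = (a − μ)/σ. A sketch with illustrative (not fitted) parameter values:

```python
import numpy as np
from scipy import stats, optimize

def mixture_var_cvar(weights, mus, sigmas, level=0.05):
    """Lower-tail VaR and CVaR (expected shortfall) of a normal mixture
    of returns: VaR solves F(x) = level by root-finding on the mixture
    cdf; CVaR combines the component partial expectations
    E[X 1{X <= VaR}] and divides by level = P(X <= VaR)."""
    w, mu, sg = (np.asarray(a, dtype=float) for a in (weights, mus, sigmas))
    cdf = lambda x: float(np.sum(w * stats.norm.cdf(x, mu, sg)))
    var = optimize.brentq(lambda x: cdf(x) - level, -50.0, 50.0)
    z = (var - mu) / sg
    partial = np.sum(w * (mu * stats.norm.cdf(z) - sg * stats.norm.pdf(z)))
    return var, partial / level

# Degenerate check: an equal mixture of two standard normals is itself
# standard normal, so the 5% VaR/CVaR match the usual normal values.
var95, cvar95 = mixture_var_cvar([0.5, 0.5], [0.0, 0.0], [1.0, 1.0], level=0.05)
```

With genuinely different component means and variances, the same function captures the fat-tailed behaviour the abstract attributes to the two-component model.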

  20. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    Science.gov (United States)

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher

  1. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    Science.gov (United States)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect-ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.

  2. Portfolio optimization with skewness and kurtosis

    Science.gov (United States)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
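The four inputs to such a model are the sample moments of the portfolio return series. Below is a minimal helper (the names are ours, for illustration) that computes them for a given weight vector; the paper's polynomial goal programming layer would then trade these four objectives off against one another:

```python
import numpy as np
from scipy import stats

def portfolio_moments(weights, returns):
    """Mean, variance, skewness and excess kurtosis of a portfolio.
    `returns` is a T x N array of asset returns; `weights` (length N)
    should sum to 1."""
    port = np.asarray(returns) @ np.asarray(weights)
    return (port.mean(), port.var(ddof=1),
            stats.skew(port), stats.kurtosis(port))

# Illustrative data: 500 periods of three synthetic asset returns.
rng = np.random.default_rng(7)
rets = rng.normal(0.001, 0.02, size=(500, 3))
m, v, s, k = portfolio_moments(np.array([0.4, 0.4, 0.2]), rets)
```

In a mean-variance-skewness-kurtosis optimization, one maximizes m and s while minimizing v and k, which is where the goal programming formulation comes in.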

  3. Yaw-modelling using a skewed vortex cylinder

    DEFF Research Database (Denmark)

    Branlard, Emmanuel Simon Pierre

    2017-01-01

    The cylindrical vortex wake model presented in Chap. 17 for the case of uniform inflow is extended in the current chapter to the case of yawed inflow. Generalities regarding yaw are presented in Sect. 6.1 and only the skewed cylindrical vortex model is presented in this chapter. The chapter starts with a literature review on the topic of yaw-models and vorticity-based methods. The description of the model follows. The novelty of the current model is that the assumption of infinite tip-speed ratio is relaxed. The bound vorticity is assumed to be identical to the case of uniform inflow but the vortex cylinder and the root vortex are skewed with respect to the normal of the rotor disk. Closed form formulae for the induced velocities are provided. They can only be evaluated analytically for a limited part of the domain. A numerical integration is required to obtain the velocity everywhere in the domain. The numerical...

  4. Multivariate extended skew-t distributions and related families

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2010-12-01

    A class of multivariate extended skew-t (EST) distributions is introduced and studied in detail, along with closely related families such as the subclass of extended skew-normal distributions. Besides mathematical tractability and modeling flexibility in terms of both skewness and heavier tails than the normal distribution, the most relevant properties of the EST distribution include closure under conditioning and ability to model lighter tails as well. The first part of the present paper examines probabilistic properties of the EST distribution, such as various stochastic representations, marginal and conditional distributions, linear transformations, moments and in particular Mardia’s measures of multivariate skewness and kurtosis. The second part of the paper studies statistical properties of the EST distribution, such as likelihood inference, behavior of the profile log-likelihood, the score vector and the Fisher information matrix. Especially, unlike the extended skew-normal distribution, the Fisher information matrix of the univariate EST distribution is shown to be non-singular when the skewness is set to zero. Finally, a numerical application of the conditional EST distribution is presented in the context of confidential data perturbation.

  5. Multivariate extended skew-t distributions and related families

    KAUST Repository

    Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2010-01-01

    A class of multivariate extended skew-t (EST) distributions is introduced and studied in detail, along with closely related families such as the subclass of extended skew-normal distributions. Besides mathematical tractability and modeling flexibility in terms of both skewness and heavier tails than the normal distribution, the most relevant properties of the EST distribution include closure under conditioning and ability to model lighter tails as well. The first part of the present paper examines probabilistic properties of the EST distribution, such as various stochastic representations, marginal and conditional distributions, linear transformations, moments and in particular Mardia’s measures of multivariate skewness and kurtosis. The second part of the paper studies statistical properties of the EST distribution, such as likelihood inference, behavior of the profile log-likelihood, the score vector and the Fisher information matrix. Especially, unlike the extended skew-normal distribution, the Fisher information matrix of the univariate EST distribution is shown to be non-singular when the skewness is set to zero. Finally, a numerical application of the conditional EST distribution is presented in the context of confidential data perturbation.

  6. A novel generalized normal distribution for human longevity and other negatively skewed data.

    Science.gov (United States)

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.

  7. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cases...

  8. A skewed distribution with asset pricing applications

    NARCIS (Netherlands)

    de Roon, Frans; Karehnke, P.

    2017-01-01

    Recent research has identified skewness and downside risk as one of the most important features of risk. We present a new distribution which makes modeling skewed risks no more difficult than normally distributed (symmetric) risks. Our distribution is a combination of the “downside” and “upside”

  9. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  10. Report from LHC MD 2171: Amplitude dependent closest tune approach from normal and skew octupoles

    CERN Document Server

    Maclean, Ewen Hamish; Persson, Tobias Hakan Bjorn; Carlier, Felix Simon; CERN. Geneva. ATS Department

    2018-01-01

    Simulation-based studies predict that significant amplitude-dependent closest tune approach can be generated by skew octupole sources in conjunction with their normal octupolar counterparts. This has the potential to significantly influence Landau damping at small β∗, where skew octupole errors in the experimental IRs, together with b4 introduced by the Landau octupoles, are predicted to cause large distortion of the tune footprint. This MD aimed to perform a first exploration of these predictions with beam, by enhancing skew octupole sources in the IRs at injection and measuring amplitude detuning with free kicks in the plane approaching the coupling resonance.

  11. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    Science.gov (United States)

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data characterized by an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.

  12. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)

    2010-10-15

    This paper proposes an approach for generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distributions are non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a non-smooth multi-objective optimization problem with competing and conflicting objectives, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios when non-normally distributed assets are available for trading. (author)
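The three MVS objectives can be illustrated with a short numerical sketch. This is illustrative only (the paper's actual optimizer is MOPSO, which is not reproduced here), and the data and function names are invented for the example:

```python
import numpy as np

def portfolio_mvs(weights, returns):
    """Mean, variance, and skewness of portfolio returns.

    returns : (T, n) array of historical asset returns
    weights : (n,) array of portfolio weights summing to 1
    """
    w = np.asarray(weights, dtype=float)
    rp = returns @ w                          # portfolio return series
    mean = rp.mean()
    var = rp.var()
    skew = ((rp - mean) ** 3).mean() / var ** 1.5
    return mean, var, skew

# Synthetic example with three assets (hypothetical data)
rng = np.random.default_rng(0)
rets = rng.normal(0.01, 0.05, size=(500, 3))
m, v, s = portfolio_mvs([0.5, 0.3, 0.2], rets)
```

A multi-objective optimizer such as MOPSO would then search the weight simplex, maximizing `m` and `s` while minimizing `v`.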

  13. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    International Nuclear Information System (INIS)

    Pindoriya, N.M.; Singh, S.N.; Singh, S.K.

    2010-01-01

    This paper proposes an approach for generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distributions are non-normal. The MVS model allocates portfolios optimally by maximizing both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a non-smooth multi-objective optimization problem with competing and conflicting objectives, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios when non-normally distributed assets are available for trading. (author)

  14. Skewness of the standard model: possible implications

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1989-09-01

    In this paper we consider combinations of gauge algebra and sets of rules for the quantization of gauge charges. We show that the combination of the algebra of the standard model and the rule satisfied by the electric charges of the quarks and leptons has an exceptionally high degree of a kind of asymmetry which we call skewness. Assuming that skewness has physical significance, and adding two other rather plausible assumptions, we may conclude that space-time must have a non-simply connected topology at very small distances. Such a topology would allow a kind of symmetry breakdown leading to an even more skew combination of gauge algebra and set of quantization rules. (orig.)

  15. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account of the normal and shear material properties of asphalt mi...

  16. Characteristic function-based semiparametric inference for skew-symmetric models

    KAUST Repository

    Potgieter, Cornelis J.

    2012-12-26

    Skew-symmetric models offer a very flexible class of distributions for modelling data. These distributions can also be viewed as selection models for the symmetric component of the specified skew-symmetric distribution. The estimation of the location and scale parameters corresponding to the symmetric component is considered here, with the symmetric component known. Emphasis is placed on using the empirical characteristic function to estimate these parameters. This is made possible by an invariance property of the skew-symmetric family of distributions, namely that even transformations of random variables that are skew-symmetric have a distribution only depending on the symmetric density. A distance metric between the real components of the empirical and true characteristic functions is minimized to obtain the estimators. The method is semiparametric, in that the symmetric component is specified, but the skewing function is assumed unknown. Furthermore, the methodology is extended to hypothesis testing. Two tests for a hypothesis of specific parameter values are considered, as well as a test for the hypothesis that the symmetric component has a specific parametric form. A resampling algorithm is described for practical implementation of these tests. The outcomes of various numerical experiments are presented. © 2012 Board of the Foundation of the Scandinavian Journal of Statistics.

  17. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research

    International Nuclear Information System (INIS)

    Currie, L.A.

    2001-01-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Müller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test - for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom. (orig.)

  18. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    Science.gov (United States)

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Müller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test - for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.

  19. Inferring climate variability from skewed proxy records

    Science.gov (United States)

    Emile-Geay, J.; Tingley, M.

    2013-12-01

    Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and
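Approach (ii), the power transform, can be sketched in a few lines. The proxy below is a purely illustrative exponential recorder, not the thresholded runoff model used in the study:

```python
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(42)
climate = rng.normal(0.0, 1.0, 2000)        # latent, normally distributed climate
proxy = np.exp(1.5 * climate)               # nonlinear recorder -> strong right skew

# Approach (ii): estimate the Box-Cox exponent by maximum likelihood
transformed, lam = boxcox(proxy)
```

Because this toy proxy is exactly exponential in the climate variable, the fitted exponent lands near 0 (the log transform) and the skewness of the transformed series is near zero, so variability estimates computed after the transform reflect the underlying climate rather than the recorder.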

  20. Bayesian linear regression with skew-symmetric error distributions with applications to survival analysis

    KAUST Repository

    Rubio, Francisco J.

    2016-02-09

    We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information.
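As a concrete example of the error family involved, the skew-normal (the simplest member of the skew-symmetric class) can be simulated through its well-known convolution representation. This is a minimal sketch for intuition, not code from the paper:

```python
import numpy as np

def rskewnorm(n, alpha, loc=0.0, scale=1.0, rng=None):
    """Sample SN(loc, scale, alpha) via the stochastic representation
    X = loc + scale * (delta*|Z0| + sqrt(1 - delta^2)*Z1),
    where delta = alpha / sqrt(1 + alpha^2) and Z0, Z1 are iid N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z0, z1 = rng.standard_normal((2, n))
    return loc + scale * (delta * np.abs(z0) + np.sqrt(1.0 - delta ** 2) * z1)

x = rskewnorm(100_000, alpha=4.0, rng=np.random.default_rng(7))
# E[X] = delta * sqrt(2/pi) and Var[X] = 1 - 2*delta**2/pi for loc=0, scale=1
```

Mixing over the scale, as in the paper's scale-mixture construction, additionally produces the heavy tails.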

  1. A Finite Segment Method for Skewed Box Girder Analysis

    Directory of Open Access Journals (Sweden)

    Xingwei Xue

    2018-01-01

    A finite segment method is presented to analyze the mechanical behavior of skewed box girders. By modeling the top and bottom plates of the segments with a skew plate beam element under an inclined coordinate system and the webs with a normal plate beam element, a spatial elastic displacement model for the skewed box girder is constructed, which satisfies the compatibility condition at the corners of the cross section. The formulation of the finite segment is developed based on the variational principle. The major advantage of the proposed approach, in comparison with the finite element method, is that it simplifies a three-dimensional structure into a one-dimensional structure for structural analysis, which results in significant savings in computational time. Finally, the accuracy and efficiency of the proposed finite segment method are verified by a model test.

  2. Metric adjusted skew information

    DEFF Research Database (Denmark)

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible...

  3. A Skewed Student-t Value-at-Risk Approach for Long Memory Volatility Processes in Japanese Financial Markets

    Directory of Open Access Journals (Sweden)

    Seong-Min Yoon

    2007-06-01

    This paper investigates the relevance of skewed Student-t distributions in capturing long memory volatility properties in the daily return series of Japanese financial data (Nikkei 225 Index and JPY-USD exchange rate). For this purpose, we assess the performance of two long memory Value-at-Risk (VaR) models (the FIGARCH and FIAPARCH VaR models) with three different distribution innovations: the normal, Student-t, and skewed Student-t distributions. From our results, we find that the skewed Student-t distribution model produces more accurate VaR estimations than the normal and Student-t distribution models. Thus, accounting for skewness and excess kurtosis in the asset return distribution can provide suitable criteria for VaR model selection in the context of long memory volatility and enhance the performance of risk management in Japanese financial markets.
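The effect of the innovation distribution on VaR can be sketched in a toy example. scipy has no skewed Student-t, so this sketch contrasts only the normal and symmetric Student-t quantiles on simulated fat-tailed returns, and it ignores the FIGARCH/FIAPARCH volatility dynamics entirely:

```python
import numpy as np
from scipy import stats

# Simulated fat-tailed daily returns (hypothetical data)
returns = 0.01 * stats.t.rvs(df=4, size=5000, random_state=3)

p = 0.01                                   # 1% left-tail VaR
# Normal-innovation VaR from the sample mean and standard deviation
var_normal = -(returns.mean() + returns.std() * stats.norm.ppf(p))
# Student-t VaR from a fitted t distribution
df, loc, scale = stats.t.fit(returns)
var_t = -stats.t.ppf(p, df, loc=loc, scale=scale)
# the t-based VaR is larger: the normal model understates tail risk
```

Adding skewness to the innovation, as the paper does, further sharpens the left-tail quantile when losses and gains are not symmetric.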

  4. Neoclassical versus Frontier Production Models ? Testing for the Skewness of Regression Residuals

    DEFF Research Database (Denmark)

    Kuosmanen, T; Fosgerau, Mogens

    2009-01-01

    The empirical literature on production and cost functions is divided into two strands. The neoclassical approach concentrates on model parameters, while the frontier approach decomposes the disturbance term into a symmetric noise term and a positively skewed inefficiency term. We propose a theoretical justification for the skewness of the inefficiency term, arguing that this skewness is the key testable hypothesis of the frontier approach. We propose to test the regression residuals for skewness in order to distinguish the two competing approaches. Our test builds directly upon the asymmetry...

  5. Skewed X-inactivation in cloned mice

    International Nuclear Information System (INIS)

    Senda, Sho; Wakayama, Teruhiko; Yamazaki, Yukiko; Ohgane, Jun; Hattori, Naka; Tanaka, Satoshi; Yanagimachi, Ryuzo; Shiota, Kunio

    2004-01-01

    In female mammals, dosage compensation for X-linked genes is accomplished by inactivation of one of the two X chromosomes. The X-inactivation ratio (the percentage of cells in which the maternal X chromosome is inactivated) is skewed as a consequence of various genetic mutations, and skewing has been observed in a number of X-linked disorders. We previously reported that phenotypically normal full-term cloned mouse fetuses had loci with inappropriate DNA methylation. Thus, cloned mice are excellent models for studying abnormal epigenetic events in mammalian development. In the present study, we analyzed X-inactivation ratios in adult female cloned mice (B6C3F1). Kidneys of eight naturally produced controls and 11 cloned mice were analyzed. Although variations in the X-inactivation ratio were observed among the mice in both groups, the distributions were significantly different (Ansary-Bradley test, P < 0.01). In particular, 2 of 11 cloned mice showed skewed X-inactivation ratios (19.2% and 86.8%). Similarly, in intestine, 1 of 10 cloned mice had a skewed ratio (75.7%). Skewed X-inactivation was observed to various degrees in different tissues of different individuals, suggesting that skewed X-inactivation in cloned mice is the result of secondary cell selection in combination with stochastic distortion of primary choice. The present study is the first demonstration that skewed X-inactivation occurs in cloned animals. This finding is important for understanding both nuclear transfer technology and the etiology of X-linked disorders.

  6. SKEW QUADRUPOLE FOCUSING LATTICES AND APPLICATIONS

    International Nuclear Information System (INIS)

    Parker, B.

    2001-01-01

    In this paper we revisit the use of skew quadrupole fields, in place of traditional normal upright quadrupole fields, to make beam focusing structures. We illustrate by example skew lattice decoupling, dispersion suppression, and chromatic correction using the neutrino factory Study-II muon storage ring design. Ongoing BNL investigation of flat coil magnet structures that allow building a very compact muon storage ring arc, and of other flat coil configurations that might bring significant magnet cost reduction to a VLHC, motivates our study of skew focusing.

  7. Adaptive Convergence Rates of a Dirichlet Process Mixture of Multivariate Normals

    OpenAIRE

    Tokdar, Surya T.

    2011-01-01

    It is shown that a simple Dirichlet process mixture of multivariate normals offers Bayesian density estimation with adaptive posterior convergence rates. Toward this, a novel sieve for non-parametric mixture densities is explored, and its rate adaptability to various smoothness classes of densities in arbitrary dimension is demonstrated. This sieve construction is expected to offer a substantial technical advancement in studying Bayesian non-parametric mixture models based on stick-breaking p...

  8. Quantiles for Finite Mixtures of Normal Distributions

    Science.gov (United States)

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
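Since a finite normal mixture has no closed-form quantile function, a standard computational approach (sketched below for illustration, not taken from the article) is to invert the mixture CDF numerically:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def mixture_quantile(p, weights, means, sds):
    """p-quantile of a finite mixture of normals, by root-finding on the CDF."""
    weights, means, sds = map(np.asarray, (weights, means, sds))
    cdf = lambda x: float(np.sum(weights * norm.cdf(x, means, sds)))
    lo = float((means - 10 * sds).min())      # bracket that surely contains the quantile
    hi = float((means + 10 * sds).max())
    return brentq(lambda x: cdf(x) - p, lo, hi)

# Symmetric two-component mixture: the median is 0 by symmetry
q50 = mixture_quantile(0.5, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

This inverts the CDF of the mixture itself; it is not the same as mixing the component quantiles, which echoes the article's distinction between combining random variables and combining densities.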

  9. Micromagnetic recording model of writer geometry effects at skew

    Science.gov (United States)

    Plumer, M. L.; Bozeman, S.; van Ek, J.; Michel, R. P.

    2006-04-01

    The effects of the pole-tip geometry at the air-bearing surface on perpendicular recording at a skew angle are examined through modeling and spin-stand test data. Head fields generated by the finite element method were used to record transitions within our previously described micromagnetic recording model. Write-field contours for a variety of square, rectangular, and trapezoidal pole shapes were evaluated to determine the impact of geometry on field contours. Comparing results for recorded track width, transition width, and media signal to noise ratio at 0° and 15° skew demonstrate the benefits of trapezoidal and reduced aspect-ratio pole shapes. Consistency between these modeled results and test data is demonstrated.

  10. Consistency of the MLE under mixture models

    OpenAIRE

    Chen, Jiahua

    2016-01-01

    The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...

  11. Social and genetic structure of paper wasp cofoundress associations: tests of reproductive skew models.

    Science.gov (United States)

    Field, J; Solís, C R; Queller, D C; Strassmann, J E

    1998-06-01

    Recent models postulate that the members of a social group assess their ecological and social environments and agree on a "social contract" of reproductive partitioning (skew). We tested social contracts theory by using DNA microsatellites to measure skew in 24 cofoundress associations of paper wasps, Polistes bellicosus. In contrast to theoretical predictions, there was little variation in cofoundress relatedness, and relatedness either did not predict skew or was negatively correlated with it; the dominant/subordinate size ratio, assumed to reflect relative fighting ability, did not predict skew; and high skew was associated with decreased aggression by the rank 2 subordinate toward the dominant. High skew was associated with increased group size. A difficulty with measuring skew in real systems is the frequent changes in group composition that commonly occur in social animals. In P. bellicosus, 61% of egg layers and an unknown number of non-egg layers were absent by the time nests were collected. The social contracts models provide an attractive general framework linking genetics, ecology, and behavior, but there have been few direct tests of their predictions. We question assumptions underlying the models and suggest directions for future research.

  12. Using partially labeled data for normal mixture identification with application to class definition

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and unsupervised learning processes.
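The flavor of the resulting iteration can be sketched for the simplest case: a two-component univariate mixture in which the labelled samples identify their component directly (the record's setting is more general, with labels identifying a class, i.e. a sub-mixture of components). All names and data here are invented for illustration:

```python
import numpy as np

def em_partially_labeled(x_unlab, x_lab, y_lab, n_iter=50):
    """EM for a two-component 1-D normal mixture with a partially labelled sample.
    Labelled points get fixed one-hot responsibilities; unlabelled points get
    posterior responsibilities in the E-step; the M-step pools both."""
    x_all = np.concatenate([x_unlab, x_lab])
    r_lab = np.eye(2)[y_lab]                               # fixed responsibilities
    mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
    sd = np.array([x_lab[y_lab == k].std() for k in (0, 1)])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step on the unlabelled samples only (normal densities up to a constant)
        dens = pi * np.exp(-0.5 * ((x_unlab[:, None] - mu) / sd) ** 2) / sd
        r_unlab = dens / dens.sum(axis=1, keepdims=True)
        r = np.vstack([r_unlab, r_lab])
        # M-step on all samples
        nk = r.sum(axis=0)
        pi = nk / nk.sum()
        mu = (r * x_all[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x_all[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sd

rng = np.random.default_rng(0)
x_unlab = np.concatenate([rng.normal(-2, 1, 450), rng.normal(2, 1, 450)])
x_lab = np.concatenate([rng.normal(-2, 1, 25), rng.normal(2, 1, 25)])
y_lab = np.repeat([0, 1], 25)
pi, mu, sd = em_partially_labeled(x_unlab, x_lab, y_lab)
```

The labelled points both anchor the initialization and resolve the label-switching ambiguity that plagues fully unsupervised mixture fitting.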

  13. Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, Liang, E-mail: lqiu@cumt.edu.cn [School of Physics, China University of Mining and Technology, Xuzhou, Jiangsu 221116 (China); Quan, Dongxiao [State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, Shaanxi 710071 (China); Pan, Fei; Liu, Zhi [School of Physics, China University of Mining and Technology, Xuzhou, Jiangsu 221116 (China)

    2017-06-01

    We study the performance of the lower bound of skew information in the vicinity of transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information closely relates to the correlation length of the model and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate to detect the critical point of XY spin chain with staggered Dzyaloshinskii-Moriya interaction.

  14. On Bayesian Inference under Sampling from Scale Mixtures of Normals

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1996-01-01

    This paper considers a Bayesian analysis of the linear regression model under independent sampling from general scale mixtures of Normals. Using a common reference prior, we investigate the validity of Bayesian inference and the existence of posterior moments of the regression and precision

  15. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  16. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as the log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  17. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  18. Parameterizing unconditional skewness in models for financial time series

    DEFF Research Database (Denmark)

    He, Changli; Silvennoinen, Annastiina; Teräsvirta, Timo

    In this paper we consider the third-moment structure of a class of time series models. It is often argued that the marginal distribution of financial time series such as returns is skewed. Therefore it is of importance to know what properties a model should possess if it is to accommodate...

  19. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher-order moments of the risk-neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk-neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared...

  20. Shannon Entropy and Mutual Information for Multivariate Skew-Elliptical Distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.

    2012-02-27

The entropy and mutual information index are important concepts developed by Shannon in the context of information theory. They have been widely studied in the case of the multivariate normal distribution. We first extend these tools to the full symmetric class of multivariate elliptical distributions and then to the more flexible families of multivariate skew-elliptical distributions. We study in detail the cases of the multivariate skew-normal and skew-t distributions. We apply our findings to the optimal design of an ozone monitoring station network in Santiago de Chile. © 2012 Board of the Foundation of the Scandinavian Journal of Statistics.

  1. Shannon Entropy and Mutual Information for Multivariate Skew-Elliptical Distributions

    KAUST Repository

    Arellano-Valle, Reinaldo B.; Contreras-Reyes, Javier E.; Genton, Marc G.

    2012-01-01

The entropy and mutual information index are important concepts developed by Shannon in the context of information theory. They have been widely studied in the case of the multivariate normal distribution. We first extend these tools to the full symmetric class of multivariate elliptical distributions and then to the more flexible families of multivariate skew-elliptical distributions. We study in detail the cases of the multivariate skew-normal and skew-t distributions. We apply our findings to the optimal design of an ozone monitoring station network in Santiago de Chile. © 2012 Board of the Foundation of the Scandinavian Journal of Statistics.
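As a rough numerical illustration of the entropy comparison described above, the following minimal Monte Carlo sketch (assuming SciPy's `scipy.stats.skewnorm`; the paper's multivariate and mutual-information machinery is not reproduced) estimates the Shannon entropy of a univariate skew-normal and confirms it falls below the normal entropy:

```python
import numpy as np
from scipy.stats import skewnorm, norm

rng = np.random.default_rng(0)

def mc_entropy(dist, n=200_000):
    """Monte Carlo estimate of differential entropy: H = -E[log f(X)]."""
    x = dist.rvs(size=n, random_state=rng)
    return -np.mean(dist.logpdf(x))

h_normal = mc_entropy(norm())           # exact value: 0.5*log(2*pi*e) ≈ 1.4189
h_skew   = mc_entropy(skewnorm(5.0))    # skewness lowers entropy below the normal case
```

The gap between `h_skew` and `h_normal` is a crude univariate analogue of the negentropy effects of skewness studied in the record above.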

  2. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    Science.gov (United States)

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing-data model. We develop joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing-data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study and report some interesting findings. We also conduct simulation studies to validate the proposed method.

  3. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  4. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  5. Multivariate log-skew-elliptical distributions with applications to precipitation data

    KAUST Repository

    Marchenko, Yulia V.

    2009-07-13

    We introduce a family of multivariate log-skew-elliptical distributions, extending the list of multivariate distributions with positive support. We investigate their probabilistic properties such as stochastic representations, marginal and conditional distributions, and existence of moments, as well as inferential properties. We demonstrate, for example, that as for the log-t distribution, the positive moments of the log-skew-t distribution do not exist. Our emphasis is on two special cases, the log-skew-normal and log-skew-t distributions, which we use to analyze US national (univariate) and regional (multivariate) monthly precipitation data. © 2009 John Wiley & Sons, Ltd.

  6. Multivariate log-skew-elliptical distributions with applications to precipitation data

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2009-01-01

    We introduce a family of multivariate log-skew-elliptical distributions, extending the list of multivariate distributions with positive support. We investigate their probabilistic properties such as stochastic representations, marginal and conditional distributions, and existence of moments, as well as inferential properties. We demonstrate, for example, that as for the log-t distribution, the positive moments of the log-skew-t distribution do not exist. Our emphasis is on two special cases, the log-skew-normal and log-skew-t distributions, which we use to analyze US national (univariate) and regional (multivariate) monthly precipitation data. © 2009 John Wiley & Sons, Ltd.

  7. Modeling multivariate time series on manifolds with skew radial basis functions.

    Science.gov (United States)

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.

  8. mixtools: An R Package for Analyzing Mixture Models

    Directory of Open Access Journals (Sweden)

    Tatiana Benaglia

    2009-10-01

    Full Text Available The mixtools package for R provides a set of functions for analyzing a variety of finite mixture models. These functions include both traditional methods, such as EM algorithms for univariate and multivariate normal mixtures, and newer methods that reflect some recent research in finite mixture models. In the latter category, mixtools provides algorithms for estimating parameters in a wide range of different mixture-of-regression contexts, in multinomial mixtures such as those arising from discretizing continuous multivariate data, in nonparametric situations where the multivariate component densities are completely unspecified, and in semiparametric situations such as a univariate location mixture of symmetric but otherwise unspecified densities. Many of the algorithms of the mixtools package are EM algorithms or are based on EM-like ideas, so this article includes an overview of EM algorithms for finite mixture models.

  9. T helper cell 2 immune skewing in pregnancy/early life

    DEFF Research Database (Denmark)

    McFadden, J P; Thyssen, J P; Basketter, D A

    2015-01-01

During the last 50 years there has been a significant increase in Western societies of atopic disease and associated allergy. The balance between functional subpopulations of T helper cells (Th) determines the quality of the immune response provoked by antigen. One such subpopulation - Th2 cells ... that in Westernized societies reduced exposure during early childhood to pathogenic microorganisms favours the development of atopic allergy. Pregnancy is normally associated with Th2 skewing, which persists for some months in the neonate before Th1/Th2 realignment occurs. In this review, we consider the immunophysiology of Th2 immune skewing during pregnancy. In particular, we explore the possibility that altered and increased patterns of exposure to certain chemicals have served to accentuate this normal Th2 skewing and therefore further promote the persistence of a Th2 bias in neonates. Furthermore, we propose...

  10. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. The ill-posedness is resolved by penalizing the likelihood function; in the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which remains explicit and intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
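The effect of such a penalty on the M-step can be sketched as follows: a hypothetical variance update for one mixture component under an inverted-gamma penalty (the MAP form of the update; the hyperparameters `alpha` and `beta` are illustrative, not taken from the record above):

```python
import numpy as np

def penalized_var_update(resp_k, x, mu_k, alpha=2.0, beta=0.5):
    """M-step variance update for one normal-mixture component with an
    inverted-gamma(alpha, beta) penalty on the variance.

    The beta > 0 term keeps the numerator away from zero, so the variance
    estimate cannot collapse onto a single data point (the classical
    likelihood singularity of unpenalized EM).
    """
    nk = resp_k.sum()                          # effective sample size of the component
    ss = (resp_k * (x - mu_k) ** 2).sum()      # weighted sum of squares
    return (ss + 2 * beta) / (nk + 2 * (alpha + 1))

# Degenerate case: one point, full responsibility; unpenalized EM would give 0.
v = penalized_var_update(np.array([1.0]), np.array([5.0]), 5.0)
```

Because the returned variance is bounded below by `2*beta / (nk + 2*(alpha + 1)) > 0`, the singular solutions mentioned in the abstract are excluded by construction.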

  11. Flow induced by a skewed vortex cylinder

    DEFF Research Database (Denmark)

    Branlard, Emmanuel Simon Pierre

    2017-01-01

The velocity field induced by a skewed vortex cylinder of longitudinal and tangential vorticity is derived in this chapter by direct integration of the Biot–Savart law. The derivation steps are provided in detail. The results of Castles and Durham for the skewed semi-infinite cylinder... The content of this chapter is based on the publication of the author entitled "Cylindrical vortex wake model: skewed cylinder, application to yawed or tilted rotors" [1]. Results from this chapter are applied: in Chap. 21 to model a wind turbine (or rotor) in yaw, in Chap. 22 to derive a new yaw...

  12. Hybrid excited claw pole generator with skewed and non-skewed permanent magnets

    Science.gov (United States)

    Wardach, Marcin

    2017-12-01

This article contains simulation results for the Hybrid Excited Claw Pole Generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the existence of non-skewed permanent magnets on the claws of one part of the rotor and skewed permanent magnets on the other. The paper presents the construction of the machine and an analysis of the influence of PM skewing on the cogging torque and back-emf. Simulation studies enabled the determination of the cogging torque and the back-emf rms for both the strengthening and the weakening of the magnetic field. The influence of magnet skewing on the cogging torque and the back-emf rms has also been analyzed.

  13. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit finite mixture models in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
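A two-component normal mixture fit of the kind described above can be sketched with a plain EM implementation. This is a minimal illustration on synthetic data, not the record's stock-market and rubber-price series:

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """EM for a two-component univariate normal mixture.

    Returns (weights, means, sds) from a simple quantile-based start;
    EM converges to a local maximum of the observed-data likelihood.
    """
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])          # crude but effective initialization
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.vstack([
            w[k] / (sd[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted moment updates
        nk = resp.sum(axis=1)
        w = nk / len(x)
        mu = (resp @ x) / nk
        sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sd

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 600), rng.normal(3, 1, 400)])
w, mu, sd = em_two_normal(x)
```

On this synthetic sample the fit recovers component means near -2 and 3 with weights near 0.6 and 0.4.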

  14. Hybrid excited claw pole generator with skewed and non-skewed permanent magnets

    Directory of Open Access Journals (Sweden)

    Wardach Marcin

    2017-12-01

Full Text Available This article contains simulation results for the Hybrid Excited Claw Pole Generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the existence of non-skewed permanent magnets on the claws of one part of the rotor and skewed permanent magnets on the other. The paper presents the construction of the machine and an analysis of the influence of PM skewing on the cogging torque and back-emf. Simulation studies enabled the determination of the cogging torque and the back-emf rms for both the strengthening and the weakening of the magnetic field. The influence of magnet skewing on the cogging torque and the back-emf rms has also been analyzed.

  15. Metric-adjusted skew information

    DEFF Research Database (Denmark)

    Liang, Cai; Hansen, Frank

    2010-01-01

We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 ... of (unbounded) metric-adjusted skew information....

  16. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
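The diagnostics used above (skewness and kurtosis of the logarithm of dissipation, and the dominance of a few high-dissipation locations) can be sketched on synthetic data. This assumes SciPy and an exactly log-normal stand-in field; the parameters are illustrative and not taken from the ocean simulations:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)

# Stand-in for a positive dissipation field: exactly log-normal samples.
eps = rng.lognormal(mean=-1.0, sigma=2.0, size=100_000)

# Log-normality diagnostic: skewness and excess kurtosis of log(eps)
log_eps = np.log(eps)
g1 = skew(log_eps)        # near 0 for a log-normal field
g2 = kurtosis(log_eps)    # excess kurtosis, near 0 for a log-normal field

# A few high-dissipation locations dominate the integrated budget:
top1_share = np.sort(eps)[-len(eps) // 100:].sum() / eps.sum()
```

With `sigma=2`, the top 1% of locations carry well over a quarter of the integrated "dissipation", illustrating why sparse observations can badly mis-estimate global budgets.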

  17. Investors’ Risk Preference Characteristics and Conditional Skewness

    Directory of Open Access Journals (Sweden)

    Fenghua Wen

    2014-01-01

Full Text Available From the perspective of behavioral finance, we take a new look at the characteristics of investors’ risk preference, building the D-GARCH-M, DR-GARCH-M, and GARCHC-M models to investigate how it changes with states of gain and loss and values of return, together with other time-varying characteristics of investors’ risk preference. Based on a full description of risk preference characteristics, we develop a GARCHCS-M model to study their effect on return skewness. The top ten stock composite indexes by market value from global stock exchanges in 2012 are adopted for the empirical analysis. The results show that investors are risk averse when they gain and risk seeking when they lose, which effectively explains the inconsistent risk-return relationship. Moreover, the degree of risk aversion rises with increasing gains, and the degree of risk seeking rises with increasing losses. Meanwhile, we find that investors’ inherent risk preference in most countries displays risk seeking, and their current risk preference is influenced by the last period’s risk preference and disturbances. Finally, investors’ risk preferences affect the conditional skewness; specifically, risk aversion reduces return skewness, while risk seeking increases it.

  18. Gaussian Process-Mixture Conditional Heteroscedasticity.

    Science.gov (United States)

    Platanios, Emmanouil A; Chatzis, Sotirios P

    2014-05-01

    Generalized autoregressive conditional heteroscedasticity (GARCH) models have long been considered as one of the most successful families of approaches for volatility modeling in financial return series. In this paper, we propose an alternative approach based on methodologies widely used in the field of statistical machine learning. Specifically, we propose a novel nonparametric Bayesian mixture of Gaussian process regression models, each component of which models the noise variance process that contaminates the observed data as a separate latent Gaussian process driven by the observed data. This way, we essentially obtain a Gaussian process-mixture conditional heteroscedasticity (GPMCH) model for volatility modeling in financial return series. We impose a nonparametric prior with power-law nature over the distribution of the model mixture components, namely the Pitman-Yor process prior, to allow for better capturing modeled data distributions with heavy tails and skewness. Finally, we provide a copula-based approach for obtaining a predictive posterior for the covariances over the asset returns modeled by means of a postulated GPMCH model. We evaluate the efficacy of our approach in a number of benchmark scenarios, and compare its performance to state-of-the-art methodologies.

  19. α-Skew π-McCoy Rings

    Directory of Open Access Journals (Sweden)

    Areej M. Abduldaim

    2013-01-01

Full Text Available As a generalization of α-skew McCoy rings, we introduce the concept of α-skew π-McCoy rings, and we study their relationships with two other new generalizations, α-skew π1-McCoy rings and α-skew π2-McCoy rings, observing the relations with α-skew McCoy rings, π-McCoy rings, α-skew Armendariz rings, π-regular rings, and other kinds of rings. We also investigate conditions under which α-skew π1-McCoy rings imply α-skew π-McCoy rings and α-skew π2-McCoy rings. We show that in the case where R is a nonreduced ring, if R is 2-primal, then R is an α-skew π-McCoy ring. Moreover, if R is a weak (α,δ)-compatible ring and R is an α-skew π1-McCoy ring, then R is α-skew π2-McCoy.

  20. Linking the Value Assessment of Oil and Gas Firms to Ambidexterity Theory Using a Mixture of Normal Distributions

    Directory of Open Access Journals (Sweden)

    Casault Sébastien

    2016-05-01

Full Text Available Oil and gas exploration and production firms have return profiles that are not easily explained by current financial theory – the variation in their market returns is non-Gaussian. In this paper, the nature and underlying reason for these significant deviations from expected behavior are considered. Understanding these differences in financial market behavior is important for a wide range of reasons, including: assessing investments, investor relations, decisions to raise capital, and assessment of firm and management performance. We show that using a “thicker-tailed” mixture of two normal distributions offers a significantly more accurate model than the traditional Gaussian approach for describing the behavior of the value of oil and gas firms. This mixture of normal distributions is also more effective in bridging the gap between management theory and practice without the need to introduce complex time-sensitive GARCH and/or jump diffusion dynamics. The mixture distribution is consistent with ambidexterity theory, which suggests firms operate in two distinct states driven by the primary focus of the firm: an exploration state with high uncertainty and an exploitation (or production) state with lower uncertainty. The findings have direct implications for improving the accuracy of real option pricing techniques and future analyses of risk management. Traditional option pricing models assume that commercial returns from these assets are described by a normal random walk. However, a normal random walk model discounts the possibility of large changes to the marketplace from events such as the discovery of important reserves or the introduction of new technology. The mixture distribution proves to be well suited to describing the unusually large risks and opportunities associated with oil and gas production and exploration. A significance testing study of 554 oil and gas exploration and production firms empirically supports using a mixture

  1. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  2. Inequalities for quantum skew information

    DEFF Research Database (Denmark)

    Audenaert, Koenraad; Cai, Liang; Hansen, Frank

    2008-01-01

An order relation is introduced on the set of functions representing quantum Fisher information that renders the set into a lattice with an involution. This order structure generates new inequalities for the metric-adjusted skew informations. In particular, the Wigner-Yanase skew information is the maximal skew information with respect to this order structure in the set of Wigner-Yanase-Dyson skew informations.

  3. Skew chromaticity

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, G.F.

    1994-01-01

The on-momentum description of linear coupling between horizontal and vertical betatron motion is extended to include off-momentum particles, introducing a vector quantity called the ''skew chromaticity''. This vector tends to be long in large superconducting storage rings, where it restricts the available working space in the tune plane and modifies collective-effect stability criteria. Skew chromaticity measurements at the Cornell Electron Storage Ring (CESR) and at the Fermilab Tevatron are reported, as well as tracking results from the Relativistic Heavy Ion Collider (RHIC). The observation of anomalous head-tail beam loss near the tune diagonal in the Tevatron is explained in terms of the extended theory, including modified criteria for head-tail stability. These results are confirmed in head-tail simulations. Sources of skew chromaticity are investigated.

  4. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sirnivas, Senu [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-12

Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low-turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we examine the wake-skewing process in some detail and quantify how differently skewed and non-skewed wakes affect a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8-degree wind-direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non-skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non-skewed wake. We feel that the differences observed between the skewed and non-skewed wakes are important enough that the skewing effect should be included in engineering wake models.

  5. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.

  6. Selection on skewed characters and the paradox of stasis.

    Science.gov (United States)

    Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel

    2017-11-01

    Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
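The sign prediction above (a directional gradient opposite to the environmental skew even when the mean phenotype sits at the optimum) can be checked with a small simulation. This sketch assumes a skew-normal phenotype distribution and Gaussian stabilizing fitness; all parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(3)

# Right-skewed phenotypes whose mean sits exactly at the fitness optimum (0),
# mimicking an evolutionary equilibrium with skewed environmental effects.
z = skewnorm(5.0).rvs(size=200_000, random_state=rng)
z = z - z.mean()

# Gaussian stabilizing selection around the optimum (width illustrative)
w = np.exp(-z**2 / (2 * 2.0**2))
w_rel = w / w.mean()

# Lande-Arnold selection differential and directional gradient: nonzero and
# opposite in sign to the (positive) skew, despite mean phenotype = optimum.
S = np.cov(w_rel, z)[0, 1]
beta = S / z.var()
```

The long right tail of the distribution sits in low-fitness territory, so the covariance between relative fitness and trait is negative even though no change in the mean phenotype is expected.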

  7. On the Empirical Importance of the Conditional Skewness Assumption in Modelling the Relationship between Risk and Return

    Science.gov (United States)

    Pipień, M.

    2008-09-01

We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw stock market. We present posterior probabilities of all competing specifications as well as a posterior analysis of the positive sign of the tested relationship.
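The hidden truncation mechanism mentioned above has a classic special case that is easy to sketch: the skew-normal arises as Z = δ|U0| + sqrt(1−δ²)·U1 for independent standard normals. The following is a minimal check of that representation against SciPy's `skewnorm`, not the record's GARCH-in-Mean construction:

```python
import numpy as np
from scipy.stats import skewnorm, skew

rng = np.random.default_rng(4)
alpha = 4.0
delta = alpha / np.sqrt(1 + alpha**2)

# Hidden-truncation representation: Z = delta*|U0| + sqrt(1-delta^2)*U1
u0 = rng.standard_normal(100_000)
u1 = rng.standard_normal(100_000)
z = delta * np.abs(u0) + np.sqrt(1 - delta**2) * u1

ref_mean = skewnorm(alpha).mean()   # closed form: delta*sqrt(2/pi)
sample_skewness = skew(z)           # positive, increasing with alpha
```

The simulated mean matches the closed-form skew-normal mean, and the sample skewness is strictly positive, as the truncation argument predicts.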

  8. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  9. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index...

  10. Induced supersolidity in a mixture of normal and hard-core bosons

    International Nuclear Information System (INIS)

    Mishra, Tapan; Das, B. P.; Pai, Ramesh V.

    2010-01-01

    We present a scenario in which a supersolid is induced in one of the components of a mixture of two species of bosonic atoms, where one species has no long-range interactions. We study a mixture of normal and hard-core bosons in which only the former possess long-range interactions. We consider three cases: the first where the total density is commensurate with the lattice, and the other two where it is incommensurate. By suitable choices of the densities of normal and hard-core bosons and the interaction strengths between them, we predict that charge density wave and supersolid orders can be induced in the hard-core species as a result of the competing interatomic interactions.

  11. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Churchfield, Matthew J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sirnivas, Senu [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-03-01

    Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we examine the wake skewing process in some detail and quantify how differently a skewed versus a non-skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8-degree wind-direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non-skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Lastly, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads depending on whether this downstream turbine is subject to a skewed or non-skewed wake. We feel that the differences observed between the skewed and non-skewed wakes are important enough that the skewing effect should be included in engineering wake models.

  12. PRINCIPLE OF SKEW QUADRUPOLE MODULATION TO MEASURE BETATRON COUPLING

    International Nuclear Information System (INIS)

    LUO, Y.; PILAT, F.; ROSER, T.

    2004-01-01

    The measurement of residual betatron coupling via skew quadrupole modulation is a new diagnostic technique that has been developed and tested at the Relativistic Heavy Ion Collider (RHIC) as a very promising method for linear decoupling on the ramp. By modulating the strengths of different skew quadrupole families, the two eigentunes are precisely measured with the phase lock loop system. The projections of the residual coupling coefficient onto the skew quadrupole modulation directions are determined, and the residual linear coupling can be corrected according to the measurement. An analytical solution for skew quadrupole modulation based on a Hamiltonian perturbation approximation is given, and a simulation code using a smooth accelerator model has also been developed. Some issues concerning the practical applications of this technique are discussed

  13. Pattern-mixture models for analyzing normal outcome data with proxy respondents.

    Science.gov (United States)

    Shardell, Michelle; Hicks, Gregory E; Miller, Ram R; Langenberg, Patricia; Magaziner, Jay

    2010-06-30

    Studies of older adults often involve interview questions regarding subjective constructs such as perceived disability. In some studies, when subjects are unable (e.g. due to cognitive impairment) or unwilling to respond to these questions, proxies (e.g. relatives or other care givers) are recruited to provide responses in place of the subject. Proxies are usually not approached to respond on behalf of subjects who respond for themselves; thus, for each subject, data from only one of the subject or proxy are available. Typically, proxy responses are simply substituted for missing subject responses, and standard complete-data analyses are performed. However, this approach may introduce measurement error and produce biased parameter estimates. In this paper, we propose using pattern-mixture models that relate non-identifiable parameters to identifiable parameters to analyze data with proxy respondents. We posit three interpretable pattern-mixture restrictions to be used with proxy data, and we propose estimation procedures using maximum likelihood and multiple imputation. The methods are applied to a cohort of elderly hip-fracture patients. (c) 2010 John Wiley & Sons, Ltd.

  14. A method for generating skewed random numbers using two overlapping uniform distributions

    International Nuclear Information System (INIS)

    Ermak, D.L.; Nasstrom, J.S.

    1995-02-01

    The objective of this work was to implement and evaluate a method for generating skewed random numbers using a combination of uniform random numbers. The method provides a simple and accurate way of generating skewed random numbers from the specified first three moments without an a priori specification of the probability density function. We describe the procedure for generating skewed random numbers from uniform random numbers, and show that it accurately produces random numbers with the desired first three moments over a range of skewness values. We also show that in the limit of zero skewness, the distribution of random numbers is an accurate approximation to the Gaussian probability density function. Future work will use this method to provide skewed random numbers for a Langevin equation model for diffusion in skewed turbulence
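The report's moment-matching algebra is not reproduced here, but the core idea, that a weighted combination of two overlapping uniform distributions has a controllable third moment, can be illustrated. The weight and ranges below are arbitrary choices for the sketch, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical mixture: with probability w draw from U(a1, b1), else U(a2, b2).
# Overlapping ranges with unequal widths and offsets give a nonzero third moment.
w, (a1, b1), (a2, b2) = 0.7, (-1.0, 1.0), (0.0, 3.0)
pick = rng.random(n) < w
x = np.where(pick, rng.uniform(a1, b1, n), rng.uniform(a2, b2, n))

# Sample skewness of the combined draw.
skew = np.mean((x - x.mean()) ** 3) / x.std() ** 3
```

Shifting the second uniform's range relative to the first tunes the skewness; making the setup symmetric drives the skewness to zero, consistent with the Gaussian-limit behavior the abstract describes.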

  15. Alpha - Skew Pi - Armendariz Rings

    Directory of Open Access Journals (Sweden)

    Areej M Abduldaim

    2018-03-01

    Full Text Available In this article we introduce a new concept called Alpha-skew Pi-Armendariz rings (Alpha-S Pi-AR) as a generalization of the notion of Alpha-skew Armendariz rings. Another important goal behind studying this class of rings is to employ it in designing a modern algorithm for an identification scheme, following the growing use of modern algebra in the field of cryptography. We investigate general properties of this concept and give examples for illustration. Furthermore, this paper studies the relationship between this concept and some previous notions related to Alpha-skew Armendariz rings. It clearly presents that every weak Alpha-skew Armendariz ring is Alpha-skew Pi-Armendariz (Alpha-S Pi-AR). Also, this article shows that the concepts of Alpha-skew Armendariz rings and Alpha-skew Pi-Armendariz rings are equivalent in case R is a 2-primal and semiprime ring. Moreover, this paper proves, for a semicommutative Alpha-compatible ring R, that if R[x;Alpha] is nil-Armendariz, then R is an Alpha-S Pi-AR. In addition, if R is an Alpha-S Pi-AR, 2-primal and semiprime ring, then N(R)[x;Alpha] = N(R[x;Alpha]). Finally, we expect Alpha-skew Pi-Armendariz rings (Alpha-S Pi-AR) to be more effective (due to their properties) in the field of cryptography than Pi-Armendariz rings, weak Armendariz rings and others. For these properties and characterizations of the introduced concept Alpha-S Pi-AR, we aspire to design a novel algorithm for an identification scheme.

  16. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Full Text Available Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD and the other for schizophrenia (SZ. A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the

  17. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
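One quantity this record highlights, the local false discovery rate under a scale mixture of two zero-mean normals, has a closed form once the mixture parameters are fixed. The null proportion and non-null scale below are hypothetical values for illustration, not estimates from the GWAS analyses.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters: pi0 = null proportion, sigma1 > 1 = non-null scale.
pi0, sigma1 = 0.95, 2.5

def local_fdr(z):
    """P(null | z) under a scale mixture of two zero-mean normals."""
    f0 = pi0 * norm.pdf(z, 0.0, 1.0)          # null component: N(0, 1)
    f1 = (1.0 - pi0) * norm.pdf(z, 0.0, sigma1)  # non-null: N(0, sigma1^2)
    return f0 / (f0 + f1)
```

Because both components are centered at zero and differ only in scale, the local fdr depends on |z| alone and decreases as |z| grows: large test statistics are increasingly likely to be non-null.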

  18. Improved models for the prediction of activity coefficients in nearly athermal mixtures: Part I. Empirical modifications of free-volume models

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios M.; Coutsikos, Philipos; Tassios, Dimitrios

    1994-01-01

    Mixtures containing exclusively normal, branched and cyclic alkanes, as well as saturated hydrocarbon polymers (e.g. poly(ethylene) and poly(isobutylene)), are known to exhibit almost athermal behavior. Several new activity coefficient models containing both combinatorial and free-volume contribu......

  19. A general mixture model for mapping quantitative trait loci by using molecular markers

    NARCIS (Netherlands)

    Jansen, R.C.

    1992-01-01

    In a segregating population a quantitative trait may be considered to follow a mixture of (normal) distributions, the mixing proportions being based on Mendelian segregation rules. A general and flexible mixture model is proposed for mapping quantitative trait loci (QTLs) by using molecular markers.

  20. Skew quad compensation at PEP

    International Nuclear Information System (INIS)

    Murray, J.J.

    1977-10-01

    Rotational and focal effects of solenoids used in PEP detectors will cause severe perturbations of machine beam optics and must be corrected. Ordinarily this would be accomplished by the addition of compensating solenoids and adjustment of insertion quadrupole strengths. It has been found that an arbitrary cross plane coupling representing the effects of solenoids and/or skew quads in any combination can be synthesized (or compensated) exactly using a quartet of skew quads combined with other erect transport elements in a wide variety of configurations. Specific skew quad compensating systems for PEP have been designed and are under study by PEP staff. So far no fundamental flaws have been discovered. In view of that, PEP management has tentatively authorized the use of such a system in the PEP-4, PEP-9 experiments and proposes to leave the question open "without prejudice" for other experiments. Use of skew quad compensation involves an imponderable risk, of course, simply because the method is new and untested. But in addition to providing the only known method for dealing with skew quad perturbations, skew quad compensation, as an alternate to compensating solenoids, promises to be much cheaper, to require much less power and to occupy much less space in the IRs. The purpose of this note is to inform potential users of the foregoing situation and to explain skew quad compensation more fully. 2 refs., 1 fig., 1 tab

  1. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    Science.gov (United States)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
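The sign argument in this record, that mixing age groups whose variation shrinks (or grows) with the mean produces negative (or positive) skewness, is easy to verify with a toy mixture of normals. The group means and standard deviations below are invented for illustration, not taken from the height datasets.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

def sample_mixture(means, sds):
    """Equal-weight mixture of normals, one component per (toy) age group."""
    k = rng.integers(0, len(means), n)
    return rng.normal(np.asarray(means)[k], np.asarray(sds)[k])

def skewness(x):
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# Late puberty: taller groups vary less -> mixture is negatively skewed.
late = sample_mixture([150.0, 160.0, 170.0], [6.0, 4.0, 2.0])
# Early puberty: taller groups vary more -> mixture is positively skewed.
early = sample_mixture([130.0, 140.0, 150.0], [2.0, 4.0, 6.0])
```

The negatively skewed mixture sits closer to a normal shape and the positively skewed one closer to a log-normal shape, which is the mimicked "transition" the study describes.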

  2. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.
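The ascent property mentioned above, that each EM iteration cannot decrease the observed-data log-likelihood, can be checked on a deliberately simplified cousin of the paper's model: a two-component normal mixture with known unit variances. This is a toy stand-in for the mixture-of-regressions setting, with invented data and starting values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

# Toy EM for a two-component normal mixture with unit variances.
w, m1, m2 = 0.5, -0.5, 0.5
loglik = []
for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point.
    a = w * norm.pdf(x, m1, 1.0)
    b = (1 - w) * norm.pdf(x, m2, 1.0)
    loglik.append(np.sum(np.log(a + b)))   # log-likelihood at current params
    r = a / (a + b)
    # M-step: update mixing weight and component means.
    w = r.mean()
    m1 = np.sum(r * x) / np.sum(r)
    m2 = np.sum((1 - r) * x) / np.sum(1 - r)
```

Tracking `loglik` across iterations shows the monotone ascent that the paper establishes (in an asymptotic sense) for its modified EM algorithm.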

  3. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies

    DEFF Research Database (Denmark)

    Thompson, Wesley K.; Wang, Yunpeng; Schork, Andrew J.

    2015-01-01

    -wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via...... analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn’s disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While...... minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local...

  4. Normal uniform mixture differential gene expression detection for cDNA microarrays

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2005-07-01

    Full Text Available Abstract Background One of the primary tasks in analysing gene expression data is finding genes that are differentially expressed in different samples. Multiple testing issues due to the thousands of tests run make some of the more popular methods for doing this problematic. Results We propose a simple method, Normal Uniform Differential Gene Expression (NUDGE) detection, for finding differentially expressed genes in cDNA microarrays. The method uses a simple univariate normal-uniform mixture model, in combination with new normalization methods for spread as well as mean that extend the lowess normalization of Dudoit, Yang, Callow and Speed (2002). It takes account of multiple testing, and gives probabilities of differential expression as part of its output. It can be applied to either single-slide or replicated experiments, and it is very fast. Three datasets are analyzed using NUDGE, and the results are compared to those given by other popular methods: unadjusted and Bonferroni-adjusted t tests, Significance Analysis of Microarrays (SAM), and Empirical Bayes for microarrays (EBarrays) with both Gamma-Gamma and Lognormal-Normal models. Conclusion The method gives a high probability of differential expression to genes known or suspected a priori to be differentially expressed and a low probability to the others. In terms of known false positives and false negatives, the method outperforms all multiple-replicate methods except the Gamma-Gamma EBarrays method, to which it offers comparable results with the added advantages of greater simplicity, speed, fewer assumptions and applicability to the single-replicate case. An R package called nudge to implement the methods in this paper will be made available soon at http://www.bioconductor.org.

  5. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    Science.gov (United States)

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
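The MLE approach compared in this record maximizes a censored likelihood: observed values contribute density terms and below-detection-limit values contribute a CDF term. A minimal sketch for the lognormal case follows, with simulated data; the true parameters, detection limit, and sample size are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Simulated lognormal concentrations with a (hypothetical) detection limit.
true_mu, true_sigma, dl = 1.0, 0.5, 2.0
x = rng.lognormal(true_mu, true_sigma, 500)
observed = x[x >= dl]
n_cens = int(np.sum(x < dl))        # below-detection-limit count

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)       # keep sigma positive
    # Density terms for observed values (lognormal = normal on the log scale,
    # with a 1/x Jacobian that is constant in the parameters).
    ll_obs = norm.logpdf(np.log(observed), mu, sigma).sum() - np.log(observed).sum()
    # Each censored value contributes P(X < dl).
    ll_cens = n_cens * norm.logcdf((np.log(dl) - mu) / sigma)
    return -(ll_obs + ll_cens)

fit = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

As the record notes, this estimator performs well when the assumed distribution is correct but can degrade badly under misspecification, which is why the robust rROS/GROS alternatives are also compared.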

  6. Analysis of Parasite and Other Skewed Counts

    Science.gov (United States)

    Alexander, Neal

    2012-01-01

    Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten year period of Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in identifying measures of location, in particular the Williams and geometric mean. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing 1) non-parametric methods, while noting that they are not simply comparisons of medians, and 2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods, such as the bootstrap, with potential for greater use are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
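The measures of location compared in this review are easy to confuse; in particular, the Williams mean is the geometric mean of the counts plus one, minus one, which remains defined when zero counts are present. A small sketch on simulated overdispersed counts (invented negative-binomial parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
# Skewed, parasite-like counts: negative binomial with mean 10, heavily overdispersed.
counts = rng.negative_binomial(n=1, p=1 / 11, size=10_000)

arith = counts.mean()
# Williams mean: geometric mean of (x + 1), minus 1 -- handles zero counts.
williams = np.exp(np.mean(np.log(counts + 1))) - 1
# A plain geometric mean is undefined with zeros unless they are excluded.
geo_nonzero = np.exp(np.mean(np.log(counts[counts > 0])))
```

By the AM-GM inequality the Williams mean falls below the arithmetic mean for skewed counts, which is one reason the review stresses saying explicitly which measure of location a paper reports.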

  7. Generation of net sediment transport by velocity skewness in oscillatory sheet flow

    Science.gov (United States)

    Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin

    2018-01-01

    This study utilizes a qualitative approach and a two-phase numerical model to investigate net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived based on the pseudo-laminar approximation of boundary layer velocity and exponential approximation of concentration. The two-phase model can obtain well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. It can especially illustrate the difference between positive and negative flow stages caused by velocity skewness, which is considerably important in determining the net boundary layer flow and sediment transport direction. The two-phase model also explains the effect of sediment diameter and phase-lag to sediment transport by comparing the instantaneous-type formulas to better illustrate velocity skewness effect. In previous studies about sheet flow transport in pure velocity-skewed flows, net sediment transport is only attributed to the phase-lag effect. In the present study with the qualitative approach and two-phase model, phase-lag effect is shown important but not sufficient for the net sediment transport beneath pure velocity-skewed flow and current, while the asymmetric wave boundary layer development between positive and negative flow stages also contributes to the sediment transport.

  8. Quantum skew divergence

    Energy Technology Data Exchange (ETDEWEB)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk [Department of Mathematics, Royal Holloway University of London, Egham TW20 0EX, United Kingdom and Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent (Belgium)

    2014-11-15

    In this paper, we study the quantum generalisation of the skew divergence, which is a dissimilarity measure between distributions introduced by Lee in the context of natural language processing. We provide an in-depth study of the quantum skew divergence, including its relation to other state distinguishability measures. Finally, we present a number of important applications: new continuity inequalities for the quantum Jensen-Shannon divergence and the Holevo information, and a new and short proof of Bravyi's Small Incremental Mixing conjecture.
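The classical skew divergence that this paper generalises can be sketched directly. The definition below follows the commonly cited form, KL(p || alpha*q + (1 - alpha)*p): smoothing the second argument with the first keeps the divergence finite even when q assigns zero probability where p does not.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence; 0 * log(0) terms are dropped."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, alpha=0.99):
    # Smooth q with a small amount of p before comparing.
    return kl(p, alpha * q + (1 - alpha) * p)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])

# KL(p || q) is infinite because q has a zero where p does not,
# but the skew divergence stays finite.
sd = skew_divergence(p, q)
```

Here the finite value is 0.5 * log(1/0.01) for the mismatched first cell, while the plain KL divergence diverges; this finiteness is what made the measure useful in Lee's language-processing setting.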

  9. Network Skewness Measures Resilience in Lake Ecosystems

    Science.gov (United States)

    Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.

    2017-12-01

    Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.

  10. Finding the Right Distribution for Highly Skewed Zero-inflated Clinical Data

    Directory of Open Access Journals (Sweden)

    Resmi Gupta

    2013-03-01

    Full Text Available Discrete, highly skewed distributions with excess numbers of zeros often result in biased estimates and misleading inferences if the zeros are not properly addressed. A clinical example of children with electrophysiologic disorders, in which many of the children are treated without surgery, is provided. The purpose of the current study was to identify the optimal modeling strategy for highly skewed, zero-inflated data often observed in the clinical setting by: (a) simulating skewed, zero-inflated count data; (b) fitting the simulated data with Poisson, Negative Binomial, Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) models; and (c) applying the aforementioned models to actual, highly skewed, clinical data of children with an EP disorder. The ZIP model was observed to be the optimal model based on traditional fit statistics as well as estimates of bias, mean-squared error, and coverage.
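The ZIP model favored in this record has a two-part likelihood: zeros arise either structurally (with probability pi) or from the Poisson component, while positive counts come from the Poisson component alone. A minimal maximum-likelihood sketch on simulated data follows; the true parameters and sample size are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)

# Simulate zero-inflated Poisson: structural zero w.p. pi, else Poisson(lam).
pi_true, lam_true, n = 0.4, 3.0, 2000
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

def zip_nll(theta):
    logit_pi, log_lam = theta            # unconstrained parameterization
    pi = 1 / (1 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    # P(Y = 0) mixes structural and Poisson zeros.
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    # P(Y = y) for y > 0 comes only from the Poisson component.
    yp = y[y > 0]
    ll_pos = np.log(1 - pi) - lam + yp * np.log(lam) - gammaln(yp + 1)
    return -(np.sum(y == 0) * ll_zero + ll_pos.sum())

fit = minimize(zip_nll, x0=[0.0, 0.0])
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
```

Fitting a plain Poisson to such data would inflate the variance estimate and bias the mean downward, which is the bias the simulation study in this record quantifies.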

  11. Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs

    DEFF Research Database (Denmark)

    Jensen, Jesper Højvang; Ellis, Dan P. W.; Christensen, Mads Græsbøll

    2007-01-01

    In music similarity and in the related task of genre classification, a distance measure between Gaussian mixture models is frequently needed. We present a comparison of the Kullback-Leibler distance, the earth movers distance and the normalized L2 distance for this application. Although...

  12. WFC3/UVIS image skew

    Science.gov (United States)

    Petro, Larry

    2009-07-01

    This proposal will provide an independent check of the skew in the ACS astrometric catalog of Omega Cen stars, using exposures taken in a 45-degree range of telescope roll. The roll sequence will also provide a test for orbital variation of skew and for field-angle-dependent PSF variations. The astrometric catalog of Omega Cen, corrected for skew, will be used to derive the geometric distortion for all UVIS filters, which has been preliminarily determined from F606W images and an astrometric catalog of 47 Tuc.

  13. Quantum Fisher and skew information for Unruh accelerated Dirac qubit

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Subhashish; Alok, Ashutosh Kumar [Indian Institute of Technology Jodhpur, Jodhpur (India); Omkar, S. [Indian Institute of Science Education and Research, Thiruvananthapuram (India)

    2016-08-15

    We develop a Bloch vector representation of the Unruh channel for a Dirac field mode. This is used to provide a unified, analytical treatment of quantum Fisher and skew information for a qubit subjected to the Unruh channel, both in its pure form as well as in the presence of experimentally relevant external noise channels. The time evolution of Fisher and skew information is studied along with the impact of external environment parameters such as temperature and squeezing. The external noises are modelled by both purely dephasing phase damping and the squeezed generalised amplitude damping channels. An interesting interplay between the external reservoir temperature and squeezing on the Fisher and skew information is observed, in particular, for the action of the squeezed generalised amplitude damping channel. It is seen that for some regimes, squeezing can enhance the quantum information against the deteriorating influence of the ambient environment. Similar features are also observed for the analogous study of skew information, highlighting a similar origin of the Fisher and skew information. (orig.)

  14. Quantum Fisher and skew information for Unruh accelerated Dirac qubit

    International Nuclear Information System (INIS)

    Banerjee, Subhashish; Alok, Ashutosh Kumar; Omkar, S.

    2016-01-01

    We develop a Bloch vector representation of the Unruh channel for a Dirac field mode. This is used to provide a unified, analytical treatment of quantum Fisher and skew information for a qubit subjected to the Unruh channel, both in its pure form as well as in the presence of experimentally relevant external noise channels. The time evolution of Fisher and skew information is studied along with the impact of external environment parameters such as temperature and squeezing. The external noises are modelled by both purely dephasing phase damping and the squeezed generalised amplitude damping channels. An interesting interplay between the external reservoir temperature and squeezing on the Fisher and skew information is observed, in particular, for the action of the squeezed generalised amplitude damping channel. It is seen that for some regimes, squeezing can enhance the quantum information against the deteriorating influence of the ambient environment. Similar features are also observed for the analogous study of skew information, highlighting a similar origin of the Fisher and skew information. (orig.)

  15. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    Science.gov (United States)

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74 % of univariate distributions and 68 % of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17 % in a t-test and 30 % in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis using SAS, SPSS, R and a newly developed Web application.
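
    A minimal numpy sketch of the quantities discussed — univariate sample skewness/kurtosis and Mardia's multivariate skewness — as a generic illustration, not the authors' tutorial code:

    ```python
    import numpy as np

    def skew_kurt(x):
        # Univariate sample skewness and excess kurtosis (population formulas)
        z = (x - x.mean()) / x.std()
        return (z ** 3).mean(), (z ** 4).mean() - 3

    def mardia_skewness(X):
        # Mardia's b_{1,p}: average of ((x_i - m)' S^{-1} (x_j - m))^3 over all pairs
        n = len(X)
        Xc = X - X.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
        D = Xc @ S_inv @ Xc.T
        return (D ** 3).sum() / n ** 2

    rng = np.random.default_rng(0)
    s_norm, _ = skew_kurt(rng.normal(size=10_000))        # near 0 for normal data
    s_exp, _ = skew_kurt(rng.exponential(size=10_000))    # true skewness is 2
    b1p = mardia_skewness(rng.normal(size=(2_000, 3)))    # near 0 for multivariate normal
    ```

    For multivariate normal data, b_{1,p} is close to zero; large values signal the kind of nonnormality the study found to be prevalent.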

  16. SKEWNESS IN STOCK RETURNS: EVIDENCE FROM THE BUCHAREST STOCK EXCHANGE DURING 2000 – 2011

    Directory of Open Access Journals (Sweden)

    IULIAN PANAIT

    2012-05-01

    Full Text Available Our paper investigates the symmetry in stock returns of the 30 most liquid companies traded on the Bucharest Stock Exchange during 2000 – 2011 and also the 5 most representative market indices. Our daily data shows that skewness estimates are slightly negative for most indices and individual stocks, but only a few exhibit values significantly different from the characteristics of a normal distribution. We compare our results with skewness estimates for 21 major and emerging stock market indices around the world and find that such results are similar to other low capitalization and trading volume markets. For all the Romanian and international assets studied, the Studentized-Range (St-R) and Jarque-Bera (J-B) tests reject the hypothesis of normal distribution of daily returns.
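
    The Jarque-Bera statistic used in such normality tests combines sample skewness and kurtosis; a minimal implementation of the generic formula (not the authors' code) might be:

    ```python
    import numpy as np

    def jarque_bera_stat(x):
        # JB = n/6 * (S^2 + (K - 3)^2 / 4); asymptotically chi-squared(2) under normality
        n = len(x)
        z = (x - x.mean()) / x.std()
        S = (z ** 3).mean()   # sample skewness
        K = (z ** 4).mean()   # sample kurtosis (non-excess)
        return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

    rng = np.random.default_rng(0)
    jb_normal = jarque_bera_stat(rng.normal(size=5_000))      # small: normality not rejected
    jb_skewed = jarque_bera_stat(rng.lognormal(size=5_000))   # huge: normality rejected
    ```

    Heavy-tailed, skewed return series produce very large JB values, which is why the test rejects normality for all the assets in the study.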

  17. Handling Data Skew in MapReduce Cluster by Using Partition Tuning

    Directory of Open Access Journals (Sweden)

    Yufei Gao

    2017-01-01

    Full Text Available The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
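
    PTSH itself repartitions key-value pairs in two stages via virtual partitions; as a much simpler illustration of the underlying idea of load-aware key assignment, here is a greedy largest-first heuristic (a hypothetical sketch, not the PTSH algorithm):

    ```python
    from collections import Counter

    def balanced_key_assignment(pairs, n_partitions):
        # Assign each key's records to the currently lightest partition,
        # heaviest keys first (greedy LPT-style balancing).
        counts = Counter(key for key, _ in pairs)
        loads = [0] * n_partitions
        assignment = {}
        for key, c in counts.most_common():
            p = min(range(n_partitions), key=loads.__getitem__)
            assignment[key] = p
            loads[p] += c
        return assignment, loads

    # One hot key ("hot", 90 records) plus ten light keys with one record each
    pairs = [("hot", i) for i in range(90)] + [(k, 0) for k in "abcdefghij"]
    assignment, loads = balanced_key_assignment(pairs, 4)
    ```

    Note the limitation this sketch shares with naive partitioners: a single hot key still lands in one partition, which is precisely what PTSH's virtual partitions are designed to break up.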

  18. A turbulence model in mixtures. First part: Statistical description of mixture

    International Nuclear Information System (INIS)

    Besnard, D.

    1987-03-01

    Classical theory of mixtures gives a model for molecular mixtures. This kind of model is based on a small-gradient approximation for concentration, temperature, and pressure. We present here a mixture model allowing for large gradients in the flow. We also show that, with a local balance assumption between material diffusion and flow gradients evolution, we obtain a model similar to those mentioned above [fr]

  19. Separation of variables in anisotropic models and non-skew-symmetric elliptic r-matrix

    Science.gov (United States)

    Skrypnyk, Taras

    2017-05-01

    We solve a problem of separation of variables for the classical integrable Hamiltonian systems possessing Lax matrices satisfying linear Poisson brackets with the non-skew-symmetric, non-dynamical elliptic so(3)⊗so(3)-valued classical r-matrix. Using the corresponding Lax matrices, we present a general form of the "separating functions" B(u) and A(u) that generate the coordinates and the momenta of separation for the associated models. We consider several examples and perform the separation of variables for the classical anisotropic Euler's top, the Steklov-Lyapunov model of the motion of an anisotropic rigid body in a liquid, the two-spin generalized Gaudin model and a "spin" generalization of the Steklov-Lyapunov model.

  20. Skew-adjacency matrices of graphs

    NARCIS (Netherlands)

    Cavers, M.; Cioaba, S.M.; Fallat, S.; Gregory, D.A.; Haemers, W.H.; Kirkland, S.J.; McDonald, J.J.; Tsatsomeros, M.

    2012-01-01

    The spectra of the skew-adjacency matrices of a graph are considered as a possible way to distinguish adjacency cospectral graphs. This leads to the following topics: graphs whose skew-adjacency matrices are all cospectral; relations between the matchings polynomial of a graph and the characteristic

  1. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  2. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, allowing in particular tracking of both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)

  3. Individual loss reserving with the Multivariate Skew Normal model

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2011-01-01

    In general insurance, the evaluation of future cash flows and solvency capital has become increasingly important. To assist in this process, the present paper proposes an individual discrete-time loss reserving model describing the occurrence, the reporting delay, the time to the first payment, and

  4. Investigation of free vibration characteristics for skew multiphase magneto-electro-elastic plate

    Science.gov (United States)

    Kiran, M. C.; Kattimani, S.

    2018-04-01

    This article presents the investigation of a skew multiphase magneto-electro-elastic (MMEE) plate to assess its free vibration characteristics. A finite element (FE) model is formulated considering the different couplings involved via coupled constitutive equations. The transformation matrices are derived to transform local degrees of freedom into global degrees of freedom for the nodes lying on the skew edges. The effect of different volume fractions (Vf) on the free vibration behavior is explicitly studied. In addition, the influence of the width-to-thickness ratio, the aspect ratio, and the stacking arrangement on the natural frequencies of the skew multiphase MEE plate is investigated. Particular attention has been paid to the effect of skew angle on the non-dimensional eigenfrequencies of the multiphase MEE plate with simply supported edges.

  5. The Skew Risk Premium in the Equity Index Market

    OpenAIRE

    Roman Kozhan; Anthony Neuberger; Paul Schneider

    2013-01-01

    We develop a new method for measuring moment risk premiums. We find that the skew premium accounts for over 40% of the slope in the implied volatility curve in the S&P 500 market. Skew risk is tightly related to variance risk, in the sense that strategies designed to capture the one and hedge out exposure to the other earn an insignificant risk premium. This provides a new testable restriction for asset pricing models trying to capture, in particular, disaster risk premiums. We base our resul...

  6. On modeling of structured multiphase mixtures

    International Nuclear Information System (INIS)

    Dobran, F.

    1987-01-01

    The usual modeling of multiphase mixtures involves a set of conservation and balance equations of mass, momentum, energy and entropy (the basic set) constructed by an averaging procedure or postulated. The averaged models are constructed by averaging, over space or time segments, the local macroscopic field equations of each phase, whereas the postulated models are usually motivated by the single-phase multicomponent mixture models. In both situations, the resulting equations yield superimposed continua models and are closed by the constitutive equations which place restrictions on the possible material response during the motion and phase change. In modeling structured multiphase mixtures, the intrinsic motion of grains or particles is accounted for by adjoining additional balance equations to the basic set of field equations, thereby placing restrictions on the motion of phases only within the imposed extrinsic and intrinsic sources. The use of additional balance equations has been primarily advocated in the postulatory theories of multiphase mixtures, and these equations are usually derived through very special assumptions on the material deformation. Nevertheless, the resulting mixture models can predict a wide variety of complex phenomena such as the Mohr-Coulomb yield criterion in granular media, the Rayleigh bubble equation, wave dispersion and dilatancy. Fundamental to the construction of structured models of multiphase mixtures are the problems pertaining to the existence and number of additional balance equations needed to model the structural characteristics of a mixture. Utilizing a volume averaging procedure it is possible not only to derive the basic set of field equations discussed above, but also a very general set of additional balance equations for modeling the structural properties of the mixture.

  7. Skewed Binary Search Trees

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel

    2006-01-01

    It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor over the running time for a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, we in this paper study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). In this paper we present an experimental study of various memory layouts of static skewed binary search trees, where each
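
    The fixed left-subtree ratio that defines skewed binary search trees is easy to sketch; `build_skewed` below is a hypothetical helper illustrating the construction, not the authors' implementation:

    ```python
    def build_skewed(keys, alpha):
        # keys must be sorted; the root is chosen so that a fraction alpha of the
        # remaining keys falls in the left subtree (alpha = 0.5 gives a balanced tree)
        if not keys:
            return None
        i = int(alpha * (len(keys) - 1))
        return (keys[i],
                build_skewed(keys[:i], alpha),
                build_skewed(keys[i + 1:], alpha))

    def search(node, key):
        # Standard BST search over (key, left, right) tuples
        while node is not None:
            k, left, right = node
            if key == k:
                return True
            node = left if key < k else right
        return False

    tree = build_skewed(list(range(100)), alpha=0.3)   # right-leaning tree
    ```

    With alpha < 1/2, searches branch right more often, which is the asymmetry the paper exploits when branch costs differ.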

  8. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  9. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    Science.gov (United States)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions allow one to improve the flexibility of the parametric approach and seem to be quite well-adapted to capture the skewness, the long tails, as well as the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.

  10. Option-Based Estimation of the Price of Co-Skewness and Co-Kurtosis Risk

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Fournier, Mathieu; Jacobs, Kris

    We show that the prices of risk for factors that are nonlinear in the market return are readily obtained using index option prices. We apply this insight to the price of co-skewness and co-kurtosis risk. The price of co-skewness risk corresponds to the spread between the physical and the risk-neutral second moments, and the price of co-kurtosis risk corresponds to the spread between the physical and the risk-neutral third moments. The option-based estimates of the prices of risk lead to reasonable values of the associated risk premia. An out-of-sample analysis of factor models with co-skewness and co-kurtosis risk indicates that the new estimates of the price of risk improve the models' performance. Models with higher-order market moments also robustly outperform standard competitors such as the CAPM and the Fama-French model....

  11. Option-Based Estimation of the Price of Co-Skewness and Co-Kurtosis Risk

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Fournier, Mathieu; Jacobs, Kris

    We show that the prices of risk for factors that are nonlinear in the market return are readily obtained using index option prices. We apply this insight to the price of co-skewness and co-kurtosis risk. The price of co-skewness risk corresponds to the spread between the physical and the risk-neutral second moments, and the price of co-kurtosis risk corresponds to the spread between the physical and the risk-neutral third moments. The option-based estimates of the prices of risk lead to reasonable values of the associated risk premia. An out-of-sample analysis of factor models with co-skewness and co-kurtosis risk indicates that the new estimates of the price of risk improve the models' performance. Models with higher-order market moments also robustly outperform standard competitors such as the CAPM and the Fama-French model....

  12. A Bayesian estimate of the concordance correlation coefficient with skewed data.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
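
    For reference, the classical plug-in estimate of the CCC — the quantity the paper estimates in a Bayesian way for skewed data — can be computed as:

    ```python
    import numpy as np

    def lin_ccc(x, y):
        # Lin's concordance correlation coefficient:
        # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
        mx, my = x.mean(), y.mean()
        sxy = ((x - mx) * (y - my)).mean()
        return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000)
    perfect = lin_ccc(x, x)          # exact agreement gives CCC = 1
    biased = lin_ccc(x, x + 0.5)     # a location shift is penalized, unlike Pearson's r
    ```

    Unlike the Pearson correlation, the CCC penalizes systematic location and scale differences, which is what makes it a measure of agreement rather than mere association.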

  13. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  14. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  15. Development and Validation of a New Blade Element Momentum Skewed-Wake Model within AeroDyn: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ning, S. A.; Hayman, G.; Damiani, R.; Jonkman, J.

    2014-12-01

    Blade element momentum methods, though conceptually simple, are highly useful for analyzing wind turbine aerodynamics and are widely used in many design and analysis applications. A new version of AeroDyn is being developed to take advantage of new robust solution methodologies, conform to a new modularization framework for the National Renewable Energy Laboratory's FAST, utilize advanced skewed-wake analysis methods, fix limitations of previous implementations, and enable modeling of highly flexible and nonstraight blades. This paper reviews blade element momentum theory and several of the options available for analyzing skewed inflow. AeroDyn implementation details are described for the benefit of users and developers. These new options are compared to solutions from the previous version of AeroDyn and to experimental data. Finally, recommendations are given on how one might select from the various available solution approaches.

  16. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Science.gov (United States)

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, skewed sex ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits on sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
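
    The perturbation idea, applied to the stable stage distribution rather than lambda, can be sketched with a toy projection matrix; the vital rates below are made up, and a real two-sex model would be nonlinear, so this is only an illustration of the mechanics:

    ```python
    import numpy as np

    def stable_stage(A):
        # Stable stage distribution: dominant right eigenvector, scaled to sum to 1
        vals, vecs = np.linalg.eig(A)
        v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return v / v.sum()

    # Toy 2-stage (e.g. female, male) projection matrix with made-up vital rates
    A = np.array([[0.5, 0.8],
                  [0.4, 0.6]])
    base = stable_stage(A)

    # Finite-difference sensitivity of the stage distribution to one vital rate
    d = 1e-6
    Ap = A.copy()
    Ap[1, 0] += d
    sens = (stable_stage(Ap) - base) / d   # entries sum to ~0: shares trade off
    ```

    Because the stage distribution is normalized, its sensitivities sum to roughly zero: increasing one stage's share necessarily decreases the others', which is what makes it a natural target for decomposing sex-ratio skew.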

  17. Probabilistic mixture-based image modelling

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Havlíček, Vojtěch; Grim, Jiří

    2011-01-01

    Roč. 47, č. 3 (2011), s. 482-500 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 Grant - others:CESNET(CZ) 387/2010; GA MŠk(CZ) 2C06019; GA ČR(CZ) GA103/11/0335 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF texture modelling * discrete distribution mixtures * Bernoulli mixture * Gaussian mixture * multi-spectral texture modelling Subject RIV: BD - Theory of Information Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/haindl-0360244.pdf

  18. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero-Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.

  19. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero-Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise for the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  20. Isomorphic Operators and Functional Equations for the Skew-Circulant Algebra

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    Full Text Available The skew-circulant matrix has been used in solving ordinary differential equations. We prove that the set of skew-circulants with complex entries has an idempotent basis. On that basis, a skew-cyclic group of automorphisms and functional equations on the skew-circulant algebra is introduced. Different operators on a linear vector space that are isomorphic to the algebra of n×n complex skew-circulant matrices are also displayed in this paper.
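
    A skew-circulant matrix shifts its first row rightward in each successive row and negates the entries that wrap around; the sketch below builds one and checks that the resulting algebra is closed and commutative under multiplication (a generic illustration, not the paper's construction):

    ```python
    import numpy as np

    def skew_circulant(first_row):
        # M[i, j] = a_{j-i} for j >= i, and -a_{n+j-i} for j < i:
        # each row is the previous one shifted right, wrapped entries negated.
        a = np.asarray(first_row, dtype=float)
        n = len(a)
        M = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                M[i, j] = a[j - i] if j >= i else -a[n + j - i]
        return M

    rng = np.random.default_rng(0)
    A = skew_circulant(rng.normal(size=4))
    B = skew_circulant(rng.normal(size=4))
    C = A @ B   # the product is again skew-circulant, determined by its first row
    ```

    Closure holds because every skew-circulant is a polynomial in the shift matrix Z with Z^n = -I, so products of skew-circulants commute and stay in the algebra.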

  1. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  2. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    Science.gov (United States)

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistic isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional new parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where parameters Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
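
    The Box-Cox transform at the heart of the "transformation to both sides" is simple to state; the sketch below shows the lambda = 0 (log) case removing the skew of lognormal responses, as a generic illustration rather than the paper's fitting procedure:

    ```python
    import numpy as np

    def boxcox(y, lam):
        # Box-Cox power transform; lam = 0 is the log limit
        return np.log(y) if lam == 0 else (y ** lam - 1) / lam

    def sample_skew(x):
        z = (x - x.mean()) / x.std()
        return (z ** 3).mean()

    rng = np.random.default_rng(0)
    y = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # strongly right-skewed
    skew_before = sample_skew(y)
    skew_after = sample_skew(boxcox(y, 0))                # log of lognormal is normal
    ```

    Applying the same transform to both the response and the model prediction (rather than to the response alone) is what preserves the dose-response interpretation while normalizing the residuals.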

  3. A Skew-t space-varying regression model for the spectral analysis of resting state brain activity.

    Science.gov (United States)

    Ismail, Salimah; Sun, Wenqi; Nathoo, Farouk S; Babul, Arif; Moiseev, Alexander; Beg, Mirza Faisal; Virji-Babul, Naznin

    2013-08-01

    It is known that in many neurological disorders such as Down syndrome, main brain rhythms shift their frequencies slightly, and characterizing the spatial distribution of these shifts is of interest. This article reports on the development of a Skew-t mixed model for the spatial analysis of resting state brain activity in healthy controls and individuals with Down syndrome. Time series of oscillatory brain activity are recorded using magnetoencephalography, and spectral summaries are examined at multiple sensor locations across the scalp. We focus on the mean frequency of the power spectral density, and use space-varying regression to examine associations with age, gender and Down syndrome across several scalp regions. Spatial smoothing priors are incorporated based on a multivariate Markov random field, and the markedly non-Gaussian nature of the spectral response variable is accommodated by the use of a Skew-t distribution. A range of models representing different assumptions on the association structure and response distribution are examined, and we conduct model selection using the deviance information criterion. Our analysis suggests region-specific differences between healthy controls and individuals with Down syndrome, particularly in the left and right temporal regions, and produces smoothed maps indicating the scalp topography of the estimated differences.

  4. Objective Bayesian Analysis of Skew- t Distributions

    KAUST Repository

    BRANCO, MARCIA D'ELIA; GENTON, MARC G.; LISEO, BRUNERO

    2012-01-01

    We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric

  5. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  6. Elevated mortality among birds in Chernobyl as judged from skewed age and sex ratios.

    Directory of Open Access Journals (Sweden)

    Anders Pape Møller

    Full Text Available Radiation has negative effects on survival of animals including humans, although the generality of this claim is poorly documented under low-dose field conditions. Because females may suffer disproportionately from the effects of radiation on survival due to differences in sex roles during reproduction, radiation-induced mortality may result in male-skewed adult sex ratios. We estimated the effects of low-dose radiation on adult survival rates in birds by determining age ratios of adults captured in mist nets during the breeding season in relation to background radiation levels around Chernobyl and in nearby uncontaminated control areas. Age ratios were skewed towards yearlings, especially in the most contaminated areas, implying that adult survival rates were reduced in contaminated areas, and that populations in such areas could only be maintained through immigration from nearby uncontaminated areas. Differential mortality in females resulted in a strongly male-skewed sex ratio in the most contaminated areas. In addition, males sang disproportionately commonly in the most contaminated areas where the sex ratio was male-skewed, presumably because males had difficulty finding and acquiring mates when females were rare. The results were not caused by permanent emigration by females from the most contaminated areas because none of the recaptured birds had changed breeding site, and the proportion of individuals with morphological abnormalities did not differ significantly between the sexes for areas with normal and higher levels of contamination. These findings are consistent with the hypothesis that the adult survival rate of female birds is particularly susceptible to the effects of low-dose radiation, resulting in male-skewed sex ratios at high levels of radiation. Such skewed age ratios towards yearlings in contaminated areas are consistent with the hypothesis that an area exceeding 30,000 km² in Chernobyl's surroundings constitutes an

  7. Few Skewed Results from IOTA Interferometer YSO Disk Survey

    Science.gov (United States)

    Monnier, J. D.; Millan-Gabet, R.; Berger, J.-P.; Pedretti, E.; Traub, W.; Schloerb, F. P.

    2005-12-01

    The 3-telescope IOTA interferometer is capable of measuring closure phases for dozens of Herbig Ae/Be stars in the near-infrared. The closure phase unambiguously identifies deviations from centro-symmetry (i.e., skew) in the brightness distribution, at the scale of 4 milliarcseconds (sub-AU physical scales) for our work. Indeed, hot dust emission from the inner circumstellar accretion disk is expected to be skewed for (generic) flared disks viewed at intermediate inclination angles, as has been observed for LkHa 101. Surprisingly, we find very little evidence for skewed disk emission in our IOTA3 sample, setting strong constraints on the geometry of the inner disk. In particular, we rule out the currently popular model of a VERTICAL hot inner wall of dust at the sublimation radius. Instead, our data are more consistent with a curved inner wall that bends away from the midplane, as might be expected from the pressure dependence of dust sublimation or limited absorption of stellar luminosity in the disk midplane by gas.

  8. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    Science.gov (United States)

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…
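    A toy version of the score-to-rank transformation discussed above (a single skewed sample rather than the article's independent-samples designs, so purely schematic): the ranks of distinct scores are a permutation of 1..n, so rank transformation removes marginal skewness entirely.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = rng.exponential(scale=1.0, size=1000)  # heavily right-skewed scores

ranks = stats.rankdata(scores)                  # transform scores to ranks 1..n

print(f"skewness of scores: {stats.skew(scores):.2f}")
print(f"skewness of ranks:  {stats.skew(ranks):.2f}")
```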

  9. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  10. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions. (paper)

  11. Mixture modeling methods for the assessment of normal and abnormal personality, part I: cross-sectional models.

    Science.gov (United States)

    Hallquist, Michael N; Wright, Aidan G C

    2014-01-01

    Over the past 75 years, the study of personality and personality disorders has been informed considerably by an impressive array of psychometric instruments. Many of these tests draw on the perspective that personality features can be conceptualized in terms of latent traits that vary dimensionally across the population. A purely trait-oriented approach to personality, however, might overlook heterogeneity that is related to similarities among subgroups of people. This article describes how factor mixture modeling (FMM), which incorporates both categories and dimensions, can be used to represent person-oriented and trait-oriented variability in the latent structure of personality. We provide an overview of different forms of FMM that vary in the degree to which they emphasize trait- versus person-oriented variability. We also provide practical guidelines for applying FMM to personality data, and we illustrate model fitting and interpretation using an empirical analysis of general personality dysfunction.

  12. Skewness of the cosmic microwave background temperature fluctuations due to the non-linear gravitational instability

    International Nuclear Information System (INIS)

    Munshi, D.; Souradeep, T.; Starobinsky, A.A.

    1995-01-01

    The skewness of the temperature fluctuations of the cosmic microwave background (CMB) produced by initially Gaussian adiabatic perturbations with the flat (Harrison-Zeldovich) spectrum, which arises due to non-linear corrections to a gravitational potential at the matter-dominated stage, is calculated quantitatively. For the standard CDM model, the effect appears to be smaller than expected previously and lies below the cosmic variance limit even for small angles. The sign of the skewness is opposite to that of the skewness of density perturbations. (author)

  13. Robust bivariate error detection in skewed data with application to historical radiosonde winds

    KAUST Repository

    Sun, Ying; Hering, Amanda S.; Browning, Joshua M.

    2017-01-18

    The global historical radiosonde archives date back to the 1920s and contain the only directly observed measurements of temperature, wind, and moisture in the upper atmosphere, but they contain many random errors. Most of the focus on cleaning these large datasets has been on temperatures, but winds are important inputs to climate models and in studies of wind climatology. The bivariate distribution of the wind vector does not have elliptical contours but is skewed and heavy-tailed, so we develop two methods for outlier detection based on the bivariate skew-t (BST) distribution, using either distance-based or contour-based approaches to flag observations as potential outliers. We develop a framework to robustly estimate the parameters of the BST and then show how the tuning parameter to get these estimates is chosen. In simulation, we compare our methods with one based on a bivariate normal distribution and a nonparametric approach based on the bagplot. We then apply all four methods to the winds observed for over 35,000 radiosonde launches at a single station and demonstrate differences in the number of observations flagged across eight pressure levels and through time. In this pilot study, the method based on the BST contours performs very well.
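    The bivariate-normal benchmark that the authors compare against can be sketched as a Mahalanobis-distance rule with a chi-square cutoff. The wind data below are simulated placeholders, and the paper's skew-t machinery is not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated bivariate "wind vector" data with a few gross errors mixed in.
clean = rng.multivariate_normal([0, 0], [[4, 1], [1, 2]], size=500)
errors = rng.uniform(-30, 30, size=(5, 2))
winds = np.vstack([clean, errors])

# Squared Mahalanobis distance of each observation from the sample centre.
mu = winds.mean(axis=0)
cov = np.cov(winds, rowvar=False)
d2 = np.einsum('ij,jk,ik->i', winds - mu, np.linalg.inv(cov), winds - mu)

# Flag points beyond the 99.9% chi-square quantile (2 degrees of freedom).
cutoff = stats.chi2.ppf(0.999, df=2)
flags = d2 > cutoff
print(f"flagged {flags.sum()} of {len(winds)} observations")
```

    A robust version would replace the sample mean and covariance with robust estimates, since gross errors inflate both; that is one motivation for the robust fitting framework in the paper.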

  15. Widely Linear Equalization for IQ Imbalance and Skew Compensation in Optical Coherent Receivers

    DEFF Research Database (Denmark)

    Porto da Silva, Edson; Zibar, Darko

    2016-01-01

    In this paper, an alternative approach to designing linear equalization algorithms for optical coherent receivers is introduced. Using widely linear complex analysis, a general analytical model is shown in which in-phase/quadrature (IQ) imbalances and IQ skew at the coherent receiver front-end are...

  16. Study on seismic behaviour of integral concrete bridges with different skew angles through fragility curves

    Directory of Open Access Journals (Sweden)

    Mahmoud Reza Shiravand

    2017-12-01

    Full Text Available Bridges are key elements in urban transportation systems and should be designed to sustain earthquake-induced damage and remain usable after an earthquake. Extensive damage during recent earthquakes has highlighted the importance of seismic assessment and damage estimation of bridges. Skewness is one of the primary parameters affecting the seismic behavior of bridges. Skew bridges are defined as bridges with skewed piers and abutments. In these bridges, the piers have some degree of skewness due to construction restrictions, such as those caused by crossing a waterway, railway line or road. This paper aims to investigate the seismic behavior of skew concrete bridges using damage criteria and to estimate the probability of pier damage with fragility curves. To this end, three types of concrete bridges with two, three and four spans and varying skew angles of 0°, 10°, 20° and 30° are modeled with finite element software. Seismic responses of the bridge piers under 10 earthquake ground motion records are calculated using incremental dynamic analysis. Damage criteria proposed by Mackie and Stojadinovic are then used to define damage limits of the bridge piers in four damage states (slight, moderate, extensive and complete), and bridge fragility curves are developed. The results show that increasing the skew angle increases the probability of damage, particularly in the extensive and complete damage states.

  17. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    Science.gov (United States)

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  18. Determining the role of skewed X-chromosome inactivation in developing muscle symptoms in carriers of Duchenne muscular dystrophy.

    Science.gov (United States)

    Viggiano, Emanuela; Ergoli, Manuela; Picillo, Esther; Politano, Luisa

    2016-07-01

    Duchenne and Becker dystrophinopathies (DMD and BMD) are X-linked recessive disorders caused by mutations in the dystrophin gene that lead to absent or reduced expression of dystrophin in both skeletal and heart muscles. DMD/BMD female carriers are usually asymptomatic, although about 8% may exhibit muscle or cardiac symptoms. Several mechanisms leading to reduced dystrophin have been hypothesized to explain the clinical manifestations, and, in particular, the role of skewed XCI is questioned. In this review, the mechanism of XCI and its involvement in the phenotype of BMD/DMD carriers with either a normal karyotype or X;autosome translocations with breakpoints at Xp21 (the locus of the DMD gene) will be analyzed. We have previously observed that DMD carriers with moderate/severe muscle involvement exhibit a moderately or extremely skewed XCI, in particular if presenting with an early onset of symptoms, while DMD carriers with mild muscle involvement present a random XCI. Moreover, we found that among the 87.1% of carriers with X;autosome translocations involving the locus Xp21 who developed signs and symptoms of dystrophinopathy, such as proximal muscle weakness and difficulty running, jumping and climbing stairs, 95.2% had a skewed XCI pattern in lymphocytes. These data support the hypothesis that skewed XCI is involved in the onset of the phenotype in DMD carriers, the X chromosome carrying the normal DMD gene being preferentially inactivated and leading to moderate-severe muscle involvement.

  19. Investigating the Investigative Task: Testing for Skewness--An Investigation of Different Test Statistics and Their Power to Detect Skewness

    Science.gov (United States)

    Tabor, Josh

    2010-01-01

    On the 2009 AP® Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
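    Two of the kinds of statistics students might propose, a moment-based coefficient and a quartile-based (Bowley) coefficient, can be computed as below on a simulated right-skewed sample; this is illustrative and not drawn from the paper's own analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.gamma(shape=2.0, scale=1.0, size=2000)  # right-skewed sample

# Moment-based sample skewness: g1 = m3 / m2^(3/2).
m2 = np.mean((x - x.mean()) ** 2)
m3 = np.mean((x - x.mean()) ** 3)
g1 = m3 / m2 ** 1.5

# Quartile-based (Bowley) skewness: compares the quartiles' spacing around the median.
q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
bowley = (q3 + q1 - 2 * q2) / (q3 - q1)

print(f"moment skewness: {g1:.2f}, Bowley skewness: {bowley:.2f}")
```

    Both statistics are positive for right-skewed data; they differ in robustness, which is exactly the kind of trade-off a power comparison across skewed populations probes.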

  20. Two-component mixture model: Application to palm oil and exchange rate

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop widely used in food and non-food products such as cookies, vegetable oil, cosmetics and household products. Palm oil is grown mainly in Malaysia and Indonesia. However, demand for palm oil has been growing rapidly over the years. This phenomenon causes illegal logging of trees and destroys natural habitats. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.
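    A minimal sketch of fitting a two-component normal mixture by maximum likelihood. Note the paper uses Newton-Raphson, while the sketch below uses the more common EM iteration, and the data are simulated rather than palm oil prices:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data from two normal components.
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# EM for a two-component normal mixture (weights w, means mu, sds sigma).
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.column_stack([norm_pdf(x, mu[k], sigma[k]) for k in (0, 1)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of the mixture parameters.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(f"weights={w.round(2)}, means={mu.round(2)}, sds={sigma.round(2)}")
```

    The fitted means and weights recover the simulation's component parameters; with real price or exchange-rate data the same iteration applies unchanged.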

  1. Skew-signings of positive weighted digraphs

    Directory of Open Access Journals (Sweden)

    Kawtar Attas

    2018-07-01

    Full Text Available An arc-weighted digraph is a pair (D, ω), where D is a digraph and ω is an arc-weight function that assigns to each arc uv of D a nonzero real number ω(uv). Given an arc-weighted digraph (D, ω) with vertices v_1, …, v_n, the weighted adjacency matrix of (D, ω) is defined as the n × n matrix A(D, ω) = [a_ij], where a_ij = ω(v_i v_j) if v_i v_j is an arc of D, and 0 otherwise. Let (D, ω) be a positive arc-weighted digraph and assume that D is loopless and symmetric. A skew-signing of (D, ω) is an arc-weight function ω′ such that ω′(uv) = ±ω(uv) and ω′(uv)ω′(vu) < 0 for every arc uv of D. In this paper, we give necessary and sufficient conditions under which the characteristic polynomial of A(D, ω′) is the same for all skew-signings ω′ of (D, ω). Our main theorem generalizes a result of Cavers et al. (2012) about skew-adjacency matrices of graphs. Keywords: Arc-weighted digraphs, Skew-signing of a digraph, Weighted adjacency matrix. Mathematics Subject Classification: 05C22, 05C31, 05C50
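    For a concrete special case where the invariance in question is known to hold, take D to be the symmetric digraph of a weighted tree: every skew-signing then yields the same characteristic polynomial (all orientations of a tree are switching-equivalent). The numeric check below uses a weighted path on 4 vertices with hypothetical weights:

```python
import numpy as np

# Weighted path graph on 4 vertices (a tree), viewed as a symmetric digraph:
# arcs uv and vu both carry the positive weight w(uv).
weights = {(0, 1): 2.0, (1, 2): 3.0, (2, 3): 1.0}

def skew_adjacency(signs):
    # Skew-signing: w'(uv) = s*w(uv), w'(vu) = -s*w(uv), so w'(uv)w'(vu) < 0.
    A = np.zeros((4, 4))
    for (u, v), w in weights.items():
        s = signs[(u, v)]
        A[u, v] = s * w
        A[v, u] = -s * w
    return A

A1 = skew_adjacency({(0, 1): +1, (1, 2): +1, (2, 3): +1})
A2 = skew_adjacency({(0, 1): -1, (1, 2): +1, (2, 3): -1})

# np.poly returns the characteristic polynomial coefficients of a matrix;
# for a tree, every skew-signing gives the same polynomial.
p1, p2 = np.poly(A1), np.poly(A2)
print(np.round(p1, 6), np.round(p2, 6))
```

    Here only matchings of the path contribute, giving x⁴ + (2² + 3² + 1²)x² + 2²·1², independent of the chosen signs.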

  2. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    Science.gov (United States)

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distributions are usually severely skewed by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal-distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on the transformed benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross-validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The areas with fewer sampling points and those with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation boundary determination.
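    Of the three transformations compared above, the normal score transform is the simplest to sketch: empirical ranks are mapped to standard-normal quantiles. The concentrations below are simulated stand-ins, not the site data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Severely right-skewed concentrations, mimicking hot spots at a contaminated site.
conc = rng.lognormal(mean=1.0, sigma=1.2, size=400)

# Normal-score transform: map empirical ranks to standard-normal quantiles.
ranks = stats.rankdata(conc)
nscores = stats.norm.ppf(ranks / (len(conc) + 1))

print(f"skewness before: {stats.skew(conc):.2f}, after: {stats.skew(nscores):.2f}")
```

    Kriging is then performed on the normal scores, and predictions are back-transformed through the inverse mapping.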

  3. Continuity diaphragm for skewed continuous span precast prestressed concrete girder bridges.

    Science.gov (United States)

    2004-10-01

    Continuity diaphragms used on skewed bents in prestressed girder bridges cause difficulties in detailing and construction. Details for bridges with large diaphragm skew angles (>30°) have not been a problem for LA DOTD. However, as the skew angl...

  4. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires a stage that identifies the most appropriate number of mixture components, so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) is a combination of the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept, used by some researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge for components, and birth/death of empty components. The development of the RJMCMC algorithm needs to be done according to the observed case. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling for microarray data in Indonesia. The results of this study show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model, where the number of mixture components in the Indonesian microarray data case is not known with certainty.

  5. An equiratio mixture model for non-additive components : a case study for aspartame/acesulfame-K mixtures

    NARCIS (Netherlands)

    Schifferstein, H.N.J.

    1996-01-01

    The Equiratio Mixture Model predicts the psychophysical function for an equiratio mixture type on the basis of the psychophysical functions for the unmixed components. The model reliably estimates the sweetness of mixtures of sugars and sugar-alcohols, but is unable to predict intensity for

  6. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture the true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness ≤ 1.5).
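    The contrast between conventional and percentile-based indices can be sketched as follows. The data and specification limits are hypothetical, and the percentile index below only mimics the spirit of Clements' method (which additionally fits Pearson-curve percentiles rather than using raw empirical ones):

```python
import numpy as np

rng = np.random.default_rng(11)
# Simulated resistivity-like measurements (the paper's data are not available here).
x = rng.gamma(shape=9.0, scale=1.0, size=1000)  # mildly right-skewed
LSL, USL = 3.0, 16.0                            # hypothetical spec limits

# Conventional indices, which assume normality.
cp = (USL - LSL) / (6 * x.std(ddof=1))
cpk = min(USL - x.mean(), x.mean() - LSL) / (3 * x.std(ddof=1))

# Percentile-based index: replace the 6-sigma spread with the empirical
# 0.135%-99.865% percentile range, which adapts to the skewed shape.
p_lo, p_hi = np.percentile(x, [0.135, 99.865])
cp_pct = (USL - LSL) / (p_hi - p_lo)

print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, percentile Cp={cp_pct:.2f}")
```

    For skewed data the two spreads differ, which is why the conventional Cp and Cpk can misstate the true nonconforming fraction.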

  7. Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips

    Science.gov (United States)

    Tse, David G.N.; Steuber, Gary

    1996-01-01

    Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24 to assess the influence of trips, passage curvature and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward-flow passage and at +45 deg in the inward-flow passage maximize heat transfer. Details of the present measurements were related to the heat transfer measurements of Johnson et al. to link the fluid flow and heat transfer results.

  8. A variational analysis for large deflection of skew plates under ...

    African Journals Online (AJOL)

    In the present paper, the static behaviour of thin isotropic skew plates under uniformly distributed load is analyzed with the geometric nonlinearity of the model properly handled. A variational method based on total potential energy has been implemented through assumed displacement field. The computational work has ...

  9. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    Science.gov (United States)

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often

  10. Impaired imprinted X chromosome inactivation is responsible for the skewed sex ratio following in vitro fertilization

    Science.gov (United States)

    Tan, Kun; An, Lei; Miao, Kai; Ren, Likun; Hou, Zhuocheng; Tao, Li; Zhang, Zhenni; Wang, Xiaodong; Xia, Wei; Liu, Jinghao; Wang, Zhuqing; Xi, Guangyin; Gao, Shuai; Sui, Linlin; Zhu, De-Sheng; Wang, Shumin; Wu, Zhonghong; Bach, Ingolf; Chen, Dong-bao; Tian, Jianhui

    2016-01-01

    Dynamic epigenetic reprogramming occurs during normal embryonic development at the preimplantation stage. Erroneous epigenetic modifications due to environmental perturbations such as manipulation and culture of embryos during in vitro fertilization (IVF) are linked to various short- or long-term consequences. Among these, the skewed sex ratio, an indicator of reproductive hazards, was reported in bovine and porcine embryos and even human IVF newborns. However, since the first case of sex skewing was reported in 1991, the underlying mechanisms remain unclear. We report herein that sex ratio is skewed in mouse IVF offspring, and this was a result of female-biased peri-implantation developmental defects that originated from impaired imprinted X chromosome inactivation (iXCI) through reduced ring finger protein 12 (Rnf12)/X-inactive specific transcript (Xist) expression. Compensation of impaired iXCI by overexpression of Rnf12 to up-regulate Xist significantly rescued female-biased developmental defects and corrected sex ratio in IVF offspring. Moreover, supplementation of an epigenetic modulator retinoic acid in embryo culture medium up-regulated Rnf12/Xist expression, improved iXCI, and successfully restored the skewed sex ratio to nearly 50% in mouse IVF offspring. Thus, our data show that iXCI is one of the major epigenetic barriers for the developmental competence of female embryos during the preimplantation stage, and targeting erroneous epigenetic modifications may provide a potential approach for preventing IVF-associated complications. PMID:26951653

  11. Comparison of the binary logistic and skewed logistic (Scobit) models of injury severity in motor vehicle collisions.

    Science.gov (United States)

    Tay, Richard

    2016-03-01

    The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance between the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases: one in which the symmetry assumption is violated and one in which it is not. The differences in the estimates, and thus in the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
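    The asymmetry at issue can be made concrete with a short sketch. The skewed logistic (scobit) response is commonly written P(y=1) = 1 − (1 + e^xb)^(−α), which collapses to the standard binary logit at α = 1; the form and parameter names below follow the usual scobit presentation, not code from the paper.

```python
import numpy as np

def logit(xb):
    """Standard (symmetric) binary logit response."""
    return 1.0 / (1.0 + np.exp(-xb))

def scobit(xb, alpha=1.0):
    """Skewed logit (scobit): P(y=1) = 1 - (1 + exp(xb))**(-alpha).
    alpha = 1 recovers the symmetric logit; alpha != 1 skews the curve."""
    return 1.0 - (1.0 + np.exp(xb)) ** (-alpha)

xb = np.linspace(-4.0, 4.0, 9)

# alpha = 1: the scobit collapses exactly to the standard logit.
assert np.allclose(scobit(xb, alpha=1.0), logit(xb))

# alpha != 1: the response is no longer symmetric about xb = 0, so P = 0.5
# is reached away from zero -- the property that matters when one outcome
# category heavily outnumbers the other.
print(scobit(0.0, alpha=0.5))  # 1 - 2**-0.5, roughly 0.293 rather than 0.5
```

    Because the two response curves differ most in their tails, coefficient and marginal-effect estimates diverge from the logit precisely when the outcome categories are imbalanced, as the abstract notes.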

  12. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    International Nuclear Information System (INIS)

    Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.

    2007-01-01

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy that exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities that serve as direct inputs to remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for the three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute major contrast to those of any vegetation species. The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model

  13. Evaluating Mixture Modeling for Clustering: Recommendations and Cautions

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…

  14. Cavitation Simulation on Conventional and Highly-Skewed Propellers in the Behind Condition

    DEFF Research Database (Denmark)

    Shin, Keun Woo; Andersen, Poul; Mikkelsen, Robert Flemming

    2011-01-01

    The cavitating flows around conventional and highly-skewed propellers in the behind-hull condition are simulated by an in-house RANS solver, EllipSys (Sørensen 2003), with the cavitation model, based on the homogeneous equilibrium modeling (HEM) approach and a vapor transport equation. The valida...

  15. Steel framing strategies for highly skewed bridges to reduce/eliminate distortion near skewed supports.

    Science.gov (United States)

    2014-05-01

    Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...

  16. The role of the SST-thermocline relationship in Indian Ocean Dipole skewness and its response to global warming

    Science.gov (United States)

    Ng, Benjamin; Cai, Wenju; Walsh, Kevin

    2014-01-01

    A positive Indian Ocean Dipole (IOD) tends to have stronger cold sea surface temperature anomalies (SSTAs) over the eastern Indian Ocean with greater impacts than warm SSTAs that occur during its negative phase. Two feedbacks have been suggested as the cause of positive IOD skewness, a positive Bjerknes feedback and a negative SST-cloud-radiation (SCR) feedback, but their relative importance is debated. Using inter-model statistics, we show that the most important process for IOD skewness is an asymmetry in the thermocline feedback, whereby SSTAs respond to thermocline depth anomalies more strongly during the positive phase than negative phase. This asymmetric thermocline feedback drives IOD skewness despite positive IODs receiving greater damping from the SCR feedback. In response to global warming, although the thermocline feedback strengthens, its asymmetry between positive and negative IODs weakens. This behaviour change explains the reduction in IOD skewness that many models display under global warming. PMID:25112717

  17. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Directory of Open Access Journals (Sweden)

    Ulrik W Nash

    Full Text Available Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.

  18. Joint IQ Skew and Chromatic Dispersion Estimation for Coherent Optical Communication Receivers

    DEFF Research Database (Denmark)

    Medeiros Diniz, Júlio César; Porto da Silva, Edson; Piels, Molly

    2016-01-01

    A low-complexity scanning method for joint estimation of receiver IQ skew and chromatic dispersion is proposed. This method shows less than 1 ps skew error for a 1200-km 32-GBd DP-16QAM optical transmission experiment.

  19. Learning a Novel Pattern through Balanced and Skewed Input

    Science.gov (United States)

    McDonough, Kim; Trofimovich, Pavel

    2013-01-01

    This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…

  20. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distribution is performed using finite mixture models. • Two methods to reconstruct output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of models), or surrogate models, that approximate or emulate complex computer codes. There exist different techniques to reconstruct the probability distribution from the information provided by a sample of values, for example, finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated

  1. A log-sinh transformation for data normalization and variance stabilization

    Science.gov (United States)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
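    For illustration, the transform can be sketched as follows; the functional form z = (1/b)·log(sinh(a + b·y)) is the usual way the log-sinh transform is written, but the parameter values below are arbitrary choices for the demo, not values fitted to any data.

```python
import numpy as np

def log_sinh(y, a=0.5, b=0.1):
    """Log-sinh transform z = (1/b) * log(sinh(a + b*y)).
    a and b are illustrative values here, not fitted parameters."""
    return np.log(np.sinh(a + b * y)) / b

y = np.linspace(0.1, 50.0, 500)
z = log_sinh(y)

# The derivative dz/dy = coth(a + b*y): large for small y (log-like
# compression of the skewed lower range) and tending to 1 for large y,
# so the spread of transformed errors approaches a constant, matching the
# error pattern the abstract describes.
slope = np.gradient(z, y)
print(slope[0], slope[-1])  # steep at the low end, close to 1 at the high end
```

    The limiting slope of 1 is what distinguishes this transform from a plain log, which keeps compressing indefinitely as the prediction variable grows.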

  2. Using color histogram normalization for recovering chromatic illumination-changed images.

    Science.gov (United States)

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.

  3. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  4. The STIRPAT Analysis on Carbon Emission in Chinese Cities: An Asymmetric Laplace Distribution Mixture Model

    Directory of Open Access Journals (Sweden)

    Shanshan Wang

    2017-12-01

    Full Text Available Grasping the determinants of carbon dioxide emissions in Chinese cities is a pressing issue in urban policy-making. The common method is to use the STIRPAT model, whose coefficients represent the influence intensity of each determinant of carbon emission. However, less work discusses estimation accuracy, especially in the framework of non-normal distribution and heterogeneity among cities’ emission. To improve the estimation accuracy, this paper employs a new method to estimate the STIRPAT model. The method uses a mixture of Asymmetric Laplace distributions (ALDs to approximate the true distribution of the error term. Meanwhile, a purpose-built two-layer EM algorithm is used to obtain estimators. We test the robustness via the comparison results of five different models. We find that the ALDs Mixture Model is more reliable than the others. Further, a significant Kuznets curve relationship is identified in China.
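    The paper's two-layer EM for an asymmetric-Laplace mixture is beyond a short sketch, but the mechanics of mixture estimation by EM can be seen in a deliberately simplified stand-in: a one-dimensional two-component Gaussian mixture fitted by plain EM. The component forms, data, and parameter values below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
# Illustrative bimodal sample standing in for heterogeneous error terms.
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 200)])

def em_two_gaussians(x, n_iter=200):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    w = np.array([0.5, 0.5])            # mixing weights
    mu = np.array([x.min(), x.max()])   # spread-out initial means
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([
            w[k] / (sd[k] * np.sqrt(2.0 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0)
        # M-step: responsibility-weighted weights, means, and spreads
        w = r.mean(axis=1)
        mu = (r * x).sum(axis=1) / r.sum(axis=1)
        sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1))
    return w, mu, sd

w, mu, sd = em_two_gaussians(x)
print(np.sort(mu))  # recovered component means, near -2 and 2
```

    Replacing the Gaussian E-step density with an asymmetric Laplace density (and adding the paper's second EM layer) gives the flavor of the ALD mixture approach.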

  5. Dip and anisotropy effects on flow using a vertically skewed model grid.

    Science.gov (United States)

    Hoaglund, John R; Pollard, David

    2003-01-01

    Darcy flow equations relating vertical and bedding-parallel flow to vertical and bedding-parallel gradient components are derived for a skewed Cartesian grid in a vertical plane, correcting for structural dip given the principal hydraulic conductivities in bedding-parallel and bedding-orthogonal directions. Incorrect-minus-correct flow error results are presented for ranges of structural dip (0 strike and dip, and a solver that can handle off-diagonal hydraulic conductivity terms.

  6. Semiparametric mixtures in case-controlstudies

    NARCIS (Netherlands)

    Murphy, S.A.; van der Vaart, A.W.

    2001-01-01

    We consider likelihood based inference in a class of logistic models for case-control studies with a partially observed covariate. The likelihood is a combination of a nonparametric mixture, a parametric likelihood, and an empirical likelihood. We prove the asymptotic normality of the maximum

  7. Random skew plane partitions with a piecewise periodic back wall

    DEFF Research Database (Denmark)

    Boutillier, Cedric; Mkrtchyan, Sevak; Reshetikhin, Nicolai

    Random skew plane partitions of large size distributed according to an appropriately scaled Schur process develop limit shapes. In the present work we consider the limit of large random skew plane partitions where the inner boundary approaches a piecewise linear curve with non-lattice slopes. Muc...

  8. Does Realized Skewness Predict the Cross-Section of Equity Returns?

    DEFF Research Database (Denmark)

    Amaya, Diego; Christoffersen, Peter; Jacobs, Kris

    2015-01-01

    We use intraday data to compute weekly realized moments for equity returns and study their time-series and cross-sectional properties. Buying stocks in the lowest realized skewness decile and selling stocks in the highest realized skewness decile generates an average return of 19 basis points...

  9. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possinghami, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess the statistical fit with AIC and assess the ecological realism by comparing the parameter estimates with expected values derived from literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture, as it models an ecological mechanism rather than a
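    The zero-inflation comparison described above can be sketched in a few lines: fit a plain Poisson and a zero-inflated Poisson to simulated counts by maximum likelihood and compare AIC. The data and parameter values below are invented for illustration, and the detection layer of a full N-mixture model is deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
# Simulated counts: with probability pi a site yields a structural zero
# (e.g. unoccupied habitat), otherwise abundance is Poisson(lam).
pi_true, lam_true, n = 0.4, 3.0, 500
occupied = rng.random(n) >= pi_true
y = np.where(occupied, rng.poisson(lam_true, n), 0)

def nll_zip(params):
    """Negative log-likelihood of a zero-inflated Poisson."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))   # inflation prob, logit scale
    lam = np.exp(params[1])                 # Poisson mean, log scale
    log_pois = y * np.log(lam) - lam - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),
                  np.log(1.0 - pi) + log_pois)
    return -ll.sum()

def nll_pois(params):
    """Negative log-likelihood of a plain Poisson."""
    lam = np.exp(params[0])
    return -(y * np.log(lam) - lam - gammaln(y + 1)).sum()

fit_zip = minimize(nll_zip, x0=[0.0, 0.0], method="Nelder-Mead")
fit_pois = minimize(nll_pois, x0=[0.0], method="Nelder-Mead")
aic_zip = 2 * 2 + 2 * fit_zip.fun     # AIC = 2k + 2 * negative log-likelihood
aic_pois = 2 * 1 + 2 * fit_pois.fun
print(aic_zip < aic_pois)  # True: AIC prefers the zero-inflated model here
```

    On zero-inflated data the ZIP wins decisively on AIC; the paper's point is that AIC alone is not enough, since the selected variant must also yield ecologically realistic parameter estimates.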

  10. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.

  11. The skew ray ambiguity in the analysis of videokeratoscopic data.

    Science.gov (United States)

    Iskander, D Robert; Davis, Brett A; Collins, Michael J

    2007-05-01

    Skew ray ambiguity is present in most videokeratoscopic measurements when azimuthal components of the corneal curvature are not taken into account. There have been some reported studies based on theoretical predictions and measured test surfaces suggesting that skew ray ambiguity is significant for highly deformed corneas or decentered corneal measurements. However, the effect of skew ray ambiguity in ray tracing through videokeratoscopic data has not been studied in depth. We have evaluated the significance of the skew ray ambiguity and its effect on the analyzed corneal optics. This has been achieved by devising a procedure in which we compared the corneal wavefront aberrations estimated from 3D ray tracing with those determined from 2D (meridional based) estimates of the refractive power. The latter was possible due to the recently developed concept of refractive Zernike power polynomials, which links the refractive power domain with that of the wavefront. Simulated corneal surfaces as well as data from a range of corneas (from two different Placido disk-based videokeratoscopes) were used to find the limit at which the difference in estimated corneal wavefronts (or the corresponding refractive powers) would have clinical significance (e.g., equivalent to 0.125 D or more). The inclusion/exclusion of the skew ray in the analyses showed some differences in the results. However, the proposed procedure showed clinically significant differences only for highly deformed corneas and only for large corneal diameters. For the overwhelming majority of surfaces, the skew ray ambiguity is not a clinically significant issue in the analysis of the videokeratoscopic data, indicating that meridional processing such as that encountered in calculation of the refractive power maps is adequate.

  12. The skewed weak lensing likelihood: why biases arise, despite data and theory being sound.

    Science.gov (United States)

    Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim

    2018-04-01

    We derive the essentials of the skewed weak lensing likelihood via a simple Hierarchical Forward Model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of ΛCDM. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from CMB analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30% of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
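    The core mechanism — noisy two-point functions follow skewed distributions, so the typical realization sits below the truth — can be demonstrated with the simplest possible two-point quantity, a sample variance. The numbers below are illustrative, not a lensing survey.

```python
import numpy as np

rng = np.random.default_rng(7)
true_var, n_modes = 1.0, 10
# 100000 independent realizations of a noisy "two-point function":
# the sample variance of n_modes Gaussian draws.  Each realization follows
# a scaled chi-square distribution, which is skewed.
est = rng.normal(0.0, np.sqrt(true_var), size=(100_000, n_modes)).var(axis=1, ddof=1)

# The estimator is unbiased on average, yet the skewness pushes the median
# (the typical single realization) below the true value: sound data drawn
# from this distribution look "biased low", exactly as the abstract argues.
print(est.mean(), np.median(est))
```

    A Gaussian likelihood centered on such data inherits this downward pull, which is why the skewed likelihood matters most when few modes (or few shear tracers) enter each data point.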

  13. Uncovering the skewness news impact curve

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Petukhov, A.

    2016-01-01

    Roč. 14, č. 4 (2016), s. 746-771 ISSN 1479-8409 Institutional support: RVO:67985998 Keywords : conditional skewness * news impact curve * stock returns Subject RIV: AH - Economics Impact factor: 1.800, year: 2016

  14. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach makes it possible to easily compute risk-neutral predictive price densities that take parameter uncertainty into account....... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors...... measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...

  15. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  16. Modeling of Multicomponent Mixture Separation Processes Using Hollow fiber Membrane

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sin-Ah; Kim, Jin-Kuk; Lee, Young Moo; Yeo, Yeong-Koo [Hanyang University, Seoul (Korea, Republic of)

    2015-02-15

    So far, most research on modeling membrane separation processes has focused on binary feed mixtures. In actual separation operations, however, binary feeds are rare and most separation processes involve multicomponent feed mixtures. In this work, models for membrane separation processes treating multicomponent feed mixtures are developed. Various model types are investigated and the validity of the proposed models is analysed based on experimental data obtained using hollow-fiber membranes. The proposed separation models show quick convergence and exhibit good tracking performance.

  17. On nomenclature for, and the relative merits of, two formulations of skew distributions

    KAUST Repository

    Azzalini, Adelchi

    2015-12-21

    We examine some skew distributions used extensively within the model-based clustering literature in recent years, paying special attention to claims that have been made about their relative efficacy. Theoretical arguments are provided as well as real data examples.

  18. On nomenclature for, and the relative merits of, two formulations of skew distributions

    KAUST Repository

    Azzalini, Adelchi; Browne, Ryan P.; Genton, Marc G.; McNicholas, Paul D.

    2015-01-01

    We examine some skew distributions used extensively within the model-based clustering literature in recent years, paying special attention to claims that have been made about their relative efficacy. Theoretical arguments are provided as well as real data examples.

  19. Large-scale genomic 2D visualization reveals extensive CG-AT skew correlation in bird genomes

    Directory of Open Access Journals (Sweden)

    Deng Xuemei

    2007-11-01

    Full Text Available Abstract Background Bird genomes have very different compositional structure compared with other warm-blooded animals. The variation in the base skew rules in the vertebrate genomes remains puzzling, but it must relate somehow to large-scale genome evolution. Current research is inclined to relate base skew to mutations and their fixation. Here we wish to explore base skew correlations in bird genomes, to develop methods for displaying and quantifying such correlations at different scales, and to discuss possible explanations for the peculiarities of the bird genomes in skew correlation. Results We have developed a method called Base Skew Double Triangle (BSDT for exhibiting the genome-scale change of AT/CG skew as a two-dimensional square picture, showing base skews at many scales simultaneously in a single image. By this method we found that most chicken chromosomes have high AT/CG skew correlation (symmetry in the 2D picture, except for some microchromosomes. No other organisms studied (18 species) show such high skew correlations. This visualized high correlation was validated by three kinds of quantitative calculations with overlapping and non-overlapping windows, all indicating that chicken and birds in general have a special genome structure. Similar features were also found in some of the mammal genomes, but clearly much weaker than in chickens. We presume that the skew correlation feature evolved near the time that birds separated from other vertebrate lineages. When we eliminated the repeat sequences from the genomes, the AT and CG skew correlation increased for some mammal genomes, but was still clearly lower than in chickens. Conclusion Our results suggest that BSDT is an expressive visualization method for AT and CG skew and enabled the discovery of the very high skew correlation in bird genomes; this peculiarity is worth further study. Computational analysis indicated that this correlation might be a compositional characteristic
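    The per-window quantities behind such analyses are simple to compute. A minimal sketch follows, using the standard definitions (A−T)/(A+T) and (C−G)/(C+G) over non-overlapping windows; sign conventions vary in the literature, and the toy sequence is invented.

```python
def window_skews(seq, win):
    """AT skew (A-T)/(A+T) and CG skew (C-G)/(C+G) per non-overlapping window."""
    at, cg = [], []
    for i in range(0, len(seq) - win + 1, win):
        w = seq[i:i + win]
        a, t, c, g = (w.count(b) for b in "ATCG")
        at.append((a - t) / (a + t) if a + t else 0.0)
        cg.append((c - g) / (c + g) if c + g else 0.0)
    return at, cg

# Toy sequence: five identical 20-base windows.
at, cg = window_skews("ATGCGCGTATATAATTGCGC" * 5, win=20)
print(at[0], cg[0])  # -1/11 and -1/9 for every window
```

    The BSDT visualization goes further by correlating these two skew profiles against each other across many window scales at once, rather than plotting one profile along the chromosome.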

  20. Uncovering the skewness news impact curve

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Petukhov, A.

    2016-01-01

    Vol. 14, No. 4 (2016), pp. 746-771. ISSN 1479-8409. Institutional support: PRVOUK-P23. Keywords: conditional skewness * news impact curve * stock returns. Subject RIV: AH - Economics. Impact factor: 1.800, year: 2016

  1. Personal exposure to mixtures of volatile organic compounds: modeling and further analysis of the RIOPA data.

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2014-06-01

    affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999-2001) and the National Health and Nutrition Examination Survey (NHANES; 1999-2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1. To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model's goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the semi
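
    The extreme-value step of Specific Aim 1 can be sketched as follows. This is a hedged illustration only: the data, the tail threshold, and the distribution choices are stand-ins, not the RIOPA measurements or the study's actual fitting procedure.

```python
# Hypothetical illustration: fit an extreme-value and a lognormal model to the
# top 10% of simulated "exposure" concentrations and compare goodness of fit,
# in the spirit of the analysis described above (data invented here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exposures = rng.lognormal(mean=1.0, sigma=1.2, size=1000)  # stand-in VOC data

# Keep the top 10% of observations (the extreme tail).
threshold = np.quantile(exposures, 0.90)
tail = exposures[exposures >= threshold]

# Fit the candidate models to the tail.
gev_params = stats.genextreme.fit(tail)
lognorm_params = stats.lognorm.fit(tail, floc=0)

# Compare fits with a Kolmogorov-Smirnov statistic (smaller = better fit).
ks_gev = stats.kstest(tail, "genextreme", args=gev_params).statistic
ks_lognorm = stats.kstest(tail, "lognorm", args=lognorm_params).statistic
print(f"KS statistic, GEV: {ks_gev:.3f}, lognormal: {ks_lognorm:.3f}")
```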

  2. A mixture model-based approach to the clustering of microarray expression data.

    Science.gov (United States)

    McLachlan, G J; Bean, R W; Peel, D

    2002-03-01

    This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic used in conjunction with a threshold on the size of a cluster allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to reduce effectively the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes are able to be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
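
    The one-versus-two-component screening step can be sketched as follows. This is a hedged illustration only: Gaussian mixtures (via scikit-learn) stand in for the t mixtures that EMMIX-GENE actually fits, and the expression matrix is simulated.

```python
# Rank genes by the likelihood ratio statistic for one vs. two mixture
# components, then keep the top-ranked genes (simulated data; Gaussian
# mixtures replace the t mixtures used by EMMIX-GENE).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_tissues, n_genes = 60, 20
X = rng.normal(size=(n_tissues, n_genes))
X[:30, :5] += 6.0  # the first five genes clearly separate two tissue groups

def lr_statistic(values):
    """-2 log likelihood ratio of a 2- vs. 1-component mixture for one gene."""
    v = values.reshape(-1, 1)
    ll1 = GaussianMixture(n_components=1, random_state=0).fit(v).score(v)
    ll2 = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(v).score(v)
    return 2.0 * len(v) * (ll2 - ll1)

stats_per_gene = np.array([lr_statistic(X[:, j]) for j in range(n_genes)])
selected = np.argsort(stats_per_gene)[::-1][:5]  # top-ranked genes
print("top-ranked genes:", sorted(selected.tolist()))
```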

  3. Prevalence Incidence Mixture Models

    Science.gov (United States)

    The R package and web tool fit Prevalence Incidence Mixture models to left-censored and irregularly interval-censored time-to-event data commonly found in screening cohorts assembled from electronic health records. Absolute and relative risk can be estimated for simple random sampling and for stratified sampling (both the superpopulation and finite-population approaches are supported for target populations). Non-parametric (absolute risks only), semi-parametric, weakly parametric (using B-splines), and some fully parametric (such as the logistic-Weibull) models are supported.

  4. INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO

    Science.gov (United States)

    2018-01-15

    This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...

  5. A note on generalized skew derivations on Lie ideals

    Indian Academy of Sciences (India)

    MOHAMMAD ASHRAF

    2018-04-24

    Abstract. Let R be a prime ring, Z(R) its center, C its extended centroid, L a Lie ideal of R, and F a generalized skew derivation associated with a skew derivation d and automorphism α. Assume that there exist fixed integers t ≥ 1 and m, n ≥ 0 such that vu = u^m F(uv)^t u^n for all u, v ∈ L. Then it is shown that either ...

  6. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies partly in the eyes of the beholder; nevertheless, a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, M_k, k = 1, …, K, and to help identify which model is most supported by the observed data, Y = (y_1, …, y_n). Here, we introduce a new and robust estimator of the model evidence, p(Y|M_k), which acts as the normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Y|M_k) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Y|M_k) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and considerably simplifies scientific inquiry through hypothesis testing and model selection.
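
    The core idea (fit a mixture to posterior draws, then use it as an importance density for the evidence) can be sketched in a few lines. This is a simplified stand-in: plain importance sampling replaces the bridge-sampling estimator of the paper, and the target is a toy 2-D Gaussian whose evidence is exactly 1 by construction.

```python
# Fit a Gaussian mixture to "posterior" draws and use it as an importance
# density for the evidence Z of an unnormalized target. The target here is a
# normalized 2-D standard normal, so the true evidence is exactly 1 and the
# estimate can be checked.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

def log_target(x):
    # Unnormalized log-target; chosen normalized on purpose so Z = 1.
    return stats.multivariate_normal(mean=np.zeros(2)).logpdf(x)

# Pretend these are MCMC draws from the posterior.
posterior_draws = rng.normal(size=(4000, 2))

# Fit the mixture importance density and draw from it.
gmm = GaussianMixture(n_components=3, random_state=0).fit(posterior_draws)
x, _ = gmm.sample(20000)
log_w = log_target(x) - gmm.score_samples(x)  # log importance weights
evidence = np.exp(log_w).mean()               # estimates Z (true value: 1)
print(f"estimated evidence: {evidence:.3f}")
```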

  7. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  8. Halo Pressure Profile through the Skew Cross-power Spectrum of the Sunyaev–Zel’dovich Effect and CMB Lensing in Planck

    Energy Technology Data Exchange (ETDEWEB)

    Timmons, Nicholas; Cooray, Asantha; Feng, Chang [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Keating, Brian [Department of Physics, University of California, San Diego, La Jolla, CA 92093 (United States)

    2017-11-01

    We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev–Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model-fit the SZ power spectrum and the CMB lensing–SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.

  9. An improved model of induction motors for diagnosis purposes - Slot skewing effect and air-gap eccentricity faults

    Energy Technology Data Exchange (ETDEWEB)

    Ghoggal, A.; Zouzou, S.E.; Sahraoui, M. [Laboratoire de genie electrique de Biskra, Departement d' electrotechnique, Universite Mohamed Khider, BP 145, Biskra (Algeria); Razik, H. [Groupe de Recherche en Electrotechnique et Electronique de Nancy, Universite Henri Poincare, Faculte des Sciences et Techniques, BP 239, F-54506 Vandoeuvre-les-Nancy (France); Khezzar, A. [Laboratoire d' Electrotechnique de Constantine, Universite Mentouri, Constantine (Algeria)

    2009-05-15

    This paper describes an improved method for modeling axial and radial eccentricities in induction motors (IM). The model is based on an extension of the modified winding function approach (MWFA), which allows all harmonics of the magnetomotive force (MMF) to be taken into account. It is shown that a plane (2D) view of the IM yields the motor inductances easily and considerably reduces the calculation effort. The described technique accurately includes the slot skewing effect and leads to purely analytical expressions for the inductances in the case of radial eccentricity. To model static, dynamic, or mixed axial eccentricity, three suitable alternatives are explained. Unlike previous proposals, the discussed alternatives take into account all harmonics of the inverse air-gap function without any Fourier series development. Simulation results as well as experimental verifications prove the usefulness and effectiveness of the proposed model. (author)

  10. An improved model of induction motors for diagnosis purposes - Slot skewing effect and air-gap eccentricity faults

    International Nuclear Information System (INIS)

    Ghoggal, A.; Zouzou, S.E.; Razik, H.; Sahraoui, M.; Khezzar, A.

    2009-01-01

    This paper describes an improved method for modeling axial and radial eccentricities in induction motors (IM). The model is based on an extension of the modified winding function approach (MWFA), which allows all harmonics of the magnetomotive force (MMF) to be taken into account. It is shown that a plane (2D) view of the IM yields the motor inductances easily and considerably reduces the calculation effort. The described technique accurately includes the slot skewing effect and leads to purely analytical expressions for the inductances in the case of radial eccentricity. To model static, dynamic, or mixed axial eccentricity, three suitable alternatives are explained. Unlike previous proposals, the discussed alternatives take into account all harmonics of the inverse air-gap function without any Fourier series development. Simulation results as well as experimental verifications prove the usefulness and effectiveness of the proposed model.

  11. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : technical summary.

    Science.gov (United States)

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  12. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : final report.

    Science.gov (United States)

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  13. A Note on the Use of Mixture Models for Individual Prediction.

    Science.gov (United States)

    Cole, Veronica T; Bauer, Daniel J

    Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters. Despite the flexibility and widespread use of these models, most applications have focused solely on making inferences for whole or sub-populations, rather than individual cases. The current article presents a general framework for computing marginal and conditional predicted values for individuals using mixture model results. These predicted values can be used to characterize covariate effects, examine the fit of the model for specific individuals, or forecast future observations from previous ones. Two empirical examples are provided to demonstrate the usefulness of individual predicted values in applications of mixture models. The first example examines the relative timing of initiation of substance use using a multiple event process survival mixture model whereas the second example evaluates changes in depressive symptoms over adolescence using a growth mixture model.
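
    A hedged sketch of such individual predicted values, using a two-component Gaussian mixture on toy data: the marginal prediction averages the class means by the class weights (the same for everyone), while the conditional prediction for a given case weights the class means by that case's posterior class probabilities.

```python
# Marginal vs. conditional predicted values of y2 from a fitted two-component
# bivariate Gaussian mixture (invented data; not the models of the article).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two latent groups with different means on (y1, y2).
g1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
g2 = rng.normal(loc=[3.0, 4.0], scale=0.5, size=(200, 2))
Y = np.vstack([g1, g2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(Y)
mu_y2 = gmm.means_[:, 1]  # class means of y2

# Marginal predicted value of y2 (identical for all cases).
marginal = float(gmm.weights_ @ mu_y2)

# Conditional predicted value for one individual, weighting the class means
# by that case's posterior class probabilities given its observed data.
i = 0
resp = gmm.predict_proba(Y[i : i + 1])[0]
conditional = float(resp @ mu_y2)
print(f"marginal: {marginal:.2f}, conditional for case {i}: {conditional:.2f}")
```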

  14. MEASURING LOCAL GRADIENT AND SKEW QUADRUPOLE ERRORS IN RHIC IRS

    International Nuclear Information System (INIS)

    CARDONA, J.; PEGGS, S.; PILAT, R.; PTITSYN, V.

    2004-01-01

    The measurement of local linear errors at RHIC interaction regions using an "action and phase" analysis of difference orbits has already been presented [2]. This paper evaluates the accuracy of this technique using difference orbits that were taken when known gradient errors and skew quadrupole errors were intentionally introduced. It also presents an action and phase analysis of simulated orbits when controlled errors are intentionally placed in a RHIC simulation model.

  15. Personal Exposure to Mixtures of Volatile Organic Compounds: Modeling and Further Analysis of the RIOPA Data

    Science.gov (United States)

    Batterman, Stuart; Su, Feng-Chiao; Li, Shi; Mukherjee, Bhramar; Jia, Chunrong

    2015-01-01

    known to affect VOC exposures, many personal, environmental, and socioeconomic determinants remain to be identified, and the significance and applicability of the determinants reported in the literature are uncertain. To help answer these unresolved questions and overcome limitations of previous analyses, this project used several novel and powerful statistical modeling and analysis techniques and two large data sets. The overall objectives of this project were (1) to identify and characterize exposure distributions (including extreme values), (2) evaluate mixtures (including dependencies), and (3) identify determinants of VOC exposure. METHODS VOC data were drawn from two large data sets: the Relationships of Indoor, Outdoor, and Personal Air (RIOPA) study (1999–2001) and the National Health and Nutrition Examination Survey (NHANES; 1999–2000). The RIOPA study used a convenience sample to collect outdoor, indoor, and personal exposure measurements in three cities (Elizabeth, NJ; Houston, TX; Los Angeles, CA). In each city, approximately 100 households with adults and children who did not smoke were sampled twice for 18 VOCs. In addition, information about 500 variables associated with exposure was collected. The NHANES used a nationally representative sample and included personal VOC measurements for 851 participants. NHANES sampled 10 VOCs in common with RIOPA. Both studies used similar sampling methods and study periods. Specific Aim 1 To estimate and model extreme value exposures, extreme value distribution models were fitted to the top 10% and 5% of VOC exposures. Health risks were estimated for individual VOCs and for three VOC mixtures. Simulated extreme value data sets, generated for each VOC and for fitted extreme value and lognormal distributions, were compared with measured concentrations (RIOPA observations) to evaluate each model’s goodness of fit. Mixture distributions were fitted with the conventional finite mixture of normal distributions and the

  16. A model for radiative heat transfer in mixtures of a hot solid or molten material with water and steam

    International Nuclear Information System (INIS)

    Vaeth, L.

    1997-05-01

    A model has been devised for describing the radiative heat transfer in mixtures of a hot radiant material with water and steam, to be used, e.g., in the framework of a multiphase, multicomponent flow simulation. The main features of the model are: 1. The radiative heat transfer is modelled for a homogeneous mixture of one continuous material with droplets/bubbles of the other two, of the kind normally assumed for the material distribution in one cell of a bigger calculational problem. Neither the heat transfer over the cell boundaries nor the finite dimensions of the cell are taken into account. 2. The geometry of the mixture (radiant material continuous or discontinuous, droplet/bubble diameters and number densities) is taken into account. 3. The optical properties of water and water vapour are modelled as functions of the temperature of the radiant and, in the case of water vapour, also of the absorbing material. 4. The model distinguishes between heat transfer to the surface of the water (leading to evaporation) and into the bulk of the water (pure heating). (orig./DG)

  17. Widely Linear Blind Adaptive Equalization for Transmitter IQ-Imbalance/Skew Compensation in Multicarrier Systems

    DEFF Research Database (Denmark)

    Porto da Silva, Edson; Zibar, Darko

    2016-01-01

    Simple analytical widely linear complex-valued models for IQ-imbalance and IQ-skew effects in multicarrier transmitters are presented. To compensate for such effects, a 4×4 MIMO widely linear adaptive equalizer is proposed and experimentally validated....

  18. Structure-reactivity modeling using mixture-based representation of chemical reactions.

    Science.gov (United States)

    Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre

    2017-09-01

    We describe a novel approach to reaction representation as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenated product and reactant descriptors or the difference between descriptors of products and reactants. This reaction representation does not require explicit labeling of a reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
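
    The two feature variants (concatenation, and product-minus-reactant difference) can be sketched with toy descriptor vectors. The fragment counts below are invented, and summing component descriptors stands in for the actual SiRMS mixture encoding.

```python
# Toy sketch of a "mixture" reaction representation: a reaction is encoded
# from descriptor vectors of its reactant mixture and product mixture,
# either concatenated or as a product-minus-reactant difference.
import numpy as np

# Hypothetical fragment-count descriptors (one row per molecule).
reactants = np.array([[2, 0, 1],    # e.g. substrate
                      [0, 1, 0]])   # e.g. base
products = np.array([[1, 1, 1],    # e.g. alkene
                     [0, 0, 2]])   # e.g. leaving-group salt

# Here a mixture is represented by the sum of its components' descriptors.
d_react = reactants.sum(axis=0)
d_prod = products.sum(axis=0)

concat_features = np.concatenate([d_react, d_prod])  # concatenation variant
diff_features = d_prod - d_react                     # difference variant
print("concat:", concat_features, "diff:", diff_features)
```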

  19. Stock markets volatility spillovers during financial crises: A DCC-MGARCH with skewed-t density approach

    Directory of Open Access Journals (Sweden)

    Dahiru A. Bala

    2017-03-01

    This paper investigates stock returns volatility spillovers in emerging and developed markets (DMs) using multivariate GARCH (MGARCH) models and their variants. In addition, we analyse the impact of the global financial crisis (2007–2009) on stock market volatility interactions and modify the BEKK-MGARCH-type models by including financial crisis dummies to assess their impact on volatilities and spillovers. Major findings reveal that correlations among emerging markets (EMs) are lower compared with correlations among DMs and increase during financial crises. Furthermore, we detect evidence of volatility spillovers and observe that own-volatility spillovers are higher than cross-volatility spillovers for EMs, suggesting that shocks have not been substantially transmitted among EMs compared to DMs. We also find significant asymmetric behaviour in DMs, while only weak evidence is detected for EMs. Finally, the DCC-with-skewed-t density model provided improved diagnostics compared to other models, partly because it takes into account the fat tails and skewed features often present in financial returns.

  20. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
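
    A minimal sketch of the Poisson binomial N-mixture likelihood described above: repeated counts at each site are binomial thinnings of a latent Poisson abundance, and the latent abundance is marginalized by a truncated sum. The data are simulated, there are no covariates, and the truncation point K is an assumption of this illustration.

```python
# Maximum-likelihood fit of a Poisson binomial N-mixture model to simulated
# repeated counts (lambda = expected abundance, p = detection probability).
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
true_lam, true_p = 5.0, 0.4
n_sites, n_visits = 200, 3
N = rng.poisson(true_lam, size=n_sites)                   # latent abundances
counts = rng.binomial(N[:, None], true_p, size=(n_sites, n_visits))

K = 50  # truncation point for the sum over the latent abundance

def neg_log_lik(theta):
    lam = np.exp(theta[0])                 # lambda > 0 via log link
    p = 1.0 / (1.0 + np.exp(-theta[1]))    # 0 < p < 1 via logit link
    ks = np.arange(K + 1)
    log_prior = stats.poisson.logpmf(ks, lam)                       # (K+1,)
    log_det = stats.binom.logpmf(counts[:, :, None], ks, p).sum(axis=1)
    # Marginalize the latent abundance at every site, then sum over sites.
    return -np.logaddexp.reduce(log_prior + log_det, axis=1).sum()

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```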

  1. Resonance multiphoton ionization and dissociation of dimethyl ether via the C̃′, C̃ and B̃ states

    Science.gov (United States)

    Mejia-Ospino, E.; García, G.; Guerrero, A.; Alvarez, I.; Cisneros, C.

    2005-01-01

    The three-photon resonance, four-photon ionization and dissociation spectra of dimethyl ether (DME) are presented in the wavelength range 450-550 nm at 1 nm intervals. The (3+1) REMPI spectra show three prominent bands corresponding to the B̃ ← X̃, C̃ ← X̃ and C̃′ ← X̃ transitions with origins at 61 457 cm-1 (7.615 eV), 59 055 cm-1 (7.322 eV) and 58 010 cm-1 (7.194 eV), respectively. Several ionized species, CH3+, CHnO+ (n = 1-3) and CH3OCH3+, are observed in the region of wavelengths studied here. In order to compare the results, a shorter-wavelength multiphoton dissociation and ionization of DME at 355 nm is also presented. At this wavelength, DME undergoes neutral dissociation to CH3 and CH3O, and each fragment is then ionized by multiphoton absorption. The fragmentation at 355 nm is very intense and only small fragments such as CH3+, CHO+, CH2+, CH+ and C+ ions are observed. The measurement of photoelectron energy allows us to establish that the DME ionization potential is at least 9.55 ± 0.15 eV. The experiments were performed using a Nd:YAG-OPO (optical parametric oscillator) tunable laser system coupled to a time-of-flight mass spectrometer and a hemispherical electron energy analyser.

  2. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automating the typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with the ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative endpoints (density and bubble point) using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  3. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  4. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
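
    The combination of an ODE model with a mixture likelihood can be sketched for a toy decay process: two subpopulations share the ODE x' = -k x but differ in the rate k, and the mixture weight and rates are recovered by maximum likelihood. All rates, weights, and noise levels below are invented for illustration, not taken from the study.

```python
# ODE-constrained mixture sketch: measurements at each time point come from a
# mixture of normals centred on the ODE solutions of two subpopulations.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
t_obs = np.linspace(0.0, 4.0, 9)
k_true, w_true, sigma = (0.3, 1.5), 0.6, 0.05

def trajectory(k):
    # Solve x' = -k*x, x(0) = 1, at the observation times.
    sol = solve_ivp(lambda t, x: -k * x, (0.0, 4.0), [1.0], t_eval=t_obs)
    return sol.y[0]

# Simulate 300 cells; each belongs to subpopulation 1 with probability w_true.
z = rng.random(300) < w_true
data = np.where(z[:, None],
                trajectory(k_true[0]),
                trajectory(k_true[1])) + rng.normal(0, sigma, (300, len(t_obs)))

def neg_log_lik(theta):
    k1, k2 = np.exp(theta[0]), np.exp(theta[1])
    w = 1.0 / (1.0 + np.exp(-theta[2]))
    lp1 = norm.logpdf(data, trajectory(k1), sigma).sum(axis=1)
    lp2 = norm.logpdf(data, trajectory(k2), sigma).sum(axis=1)
    return -np.logaddexp(np.log(w) + lp1, np.log(1 - w) + lp2).sum()

res = minimize(neg_log_lik, x0=[np.log(0.2), np.log(1.0), 0.0],
               method="Nelder-Mead")
k1_hat, k2_hat = np.exp(res.x[0]), np.exp(res.x[1])
w_hat = 1.0 / (1.0 + np.exp(-res.x[2]))
print(f"k1 = {k1_hat:.2f}, k2 = {k2_hat:.2f}, w = {w_hat:.2f}")
```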

  5. Simple skew category algebras associated with minimal partially defined dynamical systems

    DEFF Research Database (Denmark)

    Nystedt, Patrik; Öinert, Per Johan

    2013-01-01

    In this article, we continue our study of category dynamical systems, that is, functors s from a category G to Top^op, and their corresponding skew category algebras. Suppose that the spaces s(e), for e ∈ ob(G), are compact Hausdorff. We show that if (i) the skew category algebra is simple, then (ii) G...

  6. Development of a Skewed Pipe Shear Connector for Precast Concrete Structures.

    Science.gov (United States)

    Kim, Sang-Hyo; Choi, Jae-Gu; Park, Sejun; Lee, Hyunmin; Heo, Won-Ho

    2017-05-13

    Joint connection methods such as the shear key and loop bar improve the structural performance of precast concrete structures, but usually at the cost of workability or constructional efficiency. This paper proposes a high-efficiency skewed pipe shear connector. To resist shear and pull-out forces, the proposed connectors are placed diagonally between precast concrete segments and a cast-in-place concrete joint on a girder. Design variables (such as the pipe diameter, length, and insertion angle) were examined to investigate the connection performance of the proposed connector. Our test results indicate that the skewed pipe shear connectors have 50% higher ductility and a 15% higher ratio of maximum load to yield strength compared to the corresponding parameters of the loop bar. Finite element analysis was used for validation and indicates that, compared to the loop bar, the skewed pipe shear connector has higher ultimate shear and pull-out resistance. These results indicate that the skewed pipe shear connector exhibits more idealized behavior than the loop bar in precast concrete structures.

  7. Influence of skew rays on the sensitivity and signal-to-noise ratio of a fiber-optic surface-plasmon-resonance sensor: a theoretical study

    International Nuclear Information System (INIS)

    Dwivedi, Yogendra S.; Sharma, Anuj K.; Gupta, Banshi D.

    2007-01-01

    We have theoretically analyzed the influence of skew rays on the performance of a fiber-optic sensor based on surface plasmon resonance. The performance of the sensor has been evaluated in terms of its sensitivity and signal-to-noise ratio (SNR). The theoretical model for skewness dependence includes the material dispersion in fiber cores and metal layers, along with the simultaneous excitation of skew rays and meridional rays in the fiber core by all guided rays launched from a collimated light source. The effect of skew rays on the SNR and the sensitivity of the sensor with two different metals has been compared. The same comparison is carried out for different values of design parameters such as the numerical aperture, fiber core diameter, and the length of the surface-plasmon-resonance (SPR) active sensing region. This detailed analysis of the effect of skewness on the SNR and the sensitivity of the sensor allows the best possible performance to be achieved from a fiber-optic SPR sensor despite the skewness in the optical fiber.

  8. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs has been presented, which includes topics like continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for estimation of optimum mixing proportions in different mixture models, which include Scheffé’s quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and the multi-response model. Robust mixture designs and mixture designs in blocks have also been reviewed. Moreover, some applications of mixture desig...

  9. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  10. Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data

    DEFF Research Database (Denmark)

    Røge, Rasmus; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians...

  11. Experimental study of the turbulent boundary layer in acceleration-skewed oscillatory flow

    NARCIS (Netherlands)

    van der A, D.A.; O' Donoghue, T.; Davies, A.G; Ribberink, Jan S.

    2011-01-01

    Experiments have been conducted in a large oscillatory flow tunnel to investigate the effects of acceleration skewness on oscillatory boundary layer flow over fixed beds. As well as enabling experimental investigation of the effects of acceleration skewness, the new experiments add substantially to

  12. The effect of forward skewed rotor blades on aerodynamic and aeroacoustic performance of axial-flow fan

    Science.gov (United States)

    Wei, Jun; Zhong, Fangyuan

    Based on comparative experiments, this paper examines the use of tangentially skewed rotor blades in an axial-flow fan. Comparison of the overall performance of the fan with a skewed-blade rotor and a radial-blade rotor shows that the skewed blades operate more efficiently than the radial blades, especially at low volume flows. At the same time, a decrease in the pressure rise and flow rate of the axial-flow fan with skewed rotor blades is found. The rotor-stator interaction noise and broadband noise of the axial-flow fan are reduced with skewed rotor blades. Forward-skewed blades tend to reduce the accumulation of the blade boundary layer in the tip region resulting from the effect of centrifugal forces. The turning of streamlines from the outer radius region into the inner radius region in the blade passages, due to the radial component of the blade forces of skewed blades, is the main reason for the decrease in pressure rise and flow rate.

  13. Study and optimal correction of a systematic skew quadrupole field in the Tevatron

    International Nuclear Information System (INIS)

    Snopok, Pavel; Johnstone, Carol; Berz, Martin; Ovsyannikov, Dmitry A.; Ovsyannikov, Alexander D.

    2006-01-01

    Increasing demands for luminosity in existing and future colliders have made lattice design and error tolerance and correction critical to achieving performance goals. The current state of the Tevatron collider is an example, with a strong skew quadrupole error present in the operational lattice. This work studies the high-order performance of the Tevatron and the strong nonlinear behavior introduced when a significant skew quadrupole error is combined with conventional sextupole correction, a behavior still clearly evident after optimal tuning of available skew quadrupole circuits. An optimization study is performed using different skew quadrupole families, and, importantly, local and global correction of the linear skew terms in maps generated by the code COSY INFINITY [M. Berz, COSY INFINITY version 8.1 user's guide and reference manual, Department of Physics and Astronomy MSUHEP-20704, Michigan State University (2002). URL http://cosy.pa.msu.edu/cosymanu/index.html]. Two correction schemes with one family locally correcting each arc and eight independent correctors in the straight sections for global correction are proposed and shown to dramatically improve linearity and performance of the baseline Tevatron lattice.

  14. Models for the computation of opacity of mixtures

    International Nuclear Information System (INIS)

    Klapisch, Marcel; Busquet, Michel

    2013-01-01

    We compare four models for the partial densities of the components of mixtures. These models yield different opacities as shown on polystyrene, acrylic and polyimide in local thermodynamical equilibrium (LTE). Two of these models, the ‘whole volume partial pressure’ model (M1) and its modification (M2) are not thermodynamically consistent (TC). The other two models are TC and minimize free energy. M3, the ‘partial volume equal pressure’ model, uses equality of chemical potential. M4 uses commonality of free electron density. The latter two give essentially identical results in LTE, but M4’s convergence is slower. M4 is easily generalized to non-LTE conditions. Non-LTE effects are shown by the variation of the Planck mean opacity of the mixtures with temperature and density. (paper)

  15. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

    Potential applications of ionic liquids (ILs) require knowledge of the physicochemical properties of IL mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model contains only one new characteristic parameter, which can be determined from a single experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified against a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute relative deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied over a wider range of conditions.
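
The one-parameter idea can be illustrated with a hypothetical mixing rule (not the paper's actual equations): an ideal linear blend plus an excess term whose single characteristic parameter k is determined from one measured mixture point.

```python
def mixture_property(x1, p1, p2, k):
    # Hypothetical one-parameter mixing rule: linear ideal blend of the
    # pure-component properties p1, p2 plus a symmetric excess term in k.
    x2 = 1.0 - x1
    return x1 * p1 + x2 * p2 + k * x1 * x2

def fit_k(x1_obs, p_obs, p1, p2):
    """Determine the single characteristic parameter from one data point,
    as the semi-empirical models above require only one measurement."""
    x2 = 1.0 - x1_obs
    return (p_obs - (x1_obs * p1 + x2 * p2)) / (x1_obs * x2)

# One measured equimolar point (all values illustrative) fixes k...
k = fit_k(0.5, 1.08, 1.20, 0.90)
# ...after which the property can be predicted at any other composition.
print(mixture_property(0.25, 1.20, 0.90, k))
```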

  16. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    International Nuclear Information System (INIS)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio

    2013-01-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of

  17. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] is well predicted by a concentration addition (CA) model, whereas a response addition (RA) model better predicts the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO4 (NIS-inhibitor) dosed at a fixed ratio of EC10, which provided similar CA and RA predictions, making it difficult to reach a conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  18. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
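
The core of the Plackett-Luce model is a sequential choice probability: the item ranked next is drawn from the remaining items with probability proportional to a positive "worth" parameter. A minimal sketch with illustrative worths (not the paper's Bayesian estimation machinery):

```python
import math

def plackett_luce_logprob(ranking, worths):
    """Log-probability of a (possibly partial) ranking under the
    Plackett-Luce model. `ranking` lists item indices from best to
    worst; unlisted items still compete in every denominator, which
    is how partially ranked data enter the likelihood."""
    remaining = sum(worths)
    logp = 0.0
    for item in ranking:
        logp += math.log(worths[item] / remaining)
        remaining -= worths[item]
    return logp

worths = [3.0, 2.0, 1.0]  # illustrative worth parameters
full = math.exp(plackett_luce_logprob([0, 1, 2], worths))  # (3/6)*(2/3)*(1/1)
top1 = math.exp(plackett_luce_logprob([0], worths))        # 3/6
print(round(full, 4), round(top1, 4))  # → 0.3333 0.5
```

A finite mixture, as in the paper, would simply average such component likelihoods under mixing weights, with component worths and weights inferred by EM or Gibbs sampling.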

  19. bspmma: An R Package for Bayesian Semiparametric Models for Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Deborah Burr

    2012-07-01

    Full Text Available We introduce an R package, bspmma, which implements a Dirichlet-based random effects model specific to meta-analysis. In meta-analysis, when combining effect estimates from several heterogeneous studies, it is common to use a random-effects model. The usual frequentist or Bayesian models specify a normal distribution for the true effects. However, in many situations, the effect distribution is not normal, e.g., it can have thick tails, be skewed, or be multi-modal. A Bayesian nonparametric model based on mixtures of Dirichlet process priors has been proposed in the literature, for the purpose of accommodating the non-normality. We review this model and then describe a competitor, a semiparametric version which has the feature that it allows for a well-defined centrality parameter convenient for determining whether the overall effect is significant. This second Bayesian model is based on a different version of the Dirichlet process prior, and we call it the "conditional Dirichlet model". The package contains functions to carry out analyses based on either the ordinary or the conditional Dirichlet model, functions for calculating certain Bayes factors that provide a check on the appropriateness of the conditional Dirichlet model, and functions that enable an empirical Bayes selection of the precision parameter of the Dirichlet process. We illustrate the use of the package on two examples, and give an interpretation of the results in these two different scenarios.

  20. Modeling the effects of binary mixtures on survival in time.

    NARCIS (Netherlands)

    Baas, J.; van Houte, B.P.P.; van Gestel, C.A.M.; Kooijman, S.A.L.M.

    2007-01-01

    In general, effects of mixtures are difficult to describe, and most of the models in use are descriptive in nature and lack a strong mechanistic basis. The aim of this experiment was to develop a process-based model for the interpretation of mixture toxicity measurements, with effects of binary

  1. Evaluation of the Weibull and log normal distribution functions as survival models of Escherichia coli under isothermal and non-isothermal conditions.

    Science.gov (United States)

    Aragao, Glaucia M F; Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2007-11-01

    Published survival curves of Escherichia coli in two growth media, with and without the presence of salt, at various temperatures and in a Greek eggplant salad having various levels of essential oil, all had a characteristic downward concavity when plotted on semilogarithmic coordinates. Some also exhibited what appeared as a 'shoulder' of considerable length. Regardless of whether a shoulder was noticed, the survival pattern could be considered as a manifestation of an underlying unimodal distribution of the cells' death times. Mathematically, the data could be described equally well by the Weibull and log normal distribution functions, which had similar modes, means, standard deviations and coefficients of skewness. When plotted in their probability density function (PDF) form, the curves also appeared very similar visually. This enabled us to quantify and compare the effect of temperature or essential oil concentration on the organism's survival in terms of these temporal distributions' characteristics. Increased lethality was generally expressed in a shorter mean and mode, a smaller standard deviation and increased overall symmetry as judged by the distributions' degree of skewness. The 'shoulder', as expected, simply indicated that the distribution's standard deviation was much smaller than its mode. Rate models based on the two distribution functions could be used to predict non-isothermal survival patterns. They were derived on the assumption that the momentary inactivation rate is the isothermal rate at the momentary temperature at a time that corresponds to the momentary survival ratio. In this application, however, the Weibullian model with a fixed power was not only simpler and more convenient mathematically than the one based on the log normal distribution, but it also provided more accurate estimates of the dynamic inactivation patterns.
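
The rate model described in this abstract can be written down directly for the Weibullian case with a fixed power. The sketch below uses assumed parameter values; `temp_profile` and `b_of_T` are hypothetical placeholder functions, not the paper's fitted relations.

```python
def weibull_survival(t, b, n):
    # Isothermal Weibullian survival ratio: log10 S(t) = -b * t**n.
    # n > 1 gives the downward concavity seen on semilogarithmic plots.
    return 10.0 ** (-b * t ** n)

def nonisothermal_survival(temp_profile, b_of_T, n, dt=0.01, t_end=10.0):
    """Momentary-rate prediction with a fixed Weibullian power n: at each
    step, find the time t* that would have produced the current survival
    ratio isothermally at the momentary temperature, then advance the
    isothermal curve from t* by one time step."""
    log10S = 0.0
    steps = int(round(t_end / dt))
    for i in range(steps):
        b = b_of_T(temp_profile(i * dt))
        t_star = (-log10S / b) ** (1.0 / n) if b > 0 else 0.0
        log10S = -b * (t_star + dt) ** n
    return 10.0 ** log10S

# Sanity check: under a constant temperature the dynamic prediction must
# reproduce the isothermal formula (hypothetical b = 0.1, n = 1.5).
s_dyn = nonisothermal_survival(lambda t: 60.0, lambda T: 0.1, 1.5, t_end=10.0)
print(abs(s_dyn - weibull_survival(10.0, 0.1, 1.5)) < 1e-6)  # → True
```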

  2. Segmentation and intensity estimation of microarray images using a gamma-t mixture model.

    Science.gov (United States)

    Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J

    2007-02-15

    We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the

  3. Screening Immunomodulators To Skew the Antigen-Specific Autoimmune Response.

    Science.gov (United States)

    Northrup, Laura; Sullivan, Bradley P; Hartwell, Brittany L; Garza, Aaron; Berkland, Cory

    2017-01-03

    Current therapies to treat autoimmune diseases often result in side effects such as nonspecific immunosuppression. Therapies that can induce antigen-specific immune tolerance provide an opportunity to reverse autoimmunity and mitigate the risks associated with global immunosuppression. In an effort to induce antigen-specific immune tolerance, co-administration of immunomodulators with autoantigens has been investigated in an effort to reprogram autoimmunity. To date, identifying immunomodulators that may skew the antigen-specific immune response has been ad hoc at best. To address this need, we utilized splenocytes obtained from mice with experimental autoimmune encephalomyelitis (EAE) in order to determine if certain immunomodulators may induce markers of immune tolerance following antigen rechallenge. Of the immunomodulatory compounds investigated, only dexamethasone modified the antigen-specific immune response by skewing the cytokine response and decreasing T-cell populations at a concentration corresponding to a relevant in vivo dose. Thus, antigen-educated EAE splenocytes provide an ex vivo screen for investigating compounds capable of skewing the antigen-specific immune response, and this approach could be extrapolated to antigen-educated cells from other diseases or human tissues.

  4. CONVERGENCE OF ESTIMATORS IN MIXTURE MODELS BASED ON MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available Mixture models can estimate the proportion of patients who are cured and the survival function of patients who are not cured. In this study, a mixture model is developed for cure-rate analysis based on missing data. Several methods are available for analysing missing data; one of them is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach to learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values for the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat step 2 using the new model. The analysis shows that, in the EM algorithm, the log-likelihood for the missing data increases after each iteration; hence the sequence of likelihoods converges whenever the likelihood is bounded from below.
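
The EM iteration and its monotone log-likelihood property can be illustrated on a simple two-component normal mixture. This is a generic sketch with synthetic data, not the paper's cure-rate model; variances are fixed and equal for brevity.

```python
import math
import random

def em_two_normals(xs, mu0, mu1, iters=50, sigma=1.0):
    """EM for a two-component normal mixture with known, equal variances.
    Returns (weight, means, loglik_trace); the trace is non-decreasing,
    illustrating the convergence property discussed above."""
    w = 0.5
    trace = []
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point.
        r0, ll = [], 0.0
        for x in xs:
            p0 = w * math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
            p1 = (1 - w) * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            r0.append(p0 / (p0 + p1))
            ll += math.log((p0 + p1) / (sigma * math.sqrt(2 * math.pi)))
        trace.append(ll)
        # M-step: re-estimate the mixing weight and the two means.
        s0 = sum(r0)
        w = s0 / len(xs)
        mu0 = sum(r * x for r, x in zip(r0, xs)) / s0
        mu1 = sum((1 - r) * x for r, x in zip(r0, xs)) / (len(xs) - s0)
    return w, (mu0, mu1), trace

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(4, 1) for _ in range(100)]
w, mus, trace = em_two_normals(xs, mu0=-1.0, mu1=1.0)
print(round(w, 2), all(b >= a - 1e-9 for a, b in zip(trace, trace[1:])))
```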

  5. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
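
The quantities being estimated follow the standard linear clock model, local = (1 + skew) · reference + offset. As a deliberately simple stand-in for the RB particle filter above (which additionally handles non-deterministic delays via the DPM model), a least-squares fit on noiseless exchange pairs looks like:

```python
def estimate_offset_skew(pairs):
    """Least-squares fit of the linear clock model
        local = (1 + skew) * reference + offset
    from (reference_time, local_time) exchange pairs. A simplified
    sketch, not the RB particle filter algorithm itself."""
    n = len(pairs)
    sr = sum(r for r, _ in pairs)
    sl = sum(l for _, l in pairs)
    srr = sum(r * r for r, _ in pairs)
    srl = sum(r * l for r, l in pairs)
    slope = (n * srl - sr * sl) / (n * srr - sr * sr)
    offset = (sl - slope * sr) / n
    return offset, slope - 1.0

# Synthetic exchange pairs with a 20 ms offset and 500 ppm skew.
pairs = [(t, 1.0005 * t + 0.02) for t in range(10)]
off, skew = estimate_offset_skew(pairs)
print(round(off, 4), round(skew, 6))  # ≈ 0.02 and 0.0005
```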

  6. Asymptotic efficiency of signed-rank symmetry tests under skew alternatives.

    OpenAIRE

    Alessandra Durio; Yakov Nikitin

    2002-01-01

    The efficiency of some known tests for symmetry such as the sign test, the Wilcoxon signed-rank test or more general linear signed rank tests was studied mainly under the classical alternatives of location. However it is interesting to compare the efficiencies of these tests under asymmetric alternatives like the so-called skew alternative proposed in Azzalini (1985). We find and compare local Bahadur efficiencies of linear signed-rank statistics for skew alternatives and discuss also the con...

  7. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2014-03-01

    Full Text Available IDEAL is an efficient segregated algorithm for the fluid flow and heat transfer problems. This algorithm has now been extended to the 3D nonorthogonal curvilinear coordinates. Highly skewed grids in the nonorthogonal curvilinear coordinates can decrease the convergence rate and deteriorate the calculating stability. In this study, the feasibility of the IDEAL algorithm on highly skewed grid system is analyzed by investigating the lid-driven flow in the inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for the highly skewed and fine grid system. For example, at θ = 5° and grid number = 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm can converge almost at any time step multiple.

  8. Using purine skews to predict genes in AT-rich poxviruses

    Directory of Open Access Journals (Sweden)

    Upton Chris

    2005-02-01

    Full Text Available Abstract Background Clusters or runs of purines on the mRNA synonymous strand have been found in many different organisms including orthopoxviruses. The purine bias that is exhibited by these clusters can be observed using a purine skew and in the case of poxviruses, these skews can be used to help determine the coding strand of a particular segment of the genome. Combined with previous findings that minor ORFs have lower than average aspartate and glutamate composition and higher than average serine composition, purine content can be used to predict the likelihood of a poxvirus ORF being a "real gene". Results Using purine skews and a "quality" measure designed to incorporate previous findings about minor ORFs, we have found that in our training case (vaccinia virus strain Copenhagen), 59 of 65 minor (small and unlikely to be real genes) ORFs were correctly classified as being minor. Of the 201 major (large and likely to be real genes) vaccinia ORFs, 192 were correctly classified as being major. Performing a similar analysis with the entomopoxvirus Amsacta moorei (AMEV), it was found that 4 major ORFs were incorrectly classified as minor and 9 minor ORFs were incorrectly classified as major. The purine abundance observed for major ORFs in vaccinia virus was found to stem primarily from the first codon position, with both the second and third codon positions containing roughly equal amounts of purines and pyrimidines. Conclusion Purine skews and a "quality" measure can be used to predict functional ORFs, and purine skews in particular can be used to determine which of two overlapping ORFs is most likely to be the real gene if neither of the two ORFs has orthologs in other poxviruses.
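
A purine skew of the kind used here can be computed as a cumulative walk along the sequence; a rising segment suggests the purine-rich (mRNA-synonymous) strand. This is a minimal sketch: windowing and the paper's "quality" measure are omitted.

```python
def purine_skew(seq):
    """Cumulative purine skew: step +1 for each purine (A/G) and -1 for
    each pyrimidine (C/T) while walking the sequence; other characters
    (e.g. N) leave the skew unchanged."""
    skew, out = 0, []
    for base in seq.upper():
        if base in "AG":
            skew += 1
        elif base in "CT":
            skew -= 1
        out.append(skew)
    return out

print(purine_skew("AAGGCT"))  # → [1, 2, 3, 4, 3, 2]
```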

  9. Robustness of S1 statistic with Hodges-Lehmann for skewed distributions

    Science.gov (United States)

    Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping

    2016-10-01

    Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means for more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as Kruskal-Wallis among nonparametric or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator, and the default scale estimator with the variance of the Hodges-Lehmann estimator and MADn, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
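
The Hodges-Lehmann location estimator substituted into S1 is the median of all pairwise Walsh averages. A small sketch of the one-sample version (not the full modified S1 statistic or its bootstrap test):

```python
def hodges_lehmann(xs):
    """One-sample Hodges-Lehmann estimator: the median of all Walsh
    averages (x_i + x_j) / 2 for i <= j. Robust to outliers, unlike
    the sample mean, and more efficient than the raw median under
    near-normal data."""
    walsh = sorted((xs[i] + xs[j]) / 2.0
                   for i in range(len(xs)) for j in range(i, len(xs)))
    m = len(walsh)
    mid = m // 2
    return walsh[mid] if m % 2 else (walsh[mid - 1] + walsh[mid]) / 2.0

print(hodges_lehmann([1, 2, 3, 100]))  # → 2.75, barely moved by the outlier
```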

  10. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.
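    The problem regime described here, gradual within-person change against large between-person differences, can be illustrated with a toy simulation (all parameter values are hypothetical, not taken from the study):

```python
import random

random.seed(1)

# Hypothetical BMI-like trajectories: large between-person differences
# (random intercepts, sd = 3) and gradual within-person change
# (stationary AR(1) residuals, sd = 1, autocorrelation 0.8).
n_people, n_times, rho = 200, 8, 0.8
data = []
for _ in range(n_people):
    intercept = random.gauss(25, 3)
    e = random.gauss(0, 1)
    series = []
    for _ in range(n_times):
        e = rho * e + random.gauss(0, (1 - rho ** 2) ** 0.5)  # AR(1) step
        series.append(intercept + e)
    data.append(series)

person_means = [sum(s) / n_times for s in data]
grand = sum(person_means) / n_people
between = sum((m - grand) ** 2 for m in person_means) / (n_people - 1)
within = sum(
    (y - m) ** 2 for s, m in zip(data, person_means) for y in s
) / (n_people * (n_times - 1))

print(between > within)  # True: people differ far more than they change
```

In this regime the within-person residuals are strongly autocorrelated, which is exactly the structure that, if constrained away, re-emerges within the latent classes.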

  11. Quantitative structure activity relationships (QSAR) for binary mixtures at non-equitoxic ratios based on toxic ratios-effects curves.

    Science.gov (United States)

    Tian, Dayong; Lin, Zhifen; Yin, Daqiang

    2013-01-01

    The present study proposed a QSAR model to predict joint effects at non-equitoxic ratios for binary mixtures containing reactive toxicants, cyanogenic compounds and aldehydes. The toxicity of single chemicals and binary mixtures was measured by quantifying the decrease in light emission from Photobacterium phosphoreum over 15 min. The joint effects of binary mixtures (TU sum) could thus be obtained. The results showed that the relationships between the toxic ratios of the individual chemicals and their joint effects can be described by a normal distribution function. Based on normal distribution equations, the joint effects of binary mixtures at non-equitoxic ratios ( [Formula: see text]) can be predicted quantitatively using the joint effects at equitoxic ratios ( [Formula: see text]). Combined with a QSAR model of [Formula: see text] in our previous work, a novel QSAR model can be proposed to predict the joint effects of mixtures at non-equitoxic ratios ( [Formula: see text]). The proposed model has been validated using additional mixtures other than those used for the development of the model. Predicted and observed results were similar (p>0.05). This study provides an approach to the prediction of joint effects for binary mixtures at non-equitoxic ratios.

  12. Skew-orthogonal polynomials, differential systems and random matrix theory

    International Nuclear Information System (INIS)

    Ghosh, S.

    2007-01-01

    We study skew-orthogonal polynomials with respect to the weight function exp[-2V(x)], with V(x) = Σ_{K=1}^{2d} (u_K/K) x^K, u_{2d} > 0, d > 0. A finite subsequence of such skew-orthogonal polynomials, arising in the study of Orthogonal and Symplectic ensembles of random matrices, satisfies a system of differential-difference-deformation equations. The vectors formed by such a subsequence have rank equal to the degree of the potential in the quaternion sense. These solutions satisfy a certain compatibility condition and hence admit a simultaneous fundamental system of solutions. (author)

  13. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
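    The procedure analyzed here is essentially what is now known as the EM algorithm; a minimal sketch for a two-component univariate normal mixture (generic textbook updates, not the paper's exact iteration):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_normals(data, iters=200):
    """EM iteration for a two-component univariate normal mixture."""
    pi, mu1, mu2, s1, s2 = 0.5, min(data), max(data), 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in data:
            a = pi * normal_pdf(x, mu1, s1)
            b = (1 - pi) * normal_pdf(x, mu2, s2)
            r.append(a / (a + b))
        n1 = sum(r)
        n2 = len(data) - n1
        # M-step: responsibility-weighted maximum-likelihood updates
        pi = n1 / len(data)
        mu1 = sum(w * x for w, x in zip(r, data)) / n1
        mu2 = sum((1 - w) * x for w, x in zip(r, data)) / n2
        s1 = max(1e-6, math.sqrt(sum(w * (x - mu1) ** 2 for w, x in zip(r, data)) / n1))
        s2 = max(1e-6, math.sqrt(sum((1 - w) * (x - mu2) ** 2 for w, x in zip(r, data)) / n2))
    return pi, mu1, mu2, s1, s2

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]
pi, mu1, mu2, s1, s2 = em_two_normals(data)
print(round(mu1, 1), round(mu2, 1))  # close to the true means 0 and 5
```

The local-convergence result summarized in the abstract concerns exactly this kind of fixed-point iteration: started near the truth, the updates home in on the consistent maximum-likelihood estimate.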

  15. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties which provide remarkable results. In addition, the Bayesian method also shows a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied by using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to an invalid result. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, Philippines and Indonesia. Lastly, the results showed that there is a negative effect between rubber price and stock market price for all selected countries.
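    The BIC selection step can be sketched as follows (the log-likelihood values below are hypothetical placeholders, not the paper's fitted values):

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better."""
    return n_params * math.log(n_obs) - 2 * log_lik

# Hypothetical maximized log-likelihoods of k = 1..4 component mixtures fitted
# to n = 500 observations; a k-component univariate normal mixture has
# 3k - 1 free parameters (k means, k variances, k - 1 free weights).
n = 500
fits = {1: -1450.0, 2: -1310.0, 3: -1305.0, 4: -1303.0}
scores = {k: bic(ll, 3 * k - 1, n) for k, ll in fits.items()}
best_k = min(scores, key=scores.get)
print(best_k)  # 2: the small gain from k > 2 does not justify extra parameters
```

The ln(n) penalty grows with every added component, so BIC stops adding components once the improvement in fit becomes marginal.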

  16. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
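    The robustness of the Student's-t mixture comes from its E-step precision weights, which shrink for outlying points; a sketch for the univariate case (the registration machinery itself is omitted):

```python
def t_weight(delta, nu):
    """E-step precision weight for a univariate Student's-t component,
    given Mahalanobis distance delta and nu degrees of freedom.
    Gaussian EM implicitly uses a constant weight of 1 for every point."""
    return (nu + 1) / (nu + delta ** 2)

for delta in (0.0, 1.0, 5.0):
    print(delta, round(t_weight(delta, nu=3.0), 3))
# An outlier at delta = 5 receives weight 4/28, roughly 0.14, so it barely
# influences the centroid updates, unlike under a Gaussian model.
```

This down-weighting is why the heavy-tailed mixture tolerates noise and outliers that would drag Gaussian centroids off target.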

  17. Hydrogenic ionization model for mixtures in non-LTE plasmas

    International Nuclear Information System (INIS)

    Djaoui, A.

    1999-01-01

    The Hydrogenic Ionization Model for Mixtures (HIMM) is a non-Local Thermodynamic Equilibrium (non-LTE), time-dependent ionization model for laser-produced plasmas containing mixtures of elements (species). In this version, both collisional and radiative rates are taken into account. An ionization distribution for each species which is consistent with the ambient electron density is obtained by use of an iterative procedure in a single calculation for all species. Energy levels for each shell having a given principal quantum number, and for each ion stage of each species in the mixture, are calculated using screening constants. Steady-state non-LTE as well as LTE solutions are also provided. The non-LTE rate equations converge to the LTE solution at sufficiently high densities or as the radiation temperature approaches the electron temperature. The model is particularly useful at low temperatures, where convergence problems are usually encountered in our previous models. We apply our model to typical situations in x-ray laser research, laser-produced plasmas and inertial confinement fusion. Our results compare well with previously published results for a selenium plasma. (author)

  18. Nonlinear Structured Growth Mixture Models in M"plus" and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2010-01-01

    Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…

  19. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
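    The recommended transformations can be sketched as follows; for simplicity this uses fixed-effect inverse-variance pooling on the logit scale (a random-effects analysis would add a between-study variance term, and the study values and standard errors below are hypothetical):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

# Hypothetical C-statistics from five validation studies with their
# standard errors on the logit scale; inverse-variance pooling.
c_stats = [0.72, 0.68, 0.75, 0.70, 0.66]
se_logit = [0.08, 0.10, 0.09, 0.07, 0.12]

weights = [1 / se ** 2 for se in se_logit]
pooled_logit = sum(w * logit(c) for w, c in zip(weights, c_stats)) / sum(weights)
pooled_c = inv_logit(pooled_logit)
print(round(pooled_c, 3))  # pooled C-statistic, back-transformed to (0, 1)
```

Pooling on the logit scale keeps the summary inside (0, 1) and, per the simulation findings, makes the normality assumption of the meta-analysis far more plausible.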

  20. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
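    As an illustrative simplification (single Gaussians fitted by moments, not the authors' GM-KLIEP, which learns full mixture models by matching the ratio directly), a fitted-density ratio already yields usable importance weights:

```python
import math
import random

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, math.sqrt(var)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
num = [random.gauss(2, 1) for _ in range(2000)]   # samples from the numerator density p
den = [random.gauss(0, 1) for _ in range(2000)]   # samples from the denominator density q

mu_p, s_p = fit_gaussian(num)
mu_q, s_q = fit_gaussian(den)

def importance(x):
    """Estimated importance w(x) = p(x) / q(x) from the two fitted models."""
    return gauss_pdf(x, mu_p, s_p) / gauss_pdf(x, mu_q, s_q)

print(importance(2.0) > 1.0 > importance(-1.0))  # True
```

Points typical of p but rare under q get large weights, which is exactly how the importance is used for covariate-shift correction and related tasks.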

  1. Tunable integration of absorption-membrane-adsorption for efficiently separating low boiling gas mixtures near normal temperature

    Science.gov (United States)

    Liu, Huang; Pan, Yong; Liu, Bei; Sun, Changyu; Guo, Ping; Gao, Xueteng; Yang, Lanying; Ma, Qinglan; Chen, Guangjin

    2016-01-01

    Separation of low boiling gas mixtures is a widespread concern in the process industries. Their separation currently relies heavily upon energy-intensive cryogenic processes. Here, we report a pseudo-absorption process for separating low boiling gas mixtures near normal temperature. In this process, absorption-membrane-adsorption is integrated by suspending a suitable porous ZIF material in a suitable solvent and forming a selectively permeable liquid membrane around the ZIF particles. Green solvents like water and glycol were used to form a ZIF-8 slurry and tune the permeability of the liquid membrane surrounding the ZIF-8 particles. We found that glycol molecules form a tighter membrane while water molecules form a looser membrane because of the hydrophobicity of ZIF-8. When using mixed solvents composed of glycol and water, the permeability of the liquid membrane becomes tunable. It is shown that the ZIF-8/water slurry always manifests remarkably higher separation selectivity than solid ZIF-8, and that it can be tuned to further enhance the capture of light hydrocarbons by adding a suitable quantity of glycol to the water. Because of its lower viscosity and higher sorption/desorption rate, the tunable ZIF-8/water-glycol slurry could readily be used as a liquid absorbent to separate different kinds of low boiling gas mixtures by applying a multistage separation process in one traditional absorption tower, especially for the capture of light hydrocarbons. PMID:26892255

  2. Uniqueness: skews bit occurrence frequencies in randomly generated fingerprint libraries.

    Science.gov (United States)

    Chen, Nelson G

    2016-08-01

    Requiring that randomly generated chemical fingerprint libraries have unique fingerprints such that no two fingerprints are identical causes a systematic skew in bit occurrence frequencies, the proportion at which specified bits are set. Observed frequencies (O) at which each bit is set within the resulting libraries systematically differ from frequencies at which bits are set at fingerprint generation (E). Observed frequencies systematically skew toward 0.5, with the effect being more pronounced as library size approaches the compound space, which is the total number of unique possible fingerprints given the number of bit positions each fingerprint contains. The effect is quantified for varying library sizes as a fraction of the overall compound space, and for changes in the specified frequency E. The cause and implications for this systematic skew are subsequently discussed. When generating random libraries of chemical fingerprints, the imposition of a uniqueness requirement should either be avoided or taken into account.
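    The effect is easy to reproduce; a sketch with an 8-bit fingerprint space (parameter values chosen for illustration, not taken from the paper):

```python
import random

random.seed(42)

N_BITS, E, LIB_SIZE = 8, 0.75, 180   # compound space = 2**8 = 256 fingerprints

def random_fingerprint():
    """Each bit is set independently with probability E at generation."""
    return tuple(random.random() < E for _ in range(N_BITS))

# Impose the uniqueness requirement: reject any duplicate fingerprint.
library = set()
while len(library) < LIB_SIZE:
    library.add(random_fingerprint())

# Observed frequency O: proportion of library fingerprints with each bit set.
observed = [sum(fp[i] for fp in library) / LIB_SIZE for i in range(N_BITS)]
mean_O = sum(observed) / N_BITS
print(round(mean_O, 3))  # noticeably below E = 0.75, skewed toward 0.5
```

Because the common high-weight patterns are exhausted first, filling most of the compound space forces rarer low-weight patterns into the library, dragging O toward 0.5 just as described.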

  3. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank

    2017-06-20

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified empirically through simulated Gaussian mixture models. The presented methodology generalizes to other divergence families relying on Hellinger-type integrals.
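    The paper's bounds are not reproduced here (their formulas are not given in this abstract); the following sketch instead Monte Carlo-estimates the Kullback-Leibler divergence, the α → 1 limit of the α-divergence, between two univariate Gaussian mixtures, the kind of quantity such bounds bracket:

```python
import math
import random

def gmm_pdf(x, comps):
    """comps: list of (weight, mean, sigma) for a univariate Gaussian mixture."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in comps
    )

def gmm_sample(comps):
    """Draw one sample: pick a component by weight, then sample from it."""
    u, acc = random.random(), 0.0
    for w, m, s in comps:
        acc += w
        if u <= acc:
            return random.gauss(m, s)
    return random.gauss(comps[-1][1], comps[-1][2])

def mc_kl(p, q, n=20000):
    """Monte Carlo estimate of KL(p || q) using samples from p."""
    total = 0.0
    for _ in range(n):
        x = gmm_sample(p)
        total += math.log(gmm_pdf(x, p) / gmm_pdf(x, q))
    return total / n

random.seed(0)
p = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
q = [(0.7, 0.5, 1.2), (0.3, 3.0, 1.0)]
print(mc_kl(p, p))        # 0.0 exactly: the log-ratio vanishes term by term
print(mc_kl(p, q) > 0.0)  # True, up to Monte Carlo error
```

Mixture divergences have no closed form in general, which is precisely why cheap deterministic bounds of the kind derived in the paper are useful alongside sampling estimates like this one.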

  4. Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity

    Science.gov (United States)

    Chen, Hsieh; Panagiotopoulos, Athanassios Z.

    2018-01-01

    We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.

  5. A general mixture model and its application to coastal sandbar migration simulation

    Science.gov (United States)

    Liang, Lixin; Yu, Xiping

    2017-04-01

    A mixture model for the general description of sediment-laden flows is developed and then applied to coastal sandbar migration simulation. First, the mixture model is derived based on the Eulerian-Eulerian approach of the complete two-phase flow theory. The basic equations of the model include the mass and momentum conservation equations for the water-sediment mixture and the continuity equation for sediment concentration. The turbulent motion of the mixture is formulated for the fluid and the particles respectively. A modified k-ɛ model is used to describe the fluid turbulence while an algebraic model is adopted for the particles. A general formulation for the relative velocity between the two phases in sediment-laden flows, derived by manipulating the momentum equations of the enhanced two-phase flow model, is incorporated into the mixture model. A finite difference method based on the SMAC scheme is utilized for numerical solutions. The model is validated against suspended sediment motion in steady open channel flows, in both equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulence kinetic energy of the mixture are all shown to be in good agreement with experimental data. The mixture model is then applied to the study of sediment suspension and sandbar migration in surf zones under a vertical 2D framework. The VOF method for the description of the water-air free surface and a topography response model are coupled. The bed load transport rate and suspended load entrainment rate are both determined by the seabed shear stress, which is obtained from the boundary-layer-resolved mixture model. The simulation results indicated that, under small-amplitude regular waves, erosion occurred on the sandbar slope facing against the wave propagation direction, while deposition dominated on the slope facing the direction of wave propagation, indicating an onshore migration tendency. The computation results also show that

  6. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt (1974) considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of

  7. Flexible Mixture-Amount Models for Business and Industry Using Gaussian Processes

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste); D. Fok (Dennis); P.P. Goos (Peter)

    2016-01-01

    Many products and services can be described as mixtures of ingredients whose proportions sum to one. Specialized models have been developed for linking the mixture proportions to outcome variables, such as preference, quality and liking. In many scenarios, only the mixture

  8. Modeling pore corrosion in normally open gold-plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  9. Effect of Phase Response Curve Skew on Synchronization with and without Conduction Delays

    Directory of Open Access Journals (Sweden)

    Carmen Canavier

    2013-12-01

    Full Text Available A central problem in cortical processing, including sensory binding and attentional gating, is how neurons can synchronize their responses with zero or near-zero time lag. For a spontaneously firing neuron, an input from another neuron can delay or advance the next spike by different amounts depending upon the timing of the input relative to the previous spike. This information constitutes the phase response curve (PRC). We present a simple graphical method for determining the effect of PRC shape on synchronization tendencies and illustrate it using type 1 PRCs, which consist entirely of advances (delays) in response to excitation (inhibition). We obtained the following generic solutions for type 1 PRCs, which include the pulse-coupled leaky integrate-and-fire model. For pairs with mutual excitation, exact synchrony can be stable for strong coupling because of the stabilizing effect of the causal limit region of the PRC, in which an input triggers a spike immediately upon arrival. However, synchrony is unstable for short delays, because delayed inputs arrive during a refractory period and cannot trigger an immediate spike. Right skew destabilizes antiphase and enables modes with time lags that grow as the conduction delay is increased. Therefore, right skew favors near-synchrony at short conduction delays and a gradual transition between synchrony and antiphase for pairs coupled by mutual excitation. For pairs with mutual inhibition, zero-time-lag synchrony is stable for conduction delays ranging from zero to a substantial fraction of the period. However, for right skew there is a preferred antiphase mode at short delays. In contrast to mutual excitation, left skew destabilizes antiphase for mutual inhibition, so that synchrony dominates at short delays as well. These pairwise synchronization tendencies constrain the synchronization properties of neurons embedded in larger networks.

  10. Sound speed models for a noncondensible gas-steam-water mixture

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1984-01-01

    An analytical expression is derived for the homogeneous equilibrium speed of sound in a mixture of noncondensible gas, steam, and water. The expression is based on the Gibbs free energy interphase equilibrium condition for a Gibbs-Dalton mixture in contact with a pure liquid phase. Several simplified models are discussed including the homogeneous frozen model. These idealized models can be used as a reference for data comparison and also serve as a basis for empirically corrected nonhomogeneous and nonequilibrium models
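    For a gas-liquid mixture the homogeneous frozen model reduces to Wood's equation, in which the mixture compressibility is the volume-fraction-weighted sum of the phase compressibilities; a sketch with representative air-water property values (illustrative numbers, not taken from the report):

```python
import math

def wood_sound_speed(alpha_g, rho_g, c_g, rho_l, c_l):
    """Homogeneous frozen (Wood) sound speed for a gas-liquid mixture:
    1/(rho_m * c_m**2) = alpha_g/(rho_g * c_g**2) + alpha_l/(rho_l * c_l**2),
    with rho_m the volume-fraction-weighted mixture density."""
    alpha_l = 1.0 - alpha_g
    rho_m = alpha_g * rho_g + alpha_l * rho_l
    inv = alpha_g / (rho_g * c_g ** 2) + alpha_l / (rho_l * c_l ** 2)
    return 1.0 / math.sqrt(rho_m * inv)

# Air-water at roughly atmospheric conditions (representative values).
c_mix = wood_sound_speed(alpha_g=0.5, rho_g=1.2, c_g=343.0, rho_l=1000.0, c_l=1482.0)
print(round(c_mix, 1))  # tens of m/s: far below the sound speed of either pure phase
```

The striking minimum at intermediate void fraction, a mixture far "softer" acoustically than either water or air alone, is the kind of reference behavior against which the corrected nonhomogeneous and nonequilibrium models are compared.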

  11. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    Science.gov (United States)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. The reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, the predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared against the collected literature data with respect to their performance in calculating phase equilibria of CO2-impurity binary mixtures. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for the safe design and operation of CCS processes.

  12. Risk Aversion and Skewness Preference: a comment

    NARCIS (Netherlands)

    G.T. Post (Thierry); P. van Vliet (Pim)

    2003-01-01

    Empirically, co-skewness of asset returns seems to explain a substantial part of the cross-sectional variation of mean return not explained by beta. This finding is typically interpreted in terms of a risk averse representative investor with a cubic utility function. This comment questions

  13. Cancer Outlier Analysis Based on Mixture Modeling of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Keita Mori

    2013-01-01

    Full Text Available Molecular heterogeneity of cancer, partially caused by various chromosomal aberrations or gene mutations, can yield substantial heterogeneity in the gene expression profiles of cancer samples. To detect cancer-related genes which are active only in a subset of cancer samples, or cancer outliers, several methods have been proposed in the context of multiple testing. Such cancer outlier analyses will generally suffer from a serious lack of power compared with the standard multiple testing setting, in which common activation of genes across all cancer samples is supposed. In this paper, we consider information sharing across genes and cancer samples via parametric normal mixture modeling of the gene expression levels of cancer samples across genes, after standardization using the reference, normal sample data. A gene-based statistic for gene selection is developed on the basis of a posterior probability of cancer outlier for each cancer sample. An efficiency improvement from using our method was demonstrated, even under settings with misspecified, heavy-tailed t-distributions. An application to a real dataset from hematologic malignancies is provided.
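    The per-sample posterior outlier probability at the core of the gene-based statistic can be sketched with a simplified two-component model (the component parameters below are illustrative assumptions, not the paper's fitted values):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def outlier_posterior(z, pi=0.1, mu=3.0, sigma=1.5):
    """Posterior probability that a standardized expression value z came from
    a hypothetical outlier component N(mu, sigma) with prior weight pi,
    rather than the N(0, 1) background shared by non-activated samples."""
    out = pi * normal_pdf(z, mu, sigma)
    background = (1 - pi) * normal_pdf(z, 0.0, 1.0)
    return out / (out + background)

for z in (0.0, 2.0, 5.0):
    print(z, round(outlier_posterior(z), 3))
```

Samples with strongly elevated standardized expression get posteriors near one; aggregating these posteriors across samples yields a gene-level evidence score for outlier activation.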

  14. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    Science.gov (United States)

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from an analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations meeting the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered to be the one with the smallest mean square error (MSE), defined as the following quantity summed over all peaks and divided by the number of peaks: the square of the difference between an individual logarithm (base 10) of peak flow and the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
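    The station skew underlying these analyses is the bias-adjusted sample skew coefficient of the base-10 logarithms of the annual peaks; a minimal sketch (the contouring, averaging, and regression steps are omitted):

```python
import math

def station_skew(peaks):
    """Bias-adjusted sample skew coefficient of log10 annual peak flows:
    g = n * sum((x - xbar)**3) / ((n - 1) * (n - 2) * s**3)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    xbar = sum(logs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in logs) / (n - 1))
    return n * sum((x - xbar) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)

print(station_skew([10, 100, 1000]))  # ~0 for a log-symmetric sample
```

Generalized skew then smooths these noisy station values regionally, which is why long records (50 or more years here) matter for a reliable map.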

  15. Optimal designs for linear mixture models

    NARCIS (Netherlands)

    Mendieta, E.J.; Linssen, H.N.; Doornbos, R.

    1975-01-01

    In a recent paper Snee and Marquardt [8] considered designs for linear mixture models, where the components are subject to individual lower and/or upper bounds. When the number of components is large their algorithm XVERT yields designs far too extensive for practical purposes. The purpose of this

  16. On Two Mixture-Based Clustering Approaches Used in Modeling an Insurance Portfolio

    Directory of Open Access Journals (Sweden)

    Tatjana Miljkovic

    2018-05-01

    Full Text Available We review two complementary mixture-based clustering approaches for modeling unobserved heterogeneity in an insurance portfolio: the generalized linear mixed cluster-weighted model (CWM and mixture-based clustering for an ordered stereotype model (OSM. The latter is for modeling ordinal variables, and the former is for modeling losses as a function of mixed-type covariates. The article extends the idea of mixture modeling to a multivariate classification for the purpose of testing unobserved heterogeneity in an insurance portfolio. The application of both methods is illustrated on a well-known French automobile portfolio, in which the model fitting is performed using the expectation-maximization (EM algorithm. Our findings show that these mixture-based clustering methods can be used to further test unobserved heterogeneity in an insurance portfolio and as such may be considered in insurance pricing, underwriting, and risk management.

  17. Consistent paternity skew through ontogeny in Peron's tree frog (Litoria peronii).

    Directory of Open Access Journals (Sweden)

    Craig D H Sherman

    Full Text Available BACKGROUND: A large number of studies in postcopulatory sexual selection use paternity success as a proxy for fertilization success. However, selective mortality during embryonic development can lead to skews in paternity in situations of polyandry and sperm competition. Thus, when assessment of paternity fails to incorporate mortality skews during early ontogeny, this may interfere with correct interpretation of results and subsequent evolutionary inference. In a previous series of in vitro sperm competition experiments with amphibians (Litoria peronii), we showed skewed paternity patterns towards males more genetically similar to the female. METHODOLOGY/PRINCIPAL FINDINGS: Here we use in vitro fertilizations and sperm competition trials to test whether this pattern of paternity in fully developed tadpoles reflects patterns of paternity at fertilization, and whether the paternity skew changes during embryonic development. We show that there is no selective mortality through ontogeny and that the pattern of paternity of hatched tadpoles reflects the success of competing males in sperm competition at fertilization. CONCLUSIONS/SIGNIFICANCE: While this study shows that previous inferences of fertilization success from paternity data are valid for this species, rigorous testing of these assumptions is required to ensure that differential embryonic mortality does not confound estimations of true fertilization success.

  18. PS-Modules over Ore Extensions and Skew Generalized Power Series Rings

    Directory of Open Access Journals (Sweden)

    Refaat M. Salem

    2015-01-01

    Full Text Available A right R-module M_R is called a PS-module if its socle, Soc(M_R), is projective. We investigate PS-modules over Ore extensions and skew generalized power series extensions. Let R be an associative ring with identity, M_R a unitary right R-module, O = R[x; α, δ] the Ore extension, M[x]_O a right O-module, (S, ≤) a strictly ordered additive monoid, ω: S → End(R) a monoid homomorphism, A = R[[S, ≤, ω]] the skew generalized power series ring, and B_A = M[[S, ≤]] the skew generalized power series module over A. Then, under certain conditions, we prove the following: (1) If M_R is a right PS-module, then M[x]_O is a right PS-module. (2) If M_R is a right PS-module, then B_A is a right PS-module.

  19. A Study on The Mixture of Exponentiated-Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Adel Tawfik Elshahat

    2016-12-01

    Full Text Available Mixtures of measures or distributions occur frequently in the theory and applications of probability and statistics. In the simplest case it may, for example, be reasonable to assume that one is dealing with the mixture, in given proportions, of a finite number of normal populations with different means or variances. The mixture parameter may also be denumerably infinite, as in the theory of sums of a random number of random variables, or continuous, as in the compound Poisson distribution. The use of finite mixture distributions, to control for unobserved heterogeneity, has become increasingly popular among those estimating dynamic discrete choice models. One of the barriers to using mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log likelihood function. In this thesis, the maximum likelihood estimators have been obtained for the parameters of the mixture of exponentiated Weibull distributions when the sample is available under a censoring scheme. The maximum likelihood estimators of the parameters and the asymptotic variance-covariance matrix have also been obtained. A numerical illustration for these new results is given.
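For concreteness, the density entering such a mixture likelihood can be written down directly. The sketch below assumes the common parameterisation F(x) = (1 - exp(-(x/σ)^k))^θ of the exponentiated Weibull distribution and merely evaluates a two-component mixture log-likelihood at fixed parameters; it does not reproduce the censored-sample estimation of the record:

```python
import math

def expw_pdf(x, shape_k, scale, theta):
    """Exponentiated Weibull density, assuming the parameterisation
    F(x) = (1 - exp(-(x/scale)^shape_k))^theta for x > 0."""
    z = (x / scale) ** shape_k
    base = 1.0 - math.exp(-z)
    return (theta * (shape_k / scale) * (x / scale) ** (shape_k - 1)
            * math.exp(-z) * base ** (theta - 1))

def mixture_loglik(xs, w, params1, params2):
    """Log-likelihood of a two-component exponentiated Weibull mixture
    with mixing weight w for the first component."""
    ll = 0.0
    for x in xs:
        ll += math.log(w * expw_pdf(x, *params1)
                       + (1 - w) * expw_pdf(x, *params2))
    return ll
```

Joint maximum likelihood estimation would maximise `mixture_loglik` over all seven parameters at once, which is exactly where the loss of additive separability mentioned above bites.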

  20. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part II: Binary mixtures with CO2

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2011-01-01

    In Part I of this series of articles, the study of H2S mixtures has been presented with CPA. In this study the phase behavior of CO2 containing mixtures is modeled. Binary mixtures with water, alcohols, glycols and hydrocarbons are investigated. Both phase equilibria (vapor–liquid and liquid–liqu...

  1. A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. I. Theory.

    Science.gov (United States)

    Schuff, M M; Gore, J P; Nauman, E A

    2013-05-01

    In order to better understand the mechanisms governing transport of drugs, nanoparticle-based treatments, and therapeutic biomolecules, and the role of the various physiological parameters, a number of mathematical models have previously been proposed. The limitations of the existing transport models indicate the need for a comprehensive model that includes transport in the vessel lumen, the vessel wall, and the interstitial space and considers the effects of the solute concentration on fluid flow. In this study, a general model to describe the transient distribution of fluid and multiple solutes at the microvascular level was developed using mixture theory. The model captures the experimentally observed dependence of the hydraulic permeability coefficient of the capillary wall on the concentration of solutes present in the capillary wall and the surrounding tissue. Additionally, the model demonstrates that transport phenomena across the capillary wall and in the interstitium are related to the solute concentration as well as the hydrostatic pressure. The model is used in a companion paper to examine fluid and solute transport for the simplified case of an axisymmetric geometry with no solid deformation or interconversion of mass.

  2. Mixture estimation with state-space components and Markov model of switching

    Czech Academy of Sciences Publication Activity Database

    Nagy, Ivan; Suzdaleva, Evgenia

    2013-01-01

    Roč. 37, č. 24 (2013), s. 9970-9984 ISSN 0307-904X R&D Projects: GA TA ČR TA01030123 Institutional support: RVO:67985556 Keywords: probabilistic dynamic mixtures * probability density function * state-space models * recursive mixture estimation * Bayesian dynamic decision making under uncertainty * Kerridge inaccuracy Subject RIV: BC - Control Systems Theory Impact factor: 2.158, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/nagy-mixture estimation with state-space components and markov model of switching.pdf

  3. Introduction to the special section on mixture modeling in personality assessment.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  4. The R Package bgmm : Mixture Modeling with Uncertain Knowledge

    Directory of Open Access Journals (Sweden)

    Przemys law Biecek

    2012-04-01

    Full Text Available Classical supervised learning enjoys the luxury of accessing the true known labels for the observations in a modeled dataset. Real life, however, poses an abundance of problems where the labels are only partially defined, i.e., are uncertain and given only for a subset of observations. Such partial labels can occur regardless of the knowledge source. For example, an experimental assessment of labels may have limited capacity and is prone to measurement errors. Also, expert knowledge is often restricted to a specialized area and is thus unlikely to provide trustworthy labels for all observations in the dataset. Partially supervised mixture modeling is able to process such sparse and imprecise input. Here, we present an R package called bgmm, which implements two partially supervised mixture modeling methods: soft-label and belief-based modeling. For completeness, we also equipped the package with the functionality of unsupervised, semi- and fully supervised mixture modeling. On real data we present the usage of bgmm for basic model-fitting in all modeling variants. The package can also be applied to selection of the best-fitting model from a set of models with different component numbers or constraints on their structures. This functionality is presented on an artificial dataset, which can be simulated in bgmm from a distribution defined by a given model.

  5. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    NARCIS (Netherlands)

    A. Ruseckaite (Aiste)

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, which is a product or service whose ingredients' proportions sum to one. The thesis begins by introducing mixture

  6. The generalised Sylvester matrix equations over the generalised bisymmetric and skew-symmetric matrices

    Science.gov (United States)

    Dehghan, Mehdi; Hajarian, Masoud

    2012-08-01

    A matrix P is called symmetric orthogonal if P = P^T = P^(-1). A matrix X is said to be generalised bisymmetric with respect to P if X = X^T = PXP. It is obvious that any symmetric matrix is also a generalised bisymmetric matrix with respect to I (the identity matrix). By extending the idea of the Jacobi and the Gauss-Seidel iterations, this article proposes two new iterative methods, respectively, for computing the generalised bisymmetric (containing the symmetric solution as a special case) and skew-symmetric solutions of the generalised Sylvester matrix equation ? (including the Sylvester and Lyapunov matrix equations as special cases), which is encountered in many systems and control applications. When the generalised Sylvester matrix equation has a unique generalised bisymmetric (skew-symmetric) solution, the first (second) iterative method converges to the generalised bisymmetric (skew-symmetric) solution of this matrix equation for any initial generalised bisymmetric (skew-symmetric) matrix. Finally, some numerical results are given to illustrate the effect of the theoretical results.

  7. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  8. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  9. Analysis of domain wall dynamics based on skewness of magnetic Barkhausen noise for applied stress determination

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Song [College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, Jiangsu 211816 (China); School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016 (China); Tian, GuiYun, E-mail: tian280@hotmail.com [School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016 (China); School of Electrical and Electronic Engineering, Merz Court, University of Newcastle upon Tyne, Newcastle NE1 7RU (United Kingdom); Dobmann, Gerd; Wang, Ping [School of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016 (China)

    2017-01-01

    Skewness of the Magnetic Barkhausen Noise (MBN) signal is used as a new feature for applied stress determination. Experimental studies show that skewness can measure applied tensile stress as well as conventional features can; meanwhile, a non-linear behavior of this new feature under compressive stress, and its independence of the excitation conditions, are found and discussed. Effective damping during domain wall motion, which influences the asymmetric shape of the MBN statistical distribution function, is discussed under compressive and tensile stress variation. Domain wall (DW) energy and the distance between pinning edges of the DW are considered to alter the characteristic relaxation time, which is the reason for the non-linear behavior of skewness. - Highlights: • The skewness of the magnetic Barkhausen noise profile is proposed as a new feature for applied stress determination. • The skewness is sensitive to applied stress and independent of excitation frequency. • Domain wall energy and pinning distance influence the relaxation time of the domain wall, which leads to a non-linear behavior of skewness under compressive stress.
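As a feature, skewness is simply the third standardised central moment of the signal samples. A minimal sketch in population-moment form (no bias correction), applicable to any sampled signal such as an MBN envelope:

```python
def skewness(xs):
    """Sample skewness: third central moment divided by the
    standard deviation cubed (population-moment form)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5
```

A symmetric distribution gives skewness near zero; a distribution with a long right tail, like an MBN burst envelope under certain stress states, gives a positive value.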

  10. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.

  11. Modeling Phase Equilibria for Acid Gas Mixtures Using the CPA Equation of State. I. Mixtures with H2S

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios; Michelsen, Michael Locht

    2010-01-01

    The Cubic-Plus-Association (CPA) equation of state is applied to a large variety of mixtures containing H2S, which are of interest in the oil and gas industry. Binary H2S mixtures with alkanes, CO2, water, methanol, and glycols are first considered. The interactions of H2S with polar compounds (water, methanol, and glycols) are modeled assuming presence or not of cross-association interactions. Such interactions are accounted for using either a combining rule or a cross-solvation energy obtained from spectroscopic data. Using the parameters obtained from the binary systems, one ternary and three quaternary mixtures are considered. It is shown that overall excellent correlation for binary mixtures and satisfactory prediction results for multicomponent systems are obtained. There are significant differences between the various modeling approaches and the best results are obtained when...

  12. Quantifying the cross-sectional relationship between online sentiment and the skewness of stock returns

    Science.gov (United States)

    Shen, Dehua; Liu, Lanbiao; Zhang, Yongjie

    2018-01-01

    The constantly increasing utilization of social media as the alternative information channel, e.g., Twitter, provides us a unique opportunity to investigate the dynamics of the financial market. In this paper, we employ the daily happiness sentiment extracted from Twitter as the proxy for the online sentiment dynamics and investigate its association with the skewness of stock returns of 26 international stock market index returns. The empirical results show that: (1) by dividing the daily happiness sentiment into quintiles from the least to the most happiness days, the skewness of the Most-happiness subgroup is significantly larger than that of the Least-happiness subgroup. Besides, there exist significant differences in any pair of subgroups; (2) in an event study methodology, we further show that the skewness around the highest happiness days is significantly larger than the skewness around the lowest happiness days.

  13. Stock Markets Volatility Spillovers during Financial Crises : A DCC-MGARCH with Skew-t Approach

    OpenAIRE

    Bala, Dahiru A.; Takimoto, Taro

    2016-01-01

    We investigate stock market volatility spillovers in selected emerging and major developed markets using multivariate GARCH (MGARCH) models [namely, DVECH, CCC-MGARCH, CCC-VARMA-(A)MGARCH, VAR-EGARCH, BEKK-(A)MGARCH, DCC-MGARCH (with Gaussian and t distributions), and DCC-with-skew-t density]. The paper analyses the impacts of the recent global financial crisis (2007–2009) on stock market volatility and examines their dynamic interactions using several MGARCH model variants. Structural break dete...

  14. Approximate median regression for complex survey data with skewed response.

    Science.gov (United States)

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

    The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
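The transform-both-sides idea can be illustrated in a stripped-down, intercept-only form. Because the median is equivariant under monotone transformations, a grid-search "fit" on the doubly transformed scale recovers the sample median. The λ value, the inner shift constant, and the toy data are all assumptions for illustration; this is not the authors' survey-weighted estimating-equations procedure:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform for y > 0."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def dtbs(y, lam, shift=10.0):
    """Apply the Box-Cox transform twice; the shift keeping the inner
    value positive is an ad hoc choice for this sketch."""
    return boxcox(boxcox(y, lam) + shift, lam)

def median_via_dtbs(ys, lam=0.5):
    """Intercept-only 'median fit' on the DTBS scale by grid search:
    minimise the sum of absolute deviations of the transformed data."""
    lo, hi = min(ys), max(ys)
    grid = [lo + i * (hi - lo) / 400 for i in range(401)]
    return min(grid,
               key=lambda m: sum(abs(dtbs(y, lam) - dtbs(m, lam)) for y in ys))

ys = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]   # right-skewed toy sample, median 4.0
m = median_via_dtbs(ys)
```

Transforming both sides with the same monotone function leaves the median fit intact while making the residual distribution far more symmetric, which is what lets the usual sandwich variance work.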

  15. Application of normalized spectra in resolving a challenging Orphenadrine and Paracetamol binary mixture

    Science.gov (United States)

    Yehia, Ali M.; Abd El-Rahman, Mohamed K.

    2015-03-01

    Normalized spectra have great power in resolving the spectral overlap of the challenging Orphenadrine (ORP) and Paracetamol (PAR) binary mixture. Four smart techniques utilizing the normalized spectra were used in this work, namely, amplitude modulation (AM), simultaneous area ratio subtraction (SARS), simultaneous derivative spectrophotometry (S1DD) and ratio H-point standard addition method (RHPSAM). In AM, the peak amplitude at 221.6 nm of the division spectra was measured for both ORP and PAR determination, while in SARS, the concentration of ORP was determined using the area under the curve from 215 nm to 222 nm of the regenerated ORP zero-order absorption spectra. In S1DD, the concentration of ORP was determined using the peak amplitude at 224 nm of the first derivative ratio spectra. PAR concentration was determined directly at 288 nm in the division spectra obtained during the manipulation steps in the previous three methods. The last method, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 216 nm and 226 nm. The RH point is the intersection of the two calibration lines, and ORP and PAR concentrations were directly determined from the coordinates of the RH point. The proposed methods were applied successfully for the determination of ORP and PAR in their dosage form.

  16. Estimating Lion Abundance using N-mixture Models for Social Species.

    Science.gov (United States)

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and on individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
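The basic binomial-Poisson form of an N-mixture model is compact enough to sketch directly: latent site abundance is Poisson, repeated counts are binomial thinnings of it, and the latent abundance is summed out of the likelihood. This is the generic textbook version with a toy grid search and made-up counts, not the hierarchical call-in model of the study:

```python
import math

def nmix_loglik(counts, lam, p, n_max=60):
    """Log-likelihood of the basic binomial-Poisson N-mixture model:
    latent abundance N_i ~ Poisson(lam) per site, observed replicate
    counts y_it ~ Binomial(N_i, p); the latent N_i is summed out
    (truncated at n_max)."""
    ll = 0.0
    for site in counts:                       # site = list of replicate counts
        site_lik = 0.0
        for n in range(max(site), n_max + 1):
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            binom = 1.0
            for y in site:
                binom *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            site_lik += pois * binom
        ll += math.log(site_lik)
    return ll

# toy survey: 4 sites, 3 replicate counts each (invented numbers)
counts = [[3, 2, 3], [1, 2, 1], [4, 3, 5], [0, 1, 0]]
grid = [(lam, p) for lam in (1, 2, 3, 4, 5, 6, 8) for p in (0.3, 0.5, 0.7)]
best = max(grid, key=lambda t: nmix_loglik(counts, t[0], t[1]))
```

Covariates such as landcover or luminosity enter by replacing the constants `lam` and `p` with per-site and per-visit regression functions.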

  17. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    Science.gov (United States)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique able to mitigate the computational overhead due to the consideration of multiple computer models that is suitable for the mixture model framework. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
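The core idea, representing the system output as a mixture of computer-model outputs with input-dependent weights, can be sketched with a toy softmax parameterisation of the weight functions. The weight form, the parameter values, and the two stand-in "models" are assumptions for illustration, not the paper's Bayesian treatment:

```python
import math

def softmax(zs):
    """Numerically stable softmax."""
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def mixture_prediction(x, models, weight_params):
    """Combine model outputs with input-dependent weights
    w_k(x) = softmax(a_k + b_k * x); a toy parameterisation."""
    w = softmax([a + b * x for a, b in weight_params])
    pred = sum(wk * f(x) for wk, f in zip(w, models))
    return pred, w

# two stand-in 'computer models' and hand-picked weight parameters
models = [lambda x: x, lambda x: x ** 2]
pred, w = mixture_prediction(2.0, models, [(0.0, 1.0), (0.0, -1.0)])
```

Because the weights vary with x, one model can dominate in one input region and the other elsewhere, which is exactly the behavior the method exploits; the Bayesian version places priors on the weight functions and calibration parameters instead of fixing them.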

  18. Skew cyclic codes over F_q+uF_q+vF_q+uvF_q

    Directory of Open Access Journals (Sweden)

    Ting Yao

    2015-09-01

    Full Text Available In this paper, we study skew cyclic codes over the ring $R=F_q+uF_q+vF_q+uvF_q$, where $u^{2}=u,v^{2}=v,uv=vu$, $q=p^{m}$ and $p$ is an odd prime. We investigate the structural properties of skew cyclic codes over $R$ through a decomposition theorem. Furthermore, we give a formula for the number of skew cyclic codes of length $n$ over $R.$

  19. Deterioration of the Skew Quadrupole Moment in Tevatron Dipoles Over Time

    CERN Document Server

    Syphers, Michael J

    2005-01-01

    During the 20 years since it was first commissioned, the Fermilab Tevatron has developed strong coupling between the two transverse degrees of freedom. A circuit of skew quadrupole magnets is used to correct for coupling and, though capable, its required strength has increased since 1983 by more than an order of magnitude. In more recent years, changes to the Tevatron for colliding beams operation have altered the skew quadrupole corrector distribution, and strong local coupling became evident, often encumbering routine operation during the present physics run. Detailed magnet measurements were performed on each individual magnet during construction, and in early 2003 it was realized that measurements could be performed on the magnets in situ that could determine coil movements within the iron yoke since the early 1980's. It was discovered that the superconducting coils had become vertically displaced relative to their yokes since their construction. The ensuing systematic skew quadrupole field introduced by t...

  20. Representation and validation of liquid densities for pure compounds and mixtures

    DEFF Research Database (Denmark)

    Diky, Vladimir; O'Connell, John P.; Abildskov, Jens

    2015-01-01

    Reliable correlation and prediction of liquid densities are important for designing chemical processes at normal and elevated pressures. A corresponding-states model from molecular theory was extended to yield a robust method for quality testing of experimental data that also provides predicted values at unmeasured conditions. The model has been shown to successfully represent and validate the pressure and temperature dependence of liquid densities greater than 1.5 times the critical density for pure compounds, binary mixtures, and ternary mixtures from the triple to critical temperatures...

  1. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    International Nuclear Information System (INIS)

    Almasi, Mohammad

    2014-01-01

    Experimental and calculated partial molar volumes (V̄_m,1) of MIK with (♦) 2-PrOH, (♢) 2-BuOH, (●) 2-PenOH at T = 298.15 K; (—) PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between experimental data and values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with polar solvents, namely 2-propanol, 2-butanol and 2-pentanol, were measured at 7 temperatures (293.15-323.15 K) over the entire range of composition. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed Chain Statistical Association Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures

  2. Detecting Housing Submarkets using Unsupervised Learning of Finite Mixture Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    association between prices that can be attributed, among others, to unobserved neighborhood effects. In this paper, a model of spatial association for housing markets is introduced. Spatial association is treated in the context of spatial heterogeneity, which is explicitly modeled in both a global and a local..... The identified mixtures are considered as the different spatial housing submarkets. The main advantage of the approach is that submarkets are recovered from the housing price data rather than imposed by administrative or geographical criteria. The Finite Mixture Model is estimated using the Figueiredo

  3. Hypothyroidism after primary radiotherapy for head and neck squamous cell carcinoma: Normal tissue complication probability modeling with latent time correction

    International Nuclear Information System (INIS)

    Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren M.; Hegedüs, Laszlo; Overgaard, Jens; Johansen, Jørgen

    2013-01-01

    Background and purpose: To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC) with adjustment for latency and clinical risk factors. Patients and methods: Patients with HNSCC receiving definitive radiotherapy with 66-68 Gy without surgery were followed up with serial post-treatment thyrotropin (TSH) assessment. HT was defined as TSH >4.0 mU/l. Data were analyzed with both a logistic and a mixture model (correcting for latency) to determine risk factors for HT and develop an NTCP model based on mean thyroid dose (MTD) and thyroid volume. Results: 203 patients were included. Median follow-up: 25.1 months. Five-year estimated risk of HT was 25.6%. In the mixture model, the only independent risk factors for HT were thyroid volume (cm³) (OR = 0.75 [95% CI: 0.64-0.85], p 3 , respectively. Conclusions: Comparing the logistic and mixture models demonstrates the importance of latent-time correction in NTCP-modeling. Thyroid dose constraints in treatment planning should be individualized based on thyroid volume

  4. Numerical modelling of continuous spin detonation in rich methane-oxygen mixture

    International Nuclear Information System (INIS)

    Trotsyuk, A V

    2016-01-01

    A numerical simulation of a two-dimensional structure of the detonation wave (DW) in a rich (equivalence ratio φ=1.5) methane-air mixture at normal initial conditions has been conducted. The computations have been performed in a wide range of channel heights. From the analysis of the flow structure and the number of primary transverse waves in the channel, the dominant size of the detonation cell for the studied mixture has been determined to be 45-50 cm. Based on the fundamental studies of the multi-front (cellular) structure of the classical propagating DW in methane mixtures, numerical simulation of continuous spin detonation (CSD) of a rich (φ=1.2) methane-oxygen mixture has been carried out in the cylindrical detonation chamber (DC) of a rocket-type engine. We studied the global flow structure in the DC, and the detailed structure of the front of the rotating DW. Integral characteristics of the detonation process - the distribution of average values of static and total pressure along the length of the DC, and the value of the specific impulse - have been obtained. The geometric limit of stable existence of CSD has been determined. (paper)

  5. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
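The equal input model is easy to write down concretely: the substitution rate into state j depends only on the target frequency π_j, regardless of the current state. A minimal sketch of the rate-matrix construction (the frequency vector is invented; with uniform frequencies this reduces to the Jukes-Cantor rate matrix up to scaling):

```python
def equal_input_rate_matrix(pi):
    """Rate matrix Q of the equal input model: Q[i][j] = pi[j] for i != j,
    with the diagonal chosen so every row sums to zero."""
    k = len(pi)
    Q = [[pi[j] if i != j else 0.0 for j in range(k)] for i in range(k)]
    for i in range(k):
        Q[i][i] = -sum(Q[i])          # off-diagonal entries only, so far
    return Q

freqs = [0.1, 0.2, 0.3, 0.4]          # hypothetical stationary distribution
Q = equal_input_rate_matrix(freqs)
```

By construction π is stationary for Q (π Q = 0), which is the "stationary distribution is fixed" setting under which the paper's linear invariants exist.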

  6. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  7. A predictive model of natural gas mixture combustion in internal combustion engines

    Directory of Open Access Journals (Sweden)

    Henry Espinoza

    2007-05-01

    Full Text Available This study shows the development of a predictive natural gas mixture combustion model for conventional combustion (ignition engines). The model was based on resolving two zones: one containing the unburned mixture and the other the combustion products. Energy and mass conservation equations were solved for each zone at each crankshaft angle. The nonlinear differential equations for each zone's energy (covering compression, combustion and expansion) were solved by applying the fourth-order Runge-Kutta method. The model also enabled studying different natural gas compositions and evaluating combustion in the presence of dry and humid air. Validation results are shown against experimental data, demonstrating the precision and accuracy of the results produced. The results included cylinder pressure, unburned and burned mixture temperature, burned mass fraction and combustion reaction heat for the engine being modelled using a natural gas mixture.
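    The fourth-order Runge-Kutta scheme at the heart of the model above is easy to sketch. The step function below is generic; the test problem dy/dθ = -y is a hypothetical stand-in for the paper's zone-energy equations, not the actual combustion model.

    ```python
    import math

    def rk4_step(f, theta, y, h):
        """One classical fourth-order Runge-Kutta step for dy/dtheta = f(theta, y)."""
        k1 = f(theta, y)
        k2 = f(theta + h / 2, y + h / 2 * k1)
        k3 = f(theta + h / 2, y + h / 2 * k2)
        k4 = f(theta + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Hypothetical test problem dy/dtheta = -y (exact solution e^{-theta}),
    # standing in for the engine model's coupled energy equations.
    y, theta, h = 1.0, 0.0, 0.01
    for _ in range(100):
        y = rk4_step(lambda t, v: -v, theta, y, h)
        theta += h
    ```

    In a two-zone engine model the state vector would hold both zone energies and the step size would be a crank-angle increment; the update rule is identical.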

  8. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    Science.gov (United States)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
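    The core of spectral mixture analysis is a linear unmixing step: an observed signal is modeled as a weighted sum of endmember signatures and the weights are recovered by least squares. A minimal sketch with invented five-bin "waveforms" (real CryoSat-2 L1B waveforms have 128 range bins, and the endmember shapes here are illustrative, not taken from the paper; operational unmixing also adds non-negativity and sum-to-one constraints):

    ```python
    import numpy as np

    # Hypothetical endmember "waveforms" (columns): lead, sea ice, open ocean.
    E = np.array([
        [0.0, 0.2, 0.1],
        [0.1, 0.8, 0.3],
        [1.0, 0.6, 0.4],
        [0.2, 0.3, 0.5],
        [0.0, 0.1, 0.3],
    ])

    true_abundance = np.array([0.7, 0.2, 0.1])   # mostly "lead"
    observed = E @ true_abundance                # noiseless mixed waveform

    # Unconstrained least-squares unmixing: the core spectral-mixture step.
    abundance, *_ = np.linalg.lstsq(E, observed, rcond=None)
    ```

    With noiseless data and linearly independent endmembers the abundances are recovered exactly; with real waveforms the residual norm indicates how well the mixture model fits.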

  9. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and the generalized Dirichlet distribution, which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the number of mixture components to be specified in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using a Gibbs sampler. Through applications involving real-data classification and image database categorization using visual words, we show that clustering via infinite mixture models offers more powerful and robust performance than classic finite mixtures.
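    One standard way to realize the infinite mixture underlying a Dirichlet process is the truncated stick-breaking construction. A stdlib-only sketch (this illustrates the prior over mixture weights, not the authors' Gibbs sampler):

    ```python
    import random

    def stick_breaking(alpha, n_sticks, rng):
        """Truncated stick-breaking draw of Dirichlet-process mixture weights:
        w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha)."""
        weights, remaining = [], 1.0
        for _ in range(n_sticks):
            v = rng.betavariate(1.0, alpha)
            weights.append(v * remaining)
            remaining *= 1.0 - v
        return weights

    rng = random.Random(0)
    w = stick_breaking(alpha=2.0, n_sticks=50, rng=rng)
    ```

    With concentration alpha = 2 and 50 sticks, the truncation error (the unassigned remainder of the stick) is negligible, so the weights sum to essentially one; larger alpha spreads mass over more components, which is how the model can infer the number of clusters from data.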

  10. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  11. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  12. Three Different Ways of Calibrating Burger's Contact Model for Viscoelastic Model of Asphalt Mixtures by Discrete Element Method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2016-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional discrete element method. Combined with Burger's model, three contact models were used for the construction of a constitutive asphalt mixture model with viscoelastic properties...... Three different approaches have been used and compared for calibrating the Burger's contact model. Values of the dynamic modulus and phase angle of asphalt mixtures were predicted by conducting DE simulation under dynamic strain control loading. The excellent agreement between the predicted......

  13. Speckles generated by skewed, short-coherence light beams

    International Nuclear Information System (INIS)

    Brogioli, D; Salerno, D; Ziano, R; Mantegazza, F; Croccolo, F

    2011-01-01

    When a coherent laser beam impinges on a random sample (e.g. a colloidal suspension), the scattered light exhibits characteristic speckles. If the temporal coherence of the light source is too short, then the speckles disappear, along with the possibility of performing homodyne or heterodyne scattering detection or photon correlation spectroscopy. Here we investigate the scattering of a so-called ‘skewed coherence beam’, i.e. a short-coherence beam modified such that the field is coherent within slabs that are skewed with respect to the wave fronts. We show that such a beam generates speckles and can be used for heterodyne scattering detection, despite its short temporal coherence. Moreover, we show that the heterodyne signal is not affected by multiple scattering. We suggest that the phenomenon presented here can be used as a means of carrying out heterodyne scattering measurement with any short-coherence radiation, including x-rays. (paper)

  14. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log) which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
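    The kind of calculation described can be sketched as follows. For a log-normal with median m and untransformed variance V, the log-scale variance σ² satisfies V/m² = e^{σ²}(e^{σ²} - 1), which can be solved in closed form and plugged into the usual two-sample formula. Pooling σ² at the average of the two medians is an illustrative simplification of mine, not necessarily the paper's exact choice:

    ```python
    import math
    from statistics import NormalDist

    def lognormal_sigma2(median, variance):
        """Log-scale variance implied by a log-normal with the given median and
        variance on the untransformed scale (solves V/m^2 = e^s (e^s - 1) for s)."""
        r = variance / median ** 2
        return math.log((1.0 + math.sqrt(1.0 + 4.0 * r)) / 2.0)

    def n_per_group(m1, m2, variance, alpha=0.05, power=0.9):
        """Approximate per-group sample size for a two-sample t-test on
        log-transformed data, effect specified as a difference in medians."""
        sigma2 = lognormal_sigma2((m1 + m2) / 2.0, variance)  # pooled guess
        delta = math.log(m2 / m1)         # median difference on the log scale
        z = NormalDist().inv_cdf
        return math.ceil(2.0 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma2 / delta ** 2)
    ```

    For example, detecting a shift in median from 10 to 15 with untransformed variance 25 at 90% power gives roughly 17 subjects per group under these assumptions.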

  15. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
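    The expectation-maximization machinery involved can be illustrated with an ordinary (spatially invariant) two-component Gaussian mixture; taking each pixel's label as the argmax responsibility mirrors the unambiguous-labeling idea, though the paper's model additionally lets the mixing weights vary by pixel. Data and initial values below are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical 1D "pixel intensities" from two well-separated classes.
    x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])

    # EM for a two-component Gaussian mixture. (The 1/sqrt(2*pi) factor is
    # omitted from the densities: it cancels in the responsibilities.)
    mu = np.array([1.0, 7.0])
    sigma = np.array([2.0, 2.0])
    w = np.array([0.5, 0.5])
    for _ in range(50):
        # E-step: responsibility of each component for each pixel.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        w = nk / len(x)

    labels = r.argmax(axis=1)  # maximum-likelihood pixel labels
    ```

    In the spatially variant version, w becomes a per-pixel weight vector updated from the responsibilities, which is what removes the need for a separate Bayes classification step.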

  16. Market skewness risk and the cross section of stock returns

    DEFF Research Database (Denmark)

    Chang, B.Y.; Christoffersen, Peter; Jacobs, K.

    2013-01-01

    The cross section of stock returns has substantial exposure to risk captured by higher moments of market returns. We estimate these moments from daily Standard & Poor's 500 index option data. The resulting time series of factors are genuinely conditional and forward-looking. Stocks with high exposure to innovations in implied market skewness exhibit low returns on average. The results are robust to various permutations of the empirical setup. The market skewness risk premium is statistically and economically significant and cannot be explained by other common risk factors such as the market excess return or the size, book-to-market, momentum, and market volatility factors, or by firm characteristics.

  17. Time Skew Estimator for Dual-Polarization QAM Transmitters

    DEFF Research Database (Denmark)

    Medeiros Diniz, Júlio César; Da Ros, Francesco; Jones, Rasmus Thomas

    2017-01-01

    A simple method for joint estimation of the transmitter's in-phase/quadrature and inter-polarization time skew is proposed and experimentally demonstrated. The method is based on clock tone extraction of a photodetected signal and a genetic algorithm. The maximum estimation error was 0.5 ps.

  18. Model-based clustering of DNA methylation array data: a recursive-partitioning algorithm for high-dimensional data arising as a mixture of beta distributions

    Directory of Open Access Journals (Sweden)

    Wiemels Joseph

    2008-09-01

    Full Text Available Abstract Background Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in DNA sequence. One of the most commonly studied epigenetic alterations is cytosine methylation, which is a well recognized mechanism of epigenetic gene silencing and often occurs at tumor suppressor gene loci in human cancer. Arrays are now being used to study DNA methylation at a large number of loci; for example, the Illumina GoldenGate platform assesses DNA methylation at 1505 loci associated with over 800 cancer-related genes. Model-based cluster analysis is often used to identify DNA methylation subgroups in data, but it is unclear how to cluster DNA methylation data from arrays in a scalable and reliable manner. Results We propose a novel model-based recursive-partitioning algorithm to navigate clusters in a beta mixture model. We present simulations showing that the method is more reliable than competing nonparametric clustering approaches, and is at least as reliable as conventional mixture model methods. We also show that our proposed method is more computationally efficient than conventional mixture model approaches. We demonstrate our method on normal tissue samples and show that the clusters are associated with tissue type as well as age. Conclusion Our proposed recursively-partitioned mixture model is an effective and computationally efficient method for clustering DNA methylation data.

  19. Evaluation of thermodynamic properties of fluid mixtures by PC-SAFT model

    Energy Technology Data Exchange (ETDEWEB)

    Almasi, Mohammad, E-mail: m.almasi@khouzestan.srbiau.ac.ir

    2014-09-10

    Graphical abstract: Experimental and calculated partial molar volumes (V̄_m,1) of MIK with 2-PrOH, 2-BuOH and 2-PenOH at T = 298.15 K, compared with the PC-SAFT model. - Highlights: • Densities and viscosities of the mixtures (MIK + 2-alkanols) were measured. • The PC-SAFT model was applied to correlate the volumetric properties of the binary mixtures. • Agreement between experimental data and values calculated by the PC-SAFT model is good. - Abstract: Densities and viscosities of binary mixtures of methyl isobutyl ketone (MIK) with the polar solvents 2-propanol, 2-butanol and 2-pentanol were measured at 7 temperatures (293.15-323.15 K) over the entire composition range. Using the experimental data, excess molar volumes V_m^E, isobaric thermal expansivities α_p, partial molar volumes V̄_m,i and viscosity deviations Δη have been calculated due to their importance in the study of specific molecular interactions. The observed negative and positive values of the deviation/excess parameters were explained on the basis of the intermolecular interactions occurring in these mixtures. The Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) has been used to correlate the volumetric behavior of the mixtures.
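    The excess molar volume mentioned follows directly from measured densities via V_m^E = (x1·M1 + x2·M2)/ρ - x1·M1/ρ1 - x2·M2/ρ2. A sketch with real molar masses but illustrative densities (not the paper's measurements):

    ```python
    # Molar masses are real (g/mol); the densities are hypothetical placeholders,
    # not the values reported in the paper.
    M1, M2 = 100.16, 60.10        # methyl isobutyl ketone, 2-propanol (g/mol)
    rho1, rho2 = 0.796, 0.781     # illustrative pure-component densities (g/cm^3)

    def excess_molar_volume(x1, rho_mix):
        """V_m^E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2 (cm^3/mol)."""
        x2 = 1.0 - x1
        return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2

    # Sanity check: the density of an ideal mixture gives V_m^E = 0 by construction.
    x1 = 0.4
    rho_ideal = (x1 * M1 + (1 - x1) * M2) / (x1 * M1 / rho1 + (1 - x1) * M2 / rho2)
    ```

    Negative V_m^E at a given composition indicates the measured mixture is denser than the ideal mixture, which is the kind of deviation the abstract attributes to specific intermolecular interactions.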

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. International portfolio diversification, skewness and the role of gold

    OpenAIRE

    LUCEY, BRIAN MICHAEL

    2007-01-01

    The paper examines the optimal allocation of assets in a well-diversified equity-based portfolio where the investor is concerned not only with mean and variance but also with the skewness of the returns.

  2. Bayesian mixture modeling for blood sugar levels of diabetes mellitus patients (case study in RSUD Saiful Anwar Malang Indonesia)

    Science.gov (United States)

    Budi Astuti, Ani; Iriawan, Nur; Irhamah; Kuswanto, Heri; Sasiarini, Laksmi

    2017-10-01

    Bayesian statistics offers an approach that is very flexible with respect to sample size and data distribution. The Bayesian Mixture Model (BMM) is a Bayesian approach for multimodal models. Diabetes Mellitus (DM) is more commonly known in the Indonesian community as "sweet pee". This chronic non-communicable disease is very dangerous because of the complications it causes. A 2013 WHO report ranked DM as the sixth leading cause of human death worldwide. In Indonesia, DM continues to increase over time. This research studies the patterns of DM data and builds BMM models for them through simulation studies, where the simulated data are based on blood sugar levels of DM patients at RSUD Saiful Anwar Malang. The results successfully demonstrate the distribution pattern of the DM data, which follows a normal mixture distribution. The BMM models succeed in accommodating the real condition of the DM data under the data-driven concept.

  3. Polynomial combinatorial algorithms for skew-bisubmodular function minimization

    NARCIS (Netherlands)

    S. Fujishige (Satoru); S.-I. Tanigawa (Shin-Ichi)

    2017-01-01

    textabstractHuber et al. (SIAM J Comput 43:1064–1084, 2014) introduced a concept of skew bisubmodularity, as a generalization of bisubmodularity, in their complexity dichotomy theorem for valued constraint satisfaction problems over the three-value domain, and Huber and Krokhin (SIAM J Discrete Math

  4. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).

  5. Individual loss reserving with the Multivariate Skew Normal distribution

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2012-01-01

    The evaluation of future cash flows and solvency capital recently gained importance in general insurance. To assist in this process, our paper proposes a novel loss reserving model, designed for individual claims in discrete time. We model the occurrence of claims, as well as their reporting delay,

  6. Individual loss reserving with the multivariate skew normal framework

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2013-01-01

    The evaluation of future cash flows and solvency capital recently gained importance in general insurance. To assist in this process, our paper proposes a novel loss reserving model, designed for individual claims developing in discrete time. We model the occurrence of claims, as well as their

  7. Kinematic correction for roller skewing

    Science.gov (United States)

    Savage, M.; Loewenthal, S. H.

    1980-01-01

    A theory of kinematic stabilization of rolling cylinders is developed for high-speed cylindrical roller bearings. This stabilization requires race and roller crowning to produce changes in the rolling geometry as the roller shifts axially. These changes put a reverse skew in the rolling elements by changing the rolling taper. Twelve basic possible bearing modifications are identified in this paper. Four have a single transverse convex curvature in the rollers, while eight have rollers with compound transverse curvature composed of a central cylindrical band of constant radius surrounded by symmetric bands with both slope and transverse curvature.

  8. Skewed matrilineal genetic composition in a small wild chimpanzee community.

    Science.gov (United States)

    Shimada, Makoto K; Hayakawa, Sachiko; Fujita, Shiho; Sugiyama, Yukimaru; Saitou, Naruya

    2009-01-01

    Maternal kinship is important in primate societies because it affects individual behaviour as well as the sustainability of populations. All members of the Bossou chimpanzee community are descended from 8 individuals (herein referred to as original adults) who were already adults or subadults when field observations were initiated in 1976 and whose genetic relationships were unknown. Sequencing of the control region on the maternally inherited mtDNA revealed that 4 (1 male and 3 females) of the 8 original adults shared an identical haplotype. We investigated the effects of the skewed distribution of mtDNA haplotypes on the following two outcomes. First, we demonstrated that the probability of mtDNA haplotype extinction would be increased under such a skewed composition in a small community. Second, the ratio of potential mating candidates to competitors is likely to decrease if chimpanzees become aware of maternal kinship and avoid incest. We estimated that the magnitude of the decrease in the ratio is 10 times greater in males than in females. Here we demonstrate a scenario in which this matrilineal skewness in a small community accelerates the extinction of an mtDNA haplotype, which will make it more difficult to find a suitable mate within the community. © 2008 S. Karger AG, Basel.

  9. Semiparametric accelerated failure time cure rate mixture models with competing risks.

    Science.gov (United States)

    Choi, Sangbum; Zhu, Liang; Huang, Xuelin

    2018-01-15

    Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model adapted on cause-specific hazard or subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival function that are combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insights into how this treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood estimation method. Variance estimation is achieved through resampling methods for the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. Application to a sarcoma study demonstrates the use of the proposed method for competing risk data with a cure fraction. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Nonlinear Structured Growth Mixture Models in Mplus and OpenMx

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne

    2014-01-01

    Growth mixture models (GMMs; Muthén & Muthén, 2000; Muthén & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models because of their common use, flexibility in modeling many types of change patterns, the availability of statistical programs to fit such models, and the ease of programming. In this paper, we present additional ways of modeling nonlinear change patterns with GMMs. Specifically, we show how LCMs that follow specific nonlinear functions can be extended to examine the presence of multiple latent classes using the Mplus and OpenMx computer programs. These models are fit to longitudinal reading data from the Early Childhood Longitudinal Study-Kindergarten Cohort to illustrate their use. PMID:25419006

  11. Random skew plane partitions and the Pearcey process

    DEFF Research Database (Denmark)

    Reshetikhin, Nicolai; Okounkov, Andrei

    2007-01-01

    We study random skew 3D partitions weighted by q vol and, specifically, the q → 1 asymptotics of local correlations near various points of the limit shape. We obtain sine-kernel asymptotics for correlations in the bulk of the disordered region, Airy kernel asymptotics near a general point of the ...

  12. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  13. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    Science.gov (United States)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation, offers a solution for identifying normal SUV transformations for when
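    The optimization described (iterate the Box-Cox λ and keep the value maximizing the Shapiro-Wilk P-value) is straightforward with SciPy; the simulated "SUV" sample below is hypothetical, not the study's data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical positively skewed "SUVmax" sample.
    rng = np.random.default_rng(1)
    suv = rng.lognormal(mean=1.0, sigma=0.5, size=60)

    def best_boxcox_lambda(x, lambdas=np.linspace(-2.0, 2.0, 81)):
        """Box-Cox parameter maximizing the Shapiro-Wilk P-value, mirroring
        the iteration described in the abstract."""
        pvals = [stats.shapiro(stats.boxcox(x, lmbda=lam)).pvalue for lam in lambdas]
        return lambdas[int(np.argmax(pvals))]

    lam = best_boxcox_lambda(suv)
    transformed = stats.boxcox(suv, lmbda=lam)  # should now look close to normal
    ```

    For genuinely log-normal data the selected λ tends to lie near 0 (the log transformation), but for other skewed shapes the grid search can land elsewhere, which is the paper's point about blind log transformations.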

  14. Model-based experimental design for assessing effects of mixtures of chemicals

    NARCIS (Netherlands)

    Baas, J.; Stefanowicz, A.M.; Klimek, B.; Laskowski, R.; Kooijman, S.A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four poly aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for

  15. Systems of Differential Equations with Skew-Symmetric, Orthogonal Matrices

    Science.gov (United States)

    Glaister, P.

    2008-01-01

    The solution of a system of linear, inhomogeneous differential equations is discussed. The particular class considered is where the coefficient matrix is skew-symmetric and orthogonal, and where the forcing terms are sinusoidal. More general matrices are also considered.
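    A concrete instance: A = [[0, -1], [1, 0]] is both skew-symmetric (Aᵀ = -A) and orthogonal (AᵀA = I), and the homogeneous system x' = Ax has the rotation exp(At) as its solution operator. A quick numerical check (my example, not the article's):

    ```python
    import math

    # For A = [[0, -1], [1, 0]], exp(A t) = [[cos t, -sin t], [sin t, cos t]].
    def exact_solution(x0, t):
        c, s = math.cos(t), math.sin(t)
        return (c * x0[0] - s * x0[1], s * x0[0] + c * x0[1])

    # Cross-check the closed form against a forward-Euler integration of x' = A x.
    x = (1.0, 0.0)
    h, t_end = 1e-5, 1.2
    for _ in range(int(round(t_end / h))):
        x = (x[0] - h * x[1], x[1] + h * x[0])
    ```

    Because A is skew-symmetric, exp(At) is orthogonal, so the solution preserves the norm of x; with sinusoidal forcing one adds a particular solution by undetermined coefficients, as the article discusses.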

  16. Skew harmonics suppression in electromagnets with application to the Advanced Light Source (ALS) storage ring corrector magnet design

    International Nuclear Information System (INIS)

    Schlueter, R.; Halbach, K.

    1993-09-01

    An analytical expression for prediction of skew harmonics in an iron core combined function regular/skew dipole magnet due to arbitrarily positioned electromagnet coils is developed. A structured approach is presented for the suppression of an arbitrary number of harmonic components to arbitrarily low values. Application of the analytical harmonic strength calculations coupled to the structured harmonic suppression approach is presented in the context of the design of the ALS storage ring corrector magnets, where quadrupole, sextupole, and octupole skew harmonics were reduced to less than 1.0% of the skew dipole at the beam aperture radius r = 3.0 cm

  17. Non-skew-symmetric classical r-matrices, algebraic Bethe ansatz, and Bardeen-Cooper-Schrieffer-type integrable systems

    International Nuclear Information System (INIS)

    Skrypnyk, T.

    2009-01-01

    We construct quantum integrable systems associated with non-skew-symmetric gl(2)-valued classical r-matrices. We find a new explicit multiparametric family of such non-skew-symmetric classical r-matrices. We consider two classes of examples of the corresponding integrable systems, namely generalized Gaudin systems with and without an external magnetic field. In the case of arbitrary r-matrices diagonal in a standard gl(2)-basis, we calculate the spectrum of the corresponding quantum integrable systems using the algebraic Bethe ansatz. We apply these results to a construction of integrable fermionic models and obtain a wide class of integrable Bardeen-Cooper-Schrieffer (BCS)-type fermionic Hamiltonians containing pairing and electrostatic interaction terms. We also consider special cases in which the corresponding integrable Hamiltonians contain only the pairing interaction term and are exact analogs of the 'reduced BCS Hamiltonian' of Richardson.

  18. Modelling of phase equilibria for associating mixtures using an equation of state

    International Nuclear Information System (INIS)

    Ferreira, Olga; Brignole, Esteban A.; Macedo, Eugenia A.

    2004-01-01

    In the present work, the group contribution with association equation of state (GCA-EoS) is extended to represent phase equilibria in mixtures containing acids, esters, and ketones, with water, alcohols, and any number of inert components. Association effects are represented by a group-contribution approach. Self- and cross-association between the associating groups present in these mixtures are considered. The GCA-EoS model is compared to the group-contribution method MHV2, which does not explicitly take association effects into account. The results obtained with the GCA-EoS model are, in general, more accurate than those achieved with the MHV2 equation, while requiring fewer parameters. Model predictions are presented for binary self- and cross-associating mixtures.

  19. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    Directory of Open Access Journals (Sweden)

    S. Lee

    2018-05-01

    Full Text Available We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show the interannual variability of recent sea ice cover during 2011–2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
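The core spectral-mixture idea can be sketched as follows: an observed waveform is modeled as a nonnegative combination of reference ("endmember") waveforms, and the mixing fractions are recovered by nonnegative least squares. The endmember shapes below are synthetic stand-ins, not CryoSat-2 waveforms, and the details are only an assumption about the general technique:

```python
# Spectral mixture analysis sketch: express an observed waveform as a
# nonnegative combination of endmember waveforms via nonnegative least
# squares, then read the fractions as class abundances.
import numpy as np
from scipy.optimize import nnls

bins = np.arange(64)
lead = np.exp(-0.5 * ((bins - 30) / 1.5) ** 2)   # specular, peaky return
ice  = np.exp(-0.5 * ((bins - 30) / 6.0) ** 2)   # diffuse, broad return
E = np.column_stack([lead, ice])                 # endmember matrix

true_frac = np.array([0.8, 0.2])
observed = E @ true_frac + 0.01 * np.random.default_rng(1).standard_normal(64)

frac, _ = nnls(E, observed)        # nonnegative mixing coefficients
frac /= frac.sum()                 # normalize to abundances summing to one
print(frac)                        # lead-dominated waveform
```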

  20. Representation and Validation of Liquid Densities for Pure Compounds and Mixtures

    DEFF Research Database (Denmark)

    O'Connell, J.; Dicky, V.; Abildskov, Jens

    Reliable correlation and prediction of liquid densities are important for designing chemical processes at normal and elevated pressures. We have extended a corresponding states model from molecular theory to yield a robust method for quality testing of experimental data that also provides predicted...... values at unmeasured conditions. The model has been shown to successfully validate and represent the pressure and temperature dependence of liquid densities greater than 1.5 of the critical density for pure compounds, binary mixtures, and ternary mixtures from the triple to critical temperatures...... at pressures up to 1000 MPa. The systems include the full range of organic compounds, including complex mixtures, and ionic liquids. Minimal data are required for making predictions. The presentation will show the implementation of the method, criteria for its deployment, examples of its application to a wide...

  1. Optimization of a divided wall column for the separation of C4-C6 normal paraffin mixture using Box-Behnken design

    Directory of Open Access Journals (Sweden)

    Sangal Vikas K.

    2013-01-01

    Full Text Available In the present study, simulation of a divided wall column (DWC) was carried out to study the product quality and energy efficiency as a function of reflux rate, liquid split, and vapour split for the separation of a C4-C6 normal paraffin ternary mixture. Rigorous simulation of the DWC was carried out using the Multifrac model of the ASPEN Plus software. Box-Behnken design (BBD) was used for the optimization of parameters and to evaluate the effects and interactions of the process parameters, namely reflux rate (r), liquid split (l), and vapour split (v). It was found that the number of simulation runs required for the optimization of the DWC was reduced significantly by BBD. Optimization by BBD under response surface methodology (RSM) vividly underscores the interactions between variables and their effects. The predictions agree well with the results of the rigorous simulation.
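For three factors, a Box-Behnken design is simple to construct by hand: each pair of factors takes the four (±1, ±1) combinations while the third is held at its centre level (0), plus replicated centre points. A stdlib-only sketch (the centre-point count and the mapping to r, l, v are illustrative):

```python
# Box-Behnken design construction: edge midpoints of the factor cube
# (two factors at +/-1, the rest at 0) plus replicated centre points.
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors] * n_center   # centre replicates
    return runs

# Three coded factors, e.g. reflux rate (r), liquid split (l), vapour split (v)
design = box_behnken(3)
print(len(design))  # 15 runs: 12 edge midpoints + 3 centre points
```

The 15 runs compare favourably with the 27 runs of a full three-level factorial, which is why BBD reduces the number of simulations needed.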

  2. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e., gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particle, and the rest of the volume is considered as that of the large particle. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle: only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of the gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently of the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as the weighted average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, an effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation.
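The final step of the workflow can be sketched with the Kozeny-Carman relation. The harmonic-mean choice for the effective grain size and the input values below are my own illustrative assumptions; the paper's exact conventions may differ:

```python
# Kozeny-Carman permeability from an effective porosity and grain size.
# The prefactor 180 is the common Kozeny-Carman constant for granular packs.
def kozeny_carman(phi_eff, d_eff):
    """Permeability (m^2) from effective porosity and grain diameter (m)."""
    return (d_eff ** 2 / 180.0) * phi_eff ** 3 / (1.0 - phi_eff) ** 2

# One common convention for an effective grain size: the volume-fraction-
# weighted harmonic mean of the component sizes (an assumption here).
def effective_grain_size(fractions, sizes):
    return 1.0 / sum(f / d for f, d in zip(fractions, sizes))

fracs = [0.05, 0.75, 0.20]          # gravel, sand, clay volume fractions
sizes = [10e-3, 0.25e-3, 2e-6]      # illustrative grain diameters (m)
d_eff = effective_grain_size(fracs, sizes)
k = kozeny_carman(phi_eff=0.30, d_eff=d_eff)
print(f"d_eff = {d_eff:.2e} m, k = {k:.2e} m^2")
```

The harmonic mean is dominated by the finest fraction, which is why a few percent of clay suppresses permeability far more strongly than a few percent of gravel enhances it.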

  3. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures, including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using the EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic networks and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.

  4. Stall inception and warning in a single-stage transonic axial compressor with axial skewed slot casing treatment

    International Nuclear Information System (INIS)

    Lim, Byeung Jun; Kwon, Se Jin; Park, Tae Choon

    2014-01-01

    Characteristic changes in stall inception in a single-stage transonic axial compressor with an axial skewed slot casing treatment were investigated experimentally. A rotating stall occurred intermittently in the compressor with the axial skewed slot, whereas spike-type rotating stalls occurred in the case of the smooth casing. The axial skewed slot suppressed stall cell growth and increased the operating range. A mild surge, whose frequency is the Helmholtz frequency of the compressor system, occurred together with the rotating stall. The irregularity in the pressure signals at the slot bottom increased with decreasing flow rate. An autocorrelation-based stall warning method was applied to the measured pressure signals. The results can be used to estimate and warn of the stall margin in a compressor with an axial skewed slot.

  5. Model-based experimental design for assessing effects of mixtures of chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Baas, Jan, E-mail: jan.baas@falw.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands); Stefanowicz, Anna M., E-mail: anna.stefanowicz@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Klimek, Beata, E-mail: beata.klimek@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Laskowski, Ryszard, E-mail: ryszard.laskowski@uj.edu.p [Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-387 Krakow (Poland); Kooijman, Sebastiaan A.L.M., E-mail: bas@bio.vu.n [Vrije Universiteit of Amsterdam, Dept of Theoretical Biology, De Boelelaan 1085, 1081 HV Amsterdam (Netherlands)

    2010-01-15

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, as well as extrapolation to other mixtures, other points in time, or, in a wider perspective, to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  6. Model-based experimental design for assessing effects of mixtures of chemicals

    International Nuclear Information System (INIS)

    Baas, Jan; Stefanowicz, Anna M.; Klimek, Beata; Laskowski, Ryszard; Kooijman, Sebastiaan A.L.M.

    2010-01-01

    We exposed flour beetles (Tribolium castaneum) to a mixture of four polycyclic aromatic hydrocarbons (PAHs). The experimental setup was chosen such that the emphasis was on assessing partial effects. We interpreted the effects of the mixture by a process-based model, with a threshold concentration for effects on survival. The behavior of the threshold concentration was one of the key features of this research. We showed that the threshold concentration is shared by toxicants with the same mode of action, which gives a mechanistic explanation for the observation that toxic effects in mixtures may occur in concentration ranges where the individual components do not show effects. Our approach gives reliable predictions of partial effects on survival and allows for a reduction of experimental effort in assessing effects of mixtures, as well as extrapolation to other mixtures, other points in time, or, in a wider perspective, to other organisms. - We show a mechanistic approach to assess effects of mixtures in low concentrations.

  7. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
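The tail mismatch motivating the Poisson mixture model is easy to illustrate: a two-component Poisson mixture matched in mean to a single Poisson puts far more mass in the upper tail. The parameters below are illustrative, not the paper's fitted values:

```python
# Tail comparison: a mean-matched two-component Poisson mixture versus a
# single Poisson. The mixture's rare high-rate component fattens the tail.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
lam1, lam2, w = 5.0, 25.0, 0.1             # mostly lam1, occasionally lam2
mean_mix = (1 - w) * lam1 + w * lam2       # = 7.0, matched to the single model

single = rng.poisson(mean_mix, n)
comp = rng.random(n) < w                   # latent component indicator
mixture = np.where(comp, rng.poisson(lam2, n), rng.poisson(lam1, n))

thresh = 20
print("P(X > 20), single Poisson :", (single > thresh).mean())
print("P(X > 20), Poisson mixture:", (mixture > thresh).mean())  # far larger
```

A denoiser calibrated to the single-Poisson tail treats these rare large values as signal detail and undersmooths, which is the artifact the mixture model addresses.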

  8. Determining of migraine prognosis using latent growth mixture models.

    Science.gov (United States)

    Tasdelen, Bahar; Ozge, Aynur; Kaleagasi, Hakan; Erdogan, Semra; Mengi, Tufan

    2011-04-01

    This paper presents a retrospective study to classify patients into subtypes according to baseline and longitudinally observed values, taking into account the heterogeneity in migraine prognosis. In classical prospective clinical studies, participants are classified with respect to baseline status and followed within a certain time period. However, the latent growth mixture model is the most suitable method, as it considers population heterogeneity and is not affected by drop-outs if they are missing at random. Hence, we planned this comprehensive study to identify prognostic factors in migraine. The study data are based on 10 years of computer-based follow-up data from the Mersin University Headache Outpatient Department. The developmental trajectories within subgroups were described for the severity, frequency, and duration of headache separately, and the probabilities of each subgroup were estimated by using latent growth mixture models. SAS PROC TRAJ procedures, a semiparametric and group-based mixture modeling approach, were applied to define the developmental trajectories. While the three-group model for the severity (mild, moderate, severe) and frequency (low, medium, high) of headache appeared to be appropriate, the four-group model for the duration (low, medium, high, extremely high) was more suitable. The severity of headache increased in patients with nausea, vomiting, photophobia, and phonophobia. The frequency of headache was especially related to increasing age and unilateral pain. Nausea and photophobia were also related to headache duration. Nausea, vomiting, and photophobia were the most significant factors for identifying developmental trajectories. The remission time was not the same for the severity, frequency, and duration of headache.

  9. Skew chromaticity in large accelerators

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, G.F.

    1995-01-01

    The 2-D "skew chromaticity" vector k is introduced when the standard on-momentum description of linear coupling is extended to include off-momentum particles. A lattice that is well decoupled on-momentum may be badly decoupled off-momentum, inside the natural momentum spread of the beam. There are two general areas of concern: (1) the free space in the tune plane is decreased; (2) collective phenomena may be destabilized. Two strong new criteria for head-tail stability in the presence of off-momentum coupling are derived, which are consistent with experimental and operational observations at the Tevatron, and with tracking data from RHIC.

  10. Skewed sex ratios in India: "physician, heal thyself".

    Science.gov (United States)

    Patel, Archana B; Badhoniya, Neetu; Mamtani, Manju; Kulkarni, Hemant

    2013-06-01

    Sex selection, a gender discrimination of the worst kind, is highly prevalent across all strata of Indian society. Physicians have a crucial role in this practice and implementation of the Indian Government's Pre-Natal Diagnostic Techniques Act in 1996 to prevent the misuse of ultrasound techniques for the purpose of prenatal sex determination. Little is known about family preferences, let alone preferences among families of physicians. We investigated the sex ratios in 946 nuclear families with 1,624 children, for which either one or both parents were physicians. The overall child sex ratio was more skewed than the national average of 914. The conditional sex ratios decreased with increasing number of previous female births, and a previous birth of a daughter in the family was associated with a 38 % reduced likelihood of a subsequent female birth. The heavily skewed sex ratios in the families of physicians are indicative of a deeply rooted social malady that could pose a critical challenge in correcting the sex ratios in India.

  11. Thermodiffusion in Multicomponent Mixtures Thermodynamic, Algebraic, and Neuro-Computing Models

    CERN Document Server

    Srinivasan, Seshasai

    2013-01-01

    Thermodiffusion in Multicomponent Mixtures presents the computational approaches that are employed in the study of thermodiffusion in various types of mixtures, namely, hydrocarbons, polymers, water-alcohol, molten metals, and so forth. We present a detailed formalism of these methods, which are based on non-equilibrium thermodynamics, algebraic correlations, or principles of the artificial neural network. The book will serve as a single complete reference for understanding the theoretical derivations of thermodiffusion models and their application to different types of multi-component mixtures. An exhaustive discussion of these methods gives a complete perspective of the principles and the key factors that govern the thermodiffusion process.

  12. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
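The procedure can be sketched for a two-component univariate normal mixture: the step-size ω interpolates between the current parameters and the usual EM update, with ω = 1 recovering plain EM and 0 < ω < 2 the convergent range established in the paper. A minimal sketch on synthetic data:

```python
# Step-size-relaxed EM for a two-component univariate normal mixture:
# theta <- theta + omega * (theta_EM - theta), with omega = 1 giving plain EM.
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

def em_step(x, w, mu, sigma):
    # E-step: posterior responsibilities of each component
    dens = np.array([wk * np.exp(-0.5 * ((x - m) / s) ** 2) / s
                     for wk, m, s in zip(w, mu, sigma)])
    r = dens / dens.sum(axis=0)
    # M-step: closed-form weighted updates
    nk = r.sum(axis=1)
    w_new = nk / x.size
    mu_new = (r @ x) / nk
    sigma_new = np.sqrt((r * (x - mu_new[:, None]) ** 2).sum(axis=1) / nk)
    return w_new, mu_new, sigma_new

w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
omega = 1.5                                  # step-size in (0, 2)
for _ in range(100):
    w1, mu1, s1 = em_step(x, w, mu, sigma)
    w = w + omega * (w1 - w)                 # deflected-gradient step
    mu = mu + omega * (mu1 - mu)
    sigma = sigma + omega * (s1 - sigma)

print(np.round(np.sort(mu), 1))
```

For well-separated components the EM map contracts slowly only when components overlap, which is the sense in which the paper ties the optimal step-size to the "separation" of the component densities.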

  14. Experiments with Mixtures Designs, Models, and the Analysis of Mixture Data

    CERN Document Server

    Cornell, John A

    2011-01-01

    The most comprehensive, single-volume guide to conducting experiments with mixtures"If one is involved, or heavily interested, in experiments on mixtures of ingredients, one must obtain this book. It is, as was the first edition, the definitive work."-Short Book Reviews (Publication of the International Statistical Institute)"The text contains many examples with worked solutions and with its extensive coverage of the subject matter will prove invaluable to those in the industrial and educational sectors whose work involves the design and analysis of mixture experiments."-Journal of the Royal S

  15. Different male versus female breeding periodicity helps mitigate offspring sex ratio skews in sea turtles

    Directory of Open Access Journals (Sweden)

    Graeme Clive Hays

    2014-09-01

    Full Text Available The implications of climate change for global biodiversity may be profound, with those species with little capacity for adaptation thought to be particularly vulnerable to warming. A classic case of a group for concern is animals exhibiting temperature-dependent sex determination (TSD), such as sea turtles, where climate warming may produce single-sex populations and hence extinction. We show that, globally, female-biased hatchling sex ratios dominate sea turtle populations (exceeding 3:1 in >50% of records), which, at a glance, reiterates concerns for extinction. However, we also demonstrate that more frequent breeding by males, empirically shown by satellite tracking of 23 individuals and supported by a generalized bio-energetic life history model, generates more balanced operational sex ratios (OSRs). Hence, concerns of increasingly skewed hatchling sex ratios and reduced population viability are less acute than previously thought for sea turtles. In fact, in some scenarios skewed hatchling sex ratios in groups with TSD may be adaptive to ensure optimum OSRs.

  16. Measurement of the width and skewness of elliptic flow fluctuations in PbPb collisions at 5.02 TeV with CMS

    CERN Document Server

    Castle, James Robert

    2017-01-01

    Flow harmonic fluctuations are studied for PbPb collisions at $\sqrt{s_{NN}} = 5.02~\mathrm{TeV}$ using the CMS detector at the LHC. Flow harmonic probability distributions $p\left(v_2\right)$ are obtained by unfolding smearing effects from observed azimuthal anisotropy distributions using particles of $0.3 < p_{\mathrm{T}} < 3.0~\mathrm{GeV}/c$ and $\lvert \eta \rvert < 1.0$. Cumulant flow harmonics are determined from the moments of $p\left(v_2\right)$ and used to estimate the standardized elliptic flow skewness. Hydrodynamic models predict this skewness to be negative with respect to the reaction plane. A statistically significant negative skewness is observed for all centrality bins, as evidenced by a splitting between the $v_2\{4\}$, $v_2\{6\}$, and $v_2\{8\}$ cumulants. Elliptic power law distribution fits are made to the $p\left(v_2\right)$ distributions to infer information on the nature of initial-state eccentricity distributions and found to provide a more accurate description of the ...
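The lowest cumulants are simple functions of the moments of $p(v_2)$: $v_2\{2\}^2 = \langle v_2^2\rangle$ and $v_2\{4\}^4 = 2\langle v_2^2\rangle^2 - \langle v_2^4\rangle$, and fluctuations force $v_2\{4\} < v_2\{2\}$. A toy event ensemble (not CMS data) makes the splitting visible:

```python
# Two- and four-particle cumulant harmonics from the moments of a toy
# event-by-event v2 distribution (Gaussian fluctuations about a mean v2).
import numpy as np

rng = np.random.default_rng(5)
v2 = np.abs(rng.normal(0.10, 0.02, 1_000_000))   # toy p(v2) ensemble

m2, m4 = np.mean(v2**2), np.mean(v2**4)
v2_2 = np.sqrt(m2)                    # v2{2}^2 = <v2^2>
v2_4 = (2 * m2**2 - m4) ** 0.25       # v2{4}^4 = 2<v2^2>^2 - <v2^4>

print(f"v2{{2}} = {v2_2:.4f}, v2{{4}} = {v2_4:.4f}")  # v2{4} < v2{2}
```

The much finer splitting among $v_2\{4\}$, $v_2\{6\}$, and $v_2\{8\}$ is what carries the skewness information in the measurement.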

  17. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.

  18. Asymmetric skew Bessel processes and their applications to finance

    NARCIS (Netherlands)

    Decamps, M.; Goovaerts, M.J.; Schoutens, W.

    2006-01-01

    In this paper, we extend the Harrison and Shepp's construction of the skew Brownian motion (1981) and we obtain a diffusion similar to the two-dimensional Bessel process with speed and scale densities discontinuous at one point. Natural generalizations to multi-dimensional and fractional order

  19. The subgroups in the special linear group over a skew field that contain the group of diagonal matrices

    International Nuclear Information System (INIS)

    Bui Xuan Hai.

    1990-05-01

    For an arbitrary skew field T we study the lattice of subgroups of the special linear group Γ = SL(n,T) that contain the subgroup Δ = SD(n,T) of diagonal matrices with Dieudonné's determinant equal to 1. We show that the description of these subgroups is standard in the following sense: for any subgroup H, Δ ≤ H ≤ Γ, there exists a unique unital net σ such that Γ(σ) ≤ H ≤ N(σ), where Γ(σ) is the net subgroup that corresponds to the net σ and N(σ) is the normalizer of Γ(σ) in Γ. (author). 11 refs

  20. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

    Science.gov (United States)

    Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

    2017-12-01

    Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles. Zeotropic mixtures as working fluids in JT cycles improve their performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in those cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and locally applied to a fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained using this model taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), showing good agreement.

  1. Text document classification based on mixture models

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Vol. 40, No. 3 (2004), pp. 293-304 ISSN 0023-5954 R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords: text classification * text categorization * multinomial mixture model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.224, year: 2004

  2. Determinants of (-1,1)-matrices of the skew-symmetric type: a cocyclic approach

    Directory of Open Access Journals (Sweden)

    Álvarez Víctor

    2015-01-01

    Full Text Available An n by n skew-symmetric type (-1,1)-matrix K = [k_ij] has 1's on the main diagonal and ±1's elsewhere, with k_ij = -k_ji. The largest possible determinant of such a matrix K is an interesting problem. The literature is extensive for n ≡ 0 mod 4 (skew-Hadamard matrices), but for n ≡ 2 mod 4 there are few results known for this question. In this paper we approach this problem by constructing cocyclic matrices over the dihedral group of 2t elements, for t odd, which are equivalent to (-1,1)-matrices of skew type. Some explicit calculations have been done up to t = 11. To our knowledge, the upper bounds on the maximal determinant in orders 18 and 22 have been improved.
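For very small orders the maximal determinant can be checked by brute force, since only the n(n-1)/2 upper-triangular signs are free. This sketch is my own illustration of the problem statement, not the paper's cocyclic construction:

```python
# Brute-force maximal determinant of skew-symmetric-type (-1, 1)-matrices:
# 1's on the diagonal and k_ij = -k_ji off it, so only the upper-triangular
# signs vary. Feasible for small n only (2^(n(n-1)/2) candidates).
from itertools import product
import numpy as np

def max_skew_type_det(n):
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best = 0.0
    for signs in product((-1, 1), repeat=len(idx)):
        K = np.eye(n)
        for (i, j), s in zip(idx, signs):
            K[i, j], K[j, i] = s, -s
        best = max(best, abs(np.linalg.det(K)))
    return round(best)

print([max_skew_type_det(n) for n in (2, 3, 4)])  # [2, 4, 16]
```

Order 4 attains the Hadamard bound 4^2 = 16, consistent with the existence of a skew-Hadamard matrix at that order; the hard cases the paper targets are n ≡ 2 mod 4, far beyond brute force.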

  3. Reduced chemical kinetic model of detonation combustion of one- and multi-fuel gaseous mixtures with air

    Science.gov (United States)

    Fomin, P. A.

    2018-03-01

    Two-step approximate models of the chemical kinetics of detonation combustion of (i) a single hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane, etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene, and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, in which the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can be used for calculating the thermodynamic parameters of the mixture in a state of chemical equilibrium.

  4. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    Science.gov (United States)

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-05-22

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. The cubic models were better at describing the cooking loss, color, and TPA parameters, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (54 and <62) and greater hardness, cohesiveness, and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater than expected (60,224 ± 1025 N). The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Leverage and Deepening Business Cycle Skewness

    DEFF Research Database (Denmark)

    Jensen, Henrik; Petrella, Ivan; Ravn, Søren Hove

    2017-01-01

    We document that the U.S. economy has been characterized by an increasingly negative business cycle asymmetry over the last three decades. This finding can be explained by the concurrent increase in the financial leverage of households and firms. To support this view, we devise and estimate......, booms become progressively smoother and more prolonged than busts. We are therefore able to reconcile a more negatively skewed business cycle with the Great Moderation in cyclical volatility. Finally, in line with recent empirical evidence, financially-driven expansions lead to deeper contractions...

  6. Mutual tolerance or reproductive competition? Patterns of reproductive skew among male redfronted lemurs (Eulemur fulvus rufus)

    OpenAIRE

    Kappeler, Peter M.; Port, Markus

    2008-01-01

    The social organization of gregarious lemurs significantly deviates from predictions of the socioecological model, as they form small groups in which the number of males approximately equals the number of females. This study uses models of reproductive skew theory as a new approach to explain this unusual group composition, in particular the high number of males, in a representative of these lemurs, the redfronted lemur (Eulemur fulvus rufus). We tested two central predictions of “concession”...

  7. High-speed broadband elastic actuator in water using induced-charge electro-osmosis with a skew structure

    Science.gov (United States)

    Sugioka, Hideyuki; Nakano, Naoki

    2018-01-01

    An artificial cilium using ac electro-osmosis (ACEO) is attractive because of its great potential for innovative microfluidic applications. However, the ACEO cilium has not been probed experimentally and has the shortcoming that its working frequency range is very narrow. Thus, we here propose an ACEO elastic actuator having a skew structure that broadens the working frequency range, and experimentally demonstrate that the elastic actuator in water can be driven at high speed (∼10 Hz) over a wide frequency range (∼0.1 to ∼10 kHz). Moreover, we propose a simple self-consistent model that explains the broadband characteristic due to the skew structure along with other characteristics. By comparing the theoretical results with the experimental results, we find that they agree fairly well. We believe that our ACEO elastic actuator will play an important role in microfluidics in the future.

  9. Torque ripple minimization in a doubly salient permanent magnet motor by skewing the rotor teeth

    International Nuclear Information System (INIS)

    Sheth, N.K.; Sekharbabu, A.R.C.; Rajagopal, K.R.

    2006-01-01

    This paper presents the effects of skewing the rotor teeth on the performance of an 8/6 doubly salient permanent magnet motor using a simple method, which utilizes the results obtained from the 2-D FE analysis. The optimum skewing angle is obtained as 12-15° for minimum torque ripple without much reduction in the back-EMF

  10. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance, and skewness rather than portfolios based on mean-variance (MV) criteria alone. Furthermore, a variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  11. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Park, Yoon Soo

    2016-02-01

    Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to evaluate IPD within a mixture IRT framework to understand its effect on item parameters and examinee ability.

  12. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.

  13. Influence of high power ultrasound on rheological and foaming properties of model ice-cream mixtures

    Directory of Open Access Journals (Sweden)

    Verica Batur

    2010-03-01

    Full Text Available This paper presents research on the effect of high power ultrasound on the rheological and foaming properties of ice cream model mixtures. Ice cream model mixtures were prepared according to specific recipes and afterwards subjected to different homogenization techniques: mechanical mixing, ultrasound treatment, and a combination of mechanical and ultrasound treatment. An ultrasound probe tip of 12.7 mm diameter was used for the ultrasound treatment, which lasted 5 minutes at 100 percent amplitude. Rheological parameters were determined using a rotational rheometer and expressed as flow index, consistency coefficient, and apparent viscosity. From the results it can be concluded that all model mixtures exhibit non-Newtonian, dilatant behavior. The highest viscosities were observed for model mixtures homogenized by mechanical mixing, with significantly lower viscosities for the ultrasound-treated ones. Foaming properties are expressed as the percentage increase in foam volume, foam stability index, and minimal viscosity. Ice cream model mixtures treated only with ultrasound showed the smallest increase in foam volume, while the highest increase was observed for the mixture treated with the combination of mechanical and ultrasound treatment. Also, ice cream mixtures containing higher amounts of protein showed higher foam stability. The optimal treatment time was determined to be 10 minutes.

  14. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…

  15. Skewed X inactivation and survival: a 13-year follow-up study of elderly twins and singletons

    DEFF Research Database (Denmark)

    Mengel-From, Jonas; Thinggaard, Mikael; Christiansen, Lene

    2012-01-01

    In mammalian females, one of the two X chromosomes is inactivated in early embryonic life. Females are therefore mosaics for two cell populations, one with the maternal and one with the paternal X as the active X chromosome. A skewed X inactivation is a marked deviation from a 50:50 ratio. In populations of women past 55-60 years of age, an increased degree of skewing (DS) is found. Here the association between age-related skewing and mortality is analyzed in a 13-year follow-up study of 500 women from three cohorts (73-100 years of age at intake). Women with low DS had significantly higher mortality than the majority of women who had a more skewed DS (hazard ratio: 1.30; 95% CI: 1.04-1.64). The association between X inactivation and mortality was replicated in dizygotic twin pairs, for which the co-twin with the lowest DS also had a statistically significant tendency to die first in the twin...

  16. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters. PMID:22666386
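
    The modelling idea above — HRV statistics as a linear combination of three Gaussians — can be sketched with a basic EM fit of a one-dimensional Gaussian mixture. The synthetic RR-interval-like data and all settings below are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture with a basic EM loop."""
    # Spread initial means over the data quantiles; start with a shared
    # variance and uniform weights.
    mu = np.quantile(x, [(i + 0.5) / k for i in range(k)])
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Synthetic RR-interval-like data drawn from three overlapping regimes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.8, 0.05, 2000),
                    rng.normal(1.0, 0.05, 3000),
                    rng.normal(1.2, 0.05, 1000)])
w, mu, var = fit_gmm_1d(x, k=3)
print(np.round(np.sort(mu), 2))   # approximately [0.8, 1.0, 1.2]
```

    Three components suffice here by construction; in the paper the number of Gaussians is an empirical finding about HRV data.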

  17. Modelling of phase equilibria of glycol ethers mixtures using an association model

    DEFF Research Database (Denmark)

    Garrido, Nuno M.; Folas, Georgios; Kontogeorgis, Georgios

    2008-01-01

    Vapor-liquid and liquid-liquid equilibria of glycol ethers (surfactant) mixtures with hydrocarbons, polar compounds and water are calculated using an association model, the Cubic-Plus-Association Equation of State. Parameters are estimated for several non-ionic surfactants of the polyoxyethylene ...

  18. Linking asphalt binder fatigue to asphalt mixture fatigue performance using viscoelastic continuum damage modeling

    Science.gov (United States)

    Safaei, Farinaz; Castorena, Cassie; Kim, Y. Richard

    2016-08-01

    Fatigue cracking is a major form of distress in asphalt pavements. Asphalt binder is the weakest asphalt concrete constituent and, thus, plays a critical role in determining the fatigue resistance of pavements. Therefore, the ability to characterize and model the inherent fatigue performance of an asphalt binder is a necessary first step to design mixtures and pavements that are not susceptible to premature fatigue failure. The simplified viscoelastic continuum damage (S-VECD) model has been used successfully by researchers to predict the damage evolution in asphalt mixtures for various traffic and climatic conditions using limited uniaxial test data. In this study, the S-VECD model, developed for asphalt mixtures, is adapted for asphalt binders tested under cyclic torsion in a dynamic shear rheometer. Derivation of the model framework is presented. The model is verified by producing damage characteristic curves that are both temperature- and loading history-independent based on time sweep tests, given that the effects of plasticity and adhesion loss on the material behavior are minimal. The applicability of the S-VECD model to the accelerated loading that is inherent of the linear amplitude sweep test is demonstrated, which reveals reasonable performance predictions, but with some loss in accuracy compared to time sweep tests due to the confounding effects of nonlinearity imposed by the high strain amplitudes included in the test. The asphalt binder S-VECD model is validated through comparisons to asphalt mixture S-VECD model results derived from cyclic direct tension tests and Accelerated Loading Facility performance tests. The results demonstrate good agreement between the asphalt binder and mixture test results and pavement performance, indicating that the developed model framework is able to capture the asphalt binder's contribution to mixture fatigue and pavement fatigue cracking performance.

  19. Generation of a stochastic precipitation model for the tropical climate

    Science.gov (United States)

    Ng, Jing Lin; Abd Aziz, Samsuzana; Huang, Yuk Feng; Wayayok, Aimrun; Rowshon, MK

    2017-06-01

    A tropical country like Malaysia is characterized by intense localized precipitation, with temperatures remaining relatively constant throughout the year. Stochastic modeling of precipitation in the flood-prone Kelantan River Basin is particularly challenging due to the high intermittency of precipitation events during the northeast monsoon. Long series of precipitation are urgently needed for modeling the hydrological responses. A single-site stochastic precipitation model comprising a precipitation occurrence model and an intensity model was developed, calibrated, and validated for the Kelantan River Basin. The simulation was carried out separately for each station without considering the spatial correlation of precipitation. Markov chains up to fifth order and six candidate distributions were considered. Daily precipitation data from 17 rainfall stations for the study period 1954-2013 were selected. The results suggested that second- and third-order Markov chains were suitable for simulating monthly and yearly precipitation occurrences, respectively. The fifth-order Markov chain overestimated precipitation occurrences. For the mean, distribution, and standard deviation of precipitation amounts, the exponential, gamma, log-normal, skew normal, mixed exponential, and generalized Pareto distributions performed well. However, for precipitation extremes, the exponential and log-normal distributions were better, while the skew normal and generalized Pareto distributions tended to underestimate. The log-normal distribution was chosen as the best distribution to simulate precipitation amounts. Overall, the stochastic precipitation model developed is a convenient tool to simulate the characteristics of precipitation in the Kelantan River Basin.
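
    The occurrence/intensity structure described above can be sketched with a first-order two-state (wet/dry) Markov chain and exponential wet-day amounts, one of the candidate distributions named in the abstract. All parameter values below are illustrative assumptions, not fitted Kelantan values:

```python
import numpy as np

def simulate_precip(n_days, p_wd, p_ww, mean_amount, seed=0):
    """Simulate daily precipitation: a first-order two-state Markov chain
    decides wet/dry occurrence, and wet days draw an exponential amount.

    p_wd: P(wet today | dry yesterday); p_ww: P(wet today | wet yesterday).
    """
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, dtype=bool)     # day 0 starts dry
    for t in range(1, n_days):
        p = p_ww if wet[t - 1] else p_wd
        wet[t] = rng.random() < p
    # Exponential intensities on wet days, zero on dry days.
    amounts = np.where(wet, rng.exponential(mean_amount, n_days), 0.0)
    return amounts

rain = simulate_precip(10_000, p_wd=0.3, p_ww=0.7, mean_amount=12.0)
wet_frac = (rain > 0).mean()
# Stationary wet-day probability of this chain: p_wd / (p_wd + 1 - p_ww) = 0.5
print(round(wet_frac, 2))
```

    Higher-order chains, as compared in the study, condition the wet/dry probability on several preceding days instead of just one.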

  20. Entrepreneurship and Financial Incentives of Return, Risk, and Skew

    DEFF Research Database (Denmark)

    Berkhout, Peter; Hartog, Joop; Van Praag, Mirjam

    2016-01-01

    . The focus on earnings forgone may help to solve the lack of robust empirical support for the effect of financial incentives on the decision to become an entrepreneur. We find, consistent with standard theory, that a higher mean, lower variance, and higher skew in the relevant wage distribution reduce...

  1. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Investigating Individual Differences in Toddler Search with Mixture Models

    Science.gov (United States)

    Berthier, Neil E.; Boucher, Kelsea; Weisner, Nina

    2015-01-01

    Children's performance on cognitive tasks is often described in categorical terms in that a child is described as either passing or failing a test, or knowing or not knowing some concept. We used binomial mixture models to determine whether individual children could be classified as passing or failing two search tasks, the DeLoache model room…
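
    The idea of classifying children as passers or failers with a binomial mixture can be sketched as a two-component EM fit. The data, trial counts, and success rates below are synthetic assumptions, not the study's data:

```python
import numpy as np

def fit_binomial_mixture(k, n, n_iter=500):
    """Fit a two-component binomial mixture by EM.

    k: successes per child, n: trials per child. Each child is modelled as
    coming from a low-success ('failing') or high-success ('passing')
    component. The binomial coefficient cancels in the responsibilities,
    so only the p^k (1-p)^(n-k) kernels are needed.
    """
    k = np.asarray(k, dtype=float)
    w, p = 0.5, np.array([0.3, 0.7])                      # initial guesses
    for _ in range(n_iter):
        lik = p ** k[:, None] * (1 - p) ** (n - k[:, None])
        r = np.array([w, 1 - w]) * lik
        r /= r.sum(axis=1, keepdims=True)                 # responsibilities
        w = r[:, 0].mean()
        p = (r * k[:, None]).sum(axis=0) / (n * r.sum(axis=0))
    return w, p

rng = np.random.default_rng(0)
n = 8
k = np.concatenate([rng.binomial(n, 0.2, 30), rng.binomial(n, 0.8, 30)])
w, p = fit_binomial_mixture(k, n)
print(np.round(np.sort(p), 2))   # close to the true rates 0.2 and 0.8
```

    The posterior responsibilities then classify each child as belonging to the passing or failing component.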

  3. Application of association models to mixtures containing alkanolamines

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard; Eriksen, Daniel Kunisch; Kontogeorgis, Georgios

    2011-01-01

    Two association models, the CPA and sPC-SAFT equations of state, are applied to binary mixtures containing alkanolamines and hydrocarbons or water. CPA is applied to mixtures of MEA and DEA, while sPC-SAFT is applied to MEA–n-heptane liquid–liquid equilibria and MEA–water vapor–liquid equilibria. T...

  4. Amplification of DNA mixtures--Missing data approach

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2008-01-01

    This paper presents a model for the interpretation of results of STR typing of DNA mixtures based on a multivariate normal distribution of peak areas. From previous analyses of controlled experiments with mixed DNA samples, we exploit the linear relationship between peak heights and peak areas...... DNA samples, it is only possible to observe the cumulative peak heights and areas. Complying with this latent structure, we use the EM-algorithm to impute the missing variables based on a compound symmetry model. That is, the measurements are subject to intra- and inter-loci correlations not depending on the actual alleles of the DNA profiles. Due to factorization of the likelihood, properties of the normal distribution and use of auxiliary variables, an ordinary implementation of the EM-algorithm solves the missing data problem. We estimate the parameters in the model based on a training data set. In order...
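
    As a small aid to the covariance structure named above: a compound symmetry matrix has a single variance and a single common correlation across measurements. A sketch with arbitrary example values (not the paper's estimates):

```python
import numpy as np

def compound_symmetry(p, sigma2, rho):
    """Covariance matrix with one variance and a common correlation:
    Sigma = sigma2 * ((1 - rho) * I + rho * J), where J is all-ones."""
    return sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

S = compound_symmetry(4, sigma2=2.0, rho=0.5)
print(S[0, 0], S[0, 1])          # 2.0 on the diagonal, 1.0 off the diagonal
# Positive definite whenever -1/(p-1) < rho < 1:
assert np.all(np.linalg.eigvalsh(S) > 0)
```

    With only two free parameters, this structure keeps the EM-based imputation tractable regardless of which alleles are observed.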

  5. A Frank mixture copula family for modeling higher-order correlations of neural spike counts

    International Nuclear Information System (INIS)

    Onken, Arno; Obermayer, Klaus

    2009-01-01

    In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.

  6. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation and logistics and healthcare. However, missing values are very common in the IoT for a variety of reasons, leaving experimental data incomplete. As a result, some work related to IoT data cannot be carried out normally, which reduces the accuracy and reliability of data analysis results. Considering the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.

  7. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  8. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    NARCIS (Netherlands)

    Ypma, A.; Heskes, T.M.; Zaiane, O.R.; Srivastav, J.

    2003-01-01

    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  9. Some homological properties of skew PBW extensions arising in non-commutative algebraic geometry

    Directory of Open Access Journals (Sweden)

    Lezama Oswaldo

    2017-06-01

    Full Text Available In this short paper we study, for skew PBW (Poincaré-Birkhoff-Witt) extensions, some homological properties arising in non-commutative algebraic geometry, namely Auslander-Gorenstein regularity, Cohen-Macaulayness, and strong noetherianity. Skew PBW extensions include a considerable number of non-commutative rings of polynomial type, such as classical PBW extensions, quantum polynomial rings, the multiplicative analogue of the Weyl algebra, some Sklyanin algebras, operator algebras, diffusion algebras, and quadratic algebras in 3 variables, among many others. A parametrization of the point modules of some examples is also presented.

  10. Delivering Left-Skewed Portfolio Payoff Distributions in the Presence of Transaction Costs

    Directory of Open Access Journals (Sweden)

    Jacek B Krawczyk

    2015-08-01

    Full Text Available For pension-savers, a low payoff is a financial disaster. Such investors will most likely prefer left-skewed payoff distributions over right-skewed payoff distributions. We explore how such distributions can be delivered. Cautious-relaxed utility measures are cautious in ensuring that payoffs don’t fall much below a reference value, but relaxed about exceeding it. We find that the payoff distribution delivered by a cautious-relaxed utility measure has appealing features which payoff distributions delivered by traditional utility functions don’t. In particular, cautious-relaxed distributions can have the mass concentrated on the left, hence be left-skewed. However, cautious-relaxed strategies prescribe frequent portfolio adjustments which may be expensive if transaction costs are charged. In contrast, more traditional strategies can be time-invariant. Thus we investigate the impact of transaction costs on the appeal of cautious-relaxed strategies. We find that relatively high transaction fees are required for the cautious-relaxed strategy to lose its appeal. This paper contributes to the literature which compares utility measures by the payoff distributions they produce and finds that a cautious-relaxed utility measure will deliver payoffs that many investors will prefer.

  11. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slopes as estimates of the isotope ratios. The finite mixture models are parameterised by: • the number of different ratios, • the number of points belonging to each ratio-group, and • the ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
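
    The fitting idea above — several regression lines through subgroups of the data, with slopes as isotope-ratio estimates — can be sketched with a basic EM loop. The synthetic signals and slope values below are illustrative assumptions, and the group-dropping rule of the described method is omitted:

```python
import numpy as np

def fit_slope_mixture(x, y, k, n_iter=300):
    """EM for a k-component mixture of regression lines through the origin,
    y = b_j * x + noise. Each slope b_j plays the role of one isotope ratio
    in a mixed sample; a shared Gaussian noise variance is re-estimated."""
    b = np.quantile(y / x, [(i + 0.5) / k for i in range(k)])  # spread initial slopes
    w = np.full(k, 1.0 / k)
    s2 = np.var(y)
    for _ in range(n_iter):
        resid = y[:, None] - x[:, None] * b
        dens = np.exp(-0.5 * resid ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)        # responsibilities
        w = r.mean(axis=0)
        # Weighted least-squares slope per component, then noise update.
        b = (r * x[:, None] * y[:, None]).sum(axis=0) / (r * x[:, None] ** 2).sum(axis=0)
        s2 = (r * resid ** 2).sum() / len(y)
    return w, b, s2

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 10.0, 400)                  # e.g. 238U signal intensities
true_b = np.where(rng.random(400) < 0.5, 0.5, 2.0)
y = true_b * x + rng.normal(0.0, 0.2, 400)       # e.g. 235U signal intensities
w, b, s2 = fit_slope_mixture(x, y, k=2)
print(np.round(np.sort(b), 2))   # close to the true slopes 0.5 and 2.0
```

    In the described method, components whose group size falls below a control parameter would additionally be dropped between iterations, which determines the number of ratios automatically.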

  12. Impact of radius and skew angle on areal density in heat assisted magnetic recording hard disk drives

    Science.gov (United States)

    Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.

    2018-05-01

    This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.

  13. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies, and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption for the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from that variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.

  14. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
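
    The Box-Cox transformation at the heart of this model is simple to state; the following is a minimal sketch in Python, where the effect values and the choice of λ are hypothetical illustrations, not taken from the paper:

    ```python
    import math

    def box_cox(y, lam):
        """Box-Cox transform of a positive value y with parameter lam."""
        if lam == 0:
            return math.log(y)
        return (y ** lam - 1.0) / lam

    def inv_box_cox(z, lam):
        """Inverse transform, mapping back to the original scale."""
        if lam == 0:
            return math.exp(z)
        return (lam * z + 1.0) ** (1.0 / lam)

    # Hypothetical treatment effect estimates, shifted to be positive.
    effects = [0.4, 0.9, 1.3, 2.8, 6.5]   # right-skewed
    lam = 0.0                              # lam = 0 gives the log transform
    transformed = sorted(box_cox(y, lam) for y in effects)

    # The overall median is transform-equivariant: back-transforming the
    # median of the transformed values recovers the original-scale median,
    # which is why the paper summarises by a median rather than a mean.
    median_z = transformed[len(transformed) // 2]
    print(round(inv_box_cox(median_z, lam), 3))  # -> 1.3
    ```

    This equivariance does not hold for the mean, which is one reason the authors suggest the median and interquartile range as summaries on the back-transformed scale.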

  15. Modeling adsorption of binary and ternary mixtures on microporous media

    DEFF Research Database (Denmark)

    Monsalvo, Matias Alfonso; Shapiro, Alexander

    2007-01-01

    The goal of this work is to analyze the adsorption of binary and ternary mixtures on the basis of the multicomponent potential theory of adsorption (MPTA). In the MPTA, the adsorbate is considered as a segregated mixture in the external potential field emitted by the solid adsorbent. This makes it possible, using the same equation of state, to describe the thermodynamic properties of the segregated and the bulk phases. For comparison, we also used the ideal adsorbed solution theory (IAST) to describe adsorption equilibria. The main advantage of these two models is their capability to predict multicomponent adsorption equilibria on the basis of single-component adsorption data. We compare the MPTA and IAST models to a large set of experimental data, obtaining reasonably good agreement with experimental data and a high degree of predictability. Some limitations of both models are also discussed.

  16. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    International Nuclear Information System (INIS)

    Teng, S.; Tebby, C.; Barcellini-Couget, S.; De Sousa, G.; Brochot, C.; Rahmani, R.; Pery, A.R.R.

    2016-01-01

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  17. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models

    Energy Technology Data Exchange (ETDEWEB)

    Teng, S.; Tebby, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Barcellini-Couget, S. [ODESIA Neosciences, Sophia Antipolis, 400 route des chappes, 06903 Sophia Antipolis (France); De Sousa, G. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Brochot, C. [Models for Toxicology and Ecotoxicology Unit, INERIS, Parc Technologique Alata, BP 2, 60550 Verneuil-en-Halatte (France); Rahmani, R. [INRA, ToxAlim, 400 route des Chappes, BP, 167 06903 Sophia Antipolis, Cedex (France); Pery, A.R.R., E-mail: alexandre.pery@agroparistech.fr [AgroParisTech, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France); INRA, UMR 1402 INRA-AgroParisTech Ecosys, 78850 Thiverval Grignon (France)

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro – in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. - Highlights: • We could predict cell response over repeated exposure to mixtures of cosmetics. • Compounds acted independently on the cells. • Metabolic interactions impacted exposure concentrations to the compounds.

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of numerically obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which reduces to the procedure known in the literature when the step-size is taken to be 1. It is shown that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
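
    The iteration studied here can be sketched for a toy two-component normal mixture with unit variances; this is an illustrative reconstruction of the general idea (a relaxed EM-style update with step-size omega), not the paper's algorithm verbatim, and the data are simulated:

    ```python
    import math
    import random

    def em_step(data, pi, mu1, mu2, omega=1.0):
        """One generalized update for a two-component normal mixture with
        unit variances. omega=1.0 is the classical successive-approximations
        (EM) step; the paper's convergence result covers 0 < omega < 2."""
        # E-step: responsibility of component 1 for each observation.
        resp = []
        for x in data:
            p1 = pi * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step targets (the fixed point of the likelihood equations).
        n1 = sum(resp)
        pi_em = n1 / len(data)
        mu1_em = sum(r * x for r, x in zip(resp, data)) / n1
        mu2_em = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)
        # Deflected-gradient step: move a fraction omega toward the EM update.
        return (pi + omega * (pi_em - pi),
                mu1 + omega * (mu1_em - mu1),
                mu2 + omega * (mu2_em - mu2))

    random.seed(1)
    data = [random.gauss(-2, 1) for _ in range(300)] + \
           [random.gauss(3, 1) for _ in range(300)]
    pi, mu1, mu2 = 0.3, -1.0, 1.0
    for _ in range(50):
        pi, mu1, mu2 = em_step(data, pi, mu1, mu2, omega=1.0)
    ```

    With well-separated components as here, the iteration recovers mixing weight near 0.5 and means near -2 and 3; the paper's point is that step-sizes other than 1 can converge faster when the components are well separated.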

  19. Matrix orderings and their associated skew fields

    International Nuclear Information System (INIS)

    Mahdavi-Hezavehi, M.

    1990-08-01

    Matrix orderings on rings are investigated. It is shown that in the commutative case they are essentially positive cones. This is proved by reducing it to the field case; similarly one can show that on a skew field, matrix positive cones can be reduced to positive cones by using the Dieudonne determinant. Our main result shows that there is a natural bijection between the matrix positive cones on a ring R and the ordered epic R-fields. (author). 7 refs

  20. Analysis of real-time mixture cytotoxicity data following repeated exposure using BK/TD models.

    Science.gov (United States)

    Teng, S; Tebby, C; Barcellini-Couget, S; De Sousa, G; Brochot, C; Rahmani, R; Pery, A R R

    2016-08-15

    Cosmetic products generally consist of multiple ingredients. Thus, cosmetic risk assessment has to deal with mixture toxicity on a long-term scale, which means it has to be assessed in the context of repeated exposure. Given that animal testing has been banned for cosmetics risk assessment, in vitro assays allowing long-term repeated exposure and adapted for in vitro - in vivo extrapolation need to be developed. However, most in vitro tests only assess short-term effects and consider static endpoints, which hinders extrapolation to realistic human exposure scenarios where concentration in target organs varies over time. Thanks to impedance metrics, real-time cell viability monitoring for repeated exposure has become possible. We recently constructed biokinetic/toxicodynamic models (BK/TD) to analyze such data (Teng et al., 2015) for three hepatotoxic cosmetic ingredients: coumarin, isoeugenol and benzophenone-2. In the present study, we aim to apply these models to analyze the dynamics of mixture impedance data using the concepts of concentration addition and independent action. Metabolic interactions between the mixture components were investigated, characterized and implemented in the models, as they impacted the actual cellular exposure. Indeed, cellular metabolism following mixture exposure induced a quick disappearance of the compounds from the exposure system. We showed that isoeugenol substantially decreased the metabolism of benzophenone-2, reducing the disappearance of this compound and enhancing its in vitro toxicity. Apart from this metabolic interaction, no mixtures showed any interaction, and all binary mixtures were successfully modeled by at least one model based on exposure to the individual compounds. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Effects of Test Conditions on APA Rutting and Prediction Modeling for Asphalt Mixtures

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-01-01

    Full Text Available APA rutting tests were conducted for six kinds of asphalt mixtures under air-dry and immersing conditions. The influences of test conditions, including load, temperature, air voids, and moisture, on APA rutting depth were analyzed by using the grey correlation method, and an APA rutting depth prediction model was established. Results show that the modified asphalt mixtures have larger rutting depth ratios of air-dry to immersing conditions, indicating that the modified asphalt mixtures have better antirutting properties and water stability than the matrix asphalt mixtures. The grey correlation degrees of temperature, load, air voids, and immersing conditions on APA rutting depth decrease successively, which means that temperature is the most significant influencing factor. The proposed indoor APA rutting prediction model has good prediction accuracy, and the correlation coefficient between the predicted and the measured rutting depths is 96.3%.
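
    The grey correlation (grey relational analysis) ranking used in this study has a simple closed form; a minimal sketch follows, with hypothetical pre-normalized sequences standing in for the rutting-depth record and two influencing factors (the function name and data are illustrative, not from the paper):

    ```python
    def grey_relational_degree(reference, series, rho=0.5):
        """Grey relational degree between a reference sequence (e.g. rutting
        depth over tests) and a comparison sequence (e.g. temperature), both
        already normalized to comparable scales. rho is the conventional
        distinguishing coefficient, usually 0.5."""
        deltas = [abs(r - s) for r, s in zip(reference, series)]
        d_min, d_max = min(deltas), max(deltas)
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
        return sum(coeffs) / len(coeffs)

    # Hypothetical normalized test records.
    rut = [0.2, 0.5, 0.8, 1.0]
    temp = [0.3, 0.5, 0.7, 1.0]   # tracks rutting closely
    load = [0.2, 0.9, 0.1, 0.3]   # tracks it poorly
    print(grey_relational_degree(rut, temp) > grey_relational_degree(rut, load))  # -> True
    ```

    A higher degree means the factor's sequence tracks the rutting-depth sequence more closely, which is how the study ranks temperature above the other conditions.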

  2. Measurement of the Skewness of Elliptic Flow Fluctuations in PbPb Collisions at $\\sqrt{s_{NN}} = 5.02~\\mathrm{TeV}$

    CERN Document Server

    CMS Collaboration

    2017-01-01

    Event-by-event flow harmonics are studied for PbPb collisions at $\\sqrt{s_{NN}} = 5.02~\\mathrm{TeV}$ using the CMS detector at the LHC. Flow harmonic probability distributions $p\\left(v_2\\right)$ are obtained using particles of $0.3 \\leq p_{T} \\leq 3.0~\\mathrm{GeV}/c$ and $\\left|\\eta\\right| \\leq 1.0$ and are unfolded to remove smearing effects from observed azimuthal particle distributions. Cumulant flow harmonics are determined from the moments of $p\\left(v_2\\right)$ and used to estimate the standardized elliptic flow skewness in $5\\%$ wide centrality bins up to $60\\%$. Hydrodynamic models predict that flow fluctuations will lead to a non-Gaussian component in the flow distributions with a negative skew with respect to the reaction plane. A significant negative skewness is observed for all centrality bins as evidenced by a splitting between $v_2\\left\\{4\\right\\}$ and $v_2\\left\\{6\\right\\}$ cumulants. In addition, elliptic power law distribution fits are made to the $p\\left(v_2\\right)$ distributions to infer in...

  3. Microbial comparative pan-genomics using binomial mixture models

    Directory of Open Access Journals (Sweden)

    Ussery David W

    2009-08-01

    Full Text Available Abstract Background The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2600 gene families in Buchnera aphidicola) to large (around 43000 gene families in Escherichia coli). Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find is always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.

  4. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
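
    The Horvitz-Thompson population-size logic described here can be sketched for the simplest case, a zero-truncated Poisson model: fit the rate from the observed (nonzero) counts, then inflate the observed sample by the estimated probability of being seen at all. The counts below are hypothetical and the single-component Poisson stands in for the mixtures the article treats:

    ```python
    import math

    def fit_ztp(xbar, iters=200):
        """MLE of the Poisson rate under zero truncation: solves
        lam / (1 - exp(-lam)) = xbar by fixed-point iteration.
        Requires a mean count xbar > 1."""
        lam = xbar
        for _ in range(iters):
            lam = xbar * (1.0 - math.exp(-lam))
        return lam

    def horvitz_thompson(counts):
        """Estimate total population size from zero-truncated counts:
        N_hat = n / (1 - P(zero)) under the fitted model."""
        n = len(counts)
        lam = fit_ztp(sum(counts) / n)
        return n / (1.0 - math.exp(-lam))

    # Hypothetical capture counts for the n = 6 observed individuals.
    counts = [1, 1, 2, 1, 3, 2]
    print(round(horvitz_thompson(counts), 1))
    ```

    The article's equivalence result means the same estimator is obtained whether one works with the truncated mixture or the mixture of truncated densities, which is what makes the theoretically easier second formulation safe to use in practice.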

  5. Quantitative analysis of replication-related mutation and selection pressures in bacterial chromosomes and plasmids using generalised GC skew index

    Directory of Open Access Journals (Sweden)

    Suzuki Haruo

    2009-12-01

    Full Text Available Abstract Background Due to their bi-directional replication machinery starting from a single finite origin, bacterial genomes show characteristic nucleotide compositional bias between the two replichores, which can be visualised through GC skew, or (C-G)/(C+G). Although this polarisation is used for computational prediction of replication origins in many bacterial genomes, the degree of GC skew visibility varies widely among different species, necessitating a quantitative measurement of GC skew strength in order to provide confidence measures for GC skew-based predictions of replication origins. Results Here we discuss a quantitative index for the measurement of GC skew strength, named the generalised GC skew index (gGCSI), which is applicable to genomes of any length, including bacterial chromosomes and plasmids. We demonstrate that gGCSI is independent of the window size and can thus be used to compare genomes with different sizes, such as bacterial chromosomes and plasmids. It can suggest the existence of different replication mechanisms in archaea and of rolling-circle replication in plasmids. Correlation of gGCSI values between plasmids and their corresponding host chromosomes suggests that within the same strain, these replicons have reproduced using the same replication machinery and thus exhibit similar strengths of replication strand skew. Conclusions gGCSI can be applied to genomes of any length and thus allows comparative study of replication-related mutation and selection pressures in genomes of different lengths such as bacterial chromosomes and plasmids. Using gGCSI, we showed that replication-related mutation or selection pressure is similar for replicons with similar machinery.
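
    The underlying GC skew statistic, (C-G)/(C+G) computed in windows along the sequence, is straightforward; a minimal sketch (the toy sequence and window size are illustrative, and this computes the raw skew, not the gGCSI index itself):

    ```python
    def gc_skew(seq, window=4):
        """GC skew, (C - G) / (C + G), in consecutive non-overlapping
        windows along a DNA sequence. Windows with no C or G get 0.0."""
        skews = []
        for i in range(0, len(seq) - window + 1, window):
            w = seq[i:i + window].upper()
            c, g = w.count("C"), w.count("G")
            skews.append((c - g) / (c + g) if c + g else 0.0)
        return skews

    print(gc_skew("CCGTGGATCCCA", window=4))  # -> [0.3333333333333333, -1.0, 1.0]
    ```

    In a bacterial chromosome the sign of this statistic typically flips near the replication origin and terminus, which is what origin-prediction methods exploit; the gGCSI quantifies how strong and clean that polarisation is.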

  6. Concentration addition, independent action and generalized concentration addition models for mixture effect prediction of sex hormone synthesis in vitro.

    Directory of Open Access Journals (Sweden)

    Niels Hadrup

    Full Text Available Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA, independent action (IA and generalized concentration addition (GCA models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. 
In addition, the data indicate that in non-potency adjusted mixtures the effects cannot
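
    The two classical reference models compared above have simple closed forms worth sketching; the helper names and component values below are hypothetical illustrations, not from the study (and the GCA model, which handles partial agonists, is not shown):

    ```python
    def ca_ec(fractions, ecs):
        """Concentration addition: effect concentration of the mixture from
        the components' effect concentrations, where fractions are the
        components' proportions in the mixture (summing to 1)."""
        return 1.0 / sum(p / ec for p, ec in zip(fractions, ecs))

    def ia_effect(effects):
        """Independent action: combined fractional effect from the
        individual fractional effects (each in [0, 1]) at the components'
        concentrations in the mixture."""
        prod = 1.0
        for e in effects:
            prod *= (1.0 - e)
        return 1.0 - prod

    # Hypothetical two-chemical mixture, equal mass fractions.
    print(ca_ec([0.5, 0.5], [2.0, 8.0]))      # -> 3.2
    print(round(ia_effect([0.2, 0.3]), 2))    # -> 0.44
    ```

    Both formulas assume full dose-response curves reaching the same maximum; as the abstract notes, chemicals with limited maximal effects break this assumption, which is why the GCA model was needed for the testosterone predictions.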

  7. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  8. FLECHT low flooding rate skewed test series data report

    International Nuclear Information System (INIS)

    Rosal, E.R.; Conway, C.E.; Krepinevich, M.C.

    1977-05-01

    The FLECHT Low Flooding Rate Tests were conducted in an improved original FLECHT Test Facility to provide heat transfer coefficient and entrainment data at forced flooding rates of 1 in./sec. and with electrically heated rod bundles which had cosine and top-skewed axial power profiles. The top-skewed axial power profile test series has now been successfully completed and is here reported. For these tests the rod bundle was enclosed in a low mass cylindrical housing which would minimize the wall housing effects encountered in the cosine test series. These tests examined the effects of initial clad temperature, variable stepped and continuously variable flooding rates, housing heat release, rod peak power, constant low flooding rates, coolant subcooling, hot and cold channel entrainment, and bundle stored and generated power. Data obtained in runs which met the test specifications are reported here, and include rod clad temperatures, turn around and quench times, heat transfer coefficients, inlet flooding rates, overall mass balances, differential pressures and calculated void fractions in the test section, thimble wall and steam temperatures, and exhaust steam and liquid carryover rates.

  9. Excess Properties of Aqueous Mixtures of Methanol: Simple Models Versus Experiment

    Czech Academy of Sciences Publication Activity Database

    Vlček, Lukáš; Nezbeda, Ivo

    roč. 131-132, - (2007), s. 158-162 ISSN 0167-7322. [International Conference on Solution Chemistry /29./. Portorož, 21.08.2005-25.08.2005] R&D Projects: GA AV ČR(CZ) IAA4072303; GA AV ČR(CZ) 1ET400720409 Institutional research plan: CEZ:AV0Z40720504 Keywords : aqueous mixtures * primitive models * water-alcohol mixtures Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.982, year: 2007

  10. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. The image reconstruction for ECT is an inverse problem to find the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model is commonly used during the normalization process, which assumes that the materials are distributed in parallel. Thus, the normalized capacitance is a linear function of the measured capacitance. A more recently used model is a series normalization model, which results in the normalized capacitance being a nonlinear function of the measured capacitance. The newest model presented is based on electrical field centre lines (EFCL), and is a mixture of the two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model.
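
    The parallel and series normalization models contrasted here reduce to two short formulas; a sketch with hypothetical calibration capacitances (the function names and values are illustrative):

    ```python
    def parallel_norm(cm, cl, ch):
        """Parallel model: normalized capacitance is a linear function of
        the measured capacitance cm, given the empty-pipe (cl) and
        full-pipe (ch) calibration values."""
        return (cm - cl) / (ch - cl)

    def series_norm(cm, cl, ch):
        """Series model: normalized capacitance is a nonlinear function of
        the measured capacitance, built from reciprocal capacitances."""
        return (1.0 / cm - 1.0 / cl) / (1.0 / ch - 1.0 / cl)

    # Hypothetical calibration for one electrode pair, in pF:
    # empty pipe (low permittivity) and full pipe (high permittivity).
    cl, ch = 2.0, 5.0
    cm = 3.0  # a measurement between the two calibration points
    print(round(parallel_norm(cm, cl, ch), 3))  # -> 0.333
    print(round(series_norm(cm, cl, ch), 3))    # -> 0.556
    ```

    Both models agree at the calibration endpoints (0 at cl, 1 at ch) but disagree in between, which is why the choice of model changes the sensitivity matrix and hence the reconstructed image.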

  11. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
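
    One way to see the asymmetry the article discusses is to simulate the distribution of the product directly and read off percentile limits; the coefficient estimates and standard errors below are hypothetical, and the Monte Carlo approach stands in for the analytic distribution-of-the-product method:

    ```python
    import random

    def product_ci(a_hat, se_a, b_hat, se_b, n=200_000, alpha=0.05, seed=7):
        """Monte Carlo confidence interval for an indirect effect a*b,
        based on the (asymmetric) distribution of the product of two
        independent normal random variables."""
        rng = random.Random(seed)
        prods = sorted(rng.gauss(a_hat, se_a) * rng.gauss(b_hat, se_b)
                       for _ in range(n))
        lo = prods[int(n * alpha / 2)]
        hi = prods[int(n * (1 - alpha / 2))]
        return lo, hi

    # Hypothetical regression coefficients and standard errors.
    lo, hi = product_ci(0.4, 0.1, 0.5, 0.15)
    # The interval is not symmetric around a_hat * b_hat = 0.20, unlike a
    # normal-theory interval of the form a_hat*b_hat +/- 1.96 * SE.
    print(round(0.2 - lo, 3), round(hi - 0.2, 3))
    ```

    With both coefficients positive, the product's distribution is positively skewed, so the upper limit sits farther from the point estimate than the lower limit, which is exactly the coverage imbalance the article attributes to the moments of the product.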

  12. Low reproductive skew despite high male-biased operational sex ratio in a glass frog with paternal care.

    Science.gov (United States)

    Mangold, Alexandra; Trenkwalder, Katharina; Ringler, Max; Hödl, Walter; Ringler, Eva

    2015-09-03

    Reproductive skew, the uneven distribution of reproductive success among individuals, is a common feature of many animal populations. Several scenarios have been proposed to favour either high or low levels of reproductive skew. In particular, a male-biased operational sex ratio and the asynchronous arrival of females are expected to cause high variation in reproductive success among males. Recently it has been suggested that the type of benefits provided by males (fixed vs. dilutable) could also strongly impact individual mating patterns, thereby affecting reproductive skew. We tested this hypothesis in Hyalinobatrachium valerioi, a Neotropical glass frog with prolonged breeding and paternal care. We monitored and genetically sampled a natural population in southwestern Costa Rica during the breeding season in 2012 and performed parentage analysis of adult frogs and tadpoles to investigate individual mating frequencies, possible mating preferences, and estimate reproductive skew in males and females. We identified a polygamous mating system, where high proportions of males (69 %) and females (94 %) reproduced successfully. The variance in male mating success could largely be attributed to differences in time spent calling at the reproductive site, but not to body size or relatedness. Female H. valerioi were not choosy and mated indiscriminately with available males. Our findings support the hypothesis that dilutable male benefits - such as parental care - can favour female polyandry and maintain low levels of reproductive skew among males within a population, even in the presence of direct male-male competition and a highly male-biased operational sex ratio. We hypothesize that low male reproductive skew might be a general characteristic in prolonged breeders with paternal care.

  13. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    Science.gov (United States)

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate, and the completely active pattern means that the proportion of activity is exactly one, and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordered activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when they are greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in

  14. Effects of trophic skewing of species richness on ecosystem functioning in a diverse marine community.

    Directory of Open Access Journals (Sweden)

    Pamela L Reynolds

    Full Text Available Widespread overharvesting of top consumers of the world's ecosystems has "skewed" food webs, in terms of biomass and species richness, towards a generally greater domination at lower trophic levels. This skewing is exacerbated in locations where exotic species are predominantly low-trophic level consumers such as benthic macrophytes, detritivores, and filter feeders. However, in some systems where numerous exotic predators have been added, sometimes purposefully as in many freshwater systems, food webs are skewed in the opposite direction toward consumer dominance. Little is known about how such modifications to food web topology, e.g., changes in the ratio of predator to prey species richness, affect ecosystem functioning. We experimentally measured the effects of trophic skew on production in an estuarine food web by manipulating ratios of species richness across three trophic levels in experimental mesocosms. After 24 days, increasing macroalgal richness promoted both plant biomass and grazer abundance, although the positive effect on plant biomass disappeared in the presence of grazers. The strongest trophic cascade on the experimentally stocked macroalgae emerged in communities with a greater ratio of prey to predator richness (bottom-rich food webs, while stronger cascades on the accumulation of naturally colonizing algae (primarily microalgae with some early successional macroalgae that recruited and grew in the mesocosms generally emerged in communities with greater predator to prey richness (the more top-rich food webs. These results suggest that trophic skewing of species richness and overall changes in food web topology can influence marine community structure and food web dynamics in complex ways, emphasizing the need for multitrophic approaches to understand the consequences of marine extinctions and invasions.

  15. Concentration addition and independent action model: Which is better in predicting the toxicity for metal mixtures on zebrafish larvae.

    Science.gov (United States)

    Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin

    2018-01-01

    The joint toxicity of chemical mixtures has emerged as a popular topic, particularly the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the mode of toxic action may depend on the combinations and concentrations of the tested metal mixtures. Statistical analysis of antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions for the Cd-Pb mixtures, and slight antagonistic interactions for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
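As an illustration of how the two reference models are computed (a minimal sketch, not the authors' code; the Hill concentration-response parameters below are hypothetical), concentration addition solves for the effect level at which concentrations scaled by equi-effective concentrations sum to one, while independent action multiplies the probabilities of not responding:

```python
import math

def hill_effect(c, ec50, h):
    """Fractional effect (0-1) of a single compound at concentration c."""
    return c**h / (c**h + ec50**h)

def ec_x(x, ec50, h):
    """Concentration producing fractional effect x (inverse Hill curve)."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / h)

def ia_effect(concs, params):
    """Independent action: E_mix = 1 - prod_i (1 - E_i(c_i))."""
    p = 1.0
    for c, (ec50, h) in zip(concs, params):
        p *= 1.0 - hill_effect(c, ec50, h)
    return 1.0 - p

def ca_effect(concs, params, tol=1e-10):
    """Concentration addition: solve sum_i c_i / EC_x,i = 1 for x.

    The left-hand side is decreasing in x, so plain bisection suffices.
    """
    def g(x):
        return sum(c / ec_x(x, ec50, h) for c, (ec50, h) in zip(concs, params)) - 1.0
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid  # effect level too small, scaled sum still above one
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a single compound the CA prediction collapses to the compound's own concentration-response curve, which is a useful sanity check on any implementation.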

  16. Mixture distributions of wind speed in the UAE

    Science.gov (United States)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distribution is commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of them. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used in the current study. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: Normal, Gamma, Weibull, and Extreme Value Type-One (EV-1). Three parameter estimation methods were employed to estimate the parameters of the mixture distributions: the Expectation-Maximization (EM) algorithm, the Least Squares method, and the Meta-Heuristic Maximum Likelihood (MHML) method. In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for
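As a minimal, self-contained sketch of how the Expectation-Maximization algorithm fits such a mixture (illustrated here with a two-component Normal mixture on synthetic bimodal data, rather than the Weibull/Gamma/EV-1 components and real wind data used in the study):

```python
import math
import random

def _normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def _std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def em_two_normal(data, n_iter=200):
    """Fit a two-component univariate Normal mixture by EM.

    Returns (weights, means, standard deviations). Initialization is a
    crude median split; production code would use k-means or many restarts.
    """
    xs = sorted(data)
    n = len(xs)
    lo, hi = xs[: n // 2], xs[n // 2:]
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    sd = [max(1e-3, _std(lo)), max(1e-3, _std(hi))]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * _normal_pdf(x, mu[k], sd[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates of weights, means, std devs
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(1e-3, math.sqrt(var))
    return w, mu, sd

# Demo on synthetic bimodal "wind speed" data (two well-separated regimes)
random.seed(1)
data = [random.gauss(3.0, 0.6) for _ in range(400)] + \
       [random.gauss(9.0, 1.2) for _ in range(400)]
weights, means, stds = em_two_normal(data)
```

The same E-step/M-step structure carries over to Weibull or Gamma components, except that the M-step then requires a numerical maximization instead of closed-form updates.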

  17. A novel technique for estimation of skew in binary text document ...

    Indian Academy of Sciences (India)


    Gatos et al (1997) have proposed a new skew detection method based on the information ..... different books, magazines and journals. ..... Duda R O, Hart P E 1973 Pattern classification and scene analysis (New York: Wiley-Interscience).

  18. Latent Transition Analysis with a Mixture Item Response Theory Measurement Model

    Science.gov (United States)

    Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian

    2010-01-01

    A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…

  19. Modelling of associating mixtures for applications in the oil & gas and chemical industries

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Folas, Georgios; Muro Sunè, Nuria

    2007-01-01

    Thermodynamic properties and phase equilibria of associating mixtures cannot often be satisfactorily modelled using conventional models such as cubic equations of state. CPA (cubic-plus-association) is an equation of state (EoS), which combines the SRK EoS with the association term of SAFT. For non......-alcohol (glycol)-alkanes and certain acid and amine-containing mixtures. Recent results include glycol-aromatic hydrocarbons including multiphase, multicomponent equilibria and gas hydrate calculations in combination with the van der Waals-Platteeuw model. This article will outline some new applications...... thermodynamic models especially those combining cubic EoS with local composition activity coefficient models are included. (C) 2007 Elsevier B.V. All rights reserved....

  20. Mixture

    Directory of Open Access Journals (Sweden)

    Silva-Aguilar Martín

    2011-01-01

    Full Text Available Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and antioxidant response were determined. Metal mixture treatment induced an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with generation of reactive oxygen species. Cotreatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while a total block of the transforming capacity was observed in the promotion phase. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity.

  1. Skew quad compensation for SPEAR minibeta optics

    International Nuclear Information System (INIS)

    Wille, K.

    1984-06-01

    With the new minibeta insertion for SPEAR, the betatron coupling and the perturbations of beam optics caused by the solenoid field of the MARK III detector can't be compensated by the simple coils used so far. Therefore, another scheme with four skew quads arranged in two families has been chosen. Even though this scheme doesn't compensate the effect of the solenoid on the beam completely, the residual emittance coupling is much less than 1%, which should be sufficient under all running conditions. The major advantage of this concept is its simplicity.

  2. Does Realized Skewness Predict the Cross-Section of Equity Returns?

    DEFF Research Database (Denmark)

    Amaya, Diego; Christoffersen, Peter; Jacobs, Kris

    We use intraday data to compute weekly realized variance, skewness, and kurtosis for equity returns and study the realized moments' time-series and cross-sectional properties. We investigate if this week's realized moments are informative for the cross-section of next week's stock returns. We...

  3. On the skew-symmetric character of the couple-stress tensor

    OpenAIRE

    Hadjesfandiari, Ali R.

    2013-01-01

    In this paper, the skew-symmetric character of the couple-stress tensor is established as the result of arguments from tensor analysis. Consequently, the couple-stress pseudo-tensor has a true vectorial character. The fundamental step in this development is that the isotropic couple-stress tensor cannot exist.

  4. Dual-Mode Measurement and Theoretical Analysis of Evaporation Kinetics of Binary Mixtures

    Science.gov (United States)

    Song, Hanyu; He, Chi-Ruei; Basdeo, Carl; Li, Ji-Qin; Ye, Dezhuang; Kalonia, Devendra; Li, Si-Yu; Fan, Tai-Hsi

    Theoretical and experimental investigations are presented for the precision measurement of evaporation kinetics of binary mixtures using a quartz crystal resonator. A thin layer of a light alcohol mixture, containing a volatile component (methanol) and a much less volatile one (1-butanol), is deployed on top of the resonator. The normal or acoustic mode detects the moving liquid-vapor interface due to evaporation with spatial precision on the order of microns, while the shear mode is simultaneously used for in-situ detection of the point viscosity or concentration of the mixture near the resonator. A one-dimensional theoretical model is developed to describe the underlying mass transfer and interfacial transport phenomena. Along with the modeling results, the transient evaporation kinetics, the moving interface, and the stratification of viscosity of the liquid mixture during evaporation are simultaneously measured by the impedance response of the shear and longitudinal waves emitted from the resonator. The system can be used to characterize complicated evaporation kinetics involving multi-component fuels. American Chemical Society Petroleum Research Fund, NSF CMMI-0952646.

  5. Breeding system and reproductive skew in a highly polygynous ant population

    DEFF Research Database (Denmark)

    Haag-Liautard, C.; Pedersen, Jes Søe; Ovaskainen, O.

    2008-01-01

    Factors affecting relatedness among nest members in ant colonies with high queen number are still poorly understood. In order to identify the major determinants of nest kin structure, we conducted a detailed analysis of the breeding system of the ant Formica exsecta. We estimated the number of mature queens by mark-release-recapture in 29 nests and dissected a sub-sample of queens to assess their reproductive status. We also used microsatellites to estimate relatedness within and between all classes of nestmates (queens, their mates, worker brood, queen brood and male brood). Queen number was very high, with an arithmetic mean of 253 per nest. Most queens (90%) were reproductively active, consistent with the genetic analyses revealing that there was only a minimal reproductive skew among nestmate queens. Despite the high queen number and low reproductive skew, almost all classes...

  6. Data Requirements and Modeling for Gas Hydrate-Related Mixtures and a Comparison of Two Association Models

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Aloupis, Georgios; Kontogeorgis, Georgios M.

    2017-01-01

    This work compares the performance of the CPA and sPC-SAFT EOS for modeling the fluid-phase equilibria of gas hydrate-related systems and will try to explore how the models can help in suggesting experimental measurements. These systems contain water, hydrocarbon (alkane or aromatic), and either methanol or monoethylene glycol...... parameter sets have been chosen for the sPC-SAFT EOS for a fair comparison. The comparisons are made for pure fluid properties, vapor-liquid equilibria, and liquid-liquid equilibria of binary and ternary mixtures as well as vapor-liquid-liquid equilibria of quaternary mixtures. The results show, from...

  7. A Cable-Passive Damper System for Sway and Skew Motion Control of a Crane Spreader

    Directory of Open Access Journals (Sweden)

    La Duc Viet

    2015-01-01

    Full Text Available While the crane control problem is often approached by applying a certain active control command to some parts of the crane, this paper proposes a cable-passive damper system to reduce the vibration of a four-cable suspended crane spreader. The residual sway and skew motions of a crane spreader always produce the angle deflections between the crane cables and the crane spreader. The idea in this paper is to convert those deflections into energy dissipated by the viscous dampers, which connect the cables and the spreader. The proposed damper system is effective in reducing spreader sway and skew motions. Moreover, the optimal damping coefficient can be found analytically by minimizing the time integral of system energy. The numerical simulations show that the proposed passive system can assist the input shaping control of the trolley motion in reducing both sway and skew responses.

  8. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  9. Forces in wingwalls from thermal expansion of skewed semi-integral bridges.

    Science.gov (United States)

    2010-11-01

    Jointless bridges, such as semi-integral and integral bridges, have become more popular in recent years because of their simplicity in the construction and the elimination of high costs related to joint maintenance. Prior research has shown that skew...

  10. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ²) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
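For context, the single-solute Szyszkowski-Langmuir form that these models build on can be written as follows (a standard textbook form, not reproduced from the paper; the per-compound coefficients $a_i$, $b_i$ are fitted empirically, and the concentration weighting for mixtures follows Henning et al. (2005)):

```latex
% Single organic solute at concentration C and temperature T:
\sigma(C) = \sigma_w - a\,T \ln\left(1 + b\,C\right)
% Concentration-weighted extension to a mixture with total organic
% concentration C_{tot} and component concentrations C_i:
\sigma_{\mathrm{mix}} = \sigma_w
  - \sum_i \frac{C_i}{C_{\mathrm{tot}}}\, a_i\, T \ln\left(1 + b_i\, C_{\mathrm{tot}}\right)
```

Here $\sigma_w$ is the surface tension of pure water; the Tuckermann approach adds an explicit salt term to this expression, whereas the implicit method recommended above absorbs the salt effect into $a_i$ and $b_i$ fitted in the presence of salt.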

  11. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach overcomes the results of a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach of Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.

  12. Combinatorial bounds on the α-divergence of univariate mixture models

    KAUST Repository

    Nielsen, Frank; Sun, Ke

    2017-01-01

    We derive lower- and upper-bounds of α-divergence between univariate mixture models with components in the exponential family. Three pairs of bounds are presented in order with increasing quality and increasing computational cost. They are verified

  13. General mixture item response models with different item response structures: Exposition with an application to Likert scales.

    Science.gov (United States)

    Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong

    2018-01-10

    This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.

  14. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
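In common notation (a standard formulation of the mixture cure model, written here for context rather than copied from the paper), the population survival function mixes a cured fraction with a proportional hazards model for the uncured:

```latex
S_{\mathrm{pop}}(t \mid \mathbf{x}, \mathbf{z})
  = \pi(\mathbf{z}) + \bigl(1 - \pi(\mathbf{z})\bigr)\, S_u(t \mid \mathbf{x}),
\qquad
S_u(t \mid \mathbf{x}) = S_0(t)^{\exp(\mathbf{x}^{\top}\boldsymbol{\beta})},
```

where $\pi(\mathbf{z})$ is the probability of being cured (event-free), typically linked to covariates $\mathbf{z}$ through logistic regression, $S_0(t)$ is a baseline survival function (left unspecified in the nonparametric approach), and $\boldsymbol{\beta}$ captures covariate effects on the failure time of uncured patients.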

  15. Finite mixture models for sub-pixel coastal land cover classification

    CSIR Research Space (South Africa)

    Ritchie, Michaela C

    2017-05-01

    Full Text Available Finite Mixture Models for Sub-pixel Coastal Land Cover Classification. M. Ritchie, M. Lück-Vogel, P. Debba, V. Goodall. ISRSE-37, Tshwane, South Africa, 10 May 2017. [Presentation slides; only fragments are recoverable: a study area around False Bay, South Africa (Strand, Gordon's Bay) using WorldView-2 imagery; land cover classes including urban, herbaceous vegetation, shadow, sparse vegetation, water, and woody vegetation; and classification methods including Maximum Likelihood Classification (MLC), Gaussian Mixture Discriminant Analysis (GMDA), and t-distribution Mixture Discriminant Analysis.]

  16. Use of finite mixture distribution models in the analysis of wind energy in the Canarian Archipelago

    International Nuclear Information System (INIS)

    Carta, Jose Antonio; Ramirez, Penelope

    2007-01-01

    The statistical characteristics of hourly mean wind speed data recorded at 16 weather stations located in the Canarian Archipelago are analyzed in this paper. As a result of this analysis we see that the typical two-parameter Weibull wind speed distribution (W-pdf) does not accurately represent all wind regimes observed in that region. However, a Singly Truncated from below Normal Weibull mixture distribution (TNW-pdf) and a two-component Weibull mixture distribution (WW-pdf) developed here do provide very good fits for both unimodal and bimodal wind speed frequency distributions observed in that region and give smaller relative errors in determining the annual mean wind power density. The parameters of the distributions are estimated using the least squares method, which is resolved in this paper using the Levenberg-Marquardt algorithm. The suitability of the distributions is judged from the probability plot correlation coefficient, R², adjusted for degrees of freedom. Based on the results obtained, we conclude that the two mixture distributions proposed here provide very flexible models for wind speed studies and can be applied in a widespread manner to represent the wind regimes in the Canarian Archipelago and in other regions with similar characteristics. The TNW-pdf takes into account the frequency of null winds, whereas the WW-pdf and W-pdf do not. It can, therefore, better represent wind regimes with high percentages of null wind speeds. However, calculation of the TNW-pdf is markedly slower

  17. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Directory of Open Access Journals (Sweden)

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
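As a toy illustration of the Chinese restaurant process underlying these nonparametric mixtures (the standard unbounded construction, not the authors' finite-capacity CRFC variant):

```python
import random

def crp_partition(n, alpha, rng=None):
    """Draw a random partition of n customers from a Chinese restaurant process.

    Customer i joins an existing table with probability proportional to that
    table's occupancy, or opens a new table with probability proportional to
    the concentration parameter alpha. Returns the list of table sizes.
    """
    rng = rng or random.Random()
    tables = []
    for i in range(n):
        # total unnormalized weight: i seated customers plus alpha for a new table
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t in range(len(tables)):
            acc += tables[t]
            if r < acc:
                tables[t] += 1  # join existing table t
                break
        else:
            tables.append(1)  # open a new table, prob alpha / (i + alpha)
    return tables
```

Larger `alpha` yields more, smaller clusters; in a mixture model each table corresponds to one mixture component, so the number of components is inferred rather than fixed in advance.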

  18. Normalization for Implied Volatility

    OpenAIRE

    Fukasawa, Masaaki

    2010-01-01

    We study specific nonlinear transformations of the Black-Scholes implied volatility to show remarkable properties of the volatility surface. We give model-free bounds on the implied volatility skew and pricing formulas for European options written in terms of the implied volatility. In particular, we prove elegant formulas for the fair strikes of the variance swap and the gamma swap.

  19. Modelling phase equilibria for acid gas mixtures using the CPA equation of state. Part V: Multicomponent mixtures containing CO2 and alcohols

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Kontogeorgis, Georgios M.

    2015-01-01

    This work evaluates CPA for ternary and multicomponent CO2 mixtures containing alcohols (methanol, ethanol or propanol), water and hydrocarbons. It belongs to a series of studies aiming to arrive at a single "engineering approach" for applying CPA to acid gas mixtures, without introducing significant changes to the model. In this direction, CPA results were obtained using various approaches, i.e. different association schemes for pure CO2 (assuming that it is a non-associating compound, or that it is a self-associating fluid with two, three or four association sites) and different possibilities for modelling mixtures of CO2 with water and alcohols (only use of one interaction parameter kij, or assuming cross-association interactions and obtaining the relevant parameters either via a combining rule or using an experimental value for the cross-association energy). It is concluded that CPA is a powerful model...

  20. Detecting Math Anxiety with a Mixture Partial Credit Model

    Science.gov (United States)

    Ölmez, Ibrahim Burak; Cohen, Allan S.

    2017-01-01

    The purpose of this study was to investigate a new methodology for detection of differences in middle grades students' math anxiety. A mixture partial credit model analysis revealed two distinct latent classes based on homogeneities in response patterns within each latent class. Students in Class 1 had less anxiety about apprehension of math…

  1. The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps

    Science.gov (United States)

    Simpson, D. J. W.

    2018-05-01

    In two-parameter bifurcation diagrams of piecewise-linear continuous maps on R^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
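A minimal sketch of such a map (illustrative parameter values, not taken from the paper): a continuous, degree-one, piecewise-linear circle map with two slopes, where the second slope is fixed by continuity of the lift, and the rotation number is estimated by iterating the lift:

```python
import math

def make_lift(slope1, c, b):
    """Lift F of a continuous, degree-one, piecewise-linear circle map with
    slope `slope1` on [0, c) and slope `slope2` on [c, 1); slope2 is fixed
    by continuity together with the degree-one condition F(x + 1) = F(x) + 1,
    which requires slope1 * c + slope2 * (1 - c) = 1."""
    slope2 = (1.0 - slope1 * c) / (1.0 - c)
    def F(x):
        n = math.floor(x)
        y = x - n  # fractional part in [0, 1)
        if y < c:
            fy = b + slope1 * y
        else:
            fy = b + slope1 * c + slope2 * (y - c)
        return n + fy
    return F

def rotation_number(F, n_iter=5000, x0=0.0):
    """Estimate rho = lim_n (F^n(x0) - x0) / n by direct iteration."""
    x = x0
    for _ in range(n_iter):
        x = F(x)
    return (x - x0) / n_iter
```

With equal slopes the map reduces to a rigid rotation by b, so `rotation_number` returns b; unequal slopes produce the intervals of constant rational rotation number (mode-locking) discussed above.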

  2. Subordinate wasps are more aggressive in colonies with low reproductive skew

    DEFF Research Database (Denmark)

    Fanelli, D.; Boomsma, Jacobus Jan; Turillazzi, S.

    2008-01-01

    The small societies of primitively eusocial wasps have provided interesting testing grounds for reproductive skew theory because all individuals have similar reproductive potential, which is unusual in social insects but common in vertebrate societies. Aggression is a key parameter in testing the...

  3. A general mixture theory. I. Mixtures of spherical molecules

    Science.gov (United States)

    Hamad, Esam Z.

    1996-08-01

    We present a new general theory for obtaining mixture properties from the pure species equations of state. The theory addresses the composition and the unlike interactions dependence of mixture equation of state. The density expansion of the mixture equation gives the exact composition dependence of all virial coefficients. The theory introduces multiple-index parameters that can be calculated from binary unlike interaction parameters. In this first part of the work, details are presented for the first and second levels of approximations for spherical molecules. The second order model is simple and very accurate. It predicts the compressibility factor of additive hard spheres within simulation uncertainty (equimolar with size ratio of three). For nonadditive hard spheres, comparison with compressibility factor simulation data over a wide range of density, composition, and nonadditivity parameter, gave an average error of 2%. For mixtures of Lennard-Jones molecules, the model predictions are better than the Weeks-Chandler-Anderson perturbation theory.
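For reference, the exact composition dependence of the mixture virial coefficients that the density expansion yields (mentioned in the abstract) is the standard polynomial-in-composition form:

```latex
B_{\text{mix}}(T) = \sum_i \sum_j x_i x_j \, B_{ij}(T), \qquad
C_{\text{mix}}(T) = \sum_i \sum_j \sum_k x_i x_j x_k \, C_{ijk}(T),
```

where the $x_i$ are mole fractions and the cross coefficients $B_{ij}$, $C_{ijk}$ carry the unlike-interaction dependence that the theory's multiple-index parameters are designed to capture.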

  4. A globally accurate theory for a class of binary mixture models

    Science.gov (United States)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  5. Guided ultrasonic wave beam skew in silicon wafers

    Science.gov (United States)

    Pizzolato, Marco; Masserey, Bernard; Robyr, Jean-Luc; Fromme, Paul

    2018-04-01

    In the photovoltaic industry, monocrystalline silicon wafers are employed for solar cells with high conversion efficiency. Micro-cracks induced by the cutting process in the thin wafers can lead to brittle wafer fracture. Guided ultrasonic waves would offer an efficient methodology for the in-process non-destructive testing of wafers to assess micro-crack density. The material anisotropy of the monocrystalline silicon leads to variations of the guided wave characteristics, depending on the propagation direction relative to the crystal orientation. Selective guided ultrasonic wave excitation was achieved using a contact piezoelectric transducer with custom-made wedges for the A0 and S0 Lamb wave modes and a transducer holder to achieve controlled contact pressure and orientation. The out-of-plane component of the guided wave propagation was measured using a non-contact laser interferometer. The phase slowness (velocity) of the two fundamental Lamb wave modes was measured experimentally for varying propagation directions relative to the crystal orientation and found to match theoretical predictions. Significant wave beam skew was observed experimentally, especially for the S0 mode, and investigated from 3D finite element simulations. Good agreement was found with the theoretical predictions based on nominal material properties of the silicon wafer. The important contribution of guided wave beam skewing effects for the non-destructive testing of silicon wafers was demonstrated.

  6. Generating a normalized geometric liver model with warping

    International Nuclear Information System (INIS)

    Boes, J.L.; Weymouth, T.E.; Meyer, C.R.; Quint, L.E.; Bland, P.H.; Bookstein, F.L.

    1990-01-01

    This paper reports on the automated determination of the liver surface in abdominal CT scans for radiation treatment, surgery planning, and anatomic visualization. The normalized geometric model of the liver is generated by averaging registered outlines from a set of 15 studies of normal liver. The outlines have been registered with the use of thin-plate spline warping based on a set of five homologous landmarks. Thus, the model consists of an average of the surface and a set of five anatomic landmarks. The accuracy of the model is measured against both the set of studies used in model generation and an alternate set of 15 normal studies with use of, as an error measure, the ratio of nonoverlapping model and study volume to total model volume

  7. Skew redundant MEMS IMU calibration using a Kalman filter

    International Nuclear Information System (INIS)

    Jafari, M; Sahebjameyan, M; Moshiri, B; Najafabadi, T A

    2015-01-01

    In this paper, a novel calibration procedure for skew redundant inertial measurement units (SRIMUs) based on micro-electro-mechanical systems (MEMS) is proposed. A general model of the SRIMU measurements is derived which contains the effects of bias, scale factor error and misalignments. For greater accuracy, the effects of the lever arms from the accelerometers to the center of the table are modeled and compensated in the calibration procedure. Two separate Kalman filters (KFs) are proposed to perform the estimation of error parameters for gyroscopes and accelerometers. The predictive error minimization (PEM) stochastic modeling method is used to simultaneously model the effect of bias instability and random walk noise on the calibration Kalman filters to diminish biased estimations. The proposed procedure is simulated numerically, and the results agree with the experiments. The calibration maneuvers are applied using a two-axis angle turntable in such a way that the persistency of excitation (PE) condition for parameter estimation is met. For this purpose, a trapezoidal calibration profile is utilized to excite the different deterministic error parameters of the accelerometers, and a pulse profile is used for the gyroscopes. Furthermore, to evaluate the performance of the proposed KF calibration method, a conventional least squares (LS) calibration procedure is derived for the SRIMUs, and the simulation and experimental results compare the functionality of the two proposed methods with each other.

  8. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems.

    Science.gov (United States)

    Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza

    2018-02-27

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and executed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers, and of all combinations of three accelerometers, has been carried out. Consequently, the biases, scale factors, and misalignment factors of the accelerometers have been successfully estimated. By introducing faults into the sensors in the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.

  9. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verification of the possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as the components of exhaled air. The model reference mixtures were prepared at three concentration levels (10 ppb, 25 ppb, 50 ppb v/v) of each component, except for the COPD markers. The concentration of the COPD markers in the mixtures ranged from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture, and only for a limited number of basic mixture components.
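The PCA step used in this study is the standard projection onto leading principal components. A minimal sketch (the 6-sensor data below is synthetic and hypothetical, standing in for the FIGARO sensor responses):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Center X and project its rows onto the leading principal components
    obtained from the singular value decomposition of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
# hypothetical responses of a 6-sensor array to two groups of gas mixtures
low_marker = rng.normal(0.0, 0.1, size=(20, 6))
high_marker = rng.normal(1.0, 0.1, size=(20, 6))
scores = pca_scores(np.vstack([low_marker, high_marker]))
# PC1 separates the two groups, mimicking the score plots used in such studies.
```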

  10. Symmetries and structure of skewed and double distributions

    International Nuclear Information System (INIS)

    Radyushkin, A.V.

    1998-01-01

    Extending the concept of parton densities onto nonforward matrix elements ⟨p′|O(0,z)|p⟩ of quark and gluon light-cone operators, one can use two types of nonperturbative functions: double distributions (DDs) f(x,α;t), F(x,y;t) and skewed (off- and nonforward) parton distributions (SPDs) H(x,ξ;t), F_ζ(X;t). The authors treat DDs as primary objects producing SPDs after integration. They emphasize the role of DDs in understanding the interplay between the x and ζ (ξ) dependences of SPDs. In particular, the use of DDs is crucial to secure the polynomiality condition: Nth moments of SPDs are Nth-degree polynomials in the relevant skewedness parameter ζ or ξ. They propose simple ansätze for DDs having correct spectral and symmetry properties and derive model expressions for SPDs satisfying all known constraints. Finally, they argue that for small skewedness, one can obtain SPDs from the usual parton densities by averaging the latter with an appropriate weight over the region [X−ζ, X] (or [x−ξ, x+ξ])

  11. Modeling phase equilibria for acid gas mixtures using the CPA equation of state. Part IV. Applications to mixtures of CO2 with alkanes

    DEFF Research Database (Denmark)

    Tsivintzelis, Ioannis; Ali, Shahid; Kontogeorgis, Georgios

    2015-01-01

    The thermodynamic properties of pure gaseous, liquid or supercritical CO2 and CO2 mixtures with hydrocarbons and other compounds such as water, alcohols, and glycols are very important in many processes in the oil and gas industry. Design of such processes requires use of accurate thermodynamic...... models, capable of predicting the complex phase behavior of multicomponent mixtures as well as their volumetric properties. In this direction, over the last several years, the cubic-plus-association (CPA) thermodynamic model has been successfully used for describing volumetric properties and phase...

  12. I-optimal mixture designs

    OpenAIRE

    GOOS, Peter; JONES, Bradley; SYAFITRI, Utami

    2013-01-01

    In mixture experiments, the factors under study are proportions of the ingredients of a mixture. The special nature of the factors in a mixture experiment necessitates specific types of regression models, and specific types of experimental designs. Although mixture experiments usually are intended to predict the response(s) for all possible formulations of the mixture and to identify optimal proportions for each of the ingredients, little research has been done concerning their I-optimal desi...

  13. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
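For orientation, the first model's internal-energy balance for an adiabatic, constant-volume deflagration reduces, under ideal-gas assumptions, to the standard AICC (adiabatic isochoric complete combustion) relations. This is a generic sketch of that class of model, not the authors' exact formulation:

```latex
% Energy balance at constant volume (no heat loss, no work):
U_{\text{react}}(T_0) = U_{\text{prod}}(T_{\text{AICC}})
% Ideal-gas pressure magnification after complete combustion:
\frac{p_{\text{AICC}}}{p_0} = \frac{n_{\text{prod}}\, T_{\text{AICC}}}{n_{\text{react}}\, T_0}
```

Dissociation, which the abstract emphasizes, lowers $T_{\text{AICC}}$ and changes $n_{\text{prod}}$ relative to the complete-combustion estimate, which is why the authors' second, free-energy-minimization model matters near stoichiometric compositions.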

  14. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    International Nuclear Information System (INIS)

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar

    2016-01-01

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.

  15. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    International Nuclear Information System (INIS)

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling of shock wave loading of mixtures with the SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. Equations of state of the Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the medium. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with the experimental data derived by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is described.

  16. Incorporating Skew into RMS Surface Roughness Probability Distribution

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
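The effect described here — a symmetric Gaussian fit overestimating the mode of a right-skewed roughness distribution — can be reproduced with an asymmetric (skew-normal) fit. A sketch on synthetic data (the sample below is simulated, with hypothetical parameters, not the paper's measurements):

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(0)
# synthetic right-skewed "RMS roughness" sample (hypothetical units)
data = skewnorm.rvs(a=5.0, loc=1.0, scale=0.5, size=5000, random_state=rng)

mu, sigma = norm.fit(data)                    # symmetric fit: mode == mean == mu
a_hat, loc_hat, scale_hat = skewnorm.fit(data)  # asymmetric fit

# locate the mode of the fitted skew-normal on a fine grid
xs = np.linspace(data.min(), data.max(), 4001)
mode_skew = xs[np.argmax(skewnorm.pdf(xs, a_hat, loc=loc_hat, scale=scale_hat))]
# For right-skewed data the Gaussian's mode (its mean) lies above the
# skew-normal mode, illustrating the overestimate noted in the abstract.
```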

  17. Adapting cultural mixture modeling for continuous measures of knowledge and memory fluency.

    Science.gov (United States)

    Tan, Yin-Yin Sarah; Mueller, Shane T

    2016-09-01

    Previous research (e.g., cultural consensus theory (Romney, Weller, & Batchelder, American Anthropologist, 88, 313-338, 1986); cultural mixture modeling (Mueller & Veinott, 2008)) has used overt response patterns (i.e., responses to questionnaires and surveys) to identify whether a group shares a single coherent attitude or belief set. Yet many domains in social science have focused on implicit attitudes that are not apparent in overt responses but still may be detected via response time patterns. We propose a method for modeling response times as a mixture of Gaussians, adapting the strong-consensus model of cultural mixture modeling to model this implicit measure of knowledge strength. We report the results of two behavioral experiments and one simulation experiment that establish the usefulness of the approach, as well as some of the boundary conditions under which distinct groups of shared agreement might be recovered, even when the group identity is not known. The results reveal that the ability to recover and identify shared-belief groups depends on (1) the level of noise in the measurement, (2) the differential signals for strong versus weak attitudes, and (3) the similarity between group attitudes. Consequently, the method shows promise for identifying latent groups among a population whose overt attitudes do not differ, but whose implicit or covert attitudes or knowledge may differ.
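The core of the proposed method — modeling a continuous measure as a mixture of Gaussians — can be sketched with a minimal two-component EM fit (our own toy implementation and simulated response times, not the authors' code; real RT modeling often works on a log or shifted scale):

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture (a sketch)."""
    x = np.asarray(x, float)
    mu = np.quantile(x, [0.25, 0.75])          # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (w / np.sqrt(2 * np.pi * var)) * \
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

rng = np.random.default_rng(1)
rts = np.concatenate([rng.normal(0.6, 0.05, 400),   # "strong knowledge": fast RTs
                      rng.normal(1.2, 0.15, 400)])  # "weak knowledge": slow RTs
w, mu, var = em_two_gaussians(rts)
```

Recovering the two component means corresponds to detecting the strong- and weak-attitude subgroups the abstract describes.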

  18. The role of semantics, pre-emption and skew in linguistic distributions: the case of the un-construction.

    Directory of Open Access Journals (Sweden)

    Paul eIbbotson

    2013-12-01

    We use the Google Ngram database, a corpus of 5,195,769 digitized books containing ~4% of all books ever published, to test three ideas that are hypothesized to account for linguistic generalizations: verbal semantics, pre-emption and skew. Using 828,813 tokens of un-forms as a test case for these mechanisms, we found verbal semantics was a good predictor of the frequency of un-forms in the English language over the past 200 years – both in terms of how the frequency changed over time and their rank frequency. We did not find strong evidence for the direct competition of un-forms and their top pre-emptors; however, the skew of the un-construction competitors was inversely correlated with the acceptability of the un-form. We suggest a cognitive explanation for this, namely, that the more the set of relevant pre-emptors is skewed, the more easily it is retrieved from memory. This suggests that it is not just the frequency of pre-emptive forms that must be taken into account when trying to explain usage patterns, but their skew as well.

  19. Effective dielectric mixture model for characterization of diesel contaminated soil

    International Nuclear Information System (INIS)

    Al-Mattarneh, H.M.A.

    2007-01-01

    Human exposure to soil contaminated by diesel isomers can have serious health consequences such as neurological diseases or cancer. The potential of dielectric measuring techniques for electromagnetic characterization of contaminated soils was investigated in this paper. The purpose of the research was to develop an empirical dielectric mixture model for soil hydrocarbon contamination applications. The paper described the basic theory and elaborated on dielectric mixture theory. The analytical and empirical models were explained in simple algebraic formulas. The experimental study was then described with reference to materials, properties and experimental results. The results of the analytical models were also mathematically explained. The proposed semi-empirical model was also presented. According to the results for the electromagnetic properties of dry soil contaminated with diesel, the presence of diesel had no significant effect on the electromagnetic properties of dry soil. It was concluded that diesel made no contribution to the soil electrical conductivity, which confirmed the nonconductive character of diesel. The results for diesel-contaminated soil at saturation indicated that both the dielectric constant and loss factor of the soil decreased with increasing diesel content. 15 refs., 2 tabs., 9 figs.

  20. Estimating animal abundance with N-mixture models using the R-INLA package for R

    KAUST Repository

    Meehan, Timothy D.

    2017-05-03

    Successful management of wildlife populations requires accurate estimates of abundance. Abundance estimates can be confounded by imperfect detection during wildlife surveys. N-mixture models enable quantification of detection probability and often produce abundance estimates that are less biased. The purpose of this study was to demonstrate the use of the R-INLA package to analyze N-mixture models and to compare performance of R-INLA to two other common approaches -- JAGS (via the runjags package), which uses Markov chain Monte Carlo and allows Bayesian inference, and unmarked, which uses Maximum Likelihood and allows frequentist inference. We show that R-INLA is an attractive option for analyzing N-mixture models when (1) familiar model syntax and data format (relative to other R packages) are desired, (2) survey level covariates of detection are not essential, (3) fast computing times are necessary (R-INLA is 10 times faster than unmarked, 300 times faster than JAGS), and (4) Bayesian inference is preferred.
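The binomial-Poisson N-mixture model being fitted here has a simple marginal likelihood: latent abundance N_i ~ Poisson(λ) at each site, counts y_ij ~ Binomial(N_i, p) on each visit, with N_i summed out up to a bound K. A sketch in plain scipy (simulated data and our own parameter names; not the R-INLA, JAGS, or unmarked code compared in the paper):

```python
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def nmix_nll(params, y, K=80):
    """Negative log-likelihood of the basic binomial-Poisson N-mixture model:
    N_i ~ Poisson(lam), y_ij | N_i ~ Binomial(N_i, p), N summed out up to K."""
    lam = np.exp(params[0])                  # log scale keeps lam > 0
    p = 1.0 / (1.0 + np.exp(-params[1]))     # logit scale keeps 0 < p < 1
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)
    nll = 0.0
    for yi in y:                             # marginalize N site by site
        site_like = prior * np.prod(binom.pmf(yi[:, None], Ns, p), axis=0)
        nll -= np.log(site_like.sum() + 1e-300)
    return nll

rng = np.random.default_rng(2)
N_true = rng.poisson(5.0, size=100)                    # latent abundance, 100 sites
y = rng.binomial(N_true[:, None], 0.4, size=(100, 5))  # 5 surveys, detection 0.4

fit = minimize(nmix_nll, x0=[np.log(3.0), 0.0], args=(y,), method="Nelder-Mead")
lam_hat = float(np.exp(fit.x[0]))
p_hat = float(1.0 / (1.0 + np.exp(-fit.x[1])))
```

This is the likelihood that all three packages in the comparison evaluate or approximate; they differ in the inferential machinery wrapped around it.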

  1. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

    entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models......The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density...

  2. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chang [College of Environmental Science and Engineering, Anhui Normal University, South Jiuhua Road, 189, 241002 Wuhu (China); Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Fiol, Núria [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Villaescusa, Isabel, E-mail: Isabel.Villaescusa@udg.edu [Chemical Engineering Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain); Poch, Jordi [Applied Mathematics Department, Escola Politècnica Superior, Universitat de Girona, Ma Aurèlia Capmany, 61, 17071 Girona (Spain)

    2016-01-15

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers that consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand Cr(VI) scarcely exists alone in wastewaters, it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  3. New approach in modeling Cr(VI) sorption onto biomass from metal binary mixtures solutions

    International Nuclear Information System (INIS)

    Liu, Chang; Fiol, Núria; Villaescusa, Isabel; Poch, Jordi

    2016-01-01

    In the last decades Cr(VI) sorption equilibrium and kinetic studies have been carried out using several types of biomasses. However, there are few researchers that consider all the simultaneous processes that take place during Cr(VI) sorption (i.e., sorption/reduction of Cr(VI) and simultaneous formation and binding of reduced Cr(III)) when formulating a model that describes the overall sorption process. On the other hand Cr(VI) scarcely exists alone in wastewaters, it is usually found in mixtures with divalent metals. Therefore, the simultaneous removal of Cr(VI) and divalent metals in binary mixtures and the interactive mechanism governing Cr(VI) elimination have gained more and more attention. In the present work, kinetics of Cr(VI) sorption onto exhausted coffee from Cr(VI)–Cu(II) binary mixtures has been studied in a stirred batch reactor. A model including Cr(VI) sorption and reduction, Cr(III) sorption and the effect of the presence of Cu(II) in these processes has been developed and validated. This study constitutes an important advance in modeling Cr(VI) sorption kinetics especially when chromium sorption is in part based on the sorbent capacity of reducing hexavalent chromium and a metal cation is present in the binary mixture. - Highlights: • A kinetic model including Cr(VI) reduction, Cr(VI) and Cr(III) sorption/desorption • Synergistic effect of Cu(II) on Cr(VI) elimination included in the model • Model validation by checking it against independent sets of data.

  4. Non-local correlations via Wigner-Yanase skew information in two SC-qubit having mutual interaction under phase decoherence

    Science.gov (United States)

    Mohamed, Abdel-Baset A.

    2017-10-01

    An analytical solution of the master equation that describes a superconducting cavity containing two coupled superconducting charge qubits is obtained. Quantum-mechanical correlations based on Wigner-Yanase skew information, as local quantum uncertainty and uncertainty-induced quantum non-locality, are compared to the concurrence under the effects of the phase decoherence. Local quantum uncertainty exhibits sudden changes during its time evolution and revival process. Sudden death and sudden birth occur only for entanglement, depending on the initial state of the two coupled charge qubits, while the correlations of skew information does not vanish. The quantum correlations of skew information are found to be sensitive to the dephasing rate, the photons number in the cavity, the interaction strength between the two qubits, and the qubit distribution angle of the initial state. With a proper initial state, the stationary correlation of the skew information has a non-zero stationary value for a long time interval under the phase decoherence, that it may be useful in quantum information and computation processes.
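The Wigner-Yanase skew information underlying these correlation measures has the standard closed form I(ρ, K) = -½ Tr([√ρ, K]²). A minimal numerical check of the definition (a generic two-level example, not the paper's two-qubit system):

```python
import numpy as np

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 * Tr([sqrt(rho), K]^2)."""
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
    comm = sqrt_rho @ K - K @ sqrt_rho
    return float(-0.5 * np.trace(comm @ comm).real)

sz = np.diag([1.0, -1.0])                  # Pauli-Z observable
plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # pure state |+><+|: I equals the variance of sz
mixed = np.eye(2) / 2.0                    # maximally mixed state: commutes, so I = 0
```

For pure states the skew information reduces to the ordinary variance of the observable, and it vanishes whenever ρ commutes with K, which is what makes it usable as a quantum-uncertainty measure.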

  5. A BGK model for reactive mixtures of polyatomic gases with continuous internal energy

    Science.gov (United States)

    Bisi, M.; Monaco, R.; Soares, A. J.

    2018-03-01

    In this paper we derive a BGK relaxation model for a mixture of polyatomic gases with a continuous structure of internal energies. The emphasis of the paper is on the case of a quaternary mixture undergoing a reversible chemical reaction of bimolecular type. For such a mixture we prove an H-theorem and characterize the equilibrium solutions with the related mass action law of chemical kinetics. Further, a Chapman-Enskog asymptotic analysis is performed in view of computing the first-order non-equilibrium corrections to the distribution functions and investigating the transport properties of the reactive mixture. The chemical reaction rate is explicitly derived at the first order and the balance equations for the constituent number densities are derived at the Euler level.

  6. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  7. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems

    Directory of Open Access Journals (Sweden)

    Seyed Mohsen Seyed Moosavi

    2018-02-01

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and executed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, which includes four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers, and of all combinations of three accelerometers, has been carried out. Consequently, the biases, scale factors, and misalignment factors of the accelerometers have been successfully estimated. By introducing faults into the sensors in the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.

  8. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...

  9. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing numbers of survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
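    The data structure at issue can be made concrete with a small simulation of the basic Poisson N-mixture model: latent abundance per sample unit, then repeated binomial counts under imperfect detection. The values of lambda, p, and the sample sizes below are illustrative, not the 837 scenarios of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_occasions = 100, 4
lam, p = 5.0, 0.3                      # mean abundance, detection probability

N = rng.poisson(lam, n_sites)          # latent abundance per sample unit
y = rng.binomial(N[:, None], p,        # repeated counts per survey occasion
                 (n_sites, n_occasions))

# Ignoring imperfect detection biases abundance low: the raw mean count
# estimates lam * p, not lam, which is why the model must separate the two.
print(y.mean())                        # close to lam * p = 1.5, not lam = 5
```

An N-mixture model recovers lambda and p jointly from the variation of counts across repeated occasions; the paper's point is that unmodeled heterogeneity in either quantity breaks this separation.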

  10. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling : implementation and discussion

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens; van Loey, Nancy; Sijbrandij, Marit

    2015-01-01

    BACKGROUND: After traumatic events, such as disaster, war trauma, and injuries including burns (which is the focus here), the risk to develop posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into

  11. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Many image segmentation algorithms are known, but often there is an inherent obstacle to the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation of segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process, and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO, intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt) of text lines. However, this claim is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective (that is, prone to errors) and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice, and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real document images of complex layout.

  13. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  14. Time skewing and amplitude nonlinearity mitigation by feedback equalization for 56 Gbps VCSEL-based PAM-4 links

    Science.gov (United States)

    You, Yue; Zhang, Wenjia; Sun, Lin; Du, Jiangbing; Liang, Chenyu; Yang, Fan; He, Zuyuan

    2018-03-01

    Vertical cavity surface emitting laser (VCSEL)-based multimode optical transceivers enabled by pulse amplitude modulation (PAM)-4 will be commercialized in the near future to meet the 400-Gbps standard for short-reach optical interconnects. It remains challenging to achieve over 56/112 Gbps with multilevel signaling, as the multimode property of the device and link introduces a nonlinear temporal response for the different levels. In this work, we scrutinize the distortions that relate to the multilevel feature of PAM-4 modulation, and propose an effective feedback equalization scheme for a 56-Gbps VCSEL-based PAM-4 optical interconnect system to mitigate the distortions caused by eye timing-skew and nonlinear power-dependent noise. Level redistribution at the Tx side is theoretically modeled and constructed to achieve equivalent symbol error ratios (SERs) for the four levels and improved BER performance. The cause of the eye skewing and the mitigation approach are also simulated at 100 Gbps and experimentally investigated at 56 Gbps. The results indicate that more than 2 dB of power-penalty improvement is achieved by using such a distortion-aware equalizer.

  15. BOX-COX transformation and random regression models for fecal egg count data

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Silva

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted the FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained in the period between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.

  16. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    Science.gov (United States)

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.
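    The transformation step can be illustrated on simulated right-skewed counts (not the Beltsville FEC records); since FEC data include zeros, a unit shift is applied before the Box-Cox fit, as a stand-in for the extended transformation used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated right-skewed, count-like data standing in for FEC measures.
fec = rng.lognormal(mean=2.0, sigma=1.2, size=1000).round()

shifted = fec + 1.0                      # shift so all values are positive
transformed, lam = stats.boxcox(shifted) # lambda chosen by maximum likelihood

# Skewness drops from strongly positive to near zero after transformation.
print(stats.skew(fec), stats.skew(transformed))
```

A lambda of zero corresponds to the familiar log transform; the point of the Box-Cox family is that lambda is estimated from the data rather than fixed in advance.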

  17. Theory of synergistic effects: Hill-type response surfaces as 'null-interaction' models for mixtures.

    Science.gov (United States)

    Schindler, Michael

    2017-08-02

    The classification of effects caused by mixtures of agents as synergistic, antagonistic or additive depends critically on the reference model of 'null interaction'. Two main approaches are currently in use: the Additive Dose (ADM) or concentration addition (CA) model and the Multiplicative Survival (MSM) or independent action (IA) model. We compare several response surface models to a newly developed Hill response surface, obtained by solving a logistic partial differential equation (PDE). Assuming that a mixture of chemicals with individual Hill-type dose-response curves can be described by an n-dimensional logistic function, Hill's differential equation for pure agents is replaced by a PDE for mixtures whose solution provides Hill surfaces as 'null-interaction' models; it relies neither on Bliss independence nor on Loewe additivity, nor does it use Chou's unified general theory. An n-dimensional logistic PDE describing the Hill-type response of n-component mixtures is solved. Appropriate boundary conditions ensure the correct asymptotic behaviour. Mathematica 11 (Wolfram, Mathematica Version 11.0, 2016) is used for the mathematics and graphics presented in this article. The Hill response surface ansatz can be applied to mixtures of compounds with arbitrary Hill parameters. Restrictions which are required when deriving analytical expressions for response surfaces from other principles are unnecessary. Many approaches based on Loewe additivity turn out to be special cases of the Hill approach, whose increased flexibility permits a better description of 'null-effect' responses. Missing sham-compliance of Bliss IA, known as Colby's model in agrochemistry, leads to incompatibility with the Hill surface ansatz. Examples of binary and ternary mixtures illustrate the differences between the approaches. For Hill slopes close to one and doses below the half-maximum effect doses, MSM (Colby, Bliss, Finney, Abbott) predicts synergistic effects where the Hill model indicates 'null
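    The two competing 'null-interaction' references the abstract names can be written down directly for a binary mixture of agents with Hill slopes of one; the doses and EC50 values below are arbitrary illustrations, not taken from the paper.

```python
def hill(d, ec50, n):
    """Fractional Hill effect: 0 at dose 0, approaching 1 at high dose."""
    return d**n / (d**n + ec50**n)

d1, d2 = 2.0, 3.0                  # doses of agents 1 and 2
ec50_1, ec50_2 = 4.0, 6.0          # half-maximum effect doses

e1 = hill(d1, ec50_1, n=1.0)
e2 = hill(d2, ec50_2, n=1.0)

# Bliss independence (MSM/IA): effects combine like independent probabilities.
e_bliss = e1 + e2 - e1 * e2

# Loewe additivity (ADM/CA): for equal Hill slopes n = 1 the mixture acts as
# one agent at the summed dose fractions f = d1/EC50_1 + d2/EC50_2.
f = d1 / ec50_1 + d2 / ec50_2
e_loewe = f / (1.0 + f)

print(e_bliss, e_loewe)            # 0.555... vs 0.5: the null models disagree
```

Even in this simplest setting the two references disagree at identical doses, which is why the choice of null model determines whether a measured mixture effect is labeled synergistic or antagonistic.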

  18. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    Science.gov (United States)

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

    Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step models, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each step of the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented into various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in their component AR models of various prediction horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
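    The iterative and independent (direct) strategies that the VLM models are compared against can be sketched on a toy AR(1) series; the series, coefficient, and horizon below are illustrative and unrelated to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy AR(1) series y[t] = 0.9 * y[t-1] + noise.
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.9 * y[t - 1] + rng.normal(scale=0.1)

train, horizon = y[:290], 5

# One-step AR(1) coefficient by least squares.
a = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])

# Iterative strategy: recurse the one-step model, feeding back predictions.
iterative = []
last = train[-1]
for _ in range(horizon):
    last = a * last
    iterative.append(last)

# Independent (direct) strategy: fit one model per horizon h, regressing
# y[t+h] directly on y[t], so each step has its own coefficient.
direct = []
for h in range(1, horizon + 1):
    ah = np.dot(train[h:], train[:-h]) / np.dot(train[:-h], train[:-h])
    direct.append(ah * train[-1])

print(iterative)
print(direct)
```

Both strategies discard the joint dependence among the predicted points; the VLM approach instead trains on segments of varying length so that within-horizon dependencies are preserved.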

  19. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, to approximate the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of the difficulty of its estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  20. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    Science.gov (United States)

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods for estimating regression-based cost functions, using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with nonparametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust nonparametric standard errors, White's variance estimator and a bootstrap estimator based on 2,000 replications were employed. Models were evaluated by comparing the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM shows a significant
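    The retransformation problem the abstract describes can be reproduced in a few lines on simulated skewed costs (not the Leipzig data): back-transforming a log-OLS prediction without correction recovers something close to the conditional median, not the mean, and a smearing-type bias correction, one of the standard options, repairs most of the gap.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                               # a single covariate
cost = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.8, size=n))  # skewed costs

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)  # OLS on log(cost)
resid = np.log(cost) - X @ beta

naive = np.exp(X @ beta).mean()      # back-transform without correction
smear = np.exp(resid).mean()         # Duan's smearing factor
corrected = naive * smear            # bias-corrected mean prediction

print(naive, corrected, cost.mean()) # naive undershoots; corrected is close
```

A GLM with a log link and gamma distribution, as in the abstract, models the mean on the original scale directly and avoids the retransformation step altogether, at the price of a stronger distributional assumption.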

  1. An odor interaction model of binary odorant mixtures by a partial differential equation method.

    Science.gov (United States)

    Yan, Luchun; Liu, Jiemin; Wang, Guihua; Wu, Chuandong

    2014-07-09

    A novel odor interaction model was proposed for binary mixtures of benzene and substituted benzenes by a partial differential equation (PDE) method. Based on the measurement method (tangent-intercept method) of partial molar volume, the original parameters of the corresponding formulas were reasonably replaced by perceptual measures. By these substitutions, it was possible to relate a mixture's odor intensity to the individual odorants' relative odor activity values (OAV). Several binary mixtures of benzene and substituted benzenes were tested to establish the PDE models. The results showed that the PDE model provides an easily interpretable method relating individual components to their joint odor intensity. Moreover, both the predictive performance and the feasibility of the PDE model were demonstrated through a series of odor intensity matching tests. By combining the PDE model with portable gas detectors or on-line monitoring systems, olfactory evaluation of odor intensity can be achieved by instruments instead of odor assessors. Many disadvantages (e.g., the expense of maintaining a fixed panel of odor assessors) will also be avoided. Thus, the PDE model is expected to be helpful for the monitoring and management of odor pollution.

  2. Method for separating gaseous mixtures of matter

    International Nuclear Information System (INIS)

    Schuster, E.; Kersting, A.

    1979-01-01

    Molecules to be separated from a mixture of matter of a chemical component are excited in a manner known per se by narrow-band light sources, and a chemical reaction partner is admixed to react with these molecules, with energy supplied by electromagnetic radiation or heating as additionally required to make the chemical reaction possible. A method is described for separating gaseous mixtures of matter by exciting the molecules to be separated with laser radiation and causing the excited species to react chemically with a reaction partner. It may be necessary to supply additional energy to the reaction partner to make the chemical reaction possible. The method is applicable to the separation of hydrogen isotopes by the bromination of normal methanol in a mixture of normal and deuterated methanol; of uranium isotopes by the reactions of UF6 with SF4, SiCl4, HCl, or SO2; and of boron isotopes by the reaction of BH3 with NH3.

  3. Modeling Math Growth Trajectory--An Application of Conventional Growth Curve Model and Growth Mixture Model to ECLS K-5 Data

    Science.gov (United States)

    Lu, Yi

    2016-01-01

    To model students' math growth trajectories, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences in math IRT scores. When holding socio-economic…

  4. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has an influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more

  5. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has an influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more

  6. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

    Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt; Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway. Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q). The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent-samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of the data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970). Significantly lower scores were found for all items of the more positively worded scale: the UPC-Q total score difference was 7.9 (P<0.001), on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001) to 10.4 (P<0.001). Multivariate multilevel regression analysis confirmed the difference between the response groups after controlling for other background variables; the UPC-Q total score difference estimate was 8.3 (P<0.001). Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over

  7. Distinguishing Continuous and Discrete Approaches to Multilevel Mixture IRT Models: A Model Comparison Perspective

    Science.gov (United States)

    Zhu, Xiaoshu

    2013-01-01

    The current study introduced a general modeling framework, multilevel mixture IRT (MMIRT) which detects and describes characteristics of population heterogeneity, while accommodating the hierarchical data structure. In addition to introducing both continuous and discrete approaches to MMIRT, the main focus of the current study was to distinguish…

  8. Spatially adaptive mixture modeling for analysis of FMRI time series.

    Science.gov (United States)

    Vincent, Thomas; Risser, Laurent; Ciuciu, Philippe

    2010-04-01

    Within-subject analysis in fMRI essentially addresses two problems: the detection of brain regions eliciting evoked activity, and the estimation of the underlying dynamics. In Makni et al., 2005 and Makni et al., 2008, a detection-estimation framework was proposed to tackle these problems jointly, since they are connected to one another. In the Bayesian formalism, detection is achieved by modeling activating and nonactivating voxels through independent mixture models (IMM) within each region, while hemodynamic response estimation is performed at a regional scale in a nonparametric way. Instead of IMMs, in this paper we take advantage of spatial mixture models (SMM) for their nonlinear spatial regularizing properties. The proposed method is unsupervised and spatially adaptive in the sense that the amount of spatial correlation is automatically tuned from the data, and this setting varies across brain regions. In addition, the level of regularization is specific to each experimental condition, since both the signal-to-noise ratio and the activation pattern may vary across stimulus types in a given brain region. These aspects require the precise estimation of multiple partition functions of the underlying Ising fields. This is addressed efficiently by first using path sampling for a small subset of fields, and then a recently developed fast extrapolation technique for the large remaining set. Simulation results emphasize that detection relying on supervised SMM outperforms its IMM counterpart, and that unsupervised spatial mixture models achieve similar results without any hand-tuning of the correlation parameter. On real datasets, the gain is illustrated in a localizer fMRI experiment: brain activations appear more spatially resolved using SMM in comparison with classical general linear model (GLM)-based approaches, while estimating a specific parcel-based HRF shape. Our approach therefore validates the treatment of unsmoothed fMRI data without fixed GLM

  9. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

    DEFF Research Database (Denmark)

    Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only f...

  10. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RDP) paradigm...

  11. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  12. Kelvin Equation for a Non-Ideal Multicomponent Mixture

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Stenby, Erling Halfdan

    1997-01-01

    The Kelvin equation is generalized by application to a case of a multicomponent non-ideal mixture. Such a generalization is necessary in order to describe the two-phase equilibrium in a capillary medium with respect to both normal and retrograde condensation. The equation obtained is applied...... to the equilibrium state of a hydrocarbon mixture in a gas-condensate reservoir....

  13. Normal urinary albumin excretion in recently diagnosed type 1 diabetic patients

    DEFF Research Database (Denmark)

    Lind, B; Jensen, T; Feldt-Rasmussen, B

    1989-01-01

    of diabetes. Urinary albumin excretion (median and 95% confidence interval) was similar in the diabetic patients and normal control subjects (8 (6-11) vs 8 (6-11) mg 24-h-1, NS). Four diabetic patients had urinary albumin excretion in the microalbuminuric range of 30-300 mg 24-h-1. There was no significant...... difference between the two groups in urinary excretion of retinol binding protein. The distribution among the individuals of both urinary proteins was positively skewed and similar in the two groups. In conclusion, no significant differences in the urinary excretion of albumin and retinol binding protein...... were found between recently diagnosed Type 1 diabetic patients and normal subjects....

  14. Hölder properties of perturbed skew products and Fubini regained

    International Nuclear Information System (INIS)

    Ilyashenko, Yu; Negut, A

    2012-01-01

    In 2006, Gorodetski proved that central fibres of perturbed skew products are Hölder continuous with respect to the base point. In this paper, we give an explicit estimate of this Hölder exponent. Moreover, we extend Gorodetski's result from the case when the fibre maps are close to the identity to a much wider class of maps that satisfy the so-called modified dominated splitting condition. In many cases (for example, in the case of skew products over the solenoid or over linear Anosov diffeomorphisms of the torus), the Hölder exponent is close to 1. This allows one to overcome the so-called Fubini nightmare, in some sense. Namely, we prove that the union of central fibres that are strongly atypical from the point of view of ergodic theory has Lebesgue measure zero, despite the lack of absolute continuity of the holonomy map for the central foliation. This result is based on a new kind of ergodic theorem, which we call special. To prove our main result, we revisit the theory of Hirsch, Pugh and Shub, and estimate the contraction constant of the graph transform map. (paper)

  15. Skew Projection of Echo-Detected EPR Spectra for Increased Sensitivity and Resolution

    Science.gov (United States)

    Bowman, Michael K.; Krzyaniak, Matthew D.; Cruce, Alex A.; Weber, Ralph T.

    2013-01-01

    The measurement of EPR spectra during pulsed EPR experiments is commonly accomplished by recording the integral of the electron spin echo as the applied magnetic field is stepped through the spectrum. This approach to echo-detected EPR spectral measurement (ED-EPR) limits sensitivity and spectral resolution and can cause gross distortions in the resulting spectra because some of the information present in the electron spin echo is discarded in such measurements. However, Fourier Transformation of echo shapes measured at a series of magnetic field values followed by skew projection onto either a magnetic field or resonance frequency axis can increase both spectral resolution and sensitivity without the need to trade one against the other. Examples of skew-projected spectra with single crystals, glasses and powders show resolution improvements as large as a factor of seven with sensitivity increases of as much as a factor of five. PMID:23644351

  16. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
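
The generative recipe in this abstract, modeling the spike-triggered and non-spike-triggered stimulus distributions separately and converting them into a spike probability via Bayes' rule, can be sketched in a minimal form. The sketch below uses a single Gaussian per class on a 1-D synthetic stimulus (the paper itself uses Gaussian mixtures on high-dimensional stimuli); all data and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Synthetic data: stimuli that triggered a spike tend to be larger.
spike_stim = rng.normal(2.0, 1.0, 5000)      # spike-triggered samples
nospike_stim = rng.normal(0.0, 1.0, 20000)   # non-spike-triggered samples
prior_spike = len(spike_stim) / (len(spike_stim) + len(nospike_stim))

# Fit each class-conditional density by maximum likelihood (mean, std).
mu1, s1 = spike_stim.mean(), spike_stim.std()
mu0, s0 = nospike_stim.mean(), nospike_stim.std()

def p_spike_given_stim(x):
    # Bayes' rule: p(spike | x) is proportional to p(x | spike) p(spike).
    a = gauss_pdf(x, mu1, s1) * prior_spike
    b = gauss_pdf(x, mu0, s0) * (1 - prior_spike)
    return a / (a + b)

# Spike probability rises with stimulus strength.
print(p_spike_given_stim(-2.0), p_spike_given_stim(3.0))
```

Replacing each single Gaussian with a Gaussian mixture is what gives the paper's model its extra flexibility while keeping every component interpretable as a distribution over stimuli.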

  17. Overcoming the effects of differential skewness of test items in scale construction

    Directory of Open Access Journals (Sweden)

    Johann M. Schepers

    2004-10-01

    The principal objective of the study was to develop a procedure for overcoming the effects of differential skewness of test items in scale construction. It was shown that the degree of skewness of test items places an upper limit on the correlations between the items, regardless of the contents of the items. If the items are ordered in terms of skewness, the resulting intercorrelation matrix forms a simplex or a pseudo-simplex. Factoring such a matrix results in a multiplicity of factors, most of which are artifacts. A procedure for overcoming this problem was demonstrated with items from the Locus of Control Inventory (Schepers, 1995). The analysis was based on a sample of 1662 first-year university students.
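
The abstract's central claim, that item skewness caps inter-item correlations regardless of content, is easy to verify numerically in the extreme case of dichotomous items: even perfectly aligned responses cannot push the correlation past a ceiling set by the two endorsement rates. A small sketch (the bound used is the standard phi-max result, not a formula taken from this paper):

```python
import numpy as np

n, p1, p2 = 100, 0.2, 0.5  # endorsement rates of a rare and a common item

# Maximal possible agreement: every "1" on the rare item co-occurs with
# a "1" on the common item.
item1 = np.array([1] * int(n * p1) + [0] * (n - int(n * p1)))
item2 = np.array([1] * int(n * p2) + [0] * (n - int(n * p2)))

r = np.corrcoef(item1, item2)[0, 1]
# Standard phi-max bound for dichotomous items with p1 <= p2.
phi_max = np.sqrt(p1 * (1 - p2) / (p2 * (1 - p1)))
print(r, phi_max)  # both 0.5: the ceiling, well below 1
```

With endorsement rates of 0.2 and 0.5 the ceiling is 0.5, so factoring a battery of such items can yield "difficulty factors" that reflect skewness rather than content, which is exactly the artifact the paper's procedure is designed to remove.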

  18. Smoothed particle hydrodynamics model for phase separating fluid mixtures. I. General equations

    NARCIS (Netherlands)

    Thieulot, C; Janssen, LPBM; Espanol, P

    We present a thermodynamically consistent discrete fluid particle model for the simulation of a recently proposed set of hydrodynamic equations for a phase separating van der Waals fluid mixture [P. Espanol and C.A.P. Thieulot, J. Chem. Phys. 118, 9109 (2003)]. The discrete model is formulated by

  19. Study of the Internal Mechanical response of an asphalt mixture by 3-D Discrete Element Modeling

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Hofko, Bernhard

    2015-01-01

    In this paper the viscoelastic behavior of asphalt mixture was investigated by employing a three-dimensional Discrete Element Method (DEM). The cylinder model was filled with a cubic array of spheres with a specified radius, and was considered as a whole mixture with uniform contact properties...... the reliability of which has been validated. The dynamic modulus of asphalt mixtures was predicted by conducting Discrete Element simulation under dynamic strain control loading. In order to reduce the calculation time, a method based on the frequency–temperature superposition principle has been implemented. The ball density effect on the internal stress distribution of the asphalt mixture model has been studied when using this method. Furthermore, the internal stresses under dynamic loading have been studied. The agreement between the predicted and the laboratory test results of the complex modulus shows......
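
The abstract invokes a frequency–temperature superposition principle to cut simulation time but gives no formula. One common choice for viscoelastic materials is the Williams–Landel–Ferry (WLF) shift factor; the sketch below uses the textbook "universal" constants for C1 and C2, which are assumptions, not values from the paper:

```python
# Hedged sketch: WLF time-temperature shift, one common way to implement
# frequency-temperature superposition for viscoelastic materials.
def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """log10(aT) per Williams-Landel-Ferry; temperatures in deg C."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def reduced_frequency(f, T, T_ref):
    # Loading at frequency f and temperature T is equivalent to loading
    # at f * aT at the reference temperature.
    return f * 10 ** wlf_shift_factor(T, T_ref)

# At the reference temperature the shift factor aT is exactly 1.
print(reduced_frequency(10.0, 20.0, 20.0))          # 10.0
# Above T_ref, aT < 1: high temperature maps to a lower reduced frequency.
print(reduced_frequency(10.0, 40.0, 20.0) < 10.0)   # True
```

A single master curve simulated at the reference temperature can then stand in for runs at many temperature/frequency combinations, which is the computational saving the abstract alludes to.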

  20. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties: the estimator is consistent as the sample size increases to infinity, and asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood are asymptotically efficient, having the smallest variance among comparable estimators as the sample size grows. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
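
The fitting procedure the abstract describes, maximum likelihood estimation of a two-component mixture, is typically carried out with the EM algorithm. A minimal univariate sketch on synthetic data follows; the rubber price and exchange rate series are not reproduced, and the data, seed, and component parameters here are invented:

```python
import numpy as np

# Synthetic two-component data (invented, stands in for the paper's series).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(3.0, 1.0, 600)])

def em_two_gaussians(x, iters=200):
    # Deliberately crude initialization: means at the data extremes.
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sd = np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

w, mu, sd = em_two_gaussians(x)
print(np.round(w, 2), np.round(mu, 1))
```

With well-separated components and this much data, EM should recover weights near 0.4/0.6 and means near -2 and 3 even from the poor extreme-value initialization.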