WorldWideScience

Sample records for linear mixed-effect models

  1. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compares the data analytic results from three regression…

  2. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested…

  3. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m…

  4. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  5. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

    The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme…

  6. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
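
    As a hedged illustration of such a 1-1 mapping (the notation $\nu_1$, $\nu_2$ for the numerator and denominator degrees of freedom of the F statistic is mine, by analogy with the univariate linear model, and is not necessarily the authors' exact definition), one standard form is

        $R^2 = \frac{(\nu_1/\nu_2)\,F}{1 + (\nu_1/\nu_2)\,F}$

    which increases monotonically from 0 toward 1 as F grows, so the two statistics carry the same ordering information while R2 stays on an interpretable 0-1 scale.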

  7. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a…

  8. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  9. Valid statistical approaches for analyzing Sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
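
    A minimal sketch of the contrast described above, assuming a hypothetical data frame sholl with columns intersections, radius, genotype and animal (several neurons measured per animal); the simple model ignores the animal-level clustering, while the mixed model accounts for it:

      library(lme4)

      # Simple linear model: treats every neuron as independent, which understates
      # standard errors when neurons are clustered within animals.
      fit_lm  <- lm(intersections ~ radius * genotype, data = sholl)

      # Mixed-effects model: a random intercept per animal absorbs the
      # intra-class correlation among neurons from the same animal.
      fit_lme <- lmer(intersections ~ radius * genotype + (1 | animal), data = sholl)

      summary(fit_lme)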

  10. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p …) when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p …) and … modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19…
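
    A minimal sketch of this class of model, assuming a hypothetical data frame growth with columns height, age and id; a natural cubic spline basis stands in for the cubic regression splines, with subject-specific random intercepts and slopes and a continuous-time AR(1) error term, fitted with the nlme and splines packages:

      library(nlme)
      library(splines)

      fit <- lme(height ~ ns(age, df = 4),                   # population growth curve
                 random = ~ 1 + age | id,                    # subject-specific intercept and slope
                 correlation = corCAR1(form = ~ age | id),   # continuous first-order autoregressive errors
                 data = growth)

      summary(fit)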

  11. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  12. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
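
    A minimal sketch using the R package glmertree, which implements the GLMM tree algorithm; the data frame and column names (outcome, treatment, cluster, x1-x3) are hypothetical, and the three-part formula separates the node-level model, the random effects and the candidate partitioning variables:

      library(glmertree)

      fit <- lmertree(outcome ~ treatment | cluster | x1 + x2 + x3,
                      data = trial_data)   # glmertree() is the analogue for non-Gaussian outcomes

      plot(fit)   # partitioned tree with treatment effects in the terminal nodes
      coef(fit)   # node-specific coefficient estimates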

  13. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  14. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  15. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, $N$ can be quite large. Methods that do not account for the correlation structure can...

  16. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    Science.gov (United States)

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  17. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  18. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  19. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects models is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830

  20. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  21. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Science.gov (United States)

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
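
    A minimal sketch (not the author's NONMEM workflow) of the second comparison described above, using lme4; the data frame qtc and its columns ddQTc, conc_parent, conc_metabolite and subject are hypothetical:

      library(lme4)

      # Univariate linear mixed effects models of ddQTc prolongation with a random
      # intercept per subject; the two candidate covariates are strongly correlated.
      fit_null   <- lmer(ddQTc ~ 1               + (1 | subject), data = qtc, REML = FALSE)
      fit_parent <- lmer(ddQTc ~ conc_parent     + (1 | subject), data = qtc, REML = FALSE)
      fit_metab  <- lmer(ddQTc ~ conc_metabolite + (1 | subject), data = qtc, REML = FALSE)

      anova(fit_null, fit_parent)    # likelihood ratio test for the parent model
      anova(fit_null, fit_metab)     # likelihood ratio test for the metabolite model
      AIC(fit_parent, fit_metab)     # the covariate with the larger LRT / lower AIC is deemed "best"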

  22. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
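
    A minimal R analogue (not the %PCFrailty macro itself) of the expand-and-fit strategy described above, with hypothetical variable names; follow-up time is split into pieces, the event indicator gets a Poisson likelihood with log exposure time as an offset, and the log-normal frailty becomes a normal random intercept per cluster:

      library(survival)
      library(lme4)

      # "Explode" the data: one row per subject and per piece of the baseline hazard.
      cuts <- c(1, 2, 5)                                # assumed piece boundaries
      long <- survSplit(Surv(time, status) ~ ., data = surv_data,
                        cut = cuts, episode = "piece")
      long$exposure <- long$time - long$tstart          # time at risk within each piece

      # Piecewise-constant baseline hazard via factor(piece); log-normal shared frailty
      # via the normal random intercept for cluster.
      fit <- glmer(status ~ factor(piece) + x + offset(log(exposure)) + (1 | cluster),
                   family = poisson, data = long)
      summary(fit)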

  23. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization…

  24. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  25. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity…
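
    A minimal sketch of one of the points above (ICC from an LME with crossed random effects), written directly with lme4 rather than any FMRI software; variable names are hypothetical:

      library(lme4)

      # Crossed random effects for subject and session, with a confounding fixed effect (group).
      fit <- lmer(effect_estimate ~ group + (1 | subject) + (1 | session), data = fmri)

      vc  <- as.data.frame(VarCorr(fit))
      icc <- vc$vcov[vc$grp == "subject"] / sum(vc$vcov)   # subject variance over total variance
      icc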

  26. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  27. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    Science.gov (United States)

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  28. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    Science.gov (United States)

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models used to analyze such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish Bayesian joint models that account for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  29. Linear mixed-effects models for central statistical monitoring of multicenter clinical trials

    OpenAIRE

    Desmet, L.; Venet, D.; Doffagne, E.; Timmermans, C.; BURZYKOWSKI, Tomasz; LEGRAND, Catherine; BUYSE, Marc

    2014-01-01

    Multicenter studies are widely used to meet accrual targets in clinical trials. Clinical data monitoring is required to ensure the quality and validity of the data gathered across centers. One approach to this end is central statistical monitoring, which aims at detecting atypical patterns in the data by means of statistical methods. In this context, we consider the simple case of a continuous variable, and we propose a detection procedure based on a linear mixed-effects model to detect locat...

  30. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  31. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  32. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. New to the second edition: a new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models; power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest…

  33. Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models

    Directory of Open Access Journals (Sweden)

    A. Cabezas

    2010-08-01

    Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features located within the study reach were examined by using artificial grass mats. Within each area, 1 m2 study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, specific-site effects and spatial self-correlation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although the importance of other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.

  34. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why the choice of which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
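
    A minimal sketch of the kind of specification the abstract calls statistically identified, treating periods and cohorts as random effects with lme4; variable names and the quadratic age term are hypothetical choices:

      library(lme4)

      # Age enters as a fixed effect; period and cohort enter as random effects, so the
      # exact linear dependency age = period - cohort no longer sits in the fixed-effects design.
      fit <- lmer(outcome ~ age + I(age^2) + (1 | period) + (1 | cohort), data = apc_data)
      summary(fit)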

  35. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo…

  36. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  37. Some consequences of assuming simple patterns for the treatment effect over time in a linear mixed model.

    Science.gov (United States)

    Bamia, Christina; White, Ian R; Kenward, Michael G

    2013-07-10

    Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.

  38. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data and fit a varying-coefficient linear mixed effect model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  39. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  40. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  41. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  42. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    Science.gov (United States)

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An ordinary linear least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
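
    A minimal sketch of one candidate from the model class compared above, written with nlme and hypothetical predictor names; a plot-level random intercept, a power variance function for the heteroscedasticity, and an AR(1) within-plot correlation structure:

      library(nlme)

      fit <- lme(crown_width ~ dbh + tree_height,
                 random = ~ 1 | plot,
                 weights = varPower(),                       # alternatives: varExp(), varConstPower()
                 correlation = corAR1(form = ~ 1 | plot),    # alternatives: corARMA(p = 1, q = 1), corCompSymm()
                 data = chinafir)

      AIC(fit); BIC(fit); logLik(fit)    # criteria of the kind used to compare candidate structures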

  43. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    International Nuclear Information System (INIS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of retrieved component temperatures to fractional cover, the linear mixing model gives more accurate retrieval accuracies for both soil and vegetation temperatures under intermediate fractional cover conditions.
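
    A minimal sketch of the inversion idea (not the authors' exact implementation): for a two-component pixel, the mixing model is linear in the unknown component temperatures, so multi-angular observations with known vegetation fractions can be inverted by least squares. All numbers below are hypothetical:

      # Assumed mixing model per view angle i:  Tb[i] = fv[i] * Tveg + (1 - fv[i]) * Tsoil
      Tb <- c(297.6, 298.9, 300.1, 301.2)    # multi-angular brightness temperatures (K)
      fv <- c(0.65, 0.55, 0.45, 0.35)        # vegetation fraction seen at each view angle

      fit <- lm(Tb ~ 0 + fv + I(1 - fv))     # no intercept: the two coefficients are Tveg and Tsoil
      coef(fit)                              # retrieved component temperatures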

  44. Linear models for sound from supersonic reacting mixing layers

    Science.gov (United States)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, on which we focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, similar to in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  45. A Linear Mixed-Effects Model of Wireless Spectrum Occupancy

    Directory of Open Access Journals (Sweden)

    Pagadarai Srikanth

    2010-01-01

    We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.

  46. Linear mixed-effects models to describe length-weight relationships for yellow croaker (Larimichthys polyactis) along the north coast of China.

    Science.gov (United States)

    Ma, Qiuyun; Jiao, Yan; Ren, Yiping

    2017-01-01

    In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models and nine linear mixed effect models that applied the effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917 with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, and the effects from years were shown to be much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker became thinner in recent years. Furthermore, relative condition factors varied among months, years, regions and length. The values of a and the relative condition factors decreased when environmental pollution became worse; therefore, length-weight relationships could be an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
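
    A minimal sketch of the best-fitting model form described above (random effects from both region and year on both parameters), fitted on the log scale with lme4; the data frame croaker and its columns are hypothetical:

      library(lme4)

      # W = a * L^b  becomes  log(W) = log(a) + b * log(L) after log-transformation.
      fit <- lmer(log(weight) ~ log(length) +
                    (1 + log(length) | region) +
                    (1 + log(length) | year),
                  data = croaker)

      fixef(fit)            # population-level log(a) and b
      exp(fixef(fit)[1])    # back-transformed estimate of a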

  47. Spatial generalised linear mixed models based on distances.

    Science.gov (United States)

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  8. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
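    The core of such an unmixing step, under the sum-to-one constraint only, can be sketched in base R as below. The endmember spectra and pixel reflectances are made-up numbers for illustration; adding the non-negativity constraint used in practice would additionally require a quadratic-programming solver.

```r
# Hypothetical endmember reflectances (columns) for three reflective bands (rows)
M <- cbind(vegetation = c(0.04, 0.45, 0.02),
           soil       = c(0.18, 0.25, 0.12),
           shade      = c(0.01, 0.02, 0.01))
r <- c(0.10, 0.33, 0.06)   # observed reflectance of one mixed pixel

unmix <- function(M, r) {
  k <- ncol(M)
  # KKT system for: minimize ||M f - r||^2 subject to sum(f) = 1
  A <- rbind(cbind(2 * crossprod(M), rep(1, k)),
             c(rep(1, k), 0))
  b <- c(2 * crossprod(M, r), 1)
  solve(A, b)[1:k]
}
fractions <- unmix(M, r)
round(fractions, 3)   # estimated vegetation / soil / shade fractions
```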

  9. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    Science.gov (United States)

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
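    For readers working without the graphical interface, the kind of model LMMgui fits can be written directly in lme4; the sketch below uses simulated within-participant data with hypothetical variable names.

```r
library(lme4)

set.seed(2)
dat <- expand.grid(participant = factor(1:30),
                   condition   = factor(c("A", "B")),
                   trial       = 1:20)
int_re   <- rnorm(30, 0, 50)                     # participant intercept deviations
slope_re <- rnorm(30, 0, 20)                     # participant condition effects
dat$rt <- 500 + (30 + slope_re[dat$participant]) * (dat$condition == "B") +
  int_re[dat$participant] + rnorm(nrow(dat), 0, 80)

# Condition as a fixed effect; by-participant random intercepts and slopes
fit <- lmer(rt ~ condition + (1 + condition | participant), data = dat)
summary(fit)
```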

  10. Short communication: Alteration of priors for random effects in Gaussian linear mixed model

    DEFF Research Database (Denmark)

    Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas

    2014-01-01

    such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3......, multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit...... different software packages available in animal breeding. The examples showed the possibility of the proposed method to alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix....

  11. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  12. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
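    A minimal sketch of the two recommended approximations, using the sleepstudy data that ships with lme4 and assuming the lmerTest and pbkrtest packages are installed:

```r
library(lmerTest)   # wraps lme4::lmer and adds denominator-df approximations

fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy, REML = TRUE)
summary(fit)                        # t tests with Satterthwaite df
anova(fit, ddf = "Kenward-Roger")   # F test with Kenward-Roger df (needs pbkrtest)
```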

  13. glmmTMB balances speed and flexibility among packages for Zero-inflated Generalized Linear Mixed Modeling

    DEFF Research Database (Denmark)

    Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.

    2017-01-01

    Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
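    A minimal sketch of the kind of model the package targets, using the Salamanders data shipped with glmmTMB; the particular choice of predictors here is illustrative, not the package authors' recommended analysis.

```r
library(glmmTMB)

fit <- glmmTMB(count ~ mined + (1 | site),
               ziformula = ~ mined,     # model for the extra zeros
               family    = poisson,
               data      = Salamanders)
summary(fit)
```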

  14. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    Science.gov (United States)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed a nested two-level model with random effects of sample points nested in soil units, as well as single-level models with random effects of soil units or sample points. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with CPP and spatiotemporal correlation with ARMA(1,1) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
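    The model structure described above can be expressed with the nlme package, which provides the constant-plus-power variance function and ARMA correlation structures. The sketch below uses simulated data and hypothetical variable names; the real analysis used MODIS NDVI and station rainfall, and convergence depends on the data at hand.

```r
library(nlme)

set.seed(3)
ndvi_dat <- expand.grid(point = factor(1:5), soil_unit = factor(1:4),
                        year  = 2002:2013)
ndvi_dat$rainfall <- runif(nrow(ndvi_dat), 200, 600)
u_soil  <- rnorm(4, 0, 0.05)
u_point <- rnorm(20, 0, 0.03)
pid <- as.integer(interaction(ndvi_dat$soil_unit, ndvi_dat$point))
ndvi_dat$ndvi <- 0.2 + 0.0008 * ndvi_dat$rainfall + u_soil[ndvi_dat$soil_unit] +
  u_point[pid] + rnorm(nrow(ndvi_dat), 0, 0.03 + 0.00005 * ndvi_dat$rainfall)

# Sample points nested in soil units; CPP variance function; ARMA(1,1) over years
fit <- lme(ndvi ~ rainfall,
           random      = ~ 1 | soil_unit/point,
           weights     = varConstPower(form = ~ rainfall),
           correlation = corARMA(form = ~ year, p = 1, q = 1),
           data        = ndvi_dat)
summary(fit)
```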

  15. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package, by overloading the anova and summary functions...... by providing p values for tests for fixed effects. We have implemented the Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...

  16. Mixed integer linear programming model for dynamic supplier selection problem considering discounts

    Directory of Open Access Journals (Sweden)

    Adi Wicaksono Purnawan

    2018-01-01

    Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, the supplier identified for one period is not necessarily the same one to supply the same product in the next period. So, mixed integer linear programming (MILP) was developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, considering discounts. The MILP model is validated with randomly generated data and solved using Lingo 16.

  17. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    Science.gov (United States)

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  18. The transition model test for serial dependence in mixed-effects models for binary data

    DEFF Research Database (Denmark)

    Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders

    2017-01-01

    Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...

  19. An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models.

    Science.gov (United States)

    Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France

    2016-10-01

    Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data, our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  1. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.

  2. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  3. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    Science.gov (United States)

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes, which were collected from 523 dairy herds between 1996 and 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, the Wood, Dhanoa and Sikka mixed models provided the best fit of the lactation curve for FPR in third-parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, the minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
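    An R analogue of this kind of fit (the study itself used SAS PROC NLMIXED) is sketched below with the nlme package: a Wood-type function with an animal-specific random scale parameter, fitted to simulated monthly test-day data. Names, parameter values, and starting values are illustrative assumptions only.

```r
library(nlme)

set.seed(4)
dat <- expand.grid(animal = factor(1:40), month = 1:10)
a_i <- 1.2 * exp(rnorm(40, 0, 0.10))                  # animal-specific scale
dat$fpr <- a_i[dat$animal] * dat$month^(-0.15) * exp(0.012 * dat$month) +
  rnorm(nrow(dat), 0, 0.03)

# Wood-type curve a * t^b * exp(c * t) with a random animal effect on a
fit <- nlme(fpr ~ a * month^b * exp(c * month),
            fixed  = a + b + c ~ 1,
            random = a ~ 1 | animal,
            start  = c(a = 1.1, b = -0.1, c = 0.01),
            data   = dat)
summary(fit)
```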

  5. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    Science.gov (United States)

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict the clinical progression of Parkinson's disease (PD) because it involves multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes at a single moment. A multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. Dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized mixed models hold promise for predicting the clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).

  6. Non-linear mixed effects modeling - from methodology and software development to driving implementation in drug development science.

    Science.gov (United States)

    Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis

    2005-04-01

    Few scientific contributions have made significant impact unless there was a champion who had the vision to see the potential for their use in seemingly disparate areas, and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research, and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented, as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.

  7. Effects of the ρ - ω mixing interaction in relativistic models

    International Nuclear Information System (INIS)

    Menezes, D.P.; Providencia, C.

    2003-01-01

    The effects of the ρ-ω mixing term in infinite nuclear matter and in finite nuclei are investigated with the non-linear Walecka model in a Thomas-Fermi approximation. For infinite nuclear matter, the influence of the mixing term on the binding energy calculated with the NL3 and TM1 parametrizations can be neglected. Its influence on the symmetry energy is only felt for TM1 with an unrealistically large value of the mixing term strength. For finite nuclei, the contribution of the isospin mixing term is very large compared with the value expected to solve the Nolen-Schiffer anomaly.

  8. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    Science.gov (United States)

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests is at the nominal level. The permutation F-tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10%) if tested on small effects. The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as "correlated noise" due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F-tests have superior performance, including moderate power for very small effects.

  10. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    distributions suggest that the GLS results in downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests is at the nominal level. The permutation F-tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10%) if tested on small effects. Discussion: The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as “correlated noise” due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F-tests have superior performance, including moderate power for very small effects.
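    The general mechanics of such an error-rate check can be sketched in a few lines of R: generate data with a null fixed effect and clustered noise, then record how often each test rejects at alpha = 0.05. This illustrates the simulation procedure only; it does not reproduce the study's design or its conclusions, and the "t-as-z" p-value shown is one of the shortcuts discussed in these records.

```r
library(lme4)

set.seed(5)
nsim <- 200; n_subj <- 20; n_rep <- 10
reject <- matrix(NA, nsim, 2, dimnames = list(NULL, c("ols", "lmm_t_as_z")))

for (s in seq_len(nsim)) {
  subj <- factor(rep(1:n_subj, each = n_rep))
  x    <- rnorm(n_subj * n_rep)                               # predictor with no true effect
  y    <- rnorm(n_subj, 0, 1)[subj] + rnorm(n_subj * n_rep)   # cluster noise + residual
  p_ols <- summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"]
  t_lmm <- summary(lmer(y ~ x + (1 | subj)))$coefficients["x", "t value"]
  reject[s, ] <- c(p_ols, 2 * pnorm(-abs(t_lmm))) < 0.05      # "t-as-z" p-value for the LMM
}
colMeans(reject)   # empirical Type I error rates
```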

  11. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  12. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  13. Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars

    Directory of Open Access Journals (Sweden)

    N. Mielenz

    2015-01-01

    Full Text Available Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM, repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming normal distribution and logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Based on the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
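    The "normal intercept" Poisson case can be reproduced in outline with lme4 (the paper itself used SAS PROC NLMIXED); the data below are simulated with hypothetical names, and a repeatability would then be formed from the animal variance plus a distribution-specific residual term on the link scale.

```r
library(lme4)

set.seed(10)
d <- data.frame(animal = factor(rep(1:50, each = 4)))
u <- rnorm(50, 0, 0.6)                              # animal effects on the log scale
d$count <- rpois(nrow(d), lambda = exp(1 + u[d$animal]))

# Conditional Poisson model with a normally distributed random animal intercept
fit <- glmer(count ~ 1 + (1 | animal), family = poisson, data = d)
VarCorr(fit)   # between-animal variance on the link (log) scale
```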

  14. Estimate the time varying brain receptor occupancy in PET imaging experiments using non-linear fixed and mixed effect modeling approach

    International Nuclear Information System (INIS)

    Zamuner, Stefano; Gomeni, Roberto; Bye, Alan

    2002-01-01

    Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment done using the tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking the brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and the quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed effect models, which allow accounting for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug is definitely the preferred approach.

  15. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  16. An SDP Approach for Multiperiod Mixed 0–1 Linear Programming Models with Stochastic Dominance Constraints for Risk Management

    DEFF Research Database (Denmark)

    Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero

    2015-01-01

    In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...

  17. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
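    In lme4 notation, the joint estimation of fixed cue-validity effects and their subject-level variance/covariance components takes the form sketched below; the contrast coding, effect sizes, and data are invented for illustration and do not reproduce the study.

```r
library(lme4)

set.seed(6)
n_subj <- 40; n_trial <- 60
trial_dat <- data.frame(
  subject    = factor(rep(1:n_subj, each = n_trial)),
  spatial    = sample(c(-0.5, 0.5), n_subj * n_trial, replace = TRUE),
  object     = sample(c(-0.5, 0.5), n_subj * n_trial, replace = TRUE),
  attraction = sample(c(-0.5, 0.5), n_subj * n_trial, replace = TRUE)
)
u_int <- rnorm(n_subj, 0, 60); u_sp <- rnorm(n_subj, 0, 20)
trial_dat$rt <- 400 + (30 + u_sp[trial_dat$subject]) * trial_dat$spatial +
  15 * trial_dat$object + 10 * trial_dat$attraction +
  u_int[trial_dat$subject] + rnorm(nrow(trial_dat), 0, 80)

# Correlated by-subject random effects for mean RT and the three contrasts
fit <- lmer(rt ~ spatial + object + attraction +
              (1 + spatial + object + attraction | subject), data = trial_dat)
VarCorr(fit)   # subject-level variances and correlations
```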

  18. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NARCIS (Netherlands)

    Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.

    2014-01-01

    A simple linear mixing model of heterogeneous soil-vegetation system and retrieval of component temperatures from directional remote sensing measurements by inverting this model is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR

  19. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...
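    A minimal sketch of the crossed (subject-by-item) random-effects structure that is typical in linguistics, with simulated data and hypothetical variable names:

```r
library(lme4)

set.seed(7)
d <- expand.grid(subject = factor(1:30), item = factor(1:24))
d$condition <- ifelse(as.integer(d$item) %% 2 == 0, 0.5, -0.5)   # item-level manipulation
d$rt <- 500 + 25 * d$condition + rnorm(30, 0, 40)[d$subject] +
  rnorm(24, 0, 25)[d$item] + rnorm(nrow(d), 0, 60)

# Crossed random intercepts for subjects and items
fit <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d)
summary(fit)
```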

  20. A Nonlinear Mixed Effects Model for the Prediction of Natural Gas Consumption by Individual Customers

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Konár, Ondřej; Pelikán, Emil; Malý, Marek

    2008-01-01

    Roč. 24, č. 4 (2008), s. 659-678 ISSN 0169-2070 R&D Projects: GA AV ČR 1ET400300513 Institutional research plan: CEZ:AV0Z10300504 Keywords : individual gas consumption * nonlinear mixed effects model * ARIMAX * ARX * generalized linear mixed model * conditional modeling Subject RIV: JE - Non-nuclear Energetics, Energy Consumption ; Use Impact factor: 1.685, year: 2008

  1. Model and measurements of linear mixing in thermal IR ground leaving radiance spectra

    Science.gov (United States)

    Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna

    2007-10-01

    Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.

  2. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  3. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to the data and makes the argument for moving to vine copula random effects models especially because of their richness, including reflection asymmetric tail dependence, and computational feasibility despite their three dimensionality.

  4. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    Science.gov (United States)

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for such purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.

  5. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well known NDVI images is presented. The results show the great potential of the unmixing techniques for application to coarse resolution data for global studies.

  6. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
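    A generic Jacobi-preconditioned conjugate gradient loop, of the kind underlying such solvers, is sketched below in base R; the paper's three-step, iteration-on-data refinements for very large mixed model equations are not reproduced here.

```r
# Preconditioned conjugate gradient for a symmetric positive-definite system A x = b
pcg <- function(A, b, tol = 1e-8, maxit = 1000) {
  x <- numeric(length(b))
  r <- b - A %*% x
  Minv <- 1 / diag(A)              # Jacobi (diagonal) preconditioner
  z <- Minv * r
  p <- z
  for (it in seq_len(maxit)) {
    Ap    <- A %*% p
    alpha <- drop(crossprod(r, z) / crossprod(p, Ap))
    x     <- x + alpha * p
    r_new <- r - alpha * Ap
    if (sqrt(sum(r_new^2)) < tol) break
    z_new <- Minv * r_new
    beta  <- drop(crossprod(r_new, z_new) / crossprod(r, z))
    p     <- z_new + beta * p
    r <- r_new; z <- z_new
  }
  drop(x)
}

# Small check on a random SPD system
set.seed(8)
M <- matrix(rnorm(25), 5); A <- crossprod(M) + diag(5); b <- rnorm(5)
max(abs(pcg(A, b) - solve(A, b)))
```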

  7. Para-mixed linear spaces

    Directory of Open Access Journals (Sweden)

    Crasmareanu Mircea

    2017-12-01

    Full Text Available We consider the paracomplex version of the notion of mixed linear spaces introduced by M. Jurchescu in [4] by replacing the complex unit i with the paracomplex unit j, j² = 1. The linear algebra of these spaces is studied with a special view towards their morphisms.

  8. Simultaneous inference for multilevel linear mixed models - with an application to a large-scale school meal study

    DEFF Research Database (Denmark)

    Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab

    2017-01-01

    of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...

  9. A Mixed Integer Linear Programming Model for the North Atlantic Aircraft Trajectory Planning

    OpenAIRE

    Sbihi , Mohammed; Rodionova , Olga; Delahaye , Daniel; Mongeau , Marcel

    2015-01-01

    International audience; This paper discusses the trajectory planning problem for flights in the North Atlantic oceanic airspace (NAT). We develop a mathematical optimization framework in view of better utilizing available capacity by re-routing aircraft. The model is constructed by discretizing the problem parameters. A mixed integer linear program (MILP) is proposed. Based on the MILP, a heuristic to solve real-size instances is also introduced.

  10. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
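    A hedged R sketch of a broken-line linear (linear-plateau) dose-response fitted as a nonlinear mixed model with a random block effect, in the spirit of the SAS NLMIXED analysis described above; the data, names, parameter values, and starting values are invented, and breakpoint models of this kind can be sensitive to starting values.

```r
library(nlme)

set.seed(9)
dat <- expand.grid(block = factor(1:8), dose = seq(14, 20, by = 1))  # e.g., SID Trp:Lys, %
u <- rnorm(8, 0, 0.01)
dat$gf <- 0.62 + u[dat$block] - 0.02 * pmax(16.5 - dat$dose, 0) +
  rnorm(nrow(dat), 0, 0.01)

# Ascending broken line: rises with dose up to the breakpoint bp, then plateaus
fit <- nlme(gf ~ plateau - slope * pmax(bp - dose, 0),
            fixed  = plateau + slope + bp ~ 1,
            random = plateau ~ 1 | block,
            start  = c(plateau = 0.6, slope = 0.02, bp = 16),
            data   = dat)
summary(fit)
```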

  11. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If the coefficient is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for the coefficient in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combining 13 studies on tuberculosis vaccine.
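    The basic quantity, the share of outcome variance attributable to the random effects in a fitted model, can be computed from lmer variance components as sketched below with the sleepstudy data that ships with lme4; this follows the general idea rather than the article's exact estimator.

```r
library(lme4)

fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
vc  <- as.data.frame(VarCorr(fit))
var_random   <- vc$vcov[vc$grp == "Subject"]
var_residual <- vc$vcov[vc$grp == "Residual"]
var_random / (var_random + var_residual)   # proportion of conditional variance from random effects
```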

  12. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  13. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  14. Non-linear Growth Models in Mplus and SAS

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper, we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134

  15. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    Science.gov (United States)

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO method after only 3 hours of incubation. One linear model, for cases where the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × Counts(3 h) + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts(3 h) + 2.95). If the number of plaques detected after 3 hours was below 4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
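
    Since the two linear fits are reported explicitly, they can be applied directly; the short R helper below (the function and argument names are ours) returns the predicted overnight count from a 3-hour plaque count.

      # Predict the final (overnight) coliphage count from a 3 h plaque count,
      # using the two linear fits reported above
      predict_pfu <- function(counts_3h) {
        ifelse(counts_3h > 26, 1.18 * counts_3h + 2.95,
        ifelse(counts_3h >= 4, 1.48 * counts_3h + 1.97,
               NA_real_))        # <4 PFU at 3 h: incubate the full (18 +/- 3) h instead
      }

      predict_pfu(c(2, 10, 40))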

  16. Advantage of make-to-stock strategy based on linear mixed-effect model: a comparison with regression, autoregressive, time series, and exponential smoothing models

    Directory of Open Access Journals (Sweden)

    Yu-Pin Liao

    2017-11-01

    Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environment complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting result from the LME model is compared to the commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance provided by the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence demands of all types can be predicted simultaneously using a single LME model. However, some approaches require splitting the data into different type categories, and then predicting the type demand by establishing a model for each type. This feature also demonstrates the practicability of the LME model in real business operations.
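
    A minimal sketch of the forecasting idea in R, with product type entering as a random effect so that the demands of all types are predicted from a single model; the data and variable names (demand, month, type) are invented for illustration.

      # One LME model for all product types: a common month trend as the fixed effect,
      # type-specific levels and trends as random effects
      library(lme4)

      set.seed(7)
      dat <- expand.grid(type = factor(paste0("T", 1:6)), month = 1:24)
      dat$demand <- 200 + 5 * dat$month +
                    rep(rnorm(6, sd = 30), times = 24) +               # type-specific level
                    rep(rnorm(6, sd = 2),  times = 24) * dat$month +   # type-specific trend
                    rnorm(nrow(dat), sd = 10)

      fit <- lmer(demand ~ month + (1 + month | type), data = dat)

      # Forecast month 25 for every type simultaneously
      newdat <- data.frame(type = factor(paste0("T", 1:6)), month = 25)
      predict(fit, newdata = newdat)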

  17. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  18. Warped linear mixed models for the genetic analysis of transformed phenotypes.

    Science.gov (United States)

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver

    2014-09-19

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
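
    The warped-LMM software learns the transformation jointly with the model; the common two-step practice it is meant to replace can be sketched in a few lines of R, choosing a Box-Cox exponent from the data and then fitting a simple mixed model (here a family random intercept stands in for the full genetic relatedness structure, and all names and data are hypothetical).

      # Choose a power transform for the phenotype, then fit a simple mixed model.
      # This is the classical two-step practice that warped LMMs automate and refine.
      library(MASS)    # boxcox()
      library(lme4)

      set.seed(3)
      n   <- 300
      dat <- data.frame(family = factor(rep(1:30, each = 10)),
                        snp    = rbinom(n, 2, 0.3))
      dat$phenotype <- exp(0.5 + 0.2 * dat$snp +
                           rep(rnorm(30, sd = 0.3), each = 10) + rnorm(n, sd = 0.4))

      bc     <- boxcox(lm(phenotype ~ snp, data = dat), plotit = FALSE)
      lambda <- bc$x[which.max(bc$y)]           # ML estimate of the Box-Cox exponent
      dat$y  <- if (abs(lambda) < 1e-8) log(dat$phenotype) else
                (dat$phenotype^lambda - 1) / lambda

      fit <- lmer(y ~ snp + (1 | family), data = dat)
      summary(fit)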

  19. Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning

    Science.gov (United States)

    Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana

    2018-01-01

    The production process of crude palm oil (CPO) can be defined as the milling of raw material, called fresh fruit bunch (FFB), into the end product, palm oil. The process usually runs through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have its own oil palm plantation; therefore the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations are most appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
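
    To give a flavour of such a formulation, the toy model below selects plantations (binary decisions) so that contracted FFB supply covers the mill's demand at minimum cost. It is solved with the R package lpSolve rather than the authors' neighborhood search, and every number in it is invented.

      # Toy plantation-selection MILP: minimise contracting cost subject to FFB demand
      library(lpSolve)

      supply <- c(120, 80, 150, 60, 100)   # tonnes of FFB offered by 5 plantations
      cost   <- c( 55, 40,  70, 25,  50)   # cost of contracting each plantation
      demand <- 250                        # tonnes required by the mill

      sol <- lp(direction    = "min",
                objective.in = cost,
                const.mat    = matrix(supply, nrow = 1),
                const.dir    = ">=",
                const.rhs    = demand,
                all.bin      = TRUE)       # x_j in {0, 1}: contract plantation j or not

      sol$objval      # minimum total cost
      sol$solution    # which plantations to contract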

  20. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    Science.gov (United States)

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environment variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma ( PPARG ) gene associated with diabetes.
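
    The stated equivalence between the BLUP of i.i.d. random SNP effects and the ridge estimator (with penalty equal to the residual-to-random-effect variance ratio) is easy to verify numerically; the R check below uses plain matrix algebra on simulated dosages and is independent of any GWAS software.

      # Numerical check: BLUP of random SNP effects == ridge regression estimate
      set.seed(11)
      n <- 200; p <- 10
      X <- matrix(rbinom(n * p, 2, 0.3), n, p)      # SNP dosages
      b <- rnorm(p, sd = 0.5)                       # true random effects, var sigma_b^2
      y <- X %*% b + rnorm(n, sd = 1)               # residual var sigma_e^2

      sigma_e2 <- 1; sigma_b2 <- 0.25
      lambda   <- sigma_e2 / sigma_b2               # implied ridge penalty

      ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))

      # BLUP from the mixed-model equations (intercept-free toy case)
      V    <- sigma_b2 * X %*% t(X) + sigma_e2 * diag(n)
      blup <- sigma_b2 * t(X) %*% solve(V, y)

      max(abs(ridge - blup))                        # ~0: the two estimators coincide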

  1. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    Science.gov (United States)

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew- t and skew-normal distributions (Part II). The proposed method is illustrated with an alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
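
    Leaving aside the Bayesian machinery and the skew-t errors, the basic two-part structure can be sketched with standard R mixed-model tools: a logistic mixed model for whether any positive value occurs, and a log-scale linear mixed model for the positive amounts. The sketch below fits the two parts separately (so the correlated random effects of the proposed model are not reproduced), and all variable names are hypothetical.

      # Two-part mixed model for semicontinuous data (simplified, uncorrelated parts)
      library(lme4)

      set.seed(5)
      dat <- expand.grid(id = factor(1:100), visit = 1:4)
      u   <- rnorm(100, sd = 1)
      p   <- plogis(-0.5 + u[dat$id])                 # probability of any drinking
      dat$drinks <- rbinom(nrow(dat), 1, p) *
                    exp(1 + 0.3 * u[dat$id] + rnorm(nrow(dat), sd = 0.5))

      # Part I: occurrence of a positive value
      dat$any <- as.integer(dat$drinks > 0)
      part1 <- glmer(any ~ visit + (1 | id), data = dat, family = binomial)

      # Part II: intensity, modelled on the log scale among positive values only
      pos   <- subset(dat, drinks > 0)
      part2 <- lmer(log(drinks) ~ visit + (1 | id), data = pos)

      summary(part1); summary(part2)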

  2. Efficient multiple-trait association and estimation of genetic correlation using the matrix-variate linear mixed model.

    Science.gov (United States)

    Furlotte, Nicholas A; Eskin, Eleazar

    2015-05-01

    Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.

  3. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data

    Energy Technology Data Exchange (ETDEWEB)

    Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.

    2016-07-01

    Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared throughout statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibrating. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility to determine the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)

  4. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Science.gov (United States)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  5. Translational mixed-effects PKPD modelling of recombinant human growth hormone - from hypophysectomized rat to patients

    DEFF Research Database (Denmark)

    Thorsted, A; Thygesen, P; Agersø, H

    2016-01-01

    BACKGROUND AND PURPOSE: We aimed to develop a mechanistic mixed-effects pharmacokinetic (PK)-pharmacodynamic (PD) (PKPD) model for recombinant human growth hormone (rhGH) in hypophysectomized rats and to predict the human PKPD relationship. EXPERIMENTAL APPROACH: A non-linear mixed-effects model...... was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs...... s.c. administration was over predicted. After correction of the human s.c. absorption model, the induction model for IGF-1 well described the human PKPD data. CONCLUSIONS: A translational mechanistic PKPD model for rhGH was successfully developed from experimental rat data. The model links...

  6. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer's Disease.

    Science.gov (United States)

    Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W

    2018-03-05

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.

  7. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    Science.gov (United States)

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially the presence of outliers and thick-tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  8. Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms

    International Nuclear Information System (INIS)

    Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.

    2008-01-01

    This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events.
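
    A much-simplified, non-spatial illustration of the modelling idea (not the authors' composite-likelihood fit) is a Poisson GLMM with a random effect for grid cell; genuinely spatial correlation would require an explicit spatial covariance structure on those random effects. All data and names below are simulated placeholders.

      # Poisson GLMM for storm outage counts per grid cell (simplified illustration)
      library(lme4)

      set.seed(9)
      cells <- factor(1:150)
      dat <- expand.grid(cell = cells, storm = factor(1:6))
      dat$gust    <- runif(nrow(dat), 20, 60)             # maximum gust wind speed (m/s)
      dat$devices <- rpois(nrow(dat), 40)                 # protective devices in the cell
      cell_re     <- rnorm(150, sd = 0.4)
      dat$outages <- rpois(nrow(dat),
                           exp(-3 + 0.05 * dat$gust + 0.02 * dat$devices +
                               cell_re[dat$cell]))

      fit <- glmer(outages ~ gust + devices + (1 | cell),
                   data = dat, family = poisson)
      summary(fit)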

  9. Deliberate practice predicts performance throughout time in adolescent chess players and dropouts: A linear mixed models analysis.

    NARCIS (Netherlands)

    de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.

    2008-01-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,

  10. Kriging with mixed effects models

    Directory of Open Access Journals (Sweden)

    Alessio Pollice

    2007-10-01

    Full Text Available In this paper the effectiveness of the use of mixed effects models for estimation and prediction purposes in spatial statistics for continuous data is reviewed in the classical and Bayesian frameworks. A case study on agricultural data is also provided.

  11. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
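
    A stripped-down version of the second-stage idea in R: fit one mixed model per outcome, extract each subject's empirical Bayes intercepts with ranef(), and then plot and correlate them. The joint MGLMM itself is not fitted here, and the data are simulated.

      # Second-stage association of empirical Bayes predictors from two separate mixed models
      library(lme4)

      set.seed(13)
      n_id <- 80
      dat  <- expand.grid(id = factor(1:n_id), week = 0:9)
      u1   <- rnorm(n_id); u2 <- 0.6 * u1 + rnorm(n_id, sd = 0.8)   # correlated latent effects
      dat$y_cont <- 5 + 0.2 * dat$week + u1[dat$id] + rnorm(nrow(dat))
      dat$y_bin  <- rbinom(nrow(dat), 1, plogis(-0.5 + u2[dat$id]))

      m1 <- lmer(y_cont ~ week + (1 | id), data = dat)
      m2 <- glmer(y_bin ~ 1 + (1 | id), data = dat, family = binomial)

      eb1 <- ranef(m1)$id[, "(Intercept)"]
      eb2 <- ranef(m2)$id[, "(Intercept)"]

      plot(eb1, eb2)        # visualize the latent association
      cor.test(eb1, eb2)    # naive second-stage test discussed in the abstract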

  12. Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
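
    A constrained least-squares unmixing step of this general kind can be written with the quadprog package, forcing the endmember fractions to be non-negative and to sum to one for each pixel. The endmember reflectances below are invented, so this is a schematic of the technique rather than the authors' processing chain.

      # Constrained least-squares spectral unmixing for one pixel (3 bands, 3 endmembers)
      library(quadprog)

      set.seed(2)
      E <- cbind(veg   = c(0.05, 0.45, 0.30),   # endmember reflectances per band (invented)
                 soil  = c(0.20, 0.30, 0.40),
                 shade = c(0.02, 0.03, 0.05))
      pixel <- as.vector(E %*% c(0.6, 0.3, 0.1) + rnorm(3, sd = 0.005))  # observed mixture

      Dmat <- crossprod(E)                    # quadratic term  E'E
      dvec <- drop(crossprod(E, pixel))       # linear term     E'y
      Amat <- cbind(rep(1, 3), diag(3))       # first column: sum-to-one; rest: f_j >= 0
      bvec <- c(1, 0, 0, 0)

      fractions <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution
      round(fractions, 3)                     # estimated vegetation / soil / shade fractions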

  13. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  14. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Science.gov (United States)

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs; linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANNs also had higher correlations and lower RMSE than the linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For the wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh

  15. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    Science.gov (United States)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
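
    The model family sketched here (a possibly periodic mean plus a temporally correlated random component) can be approximated with nlme::gls; the example below fits a 12-month harmonic mean and an exponential-in-time correlation to a simulated log wave-height series, and is not the authors' fitted model.

      # Harmonic mean + exponentially correlated residuals for monthly significant wave height
      library(nlme)

      set.seed(21)
      months <- 1:240                                        # 20 years of monthly data
      mu     <- 0.3 + 0.25 * sin(2 * pi * months / 12) + 0.10 * cos(2 * pi * months / 12)
      e      <- as.numeric(arima.sim(list(ar = 0.6), n = length(months), sd = 0.15))
      dat    <- data.frame(month = months, log_hs = mu + e)  # log significant wave height

      fit <- gls(log_hs ~ sin(2 * pi * month / 12) + cos(2 * pi * month / 12),
                 data = dat,
                 correlation = corExp(form = ~ month))
      summary(fit)

      # Synthetic series can then be generated from the fitted mean and correlation
      # structure, e.g. by simulating correlated noise with the estimated range and
      # adding it to fitted(fit).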

  16. A brief introduction to regression designs and mixed-effects modelling by a recent convert

    OpenAIRE

    Balling, Laura Winther

    2008-01-01

    This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable sele...

  17. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  18. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    Science.gov (United States)

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
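
    The proposed extension is straightforward to set up with lmerTest: a dummy variable for the single case, a within-subject condition factor, and their interaction, whose t-test (with Satterthwaite degrees of freedom) compares the patient's condition effect with that of the controls. The data and names below are simulated placeholders, not the authors' simulation design.

      # Comparing a single case's condition effect with that of a small control group
      library(lmerTest)   # lmer() with Satterthwaite degrees of freedom

      set.seed(8)
      controls <- expand.grid(subject = factor(1:18), condition = c("A", "B"), trial = 1:20)
      controls$case <- 0
      patient <- expand.grid(subject = factor("patient"), condition = c("A", "B"), trial = 1:20)
      patient$case <- 1
      dat <- rbind(controls, patient)

      # Controls: ~30 ms A-B difference; the patient: a larger, ~80 ms difference
      dat$rt <- 600 + ifelse(dat$condition == "B", 30, 0) +
                ifelse(dat$case == 1 & dat$condition == "B", 50, 0) +
                rnorm(nrow(dat), sd = 40)

      fit <- lmer(rt ~ case * condition + (1 | subject), data = dat)
      summary(fit)$coefficients["case:conditionB", ]   # test of the case-control difference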

  19. Extending existing structural identifiability analysis methods to mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    Science.gov (United States)

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Three novel approaches to structural identifiability analysis in mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provides a way of handling structural identifiability in mixed-effects models previously not

  2. Linear mixed models in sensometrics

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra

    quality of decision making in Danish as well as international food companies and other companies using the same methods. The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer...... an open-source software tool ConsumerCheck was developed in this project and now is available for everyone. will represent a major step forward when concerns this important problem in modern consumer driven product development. Standard statistical software packages can be used for some of the purposes......Today’s companies and researchers gather large amounts of data of different kind. In consumer studies the objective is the collection of the data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products...

  3. Analysis of 24-Hour Ambulatory Blood Pressure Monitoring Data using Orthonormal Polynomials in the Linear Mixed Model

    OpenAIRE

    Edwards, Lloyd J.; Simpson, Sean L.

    2010-01-01

    The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, developing alternative data analysis approaches are important. The linear mixed mo...

  4. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  5. Visualizing multifactorial and multi-attribute effect sizes in linear mixed models with a view towards sensometrics

    DEFF Research Database (Denmark)

    and straightforward idea is to interpret effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen...... on a multifactorial sensory profile data set and compared to actual d-prime calculations based on ordinal regression modelling through the ordinal package. A generic ``plug-in'' implementation of the method is given in the SensMixed package, which again depends on the lmerTest package. We discuss and clarify the bias...

  6. Spillways Scheduling for Flood Control of Three Gorges Reservoir Using Mixed Integer Linear Programming Model

    Directory of Open Access Journals (Sweden)

    Maoyuan Feng

    2014-01-01

    Full Text Available This study proposes a mixed integer linear programming (MILP) model to optimize the spillways scheduling for reservoir flood control. Unlike the conventional reservoir operation model, the proposed MILP model specifies the spillways status (including the number of spillways to be open and the degree to which each spillway is opened) instead of the reservoir release, since the release is actually controlled by using the spillways. The piecewise linear approximation is used to formulate the relationship between the reservoir storage and water release for a spillway, which should be open/closed with a status depicted by a binary variable. The control order and symmetry rules of spillways are described and incorporated into the constraints for meeting the practical demand. Thus, an MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution for the proposed MILP model. China's Three Gorges Reservoir, whose spillways are of five types with a total number of 80, is selected as the case study. It is shown that the proposed model decreases the flood risk compared with the conventional operation and makes the operation more practical by specifying the spillways status directly.

  7. A mixed integer linear program for an integrated fishery | Hasan ...

    African Journals Online (AJOL)

    ... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...

  8. A brief introduction to regression designs and mixed-effects modelling by a recent convert

    DEFF Research Database (Denmark)

    Balling, Laura Winther

    2008-01-01

    This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic...... tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable selection are discussed. The advantages of these techniques are exemplified in an analysis of a word...

  9. Modelling the multilevel structure and mixed effects of the factors influencing the energy consumption of electric vehicles

    International Nuclear Information System (INIS)

    Liu, Kai; Wang, Jiangbo; Yamamoto, Toshiyuki; Morikawa, Takayuki

    2016-01-01

    Highlights: • The impacts of driving heterogeneity on EVs’ energy efficiency are examined. • Several multilevel mixed-effects regression models are proposed and compared. • The most reasonable nested structure is extracted from the long term GPS data. • Proposed model improves the energy estimation accuracy by 7.5%. - Abstract: To improve the accuracy of estimation of the energy consumption of electric vehicles (EVs) and to enable the alleviation of range anxiety through the introduction of EV charging stations at suitable locations for the near future, multilevel mixed-effects linear regression models were used in this study to estimate the actual energy efficiency of EVs. The impacts of the heterogeneity in driving behaviour among various road environments and traffic conditions on EV energy efficiency were extracted from long-term daily trip-based energy consumption data, which were collected over 12 months from 68 in-use EVs in Aichi Prefecture in Japan. Considering the variations in energy efficiency associated with different types of EV ownership, different external environments, and different driving habits, a two-level random intercept model, three two-level mixed-effects models, and two three-level mixed-effects models were developed and compared. The most reasonable nesting structure was determined by comparing the models, which were designed with different nesting structures and different random variance component specifications, thereby revealing the potential correlations and non-constant variability of the energy consumption per kilometre (ECPK) and improving the estimation accuracy by 7.5%.

  10. Effect Displays in R for Generalised Linear Models

    Directory of Open Access Journals (Sweden)

    John Fox

    2003-07-01

    Full Text Available This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or to equal proportions in its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.
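
    The approach described grew into the R effects package; a minimal usage example on a built-in dataset (not one of the paper's own examples) is:

      # Tabular and graphical effect displays for a generalised linear model
      library(effects)

      # Toy Poisson GLM with an interaction, fitted to the built-in warpbreaks data
      m <- glm(breaks ~ wool * tension, family = poisson, data = warpbreaks)

      eff <- allEffects(m)   # fitted values for the high-order wool:tension term
      eff                    # tabular display
      plot(eff)              # graphical display on the response (count) scale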

  11. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interaction between continuous dynamics and discrete event systems, and can therefore be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.

  12. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    Science.gov (United States)

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
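
    A schematic of the kind of doubly multilevel (readers crossed with words) binary GLMM described above, fitted with lme4; the predictors and data are invented placeholders for the word- and instruction-level covariates in the study.

      # Crossed random effects for readers and words, binary word-knowledge outcome
      library(lme4)

      set.seed(4)
      dat <- expand.grid(student = factor(1:60), word = factor(1:40))
      dat$instruction <- ifelse(as.integer(dat$student) <= 30, "taught", "control")
      dat$word_freq   <- rnorm(40)[dat$word]              # a word-level characteristic
      u_s <- rnorm(60, sd = 1); u_w <- rnorm(40, sd = 0.8)
      eta <- -0.3 + 0.8 * (dat$instruction == "taught") + 0.5 * dat$word_freq +
             u_s[dat$student] + u_w[dat$word]
      dat$known <- rbinom(nrow(dat), 1, plogis(eta))

      fit <- glmer(known ~ instruction + word_freq + (1 | student) + (1 | word),
                   data = dat, family = binomial)
      summary(fit)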

  13. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    Science.gov (United States)

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
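
    Scheme (i), effective-sample-size weighting of Z-scores, amounts to a few lines of arithmetic. The sketch below combines made-up per-study Z-scores, using the common GWAS convention that the effective sample size of a case-control study is 4/(1/Ncases + 1/Ncontrols); treat that convention as an assumption rather than part of the paper.

      # Effective-sample-size weighted Z-score meta-analysis for one variant
      z_meta <- function(z, n_cases, n_controls) {
        n_eff <- 4 / (1 / n_cases + 1 / n_controls)   # common definition for case-control GWAS
        w     <- sqrt(n_eff)
        sum(w * z) / sqrt(sum(w^2))
      }

      # Three hypothetical studies with different case-control imbalance
      z          <- c(2.1, 1.4, 2.8)
      n_cases    <- c(900, 250, 4000)
      n_controls <- c(1100, 4800, 4200)

      z_combined <- z_meta(z, n_cases, n_controls)
      p_combined <- 2 * pnorm(-abs(z_combined))
      c(z = z_combined, p = p_combined)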

  14. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    Directory of Open Access Journals (Sweden)

    Christophe Coupé

    2018-04-01

    Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships
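
    A minimal GAMLSS-style illustration of modelling more than the mean, assuming the R gamlss package and an invented data set with a count response and one continuous predictor: both the mean and the dispersion of a negative binomial response get their own smooth terms.

      # GAMLSS: negative binomial response with smooth terms for both mu and sigma
      library(gamlss)

      set.seed(6)
      d <- data.frame(speakers = runif(400, 2, 9))                 # e.g. log10 population
      mu_true    <- exp(1.5 + 0.3 * d$speakers)
      sigma_true <- exp(-2 + 0.2 * d$speakers)                     # dispersion grows too
      d$inventory <- rnbinom(400, mu = mu_true, size = 1 / sigma_true)

      fit <- gamlss(inventory ~ pb(speakers),                      # smooth for the mean
                    sigma.formula = ~ pb(speakers),                # smooth for the dispersion
                    family = NBI, data = d)
      summary(fit)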

  15. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    Science.gov (United States)

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we

  16. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    Science.gov (United States)

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    Science.gov (United States)

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
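    For readers new to these models, the core fit is a one-liner in most statistical environments. Below is a minimal, hypothetical sketch in Python with statsmodels (the column names 'mass', 'treatment' and the grouping factor 'site' are invented for illustration; this is not code from the cited paper):

    ```python
    # Minimal random-intercept LMM sketch (hypothetical data and column names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("field_data.csv")  # hypothetical file

    # Fixed effect of treatment, random intercept for each site.
    model = smf.mixedlm("mass ~ treatment", data=df, groups=df["site"])
    fit = model.fit(method="lbfgs")
    print(fit.summary())

    # Comparing the random-intercept variance with the residual variance gives
    # a crude sense of how much variation the grouping structure absorbs.
    print(fit.cov_re, fit.scale)
    ```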

  18. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we applied linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed a better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (lower values for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). Consequently, the endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better suited to FE and CS. © 2017 Blackwell Verlag GmbH.
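    As a generic illustration of the linear-versus-count-model comparison described above (not the authors' genetic models, and with invented column names), one can fit a Gaussian linear model and a Poisson GLM to the same count endpoint and compare information criteria:

    ```python
    # Gaussian vs Poisson fit to a count endpoint (hypothetical data).
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("fertility.csv")  # hypothetical file

    gaussian = smf.ols("failed_oestrus ~ herd_year + parity", data=df).fit()
    poisson = smf.glm("failed_oestrus ~ herd_year + parity", data=df,
                      family=sm.families.Poisson()).fit()

    # AIC is one simple way to contrast the linear and count specifications.
    print("Gaussian AIC:", gaussian.aic)
    print("Poisson  AIC:", poisson.aic)
    ```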

  19. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-01-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for some decision variable. We use this fact to pose the problem of detecting spoofing cyber-attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber-attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © 2013 IEEE.

  20. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-09-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data generated by multiple sensors of different types, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for a specific decision variable. We use this fact to pose the problem of detecting spoofing cyber attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © American Institute of Mathematical Sciences.
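    To make the notion of a mixed integer linear feasibility problem concrete, here is a toy sketch in Python with SciPy (version 1.9 or later); the constraint matrix is purely illustrative and has nothing to do with the LWR-derived inequalities of the paper:

    ```python
    # Toy mixed integer linear feasibility check: zero objective, just constraints.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = np.zeros(3)                      # variables: x0, x1 continuous, z binary
    A = np.array([[1.0, 1.0, 5.0],       #  x0 +  x1 + 5 z <= 10
                  [-1.0, 2.0, -3.0]])    # -x0 + 2 x1 - 3 z <= 4
    constraints = LinearConstraint(A, ub=[10.0, 4.0])
    integrality = np.array([0, 0, 1])    # last variable is integer-valued
    bounds = Bounds(lb=[0, 0, 0], ub=[np.inf, np.inf, 1])

    res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
    print("feasible:", res.success, "solution:", res.x)
    ```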

  1. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    Science.gov (United States)

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups were prepared: external abutments (Ext group), internal tapered one-piece-type abutments (Int-1 group), and internal tapered two-piece-type abutments (Int-2 group). Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.

  2. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    Science.gov (United States)

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

    Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software packages. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.
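    For orientation, the kind of model that custom covariance matrices make possible can be written in the familiar form below (generic notation, not lme4qtl's interface; K stands for a known kinship or relationship matrix):

    ```latex
    \begin{align*}
      \mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \boldsymbol{\varepsilon},\\
      \mathbf{u} &\sim \mathcal{N}\!\left(\mathbf{0},\, \sigma^2_g \mathbf{K}\right), \qquad
      \boldsymbol{\varepsilon} \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma^2_e \mathbf{I}\right).
    \end{align*}
    ```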

  3. A multiple objective mixed integer linear programming model for power generation expansion planning

    Energy Technology Data Exchange (ETDEWEB)

    Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)

    2004-03-01

    Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature of, and the trade-offs among, the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values, which usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted for in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process. (Author)

  4. MetabR: an R script for linear model analysis of quantitative metabolomic data

    Directory of Open Access Journals (Sweden)

    Ernest Ben

    2012-10-01

    Full Text Available Abstract. Background: Metabolomics is an emerging high-throughput approach to systems biology, but data analysis tools are lacking compared to other systems level disciplines such as transcriptomics and proteomics. Metabolomic data analysis requires a normalization step to remove systematic effects of confounding variables on metabolite measurements. Current tools may not correctly normalize every metabolite when the relationships between each metabolite quantity and fixed-effect confounding variables are different, or for the effects of random-effect confounding variables. Linear mixed models, an established methodology in the microarray literature, offer a standardized and flexible approach for removing the effects of fixed- and random-effect confounding variables from metabolomic data. Findings: Here we present a simple menu-driven program, “MetabR”, designed to aid researchers with no programming background in statistical analysis of metabolomic data. Written in the open-source statistical programming language R, MetabR implements linear mixed models to normalize metabolomic data and analysis of variance (ANOVA) to test treatment differences. MetabR exports normalized data, checks statistical model assumptions, identifies differentially abundant metabolites, and produces output files to help with data interpretation. Example data are provided to illustrate normalization for common confounding variables and to demonstrate the utility of the MetabR program. Conclusions: We developed MetabR as a simple and user-friendly tool for implementing linear mixed model-based normalization and statistical analysis of targeted metabolomic data, which helps to fill the lack of available data analysis tools in this field. The program, user guide, example data, and any future news or updates related to the program may be found at http://metabr.r-forge.r-project.org/.
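    The normalization idea itself can be sketched in a few lines: fit a mixed model with the nuisance factor as a random effect and subtract its estimated contribution. The Python sketch below is a generic illustration with invented column names, not MetabR's R implementation:

    ```python
    # Mixed-model normalization of one metabolite (hypothetical columns:
    # 'abundance' measurement, fixed 'treatment', random 'batch').
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("metabolite.csv")  # hypothetical file

    fit = smf.mixedlm("abundance ~ treatment", data=df, groups=df["batch"]).fit()

    # Subtract each sample's estimated batch effect (BLUP) to normalize.
    blups = {group: re.iloc[0] for group, re in fit.random_effects.items()}
    df["normalized"] = df["abundance"] - df["batch"].map(blups)
    print(df[["abundance", "normalized"]].head())
    ```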

  5. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  6. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2011-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  7. Phase mixing of transverse oscillations in the linear and nonlinear regimes for IFR relativistic electron beam propagation

    International Nuclear Information System (INIS)

    Shokair, I.R.

    1991-01-01

    Phase mixing of transverse oscillations changes the nature of the ion hose instability from an absolute to a convective instability. The stronger the phase mixing, the faster an electron beam reaches equilibrium with the guiding ion channel. This is important for long distance propagation of relativistic electron beams, where it is desired that transverse oscillations phase mix within a few betatron wavelengths of injection and that an equilibrium subsequently be reached with no further beam emittance growth. In the linear regime phase mixing is well understood and results in asymptotic decay of transverse oscillations as 1/Z² for a Gaussian beam and channel system, Z being the axial distance measured in betatron wavelengths. In the nonlinear regime (which is the likely mode of propagation for long pulse beams), results of the spread mass model indicate that phase mixing is considerably weaker than in the linear regime. In this paper we consider this problem of phase mixing in the nonlinear regime. Results of the spread mass model will be shown along with a simple analysis of phase mixing for multiple oscillator models. Particle simulations also indicate that phase mixing is weaker in the nonlinear regime than in the linear regime. These results will also be shown. 3 refs., 4 figs

  8. A novel methodology for energy performance benchmarking of buildings by means of Linear Mixed Effect Model: The case of space and DHW heating of out-patient Healthcare Centres

    International Nuclear Information System (INIS)

    Capozzoli, Alfonso; Piscitelli, Marco Savino; Neri, Francesco; Grassi, Daniele; Serale, Gianluca

    2016-01-01

    Highlights: • 100 Healthcare Centres were analyzed to assess energy consumption reference values. • A novel robust methodology for the energy benchmarking process was proposed. • A Linear Mixed Effect estimation Model was used to treat heterogeneous datasets. • A nondeterministic approach was adopted to consider the uncertainty in the process. • The methodology was developed to be upgradable and generalizable to other datasets. - Abstract: The current EU energy efficiency directive 2012/27/EU defines the existing building stock as one of the most promising sectors for achieving energy savings. Robust methodologies aimed at quantifying the potential reduction of energy consumption for large building stocks need to be developed. To this purpose, a benchmarking analysis is necessary in order to support public planners in determining how well a building is performing, in setting credible targets for improving performance or in detecting abnormal energy consumption. In the present work, a novel methodology is proposed to perform a benchmarking analysis particularly suitable for heterogeneous samples of buildings. The methodology is based on the estimation of a statistical model for energy consumption – the Linear Mixed Effects Model – so as to account for both the fixed effects shared by all individuals within a dataset and the random effects related to particular groups/classes of individuals in the population. The groups of individuals within the population have been classified by resorting to a supervised learning technique. Against this backdrop, a Monte Carlo simulation is worked out to compute the frequency distribution of annual energy consumption and identify a reference value for each group/class of buildings. The benchmarking analysis was tested on a case study of 100 out-patient Healthcare Centres in Northern Italy, finally resulting in 12 different frequency distributions for space and Domestic Hot Water heating energy consumption, one for

  9. Linear Optimization Techniques for Product-Mix of Paints Production in Nigeria

    Directory of Open Access Journals (Sweden)

    Sulaimon Olanrewaju Adebiyi

    2014-02-01

    Full Text Available Many paint producers in Nigeria do not lend themselves to flexible production processes, which are important for managing the use of resources for effective optimal production. These goals can be achieved through the application of optimization models in their resource allocation and utilisation. This research focuses on linear optimization for achieving product-mix optimization, in terms of identifying the products and the right quantities, in paint production in Nigeria for better profit and optimum firm performance. The computational experiments in this research contain data and information on the unit item costs, unit contribution margins, maximum resource capacities, individual products' absorption rates and other constraints that are particular to each of the five products produced in the company employed as a case study. In the data analysis, a linear programming model was employed with the aid of the LINDO 11 software. The results showed that only two out of the five products under consideration are profitable. They also revealed the extent to which the company needs to reduce the costs incurred on the three other products before making them profitable for production.
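    The underlying product-mix formulation is a textbook linear program. A toy version with made-up numbers (not the company's data, and solved with SciPy rather than LINDO) looks like this:

    ```python
    # Illustrative product-mix LP: maximize total contribution margin subject
    # to two shared resource constraints (all numbers are invented).
    import numpy as np
    from scipy.optimize import linprog

    margin = np.array([120.0, 90.0, 150.0])    # unit contribution per product
    c = -margin                                # linprog minimizes, so negate

    A_ub = np.array([[2.0, 1.0, 3.0],          # machine hours per unit
                     [4.0, 3.0, 2.0]])         # litres of base paint per unit
    b_ub = np.array([100.0, 180.0])            # available capacity

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
    print("optimal mix:", res.x, "profit:", -res.fun)
    ```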

  10. A Mixed Integer Linear Programming Model for the Design of Remanufacturing Closed–loop Supply Chain Network

    Directory of Open Access Journals (Sweden)

    Mbarek Elbounjimi

    2015-11-01

    Full Text Available Closed-loop supply chain network design is a critical issue due to its impact on both the economic and environmental performance of the supply chain. In this paper, we address the problem of designing a multi-echelon, multi-product and capacitated closed-loop supply chain network. First, a mixed-integer linear programming formulation is developed to maximize the total profit. The main contribution of the proposed model is addressing two economic viability issues of closed-loop supply chains. The first issue is ensuring that retailers collect a sufficient quantity of end-of-life products in return for an acquisition price. The second issue is exploiting the benefits of colocating forward facilities and reverse facilities. The presented model is solved with LINGO for some test problems. Computational results and a sensitivity analysis are presented to show the performance of the proposed model.

  11. Modeling Dynamic Effects of the Marketing Mix on Market Shares

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2003-01-01

    To comprehend the competitive structure of a market, it is important to understand the short-run and long-run effects of the marketing mix on market shares. A useful model to link market shares with marketing-mix variables, like price and promotion, is the market share attraction model.

  12. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    Science.gov (United States)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper, i.e. the parameter estimation procedure, a simulation, and an implementation of the model for real data. In the parameter estimation procedure, the concepts of threshold, nested random effects, and the computational algorithm are described. The simulation data are built for 3 conditions to assess the effect of different parameter values of the random effect distributions. The last task is the implementation of the model for data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan) nested in the district, and districts (kabupaten) are nested in the province. For the simulation results, ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) are used. They show that the province parameters have the highest bias, but a more stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, as the result of the model implementation for the data, only the number of farmer families and the number of medical personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  13. The salinity effect in a mixed layer ocean model

    Science.gov (United States)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  14. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Science.gov (United States)

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS AND METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) ...). Model comparison results showed strong evidence of STG as the input centre. The winning model was preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with an expected posterior probability of r = 0.7830 and an exceedance probability of ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p ...). Model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with

  15. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

    International Nuclear Information System (INIS)

    Griffond, J; Soulard, O; Souffland, D

    2010-01-01

    To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

  16. An Introduction to the Use of Linear Models with Correlated Data

    Directory of Open Access Journals (Sweden)

    Benoît Laplante

    2001-12-01

    conventional methods for estimating the variances of these estimates may yield biased results. These two problems are different, but they are related. This paper provides an introduction to the problems caused by correlated data and to possible solutions to these problems. First, we present the two problems and try to specify the relations between the two as clearly as possible. Second, we provide a critical presentation of random effects, mixed effects and hierarchical models that would help researchers to see their relevance in other kinds of linear models, particularly the so-called measurement models.

  17. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    Science.gov (United States)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
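    The GSM construction referred to above is commonly the standardized cross-product of the SNP matrix. A minimal sketch follows (generic formula, not the paper's SNP-selection procedure; random genotypes stand in for real data):

    ```python
    # Genetic similarity matrix K = Z Z' / m from standardized allele counts.
    import numpy as np

    def genetic_similarity(geno):
        # geno: (n_individuals, m_snps) array of 0/1/2 allele counts.
        # Monomorphic SNPs should be removed beforehand to avoid division by zero.
        p = geno.mean(axis=0) / 2.0                        # allele frequencies
        z = (geno - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))
        return z @ z.T / geno.shape[1]

    geno = np.random.default_rng(0).integers(0, 3, size=(50, 1000))
    K = genetic_similarity(geno)
    print(K.shape)
    ```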

  18. Design and analysis of Q-RT-PCR assays for haematological malignancies using mixed effects models

    DEFF Research Database (Denmark)

    Bøgsted, Martin; Mandrup, Charlotte; Petersen, Anders

    2009-01-01

    The recent WHO classification of haematological malignancies includes detection of genetic abnormalities with prognostic significance. Consequently, an increasing number of specific real-time quantitative reverse transcription polymerase chain reaction (Q-RT-PCR) based assays are in clinical research use and need quality control for accuracy and precision. Especially the identification of experimental variations and the statistical analysis has recently created discussion. The standard analytical technique is to use the Delta-Delta-Ct method. Although this method accounts for sample specific ... developed based on a linear mixed effects model for factorial designs. The model consists of an analysis of variance where the variation of each fixed effect of interest and identified experimental and biological nuisance variations are split. Hereby it accounts for varying efficiency, inhomogeneous ...
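    For reference, the Delta-Delta-Ct calculation mentioned above is the plain textbook formula (this is the conventional method the authors improve upon, not their mixed-model analysis):

    ```python
    # Fold change by the Delta-Delta-Ct (Livak) method, assuming ~100% efficiency.
    def fold_change(ct_target_treated, ct_ref_treated,
                    ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2.0 ** (-dd_ct)

    print(fold_change(24.1, 18.0, 27.3, 18.2))  # hypothetical Ct values
    ```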

  19. Local Genealogies in a Linear Mixed Model for Genome-wide Association Mapping in Complex Pedigreed Populations

    DEFF Research Database (Denmark)

    Sahana, Goutam; Mailund, Thomas; Lund, Mogens Sandø

    2011-01-01

    Introduction: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called ‘GENMIX (genealogy based mixed model)’, which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. Subjects and Methods: We validated ...

  20. Mixed models approaches for joint modeling of different types of responses.

    Science.gov (United States)

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably, and that the resulting models allow researchers to answer questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.

  1. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  2. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    Science.gov (United States)

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with this hierarchical structure data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p ...). Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions. Creative Commons Attribution License

  3. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Full Text Available Resource selection functions (RSF are often developed using satellite (ARGOS or Global Positioning System (GPS telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM and generalized estimating equations (GEE for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
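    A minimal GEE fit for clustered binary use/availability data might look like the following Python sketch (hypothetical column names; the cited tutorial works in R, so this is only a generic illustration of the approach):

    ```python
    # GEE with an exchangeable working correlation for telemetry-style data.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("telemetry.csv")  # hypothetical file

    gee = smf.gee("used ~ cover + slope", groups="animal_id", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable())
    fit = gee.fit()
    print(fit.summary())  # reports robust (empirical) standard errors
    ```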

  4. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  5. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    Science.gov (United States)

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although this violates the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
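    The extra Wiener term that distinguishes an SDE from an ODE is easy to visualize with an Euler-Maruyama simulation. The sketch below uses an arbitrary one-compartment example, dA = -k*A dt + sigma dW, and is unrelated to the PSM package's estimation machinery:

    ```python
    # Euler-Maruyama simulation of dA = -k*A dt + sigma dW (arbitrary parameters).
    import numpy as np

    rng = np.random.default_rng(1)
    k, sigma, dt, n = 0.3, 0.5, 0.05, 200
    a = np.empty(n)
    a[0] = 10.0                              # initial amount
    for i in range(1, n):
        dw = rng.normal(0.0, np.sqrt(dt))    # Wiener increment
        a[i] = a[i - 1] - k * a[i - 1] * dt + sigma * dw

    print(a[-1])
    ```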

  6. A novel mixed-synchronization phenomenon in coupled Chua's circuits via non-fragile linear control

    International Nuclear Information System (INIS)

    Wang Jun-Wei; Ma Qing-Hua; Zeng Li

    2011-01-01

    Dynamical variables of coupled nonlinear oscillators can exhibit different synchronization patterns depending on the designed coupling scheme. In this paper, a non-fragile linear feedback control strategy with multiplicative controller gain uncertainties is proposed for realizing the mixed-synchronization of Chua's circuits connected in a drive-response configuration. In particular, in the mixed-synchronization regime, different state variables of the response system can evolve into complete synchronization, anti-synchronization and even amplitude death simultaneously with the drive variables for an appropriate choice of scaling matrix. Using Lyapunov stability theory, we derive some sufficient criteria for achieving global mixed-synchronization. It is shown that the desired non-fragile state feedback controller can be constructed by solving a set of linear matrix inequalities (LMIs). Numerical simulations are also provided to demonstrate the effectiveness of the proposed control approach. (general)

  7. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    Science.gov (United States)

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R², which we called R²GLMM, for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of the variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension with worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environment. © 2017 The Author(s).
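    The structure of the two R² variants discussed above can be summarized as follows (σ²_f is the variance explained by the fixed effects, σ²_l the variance of random effect l, and σ²_ε the residual term, which for non-Gaussian GLMMs is replaced by the sum of the additive dispersion and the distribution-specific variance, the latter obtainable via the delta method):

    ```latex
    \begin{align*}
      R^2_{\mathrm{marginal}}    &= \frac{\sigma^2_f}{\sigma^2_f + \sum_{l}\sigma^2_l + \sigma^2_\varepsilon},\\[4pt]
      R^2_{\mathrm{conditional}} &= \frac{\sigma^2_f + \sum_{l}\sigma^2_l}{\sigma^2_f + \sum_{l}\sigma^2_l + \sigma^2_\varepsilon}.
    \end{align*}
    ```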

  8. Modelling non-linear effects of dark energy

    Science.gov (United States)

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at 1-loop level against N-body results for four non-standard equations of state as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift space spectrum. We find perturbation theory does very well in capturing non-linear effects coming from dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale dependent signatures in the range of scales considered. Non-standard equations of state also give scale dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. This data is given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets and a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered even for very large values of ξ both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w=‑1.1 case.

  9. Linear models for joint association and linkage QTL mapping

    Directory of Open Access Journals (Sweden)

    Fernando Rohan L

    2009-09-01

    Full Text Available Abstract. Background: Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single marker, complex in practice, or well fit to a particular family structure. Results: We herein present linear model theory to come up with additive effects of the QTL alleles in any member of a general pedigree, conditional on observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy to implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion: The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.

  10. Mixed-Integer-Linear-Programming-Based Energy Management System for Hybrid PV-Wind-Battery Microgrids

    DEFF Research Database (Denmark)

    Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises

    2017-01-01

    -side strategy, formulated as a general mixed-integer linear program that takes into account two stages for proper charging of the storage units. This model is treated as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour-ahead forecast data...

  11. A property of assignment type mixed integer linear programming problems

    NARCIS (Netherlands)

    Benders, J.F.; van Nunen, J.A.E.E.

    1982-01-01

    In this paper we will prove that rather tight upper bounds can be given for the number of non-unique assignments that are achieved after solving the linear programming relaxation of some types of mixed integer linear assignment problems. Since in these cases the number of split assignments is

  12. Longitudinal mixed-effects models for latent cognitive function

    NARCIS (Netherlands)

    van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela

    2015-01-01

    A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response

  13. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    Science.gov (United States)

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
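    The stated model class dx/dt = A(t)x + B(t) with a spline-based B(t) can be simulated directly; the toy sketch below only illustrates the forward model (with invented values), not the penalized-spline estimation in a mixed-effects setting:

    ```python
    # Forward simulation of dx/dt = A(t) x + B(t) with a cubic-spline input term.
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.integrate import solve_ivp

    knots = np.linspace(0.0, 10.0, 6)
    B = CubicSpline(knots, [0.0, 1.2, 0.8, 0.3, 0.1, 0.0])  # flexible input

    def rhs(t, x):
        A = -0.5                      # constant "A(t)" for simplicity
        return A * x + B(t)

    sol = solve_ivp(rhs, (0.0, 10.0), [0.0], dense_output=True)
    print(sol.y[0, -1])
    ```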

  14. Modeling and simulation of protein elution in linear pH and salt gradients on weak, strong and mixed cation exchange resins applying an extended Donnan ion exchange model.

    Science.gov (United States)

    Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian

    2018-04-13

    Process development and characterization based on mathematical modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied for the modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments, applying the extended DIX model. The modelling results are compared with the results obtained using a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins, while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    Science.gov (United States)

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random-effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; in the computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; in measuring the clinical importance of clinical, demographic, environmental or genetic covariates; in the study of drug-drug interactions in clinical settings; in the implementation of computational tools for web-site-based evidence farming; in the design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.

  16. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions......We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  17. Entropy correlation and entanglement for mixed states in an algebraic model

    International Nuclear Information System (INIS)

    Hou Xiwen; Chen Jinghua; Wan Mingfang; Ma Zhongqi

    2009-01-01

    As an alternative with potential connections to actual experiments, other than the systems more usually used in the field of entanglement, the dynamics of entropy correlation and entanglement between two anharmonic vibrations in a well-established algebraic model, with parameters extracted from fitting to highly excited spectral experimental results for the molecules H2O and SO2, is studied in terms of the linear entropy and two negativities for various initial states that are respectively taken to be the mixed density matrices of thermal states and squeezed states on each mode. For a suitable parameter in the initial states the entropies in the two stretches can show positive correlation or anti-correlation. The linear entropy of each mode is positively correlated with the negativities only for the mixed-squeezed states with small parameters in H2O, while they do not display any correlation in other cases. For the mixed-squeezed states the negativities exhibit dominantly positive correlations with an effective mutual entropy. The differences in the linear entropy and the negativities between H2O and SO2 are discussed as well. These are useful for molecular quantum computing and quantum information processing.

  18. Deliberate practice predicts performance over time in adolescent chess players and drop-outs: a linear mixed models analysis.

    Science.gov (United States)

    de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G

    2008-11-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
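
    For illustration, a longitudinal analysis of this kind could be set up in R with lme4 roughly as follows (the data set and variable names are hypothetical, not the study's):

    ```r
    # Illustrative only: chess rating modelled as a function of accumulated
    # deliberate practice, with random intercepts and slopes over age per player.
    library(lme4)

    fit <- lmer(rating ~ age + log(practice_hours) + group +   # group: persistent vs drop-out
                  (age | player),
                data = chess)
    summary(fit)

    # Does deliberate practice improve fit?  anova() refits both models with ML
    # before performing the likelihood-ratio test.
    fit_nopractice <- update(fit, . ~ . - log(practice_hours))
    anova(fit_nopractice, fit)
    ```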

  19. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress

  20. Light Scattering Study of Mixed Micelles Made from Elastin-Like Polypeptide Linear Chains and Trimers

    Science.gov (United States)

    Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril

    Temperature sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear chains of ELP (H40L) which lacks the foldon domain. The mixed system is soluble at room temperature and at a transition temperature (Tt) will form swollen micelles with the hydrophobic linear chains hidden inside. This system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. The mixed micelle in equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm at the concentration window from 25:25 to 60:60 uM (1:1 ratio). At a fixed 50 uM concentration of the H20F, varying H40L concentration from 5 to 80 uM resulted in a linear growth in the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in VH signal. A possible simple model explaining the growth of the swollen micelles is considered. Lastly, the VH signal can indicate elongation in the geometry of the particle or could possibly be a result from anisotropic properties from the core of the micelle. SLS was used to study the molecular weight, and the radius of gyration of the micelle to help identify the structure and morphology of mixed micelles and the tangible cause of the VH signal.

  1. Ordinal Log-Linear Models for Contingency Tables

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the association for categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provide detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: linear-by-linear association, row effect model, column effect model and RC Goodman’s model. Algorithm, advantages and disadvantages will be discussed in the paper. An empirical analysis will be conducted with the use of R.
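
    As a minimal worked example of the linear-by-linear association model (with arbitrary illustrative counts, not data from the paper), the model can be fitted in base R as a Poisson log-linear model whose association term is the product of the ordinal scores:

    ```r
    # Linear-by-linear (uniform) association model for an ordinal 4 x 4 table.
    # 'income' and 'satisfaction' are hypothetical ordinal variables with integer scores.
    tab <- expand.grid(income = 1:4, satisfaction = 1:4)
    tab$count <- c(20, 24, 80, 82, 22, 38, 104, 125,
                   13, 28, 81, 113, 7, 18, 54, 92)   # arbitrary illustrative counts

    fit_indep <- glm(count ~ factor(income) + factor(satisfaction),
                     family = poisson, data = tab)
    fit_LL    <- glm(count ~ factor(income) + factor(satisfaction) +
                       income:satisfaction,          # product of the numeric scores
                     family = poisson, data = tab)

    anova(fit_indep, fit_LL, test = "Chisq")   # one extra df for the association parameter
    coef(fit_LL)["income:satisfaction"]        # the linear-by-linear association beta
    ```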

  2. Stability Criterion of Linear Stochastic Systems Subject to Mixed H2/Passivity Performance

    Directory of Open Access Journals (Sweden)

    Cheung-Chieh Ku

    2015-01-01

    Full Text Available The H2 control scheme and passivity theory are applied to investigate the stability criterion of a continuous-time linear stochastic system subject to mixed performance. Based on the stochastic differential equation, the stochastic behaviors can be described as multiplicative noise terms. For the considered system, the H2 control scheme is applied to deal with the problem of minimizing output energy, and the asymptotic stability of the system can be guaranteed under the desired initial conditions. Besides, the passivity theory is employed to constrain the effect of external disturbance on the system. Moreover, the Itô formula and a Lyapunov function are used to derive the sufficient conditions, which are converted into linear matrix inequality (LMI) form for applying a convex optimization algorithm. By solving the sufficient conditions, the state feedback controller can be established such that the asymptotic stability and mixed performance of the system are achieved in the mean square. Finally, the synchronous generator system is used to verify the effectiveness and applicability of the proposed design method.

  3. Non-linear mixed-effects modeling for photosynthetic response of Rosa hybrida L. under elevated CO2 in greenhouses - short communication

    DEFF Research Database (Denmark)

    Ozturk, I.; Ottosen, C.O.; Ritz, Christian

    2011-01-01

    conditions. Leaf gas exchanges were measured at 11 light intensities from 0 to 1,400 µmol/m2s, at 800 ppm CO2, 25°C, and 65 ± 5% relative humidity. In order to describe the data corresponding to different measurement dates, the non-linear mixed-effects regression analysis was used. The model successfully...... efficiency. The results suggested acclimation response, as carbon assimilation rates and stomatal conductance at each measurement date were higher for Escimo than Mercedes. Differences in photosynthesis rates were attributed to the adaptive capacity of the cultivars to light conditions at a specific......Photosynthetic response to light was measured on the leaves of two cultivars of Rosa hybrida L. (Escimo and Mercedes) in the greenhouse to obtain light-response curves and their parameters. The aim was to use a model to simulate leaf photosynthetic carbon gain with respect to environmental...

  4. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    Science.gov (United States)

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed models analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
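
    Outside SPSS, the same crossed-random-effects analysis is commonly written with lme4 in R; a minimal sketch (data set and variable names are assumptions) is:

    ```r
    # Crossed random effects: participants and items are both treated as random
    # factors, replacing the separate F1 (by-subject) and F2 (by-item) analyses.
    library(lme4)

    # RT: response time; condition: fixed experimental factor
    fit <- lmer(RT ~ condition + (1 | subject) + (1 | item), data = dat)
    summary(fit)

    # Likelihood-ratio test for the fixed effect (anova refits with ML)
    fit0 <- update(fit, . ~ . - condition)
    anova(fit0, fit)
    ```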

  5. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    In this paper we have studied the Linearity and Non-linearity of Photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics while for non-linear optics the change in refractive index is directly proportional ...

  6. Transformation of Summary Statistics from Linear Mixed Model Association on All-or-None Traits to Odds Ratio.

    Science.gov (United States)

    Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M

    2018-04-01

    Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits, it provides estimates of genetic effects on the observed 0-1 scale, a different scale to that in logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that only rely on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.
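
    As a crude illustration of why such a transformation is needed (this is not the exact transformation derived in the paper, which also accounts for allele frequency and ascertainment), an absolute effect b on the observed 0-1 scale at disease prevalence K implies an odds ratio formed from the two risk levels K and K + b:

    ```r
    # Crude illustration only -- NOT the paper's derivation.  Given an LMM effect b
    # on the observed 0-1 scale and overall prevalence K, compare the odds at risk
    # levels K + b and K.
    obs_beta_to_OR <- function(b, K) {
      p1 <- K + b
      p0 <- K
      (p1 / (1 - p1)) / (p0 / (1 - p0))
    }

    obs_beta_to_OR(b = 0.01, K = 0.08)   # e.g. a 1% absolute risk increase at 8% prevalence
    ```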

  7. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  8. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation...
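
    A toy example of the modelling difference (not the paper's model): adding binary commitment variables with minimum-load constraints turns a plain LP dispatch into a MILP, which can be written in R with lpSolve as follows (all plant data are invented for illustration).

    ```r
    # Two units serve a single demand; binary on/off variables enforce minimum-load
    # constraints.  Decision vector: (p1, p2, u1, u2).
    library(lpSolve)

    obj <- c(30, 50, 200, 100)        # marginal costs and fixed on-costs (illustrative)

    A <- rbind(c(1, 1,    0,    0),   # p1 + p2           >= demand
               c(1, 0, -400,    0),   # p1 <= 400 * u1
               c(0, 1,    0, -200),   # p2 <= 200 * u2
               c(1, 0, -100,    0),   # p1 >= 100 * u1 (minimum load)
               c(0, 1,    0,  -50))   # p2 >=  50 * u2
    dir <- c(">=", "<=", "<=", ">=", ">=")
    rhs <- c(150, 0, 0, 0, 0)

    sol <- lp("min", obj, A, dir, rhs, binary.vec = c(3, 4))
    sol$solution                       # dispatch and unit commitment
    sol$objval                         # total cost
    ```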

  9. Minimising negative externalities cost using 0-1 mixed integer linear programming model in e-commerce environment

    Directory of Open Access Journals (Sweden)

    Akyene Tetteh

    2017-04-01

    Full Text Available Background: Although the Internet boosts business profitability, without certain activities like efficient transportation and scheduling, products ordered via the Internet may reach their destination very late. The environmental problems (vehicle part disposal, carbon monoxide [CO], nitrogen oxide [NOx] and hydrocarbons [HC]) associated with transportation are mostly not accounted for by industries. Objectives: The main objective of this article is to minimise negative externality costs in e-commerce environments. Method: The 0-1 mixed integer linear programming (0-1 MILP) model was used to model the problem statement. The result was further analysed using the externality percentage impact factor (EPIF). Results: The simulation results suggest that (1) the mode of ordering refined petroleum products does not affect the cost of distribution, (2) an increase in private cost is directly proportional to the externality cost, (3) externality cost is largely controlled by the government and the number of vehicles used in the distribution, and is in no way influenced by the mode of request (i.e. Internet or otherwise), and (4) externality cost may be reduced by using a more eco-friendly fuel system.

  10. Stochastic Mixed-Effects Parameters Bertalanffy Process, with Applications to Tree Crown Width Modeling

    Directory of Open Access Journals (Sweden)

    Petras Rupšys

    2015-01-01

    Full Text Available A stochastic modeling approach based on the Bertalanffy law gained interest due to its ability to produce more accurate results than the deterministic approaches. We examine tree crown width dynamics with a Bertalanffy-type stochastic differential equation (SDE) and mixed-effects parameters. In this study, we demonstrate how this simple model can be used to calculate predictions of crown width. We propose a parameter estimation method and computational guidelines. The primary goal of the study was to estimate the parameters by considering discrete sampling of the diameter at breast height and crown width and by using a maximum likelihood procedure. Performance statistics for the crown width equation include statistical indexes and analysis of residuals. We use data provided by the Lithuanian National Forest Inventory from Scots pine trees to illustrate issues of our modeling technique. Comparison of the predicted crown width values of the mixed-effects parameters model with those obtained using the fixed-effects parameters model demonstrates the predictive power of the stochastic differential equation model with mixed-effects parameters. All results were implemented in the symbolic algebra system MAPLE.

  11. Examples of mixed-effects modeling with crossed random effects and with binomial data

    NARCIS (Netherlands)

    Quené, H.; van den Bergh, H.

    2008-01-01

    Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not

  12. Optimising the selection of food items for food frequency questionnaires using Mixed Integer Linear Programming

    NARCIS (Netherlands)

    Lemmen-Gerdessen, van J.C.; Souverein, O.W.; Veer, van 't P.; Vries, de J.H.M.

    2015-01-01

    Objective To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Design Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear

  13. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that assists people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP) and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of the power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.

  14. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  15. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars' adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)
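
    A reduced sketch of such a model comparison in R with nlme (the data object and column names are assumptions; the factor-analytic structure used in the paper requires specialised software such as ASReml and is not shown):

    ```r
    # Compare two variance-covariance structures for cultivar-by-environment yields.
    library(nlme)

    # Compound symmetry: one common covariance among environments within a cultivar
    fit_cs <- gls(yield ~ env,
                  correlation = corCompSymm(form = ~ 1 | cultivar),
                  data = met)

    # Unstructured: free correlations plus a separate variance per environment
    fit_un <- gls(yield ~ env,
                  correlation = corSymm(form = ~ 1 | cultivar),
                  weights     = varIdent(form = ~ 1 | env),
                  data = met)

    AIC(fit_cs, fit_un)   # smaller AIC -> better-supported covariance structure
    ```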

  16. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263

  17. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    Science.gov (United States)

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.

  19. A Bayesian Approach to Functional Mixed Effect Modeling for Longitudinal Data with Binomial Outcomes

    Science.gov (United States)

    Kliethermes, Stephanie; Oleson, Jacob

    2014-01-01

    Longitudinal growth patterns are routinely seen in medical studies where individual and population growth is followed over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear, quadratic); however, these relationships may not accurately capture growth over time. Functional mixed effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well-developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible and thus estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. PMID:24723495

  20. Mixed-effects height–diameter models for ten conifers in the inland ...

    African Journals Online (AJOL)

    To demonstrate the utility of mixed-effects height–diameter models when conducting forest inventories, mixed-effects height–diameter models are presented for several commercially and ecologically important conifers in the inland Northwest of the USA. After obtaining height–diameter measurements from a plot/stand of ...

  1. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases which occur rarely could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more interesting and certainly improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM) by joining the Discrete Time Hazard model with the Poisson Regression model to jointly model survival and count. As the Artificial Neural Network (ANN) has become a most powerful computational tool to model complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka by using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.

  2. Negative binomial mixed models for analyzing microbiome count data.

    Science.gov (United States)

    Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun

    2017-01-03

    Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting the association between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not deal with zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and the application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM), providing a useful tool for analyzing microbiome data.
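
    For a generic illustration of a negative binomial mixed model of this kind (this is not the BhGLM interface; lme4's glmer.nb is used instead, and the data set and variable names are assumptions):

    ```r
    # Taxon counts vs. a host factor, with an offset for total reads and a random
    # intercept for the grouping unit (e.g. subject or cage).
    library(lme4)

    fit <- glmer.nb(count ~ treatment + offset(log(total_reads)) +
                      (1 | subject),
                    data = micro)
    summary(fit)
    ```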

  3. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  4. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  5. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    Science.gov (United States)

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Modeling of non-linear CHP efficiency curves in distributed energy systems

    DEFF Research Database (Denmark)

    Milan, Christian; Stadler, Michael; Cardoso, Gonçalo

    2015-01-01

    Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered as one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency for large nameplate capacity ranges and in case of part load operation......, which can be even of non-linear nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two...

  7. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    Science.gov (United States)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
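
    A hedged sketch of such a Bayesian zero-inflated Beta mixed model, written with the brms package rather than the authors' own implementation (the data object and variable names are assumptions):

    ```r
    # Scaled, bounded Pc values modelled with a zero-inflated Beta likelihood and a
    # random intercept per conjunction event to borrow strength across observations.
    library(brms)

    fit <- brm(pc_scaled ~ time_to_tca + (1 | event),
               family = zero_inflated_beta(),
               data   = conj,
               chains = 4, cores = 4)
    summary(fit)
    ```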

  8. Turbulence closure for mixing length theories

    Science.gov (United States)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  9. Variable Selection in Heterogeneous Datasets: A Truncated-rank Sparse Linear Mixed Model with Applications to Genome-wide Association Studies.

    Science.gov (United States)

    Wang, Haohan; Aragam, Bryon; Xing, Eric P

    2018-04-26

    A fundamental and important challenge in modern datasets of ever increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and human, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.

  10. A Bayesian approach to functional mixed-effects modeling for longitudinal data with binomial outcomes.

    Science.gov (United States)

    Kliethermes, Stephanie; Oleson, Jacob

    2014-08-15

    Longitudinal growth patterns are routinely seen in medical studies where individual growth and population growth are followed up over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear and quadratic); however, these relationships may not accurately capture growth over time. Functional mixed-effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible, and thus, estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Statistical models of global Langmuir mixing

    Science.gov (United States)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  12. Systematic analysis of the impact of mixing locality on Mixing-DAC linearity for multicarrier GSM

    NARCIS (Netherlands)

    Bechthum, E.; Radulov, G.I.; Briaire, J.; Geelen, G.; Roermund, van A.H.M.

    2012-01-01

    In an RF transmitter, the function of the mixer and the DAC can be combined in a single block: the Mixing-DAC. For the generation of multicarrier GSM signals in a basestation, high dynamic linearity is required, i.e. SFDR > 85 dBc, at a high output signal frequency, i.e. fout ≈ 4 GHz. This represents a

  13. A Mixed-Integer Linear Programming approach to wind farm layout and inter-array cable routing

    DEFF Research Database (Denmark)

    Fischetti, Martina; Leth, John-Josef; Borchersen, Anders Bech

    2015-01-01

    A Mixed-Integer Linear Programming (MILP) approach is proposed to optimize the turbine allocation and inter-array offshore cable routing. The two problems are considered with a two steps strategy, solving the layout problem first and then the cable problem. We give an introduction to both problems...... and present the MILP models we developed to solve them. To deal with interference in the onshore cases, we propose an adaptation of the standard Jensen’s model, suitable for 3D cases. A simple Stochastic Programming variant of our model allows us to consider different wind scenarios in the optimization...

  14. A multilevel nonlinear mixed-effects approach to model growth in pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Danfær, Allan Christian; Sørensen, H.

    2010-01-01

    Growth functions have been used to predict market weight of pigs and maximize return over feed costs. This study was undertaken to compare 4 growth functions and methods of analyzing data, particularly one that considers nonlinear repeated measures. Data were collected from an experiment with 40...... pigs maintained from birth to maturity and their BW measured weekly or every 2 wk up to 1,007 d. Gompertz, logistic, Bridges, and Lopez functions were fitted to the data and compared using information criteria. For each function, a multilevel nonlinear mixed effects model was employed because....... Furthermore, studies should consider adding continuous autoregressive process when analyzing nonlinear mixed models with repeated measures....
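
    An illustrative nonlinear mixed-effects Gompertz fit of this type, with a continuous-time AR(1) residual process for the repeated measures, could be written in R with nlme as follows (the data set and column names are assumptions):

    ```r
    # Gompertz growth of body weight with a random asymptote per pig and a
    # continuous-time AR(1) residual correlation structure.
    library(nlme)

    fit <- nlme(BW ~ SSgompertz(day, Asym, b2, b3),
                data   = pigs,                        # columns assumed: pig, day, BW
                fixed  = Asym + b2 + b3 ~ 1,
                random = Asym ~ 1 | pig,
                correlation = corCAR1(form = ~ day | pig))
    summary(fit)
    ```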

  15. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Full Text Available Abstract Background Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain regions including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference test (LSD) procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.

  16. Local genealogies in a linear mixed model for genome-wide association mapping in complex pedigreed populations.

    Directory of Open Access Journals (Sweden)

    Goutam Sahana

    Full Text Available INTRODUCTION: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX' (genealogy-based mixed model), which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. SUBJECTS AND METHODS: We validated GENMIX using genotyping data of Danish Jersey cattle and simulated phenotypes, and compared it to the MMA. We simulated scenarios for three levels of heritability (0.21, 0.34, and 0.64), seven levels of MAF (0.05, 0.10, 0.15, 0.20, 0.25, 0.35, and 0.45) and five levels of QTL effect (0.1, 0.2, 0.5, 0.7 and 1.0, in phenotypic standard deviation units). Each of these 105 possible combinations (3 h2 x 7 MAF x 5 effects) of scenarios was replicated 25 times. RESULTS: GENMIX provides a better ranking of markers close to the location of the causative locus. GENMIX outperformed MMA when the QTL effect was small and the MAF at the QTL was low. In scenarios where MAF was high or the QTL affecting the trait had a large effect, both GENMIX and MMA performed similarly. CONCLUSION: In discovery studies, where high-ranking markers are identified and later examined in validation studies, we therefore expect GENMIX to enrich candidates brought to follow-up studies with true positives over false positives more than the MMA would.

  17. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    Science.gov (United States)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.

  18. Modeling of speed distribution for mixed bicycle traffic flow

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-11-01

    Full Text Available Speed is a fundamental measure of traffic performance for highway systems. There are many existing results on the speed characteristics of motorized vehicles. In this article, we study the speed distribution of mixed bicycle traffic, which has largely been ignored in the past. Field speed data were collected in Hangzhou, China, at different survey sites, traffic conditions, and percentages of electric bicycles. The statistics of the field data show that the total mean speed of electric bicycles is 17.09 km/h, 3.63 km/h faster and 27.0% higher than that of regular bicycles. Normal, log-normal, gamma, and Weibull distribution models were used for testing the speed data. The results of goodness-of-fit hypothesis tests imply that the log-normal and Weibull models can fit the field data very well. Then, the relationships between mean speed and electric bicycle proportions were proposed using linear regression models, and the mean speed for purely electric bicycles or regular bicycles can be obtained. The findings of this article will provide effective help for the safety and traffic management of mixed bicycle traffic.
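
    The distribution-fitting step can be reproduced in R with fitdistrplus, for example (the data object and column name are assumptions):

    ```r
    # Compare log-normal and Weibull fits for observed bicycle speeds.
    library(fitdistrplus)

    fit_ln <- fitdist(speeds$speed_kmh, "lnorm")
    fit_wb <- fitdist(speeds$speed_kmh, "weibull")

    gofstat(list(fit_ln, fit_wb))   # K-S / A-D statistics, AIC and BIC
    plot(fit_ln)                    # density, Q-Q, P-P and CDF diagnostics
    ```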

  19. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value, and the appropriate linear or linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyperspectral and multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of linear/linear-quadratic approaches from the literature applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods considered.
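
    The zone-wise model selection can be pictured with a short sketch. The code below is only a schematic reading of the approach; the zone size, variance threshold, and the use of scikit-learn's plain NMF are assumptions, and the paper's linear-quadratic NMF update is not reproduced (it is marked as a placeholder).

```python
# Schematic sketch: decide the mixing model per zone from local DSM variance,
# then unmix each zone with a nonnegative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

def unmix_by_zone(hsi, dsm, zone=16, var_threshold=4.0, n_endmembers=3):
    """hsi: (rows, cols, bands) hyperspectral cube; dsm: (rows, cols) LIDAR elevations."""
    rows, cols, bands = hsi.shape
    abundances = np.zeros((rows, cols, n_endmembers))
    model_type = np.empty((rows, cols), dtype=object)
    for r in range(0, rows, zone):
        for c in range(0, cols, zone):
            block = hsi[r:r + zone, c:c + zone, :]
            pixels = np.maximum(block.reshape(-1, bands), 0.0)
            kind = "linear" if np.var(dsm[r:r + zone, c:c + zone]) < var_threshold \
                   else "linear-quadratic"
            model_type[r:r + zone, c:c + zone] = kind
            # Only the linear branch is implemented here; for "linear-quadratic"
            # zones the paper's linear-quadratic NMF update would replace this step.
            model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500)
            A = model.fit_transform(pixels)               # per-pixel abundances
            abundances[r:r + zone, c:c + zone, :] = A.reshape(block.shape[0], block.shape[1], -1)
    return abundances, model_type
```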

  20. Controlling attribute effect in linear regression

    KAUST Repository

    Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang

    2013-01-01

    In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
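
    A minimal sketch of one constraint type mentioned above, assuming cvxpy is available: ordinary least squares subject to the mean residual being equal across the two levels of a protected attribute. All data and variable names are illustrative, not the paper's formulation.

```python
# Illustrative constrained regression: equalize mean residuals across two groups.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
group = rng.integers(0, 2, size=n)                       # protected attribute (0/1)
y = X @ rng.normal(size=p) + 0.8 * group + rng.normal(scale=0.5, size=n)

idx0, idx1 = np.where(group == 0)[0], np.where(group == 1)[0]
w, b = cp.Variable(p), cp.Variable()
resid = y - X @ w - b
constraints = [cp.sum(resid[idx0]) / len(idx0) == cp.sum(resid[idx1]) / len(idx1)]
prob = cp.Problem(cp.Minimize(cp.sum_squares(resid)), constraints)
prob.solve()
print("constrained coefficients:", np.round(w.value, 3))
```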

  2. Performance of nonlinear mixed effects models in the presence of informative dropout.

    Science.gov (United States)

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
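
    The dropout mechanism studied here can be mimicked with a small simulation. The sketch below uses invented parameters (not the study's simulation settings): it generates an efficacy variable, lets the dropout probability depend on its current value, and shows how the naive observed-data means drift away from the true profile.

```python
# Toy simulation of informative dropout: high responders leave the study more often,
# so naive observed-data means are biased for the later visits.
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_obs = 200, 6
time = np.arange(n_obs)
slope = rng.normal(1.0, 0.3, size=n_subj)                 # individual efficacy slopes
y = slope[:, None] * time[None, :] + rng.normal(scale=0.5, size=(n_subj, n_obs))

observed = np.ones_like(y, dtype=bool)
for i in range(n_subj):
    for t in range(1, n_obs):
        # Probability of dropping out after visit t increases with the current response.
        if rng.random() < 1.0 / (1.0 + np.exp(3.0 - 0.8 * y[i, t])):
            observed[i, t:] = False
            break

naive_mean = np.array([y[observed[:, t], t].mean() for t in range(n_obs)])
print("true mean profile:  ", np.round(1.0 * time, 2))
print("observed-data means:", np.round(naive_mean, 2))
```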

  3. Mixed H∞ and passive control for linear switched systems via hybrid control approach

    Science.gov (United States)

    Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin

    2018-03-01

    This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, a new performance index is first proposed; it can be viewed as a mixed weighted H∞ and passivity performance. Then, hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers depends not only on the pre-switching and post-switching subsystems, but also on the measurable output signal. The hybrid controllers proposed in this paper include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under these conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.
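
    The paper's controller synthesis conditions are specific LMIs that are not reproduced here; as a generic illustration of how such matrix-inequality conditions are checked numerically, the sketch below tests a common quadratic Lyapunov function for two assumed subsystem matrices with cvxpy.

```python
# Generic LMI feasibility check (not the paper's LMIs): find P > 0 such that
# A_i^T P + P A_i < 0 for both subsystems, using semidefinite programming.
import numpy as np
import cvxpy as cp

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-1.5, 0.2], [0.3, -1.0]])     # illustrative stable subsystem matrices

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)
print("P =\n", np.round(P.value, 3))
```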

  4. Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT

    International Nuclear Information System (INIS)

    Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.

    2009-01-01

    In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. In contrast to the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created to provide the interpreting physician with a single set of images whose appearance is similar to that of single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low- and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images at the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
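
    The weighting idea can be summarized in a few lines. The sketch below (illustrative contrast and noise values, not measured data) scans the linear mixing weight w and picks the one maximizing iodine CNR, assuming independent noise in the low- and high-energy images.

```python
# Simplified optimization of the linear mixing weight for a dual-energy mixed image:
# I_mix = w * I_80kV + (1 - w) * I_140kV, with independent noise in the two scans.
import numpy as np

contrast_80, contrast_140 = 300.0, 120.0      # iodine contrast (HU), illustrative
noise_80, noise_140 = 25.0, 12.0              # image noise (HU), illustrative

w = np.linspace(0.0, 1.0, 101)
contrast_mix = w * contrast_80 + (1 - w) * contrast_140
noise_mix = np.sqrt((w * noise_80) ** 2 + ((1 - w) * noise_140) ** 2)
cnr = contrast_mix / noise_mix

best = np.argmax(cnr)
print(f"optimal weight w = {w[best]:.2f}, CNR = {cnr[best]:.1f}")
```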

  5. Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding

    Directory of Open Access Journals (Sweden)

    A. Yamileva

    2014-07-01

    Full Text Available The friction model is an important part of a numerical model of linear friction welding, and its selection determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, in which the friction coefficient is either constant or linearly dependent on a single parameter. Determining the coefficient of friction is a time-consuming process that requires many experiments, so the feasibility of identifying a more complex dependence should be assessed by analysing the effect of the approximating friction law on the simulation results.

  6. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
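
    A conceptual sketch of the fixed-versus-mixed RSA comparison, using simulated features and responses rather than the fMRI data, and a ridge regression as the linear mixing stage:

```python
# Fixed RSA compares model-feature and brain RDMs directly; mixed RSA first fits
# one weight per feature and voxel on training stimuli, then compares the RDM of
# the predicted responses. All data here are simulated for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_train, n_test, n_feat, n_vox = 80, 40, 50, 100
F_train, F_test = rng.normal(size=(n_train, n_feat)), rng.normal(size=(n_test, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))
B_train = F_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))   # "fMRI" responses
B_test = F_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))

rdm_brain = pdist(B_test, metric="correlation")

# Fixed RSA: RDM of the raw model features.
rdm_fixed = pdist(F_test, metric="correlation")
print("fixed RSA (Spearman):", round(spearmanr(rdm_fixed, rdm_brain)[0], 3))

# Mixed RSA: ridge mapping from features to voxels, then the predicted RDM.
mapping = Ridge(alpha=10.0).fit(F_train, B_train)
rdm_mixed = pdist(mapping.predict(F_test), metric="correlation")
print("mixed RSA (Spearman):", round(spearmanr(rdm_mixed, rdm_brain)[0], 3))
```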

  7. Learning oncogenetic networks by reducing to mixed integer linear programming.

    Science.gov (United States)

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can be a result of an accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that is tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a non-deterministic polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available at https://bitbucket.org/farahani/diprog.

  8. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.

  9. Multiple model adaptive control with mixing

    Science.gov (United States)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time-invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed offline, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed

  10. Species Distribution Modeling: Comparison of Fixed and Mixed Effects Models Using INLA

    Directory of Open Access Journals (Sweden)

    Lara Dutra Silva

    2017-12-01

    Full Text Available Invasive alien species are among the most important, least controlled, and least reversible of human impacts on the world’s ecosystems, with negative consequences affecting biodiversity and socioeconomic systems. Species distribution models have become a fundamental tool in assessing the potential spread of invasive species in the face of their native counterparts. In this study we compared two different modeling techniques: (i) fixed effects models accounting for the effect of ecogeographical variables (EGVs); and (ii) mixed effects models also including a Gaussian random field (GRF) to model spatial correlation (Matérn covariance function). To estimate the potential distribution of Pittosporum undulatum and Morella faya (invasive and native trees, respectively), we used geo-referenced data of their distribution in Pico and São Miguel islands (Azores) and topographic, climatic and land use EGVs. Fixed effects models run with maximum likelihood or the INLA (Integrated Nested Laplace Approximation) approach provided very similar results, even when reducing the size of the presences data set. The addition of the GRF increased model adjustment (lower Deviance Information Criterion), particularly for the less abundant tree, M. faya. However, the random field parameters were clearly affected by sample size and species distribution pattern. A high degree of spatial autocorrelation was found and should be taken into account when modeling species distribution.

  11. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    Science.gov (United States)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production by vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data that is utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes
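
    The constrained least-squares formulation mentioned above can be sketched as follows; the endmember values and the measured mixture are illustrative placeholders, not the study's data.

```python
# Minimal mixing-model sketch: find endmember fractions f >= 0 with sum(f) = 1 that
# best reproduce the sediment's measured tracers in a least-squares sense.
import numpy as np
from scipy.optimize import minimize

# Rows: tracers (e.g. d13C, C/N); columns: endmembers (marsh plants, allochthonous, phytoplankton).
E = np.array([[-13.0, -26.0, -21.0],
              [ 40.0,  25.0,   7.0]])
mixture = np.array([-20.5, 20.0])            # measured sediment composition (illustrative)

def misfit(f):
    return np.sum((E @ f - mixture) ** 2)

n = E.shape[1]
result = minimize(misfit, x0=np.full(n, 1.0 / n), method="SLSQP",
                  bounds=[(0.0, 1.0)] * n,
                  constraints=[{"type": "eq", "fun": lambda f: np.sum(f) - 1.0}])
print("estimated source fractions:", np.round(result.x, 3))
```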

  12. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  13. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative- and qualitative-based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated with engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
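
    The quantitative half of such a model ultimately reduces to a linear optimisation of the generation mix; a minimal sketch (placeholder technology data, not values from the paper) is shown below.

```python
# Toy linear programme: choose a generation mix minimizing cost subject to meeting
# demand and a CO2 cap. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

techs = ["coal", "gas", "nuclear", "hydro"]
cost = np.array([30.0, 50.0, 60.0, 40.0])        # $/MWh, illustrative
co2 = np.array([0.9, 0.4, 0.0, 0.0])             # MtCO2 per TWh, illustrative
capacity = np.array([60.0, 50.0, 30.0, 20.0])    # TWh upper bounds, illustrative
demand, co2_cap = 100.0, 35.0                    # TWh demand and MtCO2 cap

res = linprog(c=cost,
              A_ub=[co2], b_ub=[co2_cap],                  # emissions cap
              A_eq=[np.ones_like(cost)], b_eq=[demand],    # meet demand exactly
              bounds=list(zip(np.zeros_like(cost), capacity)))
for name, gen in zip(techs, res.x):
    print(f"{name}: {gen:.1f} TWh")
```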

  14. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    KAUST Repository

    Song, Xiaolei; Alkhalifah, Tariq Ali

    2012-01-01

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the lowrank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersions. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  16. Linear models with R

    CERN Document Server

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  17. An efficient model for predicting mixing lengths in serial pumping of petroleum products

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Renan Martins [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas. Div. de Explotacao]. E-mail: renan@cenpes.petrobras.com.br; Rachid, Felipe Bastos de Freitas [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: rachid@mec.uff.br; Araujo, Jose Henrique Carneiro de [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Ciencia da Computacao]. E-mail: jhca@dcc.ic.uff.br

    2000-07-01

    This paper presents a new model for estimating the mixing volumes that arise in batch transfers in multiproduct pipelines. The novel features of the model are the incorporation of the flow rate variation with time and the use of a more precise effective dispersion coefficient, which is considered to depend on the concentration. The governing equation of the model forms a non-linear initial value problem that is solved using a predictor-corrector finite difference method. A comparison among the theoretical predictions of the proposed model, a field test, and other classical procedures shows that the model provides the best estimate over the whole range of admissible concentrations investigated. (author)
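
    The governing equation described above is a one-dimensional advection-dispersion problem with a concentration-dependent dispersion coefficient and a time-varying flow rate. The explicit finite-difference sketch below uses invented parameter values and an assumed form of D(C) purely for illustration; the paper itself uses a predictor-corrector scheme and a calibrated coefficient.

```python
# Rough explicit finite-difference sketch of 1-D advection-dispersion with a
# concentration-dependent dispersion coefficient D(C) and time-varying velocity u(t).
import numpy as np

L, nx, dt, t_end = 2000.0, 400, 0.5, 600.0        # m, cells, s, s  (illustrative)
dx = L / nx
x = np.linspace(0.0, L, nx)
C = np.where(x < L / 2, 1.0, 0.0)                 # initial sharp interface between products

def D(C):                                         # assumed dependence, for illustration only
    return 0.05 + 0.20 * C * (1.0 - C)            # m^2/s, largest in the mixed zone

def u(t):                                         # time-varying mean velocity (illustrative)
    return 1.0 + 0.2 * np.sin(2.0 * np.pi * t / 300.0)

t = 0.0
while t < t_end:
    grad = np.gradient(C, dx)
    flux = -D(C) * grad                           # dispersive flux
    dCdt = -u(t) * grad - np.gradient(flux, dx)
    C = np.clip(C + dt * dCdt, 0.0, 1.0)
    t += dt

mix_len = dx * np.count_nonzero((C > 0.01) & (C < 0.99))
print(f"mixing-zone length after {t_end:.0f} s: {mix_len:.0f} m")
```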

  18. Generalized functional linear models for gene-based case-control association studies.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  19. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  20. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    Science.gov (United States)

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  1. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    Science.gov (United States)

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  2. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    International Nuclear Information System (INIS)

    Rupšys, P.

    2015-01-01

    A system of stochastic differential equations (SDE) with mixed-effects parameters and multivariate normal copula density function were used to develop tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside bark diameter at breast height, and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to the regression tree height equations. The results are implemented in the symbolic computational language MAPLE

  4. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
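
    The core idea, that residual skewness carries directional information when the true predictor is non-normal, can be checked with a toy simulation (this is not the paper's full testing procedure):

```python
# Toy check of the residual-skewness idea: with a skewed cause x and normal errors,
# the residuals of the correctly directed model y ~ x are roughly symmetric, while
# those of the mis-directed model x ~ y retain noticeable skewness.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
n = 5000
x = rng.exponential(size=n)                    # skewed "cause"
y = 0.7 * x + rng.normal(scale=1.0, size=n)    # linear effect with normal error

def ols_residuals(target, predictor):
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

print("skewness of residuals, y ~ x :", round(skew(ols_residuals(y, x)), 3))
print("skewness of residuals, x ~ y :", round(skew(ols_residuals(x, y)), 3))
```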

  5. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  6. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  7. Linear dose dependence of ion beam mixing of metals on Si

    International Nuclear Information System (INIS)

    Poker, D.B.; Appleton, B.R.

    1985-01-01

    These experiments were conducted to determine the dose dependences of ion beam mixing of various metal-silicon couples. V/Si and Cr/Si were included because these couples were previously suspected of exhibiting a linear dose dependence. Pd/Si was chosen because it had been reported as exhibiting only the square root dependence. Samples were cut from wafers of (100) n-type Si. The samples were cleaned in organic solvents, etched in hydrofluoric acid, and rinsed with methanol before mounting in an oil-free vacuum system for thin-film deposition. Films of Au, V, Cr, or Pd were evaporated onto the Si samples with a nominal deposition rate of 10 Å/s. The thicknesses were large compared with those usually used to measure ion beam mixing and were used to ensure that conditions of unlimited supply were met. Samples were mixed with Si ions ranging in energy from 300 to 375 keV, chosen to produce ion ranges that significantly exceeded the metal film depth. Si was used as the mixing ion to prevent impurity doping of the Si substrate and to exclude a background signal from the Rutherford backscattering (RBS) spectra. Samples were mixed at room temperature, with the exception of the Au/Si samples, which were mixed at liquid nitrogen temperature. The samples were alternately mixed and analyzed in situ without exposure to atmosphere between mixing doses. The compositional distributions after mixing were measured using RBS of 2.5-MeV 4He atoms

  8. On the origin of the mixed alkali effect on indentation in silicate glasses

    DEFF Research Database (Denmark)

    Kjeldsen, Jonas; Smedskjær, Morten Mattrup; Mauro, J. C.

    2014-01-01

    The compositional scaling of Vickers hardness (Hv) in mixed alkali oxide glasses manifests itself as a positive deviation from linearity as a function of the network modifier/modifier ratio, with a maximum deviation at the ratio of 1:1. In this work, we investigate the link between the indentation deformation processes (elastic deformation, plastic deformation, and densification) and Hv in two mixed sodium–potassium silicate glass series. We show that the mixed alkali effect in Hv originates from the nonlinear scaling of the resistance to plastic deformation. We thus confirm a direct relation between the resistance to plastic flow and Hv in mixed modifier glasses. Furthermore, we find that the mixed alkali effect also manifests itself as a positive deviation from linearity in the compositional scaling of density for glasses with high alumina content. This trend could be linked to a compaction of the network...

  9. Development of a shortleaf pine individual-tree growth equation using non-linear mixed modeling techniques

    Science.gov (United States)

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2010-01-01

    Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...

  10. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Science.gov (United States)

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  11. The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem

    Directory of Open Access Journals (Sweden)

    Hongchun Sun

    2012-01-01

    Full Text Available For the extended mixed linear complementarity problem (EMLCP), we first present the characterization of the solution set for the EMLCP. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of those for the classical linear complementarity problem.

  12. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed: a vehicle with a higher maximum speed is more willing to drive fast than one with a lower maximum speed in a similar situation. By incorporating anticipation of the relative velocity and mixed maximum speeds in different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, that increasing the lowest value among the mixed maximum speeds results in more instability, and that increasing the value or proportion of the higher maximum speed causes different stability behavior at high and low traffic densities.
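
    A toy simulation of an optimal-velocity car-following model with mixed maximum speeds and a relative-velocity (anticipation) term is sketched below; the functional form and parameters are illustrative and not those calibrated in the paper.

```python
# Toy OV-type car-following simulation on a ring road with two vehicle classes
# (different vmax) and an anticipation term proportional to the relative velocity.
import numpy as np

N, L = 50, 500.0                      # vehicles, ring length (m)
a, lam, hc = 1.0, 0.3, 8.0            # sensitivity, anticipation gain, safe headway (assumed)
vmax = np.where(np.arange(N) % 2 == 0, 12.0, 18.0)    # two vehicle classes (m/s)

def V_opt(h, vmax):                   # optimal velocity function (illustrative form)
    return 0.5 * vmax * (np.tanh(h - hc) + np.tanh(hc))

rng = np.random.default_rng(4)
x = np.arange(N) * (L / N) + rng.normal(scale=0.5, size=N)
v = V_opt(L / N, vmax).copy()

dt, steps = 0.05, 20000
for _ in range(steps):
    headway = (np.roll(x, -1) - x) % L                 # distance to the vehicle ahead
    dv = np.roll(v, -1) - v                            # relative velocity (anticipation term)
    acc = a * (V_opt(headway, vmax) - v) + lam * dv
    v = np.maximum(v + acc * dt, 0.0)
    x = (x + v * dt) % L

print("speed standard deviation after relaxation:", round(float(np.std(v)), 3), "m/s")
```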

  13. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    Science.gov (United States)

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  14. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of student's Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on proportion of variance explained at teacher level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at level-1 and level-2 models, respectively.…

  15. Flapping model of scalar mixing in turbulence

    International Nuclear Information System (INIS)

    Kerstein, A.R.

    1991-01-01

    Motivated by the fluctuating plume model of turbulent mixing downstream of a point source, a flapping model is formulated for application to other configurations. For the scalar mixing layer, simple expressions for single-point scalar fluctuation statistics are obtained that agree with measurements. For a spatially homogeneous scalar mixing field, the family of probability density functions previously derived using mapping closure is reproduced. It is inferred that single-point scalar statistics may depend primarily on large-scale flapping motions in many cases of interest, and thus that multipoint statistics may be the principal indicators of finer-scale mixing effects

  16. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  17. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria … the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  18. Development of two mix model postprocessors for the investigation of shell mix in indirect drive implosion cores

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Mancini, R. C.; Haynes, D. A.; Haan, S. W.; Koch, J. A.; Izumi, N.; Tommasini, R.; Golovkin, I. E.; MacFarlane, J. J.; Radha, P. B.; Delettrez, J. A.; Regan, S. P.; Smalyuk, V. A.

    2007-01-01

    The presence of shell mix in inertial confinement fusion implosion cores is an important characteristic. Mixing in this experimental regime is primarily due to hydrodynamic instabilities, such as Rayleigh-Taylor and Richtmyer-Meshkov, which can affect implosion dynamics. Two independent theoretical mix models, Youngs' model and the Haan saturation model, were used to estimate the level of Rayleigh-Taylor mixing in a series of indirect drive experiments. The models were used to predict the radial width of the region containing mixed fuel and shell materials. The results for Rayleigh-Taylor mixing provided by Youngs' model are considered to be a lower bound for the mix width, while those generated by Haan's model incorporate more experimental characteristics and consequently have larger mix widths. These results are compared with an independent experimental analysis, which infers a larger mix width based on all instabilities and effects captured in the experimental data

  19. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  20. A thermal mixing model of crossflow in tube bundles for use with the porous body approximation

    International Nuclear Information System (INIS)

    Ashcroft, J.; Kaminski, D.A.

    1996-06-01

    Diffusive thermal mixing in a heated tube bundle with a cooling fluid in crossflow was analyzed numerically. From the results of detailed two-dimensional models, which calculated the diffusion of heat downstream of one heated tube in an otherwise adiabatic flow field, a diffusion model appropriate for use with the porous body method was developed. The model accounts for both molecular and turbulent diffusion of heat by determining the effective thermal conductivity in the porous region. The model was developed for triangular shaped staggered tube bundles with pitch to diameter ratios between 1.10 and 2.00 and for Reynolds numbers between 1,000 and 20,000. The tubes are treated as nonconducting. Air and water were considered as working fluids. The effective thermal conductivity was found to be linearly dependent on the tube Reynolds number and fluid Prandtl number, and dependent on the bundle geometry. The porous body thermal mixing model was then compared against numerical models for flows with multiple heated tubes with very good agreement

  1. The effect of non-uniform mass loading on the linear, temporal development of particle-laden shear layers

    Energy Technology Data Exchange (ETDEWEB)

    Senatore, Giacomo [Department of Aerospace Engineering, Universita di Pisa, Pisa 56122 (Italy); Davis, Sean; Jacobs, Gustaaf, E-mail: gjacobs@mail.sdsu.edu [Department of Aerospace Engineering and Engineering Mechanics, San Diego State University, San Diego, 92182 California (United States)

    2015-03-15

    The effect of non-uniformity in bulk particle mass loading on the linear development of a particle-laden shear layer is analyzed by means of a stochastic Eulerian-Eulerian model. From the set of governing equations of the two-fluid model, a modified Rayleigh equation is derived that governs the linear growth of a spatially periodic disturbance. Eigenvalues for this Rayleigh equation are determined numerically using proper conditions at the co-flowing gas and particle interface locations. For the first time, it is shown that non-uniform loading of small-inertia particles (Stokes number (St) <0.2) may destabilize the inviscid mixing layer development as compared to the pure-gas flow. The destabilization is triggered by an energy transfer rate that globally flows from the particle phase to the gas phase. For intermediate St (1 < St < 10), a maximum stabilizing effect is computed, while at larger St, two unstable modes may coexist. The growth rate computations from linear stability analysis are verified numerically through simulations based on an Eulerian-Lagrangian (EL) model based on the inviscid Euler equations and a point particle model. The growth rates found in numerical experiments using the EL method are in very good agreement with growth rates from the linear stability analysis and validate the destabilizing effect induced by the presence of particles with low St.

  2. Incorporation of diet information derived from Bayesian stable isotope mixing models into mass-balanced marine ecosystem models: A case study from the Marennes-Oleron Estuary, France

    Science.gov (United States)

    We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...

  3. Effects of internal mixing and aggregate morphology on optical properties of black carbon using a discrete dipole approximation model

    Directory of Open Access Journals (Sweden)

    B. V. Scarnato

    2013-05-01

    Full Text Available According to recent studies, internal mixing of black carbon (BC with other aerosol materials in the atmosphere alters its aggregate shape, absorption of solar radiation, and radiative forcing. These mixing state effects are not yet fully understood. In this study, we characterize the morphology and mixing state of bare BC and BC internally mixed with sodium chloride (NaCl using electron microscopy and examine the sensitivity of optical properties to BC mixing state and aggregate morphology using a discrete dipole approximation model (DDSCAT. DDSCAT is flexible in simulating the geometry and refractive index of particle aggregates. DDSCAT predicts a higher mass absorption coefficient (MAC, lower single scattering albedo (SSA, and higher absorption Angstrom exponent (AAE for bare BC aggregates that are lacy rather than compact. Predicted values of SSA at 550 nm range between 0.16 and 0.27 for lacy and compact aggregates, respectively, in agreement with reported experimental values of 0.25 ± 0.05. The variation in absorption with wavelength does not adhere precisely to a power law relationship over the 200 to 1000 nm range. Consequently, AAE values depend on the wavelength region over which they are computed. The MAC of BC (averaged over the 200–1000 nm range is amplified when internally mixed with NaCl (100–300 nm in radius by factors ranging from 1.0 for lacy BC aggregates partially immersed in NaCl to 2.2 for compact BC aggregates fully immersed in NaCl. The SSA of BC internally mixed with NaCl is higher than for bare BC and increases with the embedding in the NaCl. Internally mixed BC SSA values decrease in the 200–400 nm wavelength range, a feature also common to the optical properties of dust and organics. Linear polarization features are also predicted in DDSCAT and are dependent on particle size and morphology. This study shows that DDSCAT predicts complex morphology and mixing state dependent aerosol optical properties that have

  4. Model for predicting non-linear crack growth considering load sequence effects (LOSEQ)

    International Nuclear Information System (INIS)

    Fuehring, H.

    1982-01-01

    A new analytical model for predicting non-linear crack growth is presented which takes into account the retardation as well as the acceleration effects due to irregular loading. It considers not only the maximum peak of a load sequence as affecting crack growth, but also all other loads in the history, according to a generalised memory criterion. Comparisons between crack growth predicted using the LOSEQ programme and experimentally observed data are presented. (orig.) [de]

  5. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    Science.gov (United States)

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32 D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller p-values, while analysis of the worse eye alone gave larger p-values than mixed effects and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
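
    A minimal sketch of the two approaches recommended above, fitted with statsmodels on a synthetic two-eyes-per-patient data set; the variable names, effect sizes and noise levels are invented for illustration and are not taken from the study.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 200                                    # patients, two eyes each
        patient = np.repeat(np.arange(n), 2)
        cnv = np.tile([1, 0], n)                   # affected eye vs. fellow eye
        age = np.repeat(rng.normal(75, 6, n), 2)
        b_patient = np.repeat(rng.normal(0, 1.0, n), 2)     # shared patient effect
        refraction = 0.15 * cnv + 0.02 * (age - 75) + b_patient + rng.normal(0, 0.8, 2 * n)
        df = pd.DataFrame(dict(patient=patient, cnv=cnv, age=age, refraction=refraction))

        # Mixed-effects model: random intercept per patient.
        mixed = smf.mixedlm("refraction ~ cnv + age", df, groups=df["patient"]).fit()
        print(mixed.summary())

        # Marginal (GEE) model with an exchangeable working correlation.
        gee = smf.gee("refraction ~ cnv + age", groups="patient", data=df,
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
        print(gee.summary())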

  6. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number, and the discounting of grain when it is downgraded in class, are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To breed effectively for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and the stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
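
    For readers unfamiliar with the broad-sense heritability figures quoted above, the standard entry-mean formula for a multi-environment trial can be evaluated directly from estimated variance components; the component values below are hypothetical and are not the study's estimates.

        # Broad-sense heritability on an entry-mean basis:
        #   H^2 = Vg / (Vg + Vge / e + Ve / (r * e))
        # with genotypic (Vg), genotype-by-environment (Vge) and residual (Ve)
        # variance components, e environments and r replicates.
        Vg, Vge, Ve = 1.0, 6.0, 10.0    # hypothetical variance components
        e, r = 13, 2                    # environments and replicates (illustrative)
        H2 = Vg / (Vg + Vge / e + Ve / (r * e))
        print(f"H^2 = {H2:.2f}")        # ~0.54 with these made-up values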

  7. Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.

    Science.gov (United States)

    Duffull, Stephen B; Hooker, Andrew C

    2017-12-01

    Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
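
    A toy sketch of the kind of robust criterion discussed above, here the Monte Carlo expectation of the log-determinant of a fixed-effects Fisher information matrix over a prior, for a simple exponential-decay model; the model, prior and sampling times are invented, and no random-effects approximation (first-order or otherwise) is attempted.

        import numpy as np

        rng = np.random.default_rng(1)

        def fim(theta, times, sigma=0.1):
            """Fisher information for y = A * exp(-k * t) + eps, theta = (A, k)."""
            A, k = theta
            dA = np.exp(-k * times)                  # df/dA
            dk = -A * times * np.exp(-k * times)     # df/dk
            J = np.column_stack([dA, dk])
            return J.T @ J / sigma ** 2

        def robust_criterion(times, n_draws=2000):
            """E_theta[log det FIM] under a log-normal prior on (A, k)."""
            vals = []
            for _ in range(n_draws):
                theta = np.exp(rng.normal([np.log(1.0), np.log(0.5)], [0.2, 0.3]))
                vals.append(np.linalg.slogdet(fim(theta, times))[1])
            return np.mean(vals)

        design_a = np.array([0.5, 1.0, 2.0, 4.0])
        design_b = np.array([1.0, 2.0, 3.0, 4.0])
        print(robust_criterion(design_a), robust_criterion(design_b))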

  8. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been devoted to consistent estimation of the causal relative risk and causal odds ratio. Such models can suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When multiple genetic variants are available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of these estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
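
    A bare-bones numerical sketch of instrumental-variable estimation with multiple genetic instruments, written as plain two-stage least squares on simulated data; the concordance test and the generalized least squares estimators of the paper are not reproduced, and all effect sizes are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n, p = 5000, 3                               # subjects, genetic instruments
        G = rng.binomial(2, 0.3, size=(n, p))        # genotypes coded 0/1/2
        u = rng.normal(size=n)                       # unmeasured confounder
        exposure = G @ np.array([0.3, 0.2, 0.1]) + u + rng.normal(size=n)
        outcome = 0.5 * exposure - u + rng.normal(size=n)

        def add_const(x):
            return np.column_stack([np.ones(len(x)), x])

        # Stage 1: regress the exposure on the instruments, keep fitted values.
        Z = add_const(G)
        exposure_hat = Z @ np.linalg.lstsq(Z, exposure, rcond=None)[0]

        # Stage 2: regress the outcome on the fitted exposure (true effect = 0.5).
        beta_2sls = np.linalg.lstsq(add_const(exposure_hat), outcome, rcond=None)[0][1]
        beta_ols = np.linalg.lstsq(add_const(exposure), outcome, rcond=None)[0][1]
        print("naive OLS:", beta_ols, " 2SLS:", beta_2sls)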

  9. Nonlinear spectral mixing theory to model multispectral signatures

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C. [Los Alamos National Lab., NM (United States). Astrophysics and Radiation Measurements Group

    1996-02-01

    Nonlinear spectral mixing occurs due to multiple reflections and transmissions between discrete surfaces, e.g. leaves or facets of a rough surface. The radiosity method is an energy-conserving computational method used in thermal engineering, and it models nonlinear spectral mixing realistically and accurately. In contrast to the radiative transfer method, the radiosity method takes into account the discreteness of the scattering surfaces (e.g. exact location, orientation and shape), such as leaves, and includes mutual shading between them. An analytic radiosity-based scattering model for vegetation was developed and used to compute vegetation indices for various configurations. The leaf reflectance and transmittance were modeled using the PROSPECT model for various amounts of water and chlorophyll and variable leaf structure. The soil background was modeled using SOILSPEC with a linear mixture of reflectances of sand, clay and peat. A neural network and a geometry-based retrieval scheme were used to retrieve leaf area index and chlorophyll concentration for dense canopies. Only simulated canopy reflectances in the six visible through short-wave IR Landsat TM channels were used. The authors used an empirical function to compute the signal-to-noise ratio of a retrieved quantity.
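
    For Lambertian facets the radiosity idea sketched above reduces, per spectral band, to a linear system B = E + diag(rho) F B; because the solution involves the inverse of (I - diag(rho) F), the mixed signal is nonlinear in the facet reflectances. A generic toy illustration (form factors, irradiances and reflectances are made up, not PROSPECT/SOILSPEC values):

        import numpy as np

        # Radiosity per band:  B = E + diag(rho) @ F @ B
        F = np.array([[0.0, 0.3, 0.2],        # toy form factors between facets
                      [0.3, 0.0, 0.4],
                      [0.2, 0.4, 0.0]])
        E = np.array([1.0, 0.8, 0.6])         # direct irradiance on each facet

        for band, rho in {"red": [0.05, 0.05, 0.25],     # leaf, leaf, soil
                          "nir": [0.45, 0.45, 0.30]}.items():
            R = np.diag(rho)
            B = np.linalg.solve(np.eye(3) - R @ F, E)    # includes all bounces
            print(band, np.round(B, 3))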

  10. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    Science.gov (United States)

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model (GLMM) analysis indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable for exploring age and other parameters of development. In this case, it allowed an assessment of how motor proficiency interacts with age to shape how children represent, plan, and act on the environment.

  11. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not significant (0.00%), with p-values of 0.820, 0.095 and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320 and 0.849 for brachytherapy (BT) in the HR-CTV, bladder and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
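
    A small sketch of the standard LQ-based EQD2 conversion underlying such a tool; the LQL model additionally replaces the quadratic term above a transition dose, which is not reproduced here, and the doses and alpha/beta values below are illustrative only.

        def eqd2(total_dose, dose_per_fraction, alpha_beta):
            """Equivalent dose in 2 Gy fractions from the LQ model:
            EQD2 = D * (d + a/b) / (2 + a/b)."""
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

        # Illustrative numbers: 4 brachytherapy fractions of 7 Gy.
        print(eqd2(total_dose=28.0, dose_per_fraction=7.0, alpha_beta=10.0))  # tumour
        print(eqd2(total_dose=28.0, dose_per_fraction=7.0, alpha_beta=3.0))   # late-reacting tissue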

  12. [Application of Mixed-effect Model in PMI Estimation by Vitreous Humor].

    Science.gov (United States)

    Yang, M Z; Li, H J; Zhang, T Y; Ding, Z J; Wu, S F; Qiu, X G; Liu, Q

    2018-02-01

    To test the changes of potassium (K⁺) and magnesium (Mg²⁺) concentrations in the vitreous humor of rabbits with postmortem interval (PMI) under different temperatures, and to explore the feasibility of PMI estimation using a mixed-effect model. After sacrifice, rabbit carcasses were preserved at 5 ℃, 15 ℃, 25 ℃ and 35 ℃, and 80-100 μL of vitreous humor was collected by the double-eye alternating micro-sampling method every 12 h. The concentrations of K⁺ and Mg²⁺ in vitreous humor were measured by a biochemical-immune analyser. The mixed-effect model was used to perform the analysis and fitting and to establish the equations for PMI estimation. Data from samples stored at 10 ℃, 20 ℃ and 30 ℃ for 20, 40 and 65 h were used to validate the equations for PMI estimation. The concentrations of K⁺ and Mg²⁺ [f(x, y)] in the vitreous humor of rabbits under different temperatures (y) increased with PMI (x). The fitted equation relating K⁺ concentration to PMI and temperature over 5 ℃~35 ℃ was f_K⁺(x, y) = 3.4130 + 0.3092x + 0.3376y + 0.01083xy − 0.00247x². The deviation of PMI estimation by K⁺ and Mg²⁺ was within 10 h when PMI was between 0 and 40 h, and within 21 h when PMI was between 40 and 65 h. Within the ambient temperature range of 5 ℃-35 ℃, the mixed-effect model based on temperature and vitreous humor substance concentrations can provide a new method for the practical application of vitreous humor chemicals to PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
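
    Using only the K⁺ regression surface quoted above, a PMI estimate for a measured concentration at a known temperature can be obtained by solving the fitted equation for x; a sketch (the polynomial coefficients are those printed in the abstract, while the root-finding approach and example values are illustrative):

        from scipy.optimize import brentq

        def f_k(x, y):
            """Fitted K+ concentration surface: x = PMI (h), y = temperature (deg C)."""
            return 3.4130 + 0.3092 * x + 0.3376 * y + 0.01083 * x * y - 0.00247 * x ** 2

        def pmi_from_k(conc, temp, x_max=65.0):
            """Invert f_k(x, temp) = conc for x on the calibrated PMI range."""
            return brentq(lambda x: f_k(x, temp) - conc, 0.0, x_max)

        print(pmi_from_k(conc=f_k(30.0, 20.0), temp=20.0))   # recovers PMI = 30 h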

  13. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    Science.gov (United States)

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartment model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. Parameter estimates from a deterministic and a stochastic NiAc disposition model are compared. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
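
    A sketch of how the three sources of variability described above can be generated for a toy one-compartment elimination model, using an Euler-Maruyama scheme for the system noise; parameter values are invented and no FOCE/extended-Kalman-filter estimation is attempted here.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, t_end = 0.01, 24.0
        t = np.arange(0.0, t_end, dt)
        n_id = 5                                   # individuals

        cl_pop, v, dose = 1.0, 10.0, 100.0         # population clearance, volume, bolus dose
        omega_cl = 0.3                             # between-subject SD of log-clearance
        sigma_w = 0.05                             # system (SDE) noise intensity
        sigma_e = 0.2                              # measurement error SD

        observations = []
        for i in range(n_id):
            cl_i = cl_pop * np.exp(rng.normal(0.0, omega_cl))   # interindividual variability
            c = np.empty_like(t)
            c[0] = dose / v
            for j in range(1, t.size):
                drift = -(cl_i / v) * c[j - 1]
                c[j] = c[j - 1] + drift * dt + sigma_w * rng.normal(0.0, np.sqrt(dt))
            samples = c[::200]                                   # observe every 2 h
            observations.append(samples + rng.normal(0.0, sigma_e, samples.shape))
        print(np.round(observations[0], 2))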

  14. Linear and Weakly Nonlinear Instability of Shallow Mixing Layers with Variable Friction

    Directory of Open Access Journals (Sweden)

    Irina Eglite

    2018-01-01

    Full Text Available Linear and weakly nonlinear instability of shallow mixing layers is analysed in the present paper. It is assumed that the resistance force varies in the transverse direction. The linear stability problem is solved numerically using a collocation method. It is shown that an increase in the ratio of the friction coefficient in the main channel to that in the floodplain has a stabilizing influence on the flow. The amplitude evolution equation for the most unstable mode (the complex Ginzburg–Landau equation) is derived from the shallow water equations under the rigid-lid assumption. Results of numerical calculations are presented.

  15. Non-linear effects in the Boltzmann equation

    International Nuclear Information System (INIS)

    Barrachina, R.O.

    1985-01-01

    The Boltzmann equation is studied by defining an integral transformation of the energy distribution function for an isotropic and homogeneous gas. This transformation may be interpreted as a linear superposition of equilibrium states with variable temperatures. It is shown that the temporal evolution features of the distribution function are determined by the singularities of this transformation. The method is applied to the Maxwell and Very Hard Particle interaction models. For the latter, the solution of the Boltzmann equation is compared with that of its linearized version, revealing several basic discrepancies and non-linear effects. This suggests a new rational approximation method with a clear physical meaning. Applying this technique, the relaxation features of the BKW (Bobylev, Krook and Wu) mode are analyzed, providing a conclusive counter-example to the Krook and Wu conjecture. The anisotropic Boltzmann equation for Maxwell models is solved as an expansion in terms of the eigenfunctions of the corresponding linearized collision operator, revealing interesting transient overpopulation and underpopulation effects at thermal energies as well as a new preferential spreading effect. By analyzing the initial collision, a criterion is established to deduce the general features of the final approach to equilibrium. Finally, it is shown how to improve the convergence of the eigenfunction expansion for high-energy underpopulated distribution functions. As an application of this theory, the linear cascade model for sputtering is analyzed, showing that many experimentally observed differences are due to non-linear effects. (M.E.L.) [es]

  16. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to such a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
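
    A compact sketch of the linear-nonlinear cascade described above: the input is passed through an exponential temporal filter, a static nonlinearity maps the filtered signal to a firing rate, and spikes are drawn from an inhomogeneous Poisson process. The filter time constant and the softplus nonlinearity are generic choices, not the parameter-free functions derived in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        dt, t_end = 1e-3, 2.0                        # s
        t = np.arange(0.0, t_end, dt)
        stimulus = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.2 * rng.normal(size=t.size)

        # Linear stage: causal exponential filter with tau = 20 ms.
        tau = 0.02
        kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
        kernel /= kernel.sum()
        filtered = np.convolve(stimulus, kernel)[: t.size]

        # Nonlinear stage: static rectifying nonlinearity giving a rate in Hz.
        rate = 40.0 * np.log1p(np.exp(8.0 * filtered))

        # Spike generation: inhomogeneous Poisson process.
        spikes = rng.random(t.size) < rate * dt
        print(f"mean firing rate ~ {spikes.sum() / t_end:.1f} Hz")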

  17. Mixed-mode modelling mixing methodologies for organisational intervention

    CERN Document Server

    Clarke, Steve; Lehaney, Brian

    2001-01-01

    The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human-centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...

  18. Modeling tides and vertical tidal mixing: A reality check

    International Nuclear Information System (INIS)

    Robertson, Robin

    2010-01-01

    Recently, there has been a great interest in the tidal contribution to vertical mixing in the ocean. In models, vertical mixing is estimated using parameterization of the sub-grid scale processes. Estimates of the vertical mixing varied widely depending on which vertical mixing parameterization was used. This study investigated the performance of ten different vertical mixing parameterizations in a terrain-following ocean model when simulating internal tides. The vertical mixing parameterization was found to have minor effects on the velocity fields at the tidal frequencies, but large effects on the estimates of vertical diffusivity of temperature. Although there was no definitive best performer for the vertical mixing parameterization, several parameterizations were eliminated based on comparison of the vertical diffusivity estimates with observations. The best performers were the new generic coefficients for the generic length scale schemes and Mellor-Yamada's 2.5 level closure scheme.

  19. Predictors for physical activity in adolescent girls using statistical shrinkage techniques for hierarchical longitudinal mixed effects models.

    Directory of Open Access Journals (Sweden)

    Edward M Grant

    Full Text Available We examined associations among longitudinal, multilevel variables and girls' physical activity to determine the important predictors of physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate-to-vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed-effects and 19 random-effects variables, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school's physical activity environment, affect adolescent girls' moderate-to-vigorous physical activity longitudinally.

  20. Superradiance Effects in the Linear and Nonlinear Optical Response of Quantum Dot Molecules

    Science.gov (United States)

    Sitek, A.; Machnikowski, P.

    2008-11-01

    We calculate the linear optical response from a single quantum dot molecule and the nonlinear, four-wave-mixing response from an inhomogeneously broadened ensemble of such molecules. We show that both optical signals are affected by the coupling-dependent superradiance effect and by optical interference between the two polarizations. As a result, the linear and nonlinear responses are not identical.

  1. Low-sensitivity, low-bounce, high-linearity current-controlled oscillator suitable for single-supply mixed-mode instrumentation system.

    Science.gov (United States)

    Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong

    2009-02-01

    A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO are less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% with reference to the 2 V supply voltage. The average conversion ratio KCCO is equal to 123.5 GHz/A. The linearity of the conversion ratio is high, with a tolerance within +/-1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage and is lower than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with the current-starved oscillator, and it is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.

  2. Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.

    Science.gov (United States)

    Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm

    2015-01-01

    To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40 % shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
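
    A toy version of the selection model described above, written with the PuLP modelling package (assumed to be available); the item names, nutrient contributions, coverage thresholds and list-length limit are invented, and the real model's constraints are considerably richer than this sketch.

        import pulp

        # Hypothetical contribution of each candidate item to the explained
        # variance of two nutrients (larger = more informative).
        items = {"bread":   {"energy": 0.30, "fibre": 0.25},
                 "cheese":  {"energy": 0.20, "fibre": 0.00},
                 "apples":  {"energy": 0.05, "fibre": 0.30},
                 "nuts":    {"energy": 0.15, "fibre": 0.20},
                 "sausage": {"energy": 0.25, "fibre": 0.05}}
        nutrients = ["energy", "fibre"]
        max_items = 3

        prob = pulp.LpProblem("ffq_item_selection", pulp.LpMaximize)
        x = pulp.LpVariable.dicts("select", items, cat=pulp.LpBinary)

        # Maximise total information summed over nutrients ...
        prob += pulp.lpSum(items[i][n] * x[i] for i in items for n in nutrients)
        # ... subject to a short food list and minimum coverage of each nutrient.
        prob += pulp.lpSum(x[i] for i in items) <= max_items
        for n in nutrients:
            prob += pulp.lpSum(items[i][n] * x[i] for i in items) >= 0.5

        prob.solve()
        print([i for i in items if x[i].value() == 1])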

  3. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 1970s, much interest was focused on fractionation in radiotherapy with the aim of improving the tumour control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as the Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose were introduced and used despite many shortcomings. It has been claimed that the more recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared the Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the tumour regression estimated by the Tumour Significant Dose and Time Dose Factor concepts varied by 1.6% from the prediction of the linear quadratic model. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept, which has been in use for more than a decade, as its results are within ±2% of those predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs

  4. A spatiotemporal mixed model to assess the influence of environmental and socioeconomic factors on the incidence of hand, foot and mouth disease

    Directory of Open Access Journals (Sweden)

    Lianfa Li

    2018-02-01

    Full Text Available Abstract Background As a common infectious disease, hand, foot and mouth disease (HFMD) is affected by multiple environmental and socioeconomic factors, and its pathogenesis is complex. Furthermore, the transmission of HFMD is characterized by strong spatial clustering and autocorrelation, and a classical statistical approach may be biased if spatial autocorrelation is not taken into account. In this paper, we propose to embed spatial characteristics into a spatiotemporal additive model to improve HFMD incidence assessment. Methods Using incidence data (6439 samples from 137 monitoring districts) for Shandong Province, China, along with meteorological, environmental and socioeconomic spatial and spatiotemporal covariate data, we proposed a spatiotemporal mixed model to estimate HFMD incidence. Geo-additive regression was used to model the non-linear effects of the covariates on the incidence risk of HFMD in univariate and multivariate models. Furthermore, a spatial effect was constructed to capture spatial autocorrelation at the sub-regional scale, and clusters (hotspots) of high risk were generated using spatiotemporal scan statistics as a predictor. Linear and non-linear effects were compared to illustrate the usefulness of non-linear associations. Patterns of spatial effects and clusters were explored to illustrate the variation of HFMD incidence across geographical sub-regions. To validate our approach, 10-fold cross-validation was conducted. Results The results showed that there were significant non-linear associations of the temporal index, spatiotemporal meteorological factors and spatial environmental and socioeconomic factors with HFMD incidence. Furthermore, there were strong spatial autocorrelation and clusters for HFMD incidence. Spatiotemporal meteorological parameters, the normalized difference vegetation index (NDVI), the temporal index, spatiotemporal clustering and spatial effects played important roles as predictors in

  5. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  6. Decision-case mix model for analyzing variation in cesarean rates.

    Science.gov (United States)

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  7. Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data

    KAUST Repository

    Zhou, Lan

    2010-03-01

    Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.

  8. Validation of effective momentum and heat flux models for stratification and mixing in a water pool

    Energy Technology Data Exchange (ETDEWEB)

    Hua Li; Villanueva, W.; Kudinov, P. [Royal Institute of Technology (KTH), Div. of Nuclear Power Safety, Stockholm (Sweden)

    2013-06-15

    The pressure suppression pool is the most important feature of the pressure suppression system in a Boiling Water Reactor (BWR) that acts primarily as a passive heat sink during a loss of coolant accident (LOCA) or when the reactor is isolated from the main heat sink. The steam injection into the pool through the blowdown pipes can lead to short term dynamic phenomena and long term thermal transient in the pool. The development of thermal stratification or mixing in the pool is a transient phenomenon that can influence the pool's pressure suppression capacity. Different condensation regimes depending on the pool's bulk temperature and steam flow rates determine the onset of thermal stratification or erosion of stratified layers. Previously, we have proposed to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) and the Effective Momentum Source (EMS) models. The EHS model is used to provide thermal effect of steam injection on the pool, preserving heat and mass balance. The EMS model is used to simulate momentum induced by steam injection in different flow regimes. The EMS model is based on the combination of (i) synthetic jet theory, which predicts effective momentum if amplitude and frequency of flow oscillations in the pipe are given, and (ii) model proposed by Aya and Nariai for prediction of the amplitude and frequency of oscillations at a given pool temperature and steam mass flux. The complete EHS/EMS models only require the steam mass flux, initial pool bulk temperature, and design-specific parameters, to predict thermal stratification and mixing in a pressure suppression pool. In this work we use EHS/EMS models implemented in containment thermal hydraulic code GOTHIC. The PPOOLEX experiments (Lappeenranta University of Technology, Finland) are utilized to (a) quantify errors due to GOTHIC's physical models and numerical schemes, (b) propose necessary improvements in GOTHIC sub-grid scale

  9. Application of mixed-integer linear programming in a car seats assembling process

    Directory of Open Access Journals (Sweden)

    Jorge Iván Perez Rave

    2011-12-01

    Full Text Available In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new manners of work distribution that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of programming 19 workers under a configuration that merges two workplaces and separates some operations from some workplaces. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.

  10. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    Science.gov (United States)

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In keeping with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
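
    A small illustration of why a straight line is a poor description of such growth data, contrasting a linear fit with a smoothing spline on synthetic larval-length measurements; the data and smoothing parameter are invented, and the paper itself uses generalised additive mixed models, which additionally carry random effects.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(5)
        age = np.linspace(0.0, 120.0, 80)                      # hours since hatching
        length = 17.0 / (1.0 + np.exp(-(age - 50.0) / 12.0))   # logistic growth curve
        length += rng.normal(0.0, 0.4, size=age.size)          # measurement noise

        # Linear fit versus penalised smooth fit.
        slope, intercept = np.polyfit(age, length, 1)
        linear_pred = slope * age + intercept
        spline = UnivariateSpline(age, length, s=len(age) * 0.4 ** 2)
        smooth_pred = spline(age)

        rss_lin = np.sum((length - linear_pred) ** 2)
        rss_smooth = np.sum((length - smooth_pred) ** 2)
        print(f"residual SS: linear {rss_lin:.1f}, spline {rss_smooth:.1f}")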

  11. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed-pixel information. A simulated linear mixture model experiment is generated to mimic real-world data with known end-member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized so that the class proportions sum to 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world data set. The atmospheric scattering error is computed for each pixel value for three images generated from SPOT data. Each pixel is either correctly classified or misclassified. Results showed a large improvement in classification accuracy; for example, in image 1, the misclassification due to atmospheric noise is 41%. After degrading (removing) the atmospheric effect, the misclassified pixels were reduced to 4%. We conclude that classification accuracy can be improved by degradation of the atmospheric noise.
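
    The class-cover-proportion recovery step in such an experiment can be written as constrained linear unmixing of each pixel against the end-member spectra; a generic sketch with made-up four-band spectra, using non-negative least squares followed by renormalisation (the sum-to-one constraint is only enforced approximately here).

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(6)

        # End-member spectra (rows: bands, columns: classes) for a toy 4-band sensor.
        E = np.array([[0.10, 0.40, 0.25],
                      [0.15, 0.45, 0.30],
                      [0.60, 0.30, 0.20],
                      [0.55, 0.25, 0.35]])

        true_p = np.array([0.5, 0.3, 0.2])                    # class cover proportions
        pixel = E @ true_p + rng.normal(0.0, 0.01, size=4)    # add "atmospheric" noise

        p_hat, _ = nnls(E, pixel)      # non-negative least squares unmixing
        p_hat /= p_hat.sum()           # renormalise to sum to one
        print(np.round(p_hat, 3))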

  12. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  13. Low-energy limit of the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N_f)_L x SU(N_f)_R, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N_f = flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  14. Comparison of Repeated Measurement Design and Mixed Models in Evaluation of the Entonox Effect on Labor Pain

    Directory of Open Access Journals (Sweden)

    Nasim Karimi

    2017-01-01

    Full Text Available Background & objectives: In many medical studies, the response variable is measured repeatedly over time to evaluate the treatment effect; this is known as a longitudinal study. A common analysis method for this type of data is repeated-measures ANOVA, which uses only one correlation structure, and the results are not valid if that correlation structure is inappropriate. To avoid this problem, a convenient alternative is mixed models. So, the aim of this study was to compare mixed and repeated measurement models for examination of the Entonox effect on labor pain. Methods: This experimental study was designed to compare the effect of Entonox and oxygen inhalation on pain relief between two groups. Data were analyzed using repeated measurement and mixed models with different correlation structures. Selection and comparison of proper correlation structures were performed using the Akaike information criterion, the Bayesian information criterion and the restricted log-likelihood. Data were analyzed using SPSS-22. Results: Results of our study showed that all variables, including analgesia method, labor duration of the first and second stages, and time, were significant in these tests. In the mixed model, heterogeneous first-order autoregressive, first-order autoregressive, heterogeneous Toeplitz and unstructured correlation structures were recognized as the best structures. All variables were also significant in these structures. The unstructured variance-covariance matrix was recognized as the worst structure, and labor duration of the first and second stages was not significant in this structure. Conclusions: This study showed that Entonox inhalation has a significant effect on pain relief in primiparous women, and this is confirmed by all of the models.

  15. Dynamic Linear Models with R

    CERN Document Server

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in as disparate fields as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using R package.

  16. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time using kinematic and kinetic variables, and accuracy was assessed using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to the change from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
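
    A generic sketch of the model comparison described above, fitting a small neural network and a linear model to synthetic predictors and comparing mean absolute percentage errors with scikit-learn (assumed available); the variables, network architecture and data are not those of the study.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_percentage_error
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        n = 400
        X = rng.normal(size=(n, 4))                      # kinematic/kinetic predictors
        y = 1.6 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 + 0.03 * X[:, 2] * X[:, 3]
        y += rng.normal(0.0, 0.02, size=n)               # synthetic 5 m start time (s)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        linear = LinearRegression().fit(X_tr, y_tr)
        ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)

        for name, model in [("linear", linear), ("ANN", ann)]:
            mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
            print(f"{name}: MAPE = {100 * mape:.2f}%")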

  17. A mixed integer linear programming model applied in barge planning for Omya

    Directory of Open Access Journals (Sweden)

    David Bredström

    2015-12-01

    Full Text Available This article presents a mathematical model for barge transport planning on the river Rhine, which is part of a decision support system (DSS) recently taken into use by the Swiss company Omya. The system is operated by Omya's regional office in Cologne, Germany, responsible for distribution planning at the regional distribution center (RDC) in Moerdijk, the Netherlands. The distribution planning is a vital part of supply chain management of Omya's production of Norwegian high quality calcium carbonate slurry, supplied to European paper manufacturers. The DSS operates within a vendor managed inventory (VMI) setting, where the customer inventories are monitored by Omya, who decides upon the refilling days and quantities delivered by barges. The barge planning problem falls into the category of inventory routing problems (IRP) and is further characterized by multiple products, a heterogeneous fleet with availability restrictions (the fleet is owned by a third party), vehicle compartments, dependency of barge capacity on water-level, multiple customer visits, bounded customer inventories and a rolling planning horizon. There are additional modelling details which had to be considered to make it possible to employ the model in practice at a sufficient level of detail. To the best of our knowledge, we have not been able to find similar models covering all these aspects in barge planning. This article presents the developed mixed-integer programming model and discusses practical experience with its solution. Briefly, it also puts the model into the context of the entire business case of value chain optimization at Omya.

  18. Linearized models for a new magnetic control in MAST

    International Nuclear Information System (INIS)

    Artaserse, G.; Maviglia, F.; Albanese, R.; McArdle, G.J.; Pangione, L.

    2013-01-01

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current driven model for the dynamic simulations of the experimental data have been performed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops

  19. Linearized models for a new magnetic control in MAST

    Energy Technology Data Exchange (ETDEWEB)

    Artaserse, G., E-mail: giovanni.artaserse@enea.it [Associazione Euratom-ENEA sulla Fusione, Via Enrico Fermi 45, I-00044 Frascati (RM) (Italy); Maviglia, F.; Albanese, R. [Associazione Euratom-ENEA-CREATE sulla Fusione, Via Claudio 21, I-80125 Napoli (Italy); McArdle, G.J.; Pangione, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom)

    2013-10-15

    Highlights: ► We applied linearized models for a new magnetic control on MAST tokamak. ► A suite of procedures, conceived to be machine independent, have been used. ► We carried out model-based simulations, taking into account eddy currents effects. ► Comparison with the EFIT flux maps and the experimental magnetic signals are shown. ► A current driven model for the dynamic simulations of the experimental data have been performed. -- Abstract: The aim of this work is to provide reliable linearized models for the design and assessment of a new magnetic control system for MAST (Mega Ampère Spherical Tokamak) using rtEFIT, which can easily be exported to MAST Upgrade. Linearized models for magnetic control have been obtained using the 2D axisymmetric finite element code CREATE L. MAST linearized models include equivalent 2D axisymmetric schematization of poloidal field (PF) coils, vacuum vessel, and other conducting structures. A plasmaless and a double null configuration have been chosen as benchmark cases for the comparison with experimental data and EFIT reconstructions. Good agreement has been found with the EFIT flux map and the experimental signals coming from magnetic probes with only few mismatches probably due to broken sensors. A suite of procedures (equipped with a user friendly interface to be run even remotely) to provide linearized models for magnetic control is now available on the MAST linux machines. A new current driven model has been used to obtain a state space model having the PF coil currents as inputs. Dynamic simulations of experimental data have been carried out using linearized models, including modelling of the effects of the passive structures, showing a fair agreement. The modelling activity has been useful also to reproduce accurately the interaction between plasma current and radial position control loops.

  20. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
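
    For a single trajectory, the piecewise linear-linear curve with an unknown knot can be written as a four-parameter function and fitted directly; a sketch with scipy (the full latent growth mixture model adds random effects and latent classes, which are not attempted here).

        import numpy as np
        from scipy.optimize import curve_fit

        def piecewise_linear(t, b0, b1, b2, knot):
            """Intercept b0, slope b1 before the knot, slope b2 after it."""
            return b0 + b1 * np.minimum(t, knot) + b2 * np.maximum(t - knot, 0.0)

        rng = np.random.default_rng(8)
        t = np.linspace(0.0, 10.0, 40)
        y = piecewise_linear(t, 2.0, 1.5, 0.3, 4.0) + rng.normal(0.0, 0.3, t.size)

        params, _ = curve_fit(piecewise_linear, t, y, p0=[1.0, 1.0, 1.0, 5.0])
        print(np.round(params, 2))     # estimates of (b0, b1, b2, knot)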

  1. Modeling of soft impingement effect during solid-state partitioning phase transformations in binary alloys

    NARCIS (Netherlands)

    Chen, H.; Van der Zwaag, S.

    2010-01-01

    The soft impingement effect at the later stage of partitioning phase transformations has been modeled both for the diffusion-controlled growth model and for the mixed-mode model. Instead of the linear and exponential approximations for the concentration gradient in front of the interface used in the

  2. A mixed integer linear programming model for operational planning of a biodiesel supply chain network from used cooking oil

    Science.gov (United States)

    Jonrinaldi, Hadiguna, Rika Ampuh; Salastino, Rades

    2017-11-01

    Environmental consciousness has received much attention nowadays. It is not only about how to recycle, remanufacture or reuse used end products, but also about how to optimize the operations of the reverse system. A previous study proposed a design of a reverse supply chain network for biodiesel from used cooking oil. However, that research focused on the design of the supply chain strategy, not on the operations of the supply chain. It only decided how to structure the supply chain over the next few years and, in general terms, which process would be conducted at each stage. The supply chain system had not considered the operational policies to be followed by the companies in the supply chain. Companies need a policy for each stage of the supply chain operations so as to produce the optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the supply chain system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network from used cooking oil. A mixed integer linear programming model is developed for the operational planning of the biodiesel supply chain in order to minimize the total operational cost of the supply chain. Based on the implementation of the developed model, over seven days of operational planning the total operational cost of the biodiesel supply chain incurred by the system is lower than that obtained in the previous research, by about 2,743,470.00 or 0.186%. Production costs contributed 74.6% of the total operational cost and the cost of purchasing the used cooking oil contributed 24.1%. The system should therefore pay particular attention to these two aspects, as changes in them will significantly affect the total operational cost of the supply chain.
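
    A toy sketch of the kind of mixed integer linear program described above, using scipy.optimize.milp (SciPy 1.9+). The two collection points, their costs, capacities and the demand figure are invented for illustration and bear no relation to the paper's data.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision variables: x1, x2 = litres of used cooking oil collected from two
# hypothetical collection points; y1, y2 = binary "use this collection point".
c = np.array([0.4, 0.5, 100.0, 80.0])    # per-litre purchase costs + fixed costs

A = np.array([
    [1.0, 1.0,    0.0,    0.0],   # x1 + x2 >= 1000   (plant demand)
    [1.0, 0.0, -800.0,    0.0],   # x1 <= 800*y1      (capacity only if point 1 used)
    [0.0, 1.0,    0.0, -700.0],   # x2 <= 700*y2
])
constraints = LinearConstraint(A, lb=[1000.0, -np.inf, -np.inf], ub=[np.inf, 0.0, 0.0])

bounds = Bounds(lb=[0.0, 0.0, 0.0, 0.0], ub=[np.inf, np.inf, 1.0, 1.0])
integrality = np.array([0, 0, 1, 1])      # last two variables are binary

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, res.fun)                     # optimal plan and total operational cost
```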

  3. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    Science.gov (United States)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 C respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  4. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic needs to resolve some important issues regarding limited fossil resources, greater efficiency in the production of electrical energy, and reducing emission levels of pollutants. These problems can be addressed only by formulating and implementing an energy mix that meets these conditions: rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects will be applied. (Authors)

  5. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Science.gov (United States)

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms Langmuir, Freundlich and Temkin were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. A comparison of linear and non-linear regression models was made to select the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations using the linear model differed, but they were the same when using the non-linear model. Langmuir-2 is one of the linear forms, and it had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for the non-linear model, Langmuir-4 fitted best among all the isotherms because it had the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, a new large-scale technology for fluoride removal could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
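
    The contrast between linearized and non-linear fitting can be sketched as follows. The equilibrium data are synthetic, and "Langmuir-1" here denotes the common Ce/qe versus Ce linearization, which may not match the paper's numbering of the four linear forms.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Non-linear Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative equilibrium data (mg/L, mg/g) -- not the limonite data set.
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = np.array([0.9, 1.5, 2.3, 3.1, 3.7, 4.1])

# Non-linear fit of the isotherm directly.
popt, _ = curve_fit(langmuir, Ce, qe, p0=[5.0, 0.5])
qmax_nl, KL_nl = popt

# One linearised form: Ce/qe = (1/qmax)*Ce + 1/(KL*qmax).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_lin, KL_lin = 1.0 / slope, slope / intercept

print(f"non-linear: qmax={qmax_nl:.2f}, KL={KL_nl:.2f}")
print(f"linearised: qmax={qmax_lin:.2f}, KL={KL_lin:.2f}")
```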

  6. Estimating linear effects in ANOVA designs: the easy way.

    Science.gov (United States)

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
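
    A minimal sketch of one way to test a linear trend across ordered within-subject conditions: apply a centred linear contrast to each participant's condition means, which is equivalent to testing the mean individual slope. The simulated reaction times and effect sizes are illustrative only, and the paper's own ANOVA-based variance decomposition is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic RTs: 20 participants x 5 ordered distance levels (illustrative only).
n_subj, levels = 20, np.array([1, 2, 3, 4, 5])
rt = 600 - 15 * levels + rng.normal(0, 30, size=(n_subj, levels.size))

# Linear trend contrast: centred level codes applied to each participant's means.
weights = levels - levels.mean()            # [-2, -1, 0, 1, 2]
contrast = rt @ weights                     # one contrast value per participant

# Equivalent per-participant slope (contrast rescaled by the weight sum of squares).
slopes = contrast / (weights @ weights)

t, p = stats.ttest_1samp(contrast, 0.0)     # test of the linear effect across participants
print(f"mean slope = {slopes.mean():.2f} ms/level, t({n_subj - 1}) = {t:.2f}, p = {p:.4f}")
```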

  7. Analysis and modeling of subgrid scalar mixing using numerical data

    Science.gov (United States)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence is used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  8. Linking linear programming and spatial simulation models to predict landscape effects of forest management alternatives

    Science.gov (United States)

    Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers

    2006-01-01

    Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...

  9. Translational mixed-effects PKPD modelling of recombinant human growth hormone - from hypophysectomized rat to patients

    DEFF Research Database (Denmark)

    Thorsted, Anders; Thygesen, Peter; Agersø, Henrik

    2016-01-01

    was developed from experimental PKPD studies of rhGH and effects of long-term treatment as measured by insulin-like growth factor 1 (IGF-1) and bodyweight gain in rats. Modelled parameter values were scaled to human values using the allometric approach with fixed exponents for PKs and unscaled for PDs...... and validated through simulations relative to patient data. KEY RESULTS: The final model described rhGH PK as a two compartmental model with parallel linear and non-linear elimination terms, parallel first-order absorption with a total s.c. bioavailability of 87% in rats. Induction of IGF-1 was described...... by an indirect response model with stimulation of kin and related to rhGH exposure through an Emax relationship. Increase in bodyweight was directly linked to individual concentrations of IGF-1 by a linear relation. The scaled model provided robust predictions of human systemic PK of rhGH, but exposure following...
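
    A rough sketch of the model structure described (two-compartment PK with parallel linear and saturable elimination, first-order absorption, and an indirect-response/Emax link to IGF-1), written as plain ODEs. All parameter values and the dose are invented placeholders rather than the published estimates, and the mixed-effects (inter-individual) layer is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the published estimates).
ka, F = 1.0, 0.87                     # 1/h absorption, s.c. bioavailability
CL, Vc, Q, Vp = 0.6, 1.5, 0.3, 1.0    # linear clearance, volumes, inter-compartmental flow
Vmax, Km = 2.0, 0.5                   # saturable (non-linear) elimination
kin, kout = 5.0, 0.2                  # IGF-1 turnover
Emax, EC50 = 3.0, 1.0                 # stimulation of kin by rhGH concentration

def rhs(t, y):
    depot, Ac, Ap, igf1 = y
    C = Ac / Vc
    dDepot = -ka * depot
    dAc = (ka * depot - (CL / Vc) * Ac - Vmax * C / (Km + C)
           - (Q / Vc) * Ac + (Q / Vp) * Ap)
    dAp = (Q / Vc) * Ac - (Q / Vp) * Ap
    stim = 1.0 + Emax * C / (EC50 + C)          # Emax stimulation of kin
    dIgf1 = kin * stim - kout * igf1            # indirect response model
    return [dDepot, dAc, dAp, dIgf1]

dose = 1.0                                      # illustrative s.c. dose
y0 = [F * dose, 0.0, 0.0, kin / kout]           # IGF-1 starts at its baseline
sol = solve_ivp(rhs, (0.0, 48.0), y0, dense_output=True, max_step=0.1)
print("IGF-1 at 24 h:", sol.sol(24.0)[3])
```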

  10. Effective Momentum and heat flux models for simulation of stratification and mixing in a large pool of water

    Energy Technology Data Exchange (ETDEWEB)

    Hua Li; Villanueva, W.; Kudinov, P. [Royal Institute of Technology (KTH). Div. of Nuclear Power Safety, Stockholm (Sweden)

    2012-06-15

    Performance of a boiling water reactor (BWR) containment is mostly determined by reliable operation of the pressure suppression pool which serves as a heat sink to cool and condense steam released from the core vessel. Thermal stratification in the pool can significantly impede the pool's pressure suppression capacity. A source of momentum is required in order to break stratification and mix the pool. It is important to have reliable prediction of the transient development of stratification and mixing in the pool in different regimes of steam injection. Previously, we have proposed to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) and the Effective Momentum Source (EMS) models. The EHS model is used to provide the thermal effect of steam injection on the pool, preserving heat and mass balance. The EMS model is used to simulate the momentum induced by steam injection in different flow regimes. The EMS model is based on the combination of (1) synthetic jet theory, which predicts the effective momentum if the amplitude and frequency of flow oscillations in the pipe are given, and (2) the model proposed by Aya and Nariai for prediction of the amplitude and frequency of oscillations at a given pool temperature and steam mass flux. The complete EHS/EMS models only require the steam mass flux, initial pool bulk temperature, and design-specific parameters to predict thermal stratification and mixing in a pressure suppression pool. In this work we use the EHS/EMS models implemented in the containment thermal hydraulic code GOTHIC. The POOLEX/PPOOLEX experiments (Lappeenranta University of Technology, Finland) are utilized to (a) quantify errors due to GOTHIC's physical models and numerical schemes, (b) propose necessary improvements in GOTHIC sub-grid scale modeling, and (c) validate our proposed models. Specifically, the data from the POOLEX STB-21 and PPOOLEX STR-03 and STR-04 tests are used for validation of the EHS and EMS models in this work.

  11. Effective Momentum and heat flux models for simulation of stratification and mixing in a large pool of water

    International Nuclear Information System (INIS)

    Hua Li; Villanueva, W.; Kudinov, P.

    2012-06-01

    Performance of a boiling water reactor (BWR) containment is mostly determined by reliable operation of the pressure suppression pool which serves as a heat sink to cool and condense steam released from the core vessel. Thermal stratification in the pool can significantly impede the pool's pressure suppression capacity. A source of momentum is required in order to break stratification and mix the pool. It is important to have reliable prediction of the transient development of stratification and mixing in the pool in different regimes of steam injection. Previously, we have proposed to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) and the Effective Momentum Source (EMS) models. The EHS model is used to provide the thermal effect of steam injection on the pool, preserving heat and mass balance. The EMS model is used to simulate the momentum induced by steam injection in different flow regimes. The EMS model is based on the combination of (1) synthetic jet theory, which predicts the effective momentum if the amplitude and frequency of flow oscillations in the pipe are given, and (2) the model proposed by Aya and Nariai for prediction of the amplitude and frequency of oscillations at a given pool temperature and steam mass flux. The complete EHS/EMS models only require the steam mass flux, initial pool bulk temperature, and design-specific parameters to predict thermal stratification and mixing in a pressure suppression pool. In this work we use the EHS/EMS models implemented in the containment thermal hydraulic code GOTHIC. The POOLEX/PPOOLEX experiments (Lappeenranta University of Technology, Finland) are utilized to (a) quantify errors due to GOTHIC's physical models and numerical schemes, (b) propose necessary improvements in GOTHIC sub-grid scale modeling, and (c) validate our proposed models. Specifically, the data from the POOLEX STB-21 and PPOOLEX STR-03 and STR-04 tests are used for validation of the EHS and EMS models in this work.

  12. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda; Hart, Jeffrey D.

    2009-01-01

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed.

  13. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
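
    A small illustration of the estimator discussed: fit a main-terms Poisson working model and read off the treatment coefficient as the estimate of the marginal log rate ratio. The simulated trial below happens to match the working model, so it only demonstrates the mechanics, not the robustness-to-misspecification result proved in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated randomized trial (illustrative): binary treatment, one baseline covariate.
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "base": rng.normal(size=n),
})
rate = np.exp(0.3 * df["treat"] + 0.8 * df["base"])
df["y"] = rng.poisson(rate)

# Main-terms Poisson working model; the coefficient on "treat" is used as the
# estimate of the marginal log rate ratio.
fit = smf.glm("y ~ treat + base", data=df, family=sm.families.Poisson()).fit()
print("log rate ratio:", fit.params["treat"])
print("95% CI:", fit.conf_int().loc["treat"].values)
```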

  14. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  15. ADVANCED MIXING MODELS

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-01-01

    (50,000 to 300,000) with a relative standard deviation of ± 11.83%. An improved correlation including the effect of circulation time was proposed by Grenville and Tilton [11] via a better fit of mixing time data for turbulent jet mixing under a wider range of jet Reynolds numbers (50,000 to 300,000). The circulation time was defined as the liquid volume divided by the entrained flow rate. They assumed that the mixing rate at the end of the jet length controls the mixing time for the entire tank by estimating the kinetic energy dissipation rate as discussed earlier. They predicted that for a given volume, an optimum geometry exists for a mixing vessel, allowing a desired mixing time to be achieved for a minimum power input. The current work will compare their correlation of the jet mixing time with CFD modeling results for their experimental tanks in an attempt to achieve a fundamental understanding of the turbulent jet mixing and to establish mixing indicators.

  16. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Richard Dimenna, R; David Tamburello, D

    2008-11-13

    (50,000 to 300,000) with a relative standard deviation of ± 11.83%. An improved correlation including the effect of circulation time was proposed by Grenville and Tilton [11] via a better fit of mixing time data for turbulent jet mixing under a wider range of jet Reynolds numbers (50,000 to 300,000). The circulation time was defined as the liquid volume divided by the entrained flow rate. They assumed that the mixing rate at the end of the jet length controls the mixing time for the entire tank by estimating the kinetic energy dissipation rate as discussed earlier. They predicted that for a given volume, an optimum geometry exists for a mixing vessel, allowing a desired mixing time to be achieved for a minimum power input. The current work will compare their correlation of the jet mixing time with CFD modeling results for their experimental tanks in an attempt to achieve a fundamental understanding of the turbulent jet mixing and to establish mixing indicators.

  17. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Science.gov (United States)

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  18. Modeling of particle mixing in the atmosphere

    International Nuclear Information System (INIS)

    Zhu, Shupeng

    2015-01-01

    This thesis presents a newly developed size-composition resolved aerosol model (SCRAM), which is able to simulate the dynamics of externally-mixed particles in the atmosphere, and evaluates its performance in three-dimensional air-quality simulations. The main work is split into four parts. First, the research context of external mixing and aerosol modelling is introduced. Secondly, the development of the SCRAM box model is presented along with validation tests. Each particle composition is defined by the combination of mass-fraction sections of its chemical components or aggregates of components. The three main processes involved in aerosol dynamic (nucleation, coagulation, condensation/ evaporation) are included in SCRAM. The model is first validated by comparisons with published reference solutions for coagulation and condensation/evaporation of internally-mixed particles. The particle mixing state is investigated in a 0-D simulation using data representative of air pollution at a traffic site in Paris. The relative influence on the mixing state of the different aerosol processes and of the algorithm used to model condensation/evaporation (dynamic evolution or bulk equilibrium between particles and gas) is studied. Then, SCRAM is integrated into the Polyphemus air quality platform and used to conduct simulations over Greater Paris during the summer period of 2009. This evaluation showed that SCRAM gives satisfactory results for both PM2.5/PM10 concentrations and aerosol optical depths, as assessed from comparisons to observations. Besides, the model allows us to analyze the particle mixing state, as well as the impact of the mixing state assumption made in the modelling on particle formation, aerosols optical properties, and cloud condensation nuclei activation. Finally, two simulations are conducted during the winter campaign of MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric Pollution and climate effects, and Integrated tools for

  19. Mixed problems for linear symmetric hyperbolic systems with characteristic boundary conditions

    International Nuclear Information System (INIS)

    Secchi, P.

    1994-01-01

    We consider the initial-boundary value problem for symmetric hyperbolic systems with characteristic boundary of constant multiplicity. In the linear case we give some results about the existence of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary. We also consider the equations of ideal magneto-hydrodynamics under perfectly conducting wall boundary conditions and give some results about the solvability of such mixed problems. (author). 16 refs

  20. Mixed Integer Linear Programming based machine learning approach identifies regulators of telomerase in yeast.

    Science.gov (United States)

    Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer

    2016-06-02

    Understanding telomere length maintenance mechanisms is central in cancer biology, as their dysregulation is one of the hallmarks of immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method into a user-friendly package for R which can be straightforwardly applied to similar problems integrating gene regulator binding information and expression profiles of samples of e.g. different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Group-Level EEG-Processing Pipeline for Flexible Single Trial-Based Analyses Including Linear Mixed Models.

    Science.gov (United States)

    Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha

    2018-01-01

    Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
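
    The authors fit their models with lmer in R; an analogous single-trial linear mixed model can be sketched in Python with statsmodels, as below. The simulated amplitudes, the "rt" covariate and the subject-only grouping (no item effects) are simplifying assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative single-trial data: mean ERP amplitude per trial with a per-trial
# performance covariate (RT), nested within subjects.
n_subj, n_trials = 20, 60
subj = np.repeat(np.arange(n_subj), n_trials)
rt = rng.normal(500, 80, size=n_subj * n_trials)
subj_intercept = rng.normal(0, 1.0, size=n_subj)[subj]
amp = 2.0 + 0.005 * (rt - 500) + subj_intercept + rng.normal(0, 1.5, size=rt.size)

df = pd.DataFrame({"amplitude": amp, "rt": rt, "subject": subj})
df["rt_c"] = df["rt"] - df["rt"].mean()     # centre the covariate

# Linear mixed model with a random intercept per subject; re_formula="~rt_c"
# would additionally fit a random slope for the covariate.
m = smf.mixedlm("amplitude ~ rt_c", data=df, groups=df["subject"])
fit = m.fit()
print(fit.summary())
```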

  2. Eliciting mixed emotions: A meta-analysis comparing models, types and measures.

    Directory of Open Access Journals (Sweden)

    Raul Berrios

    2015-04-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model – dimensional or discrete – as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = .77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
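
    For readers unfamiliar with random-effects pooling, a compact DerSimonian-Laird sketch is given below; the five effect sizes and variances are invented and are not the 63 studies analysed in the meta-analysis.

```python
import numpy as np

# Illustrative study-level effect sizes and sampling variances.
d = np.array([0.9, 0.6, 1.1, 0.4, 0.8])
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

# DerSimonian-Laird random-effects pooling.
w_fixed = 1.0 / v
d_fixed = np.sum(w_fixed * d) / np.sum(w_fixed)
Q = np.sum(w_fixed * (d - d_fixed) ** 2)                 # heterogeneity statistic
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (d.size - 1)) / c)                  # between-study variance

w_rand = 1.0 / (v + tau2)
d_rand = np.sum(w_rand * d) / np.sum(w_rand)
se_rand = np.sqrt(1.0 / np.sum(w_rand))
lo, hi = d_rand - 1.96 * se_rand, d_rand + 1.96 * se_rand
print(f"pooled d = {d_rand:.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```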

  3. Additive action model for mixed irradiation

    International Nuclear Information System (INIS)

    Lam, G.K.Y.

    1984-01-01

    Recent experimental results indicate that a mixture of high and low LET radiation may have some beneficial features (such as a lower OER but with skin sparing) for clinical use, and interest has been renewed in the study of mixtures of high and low LET radiation. Several standard radiation inactivation models can readily accommodate interaction between two mixed radiations; however, this is usually handled by postulating extra free parameters, which can only be determined by fitting to experimental data. A model without any free parameters is proposed to explain the biological effect of mixed radiations, based on the following two assumptions: (a) the combined biological action due to two radiations is additive, assuming no repair has taken place during the interval between the two irradiations; and (b) the initial physical damage induced by radiation develops into the final biological effect (e.g. cell killing) over a relatively long period (hours) after irradiation. This model has been shown to provide a satisfactory fit to the experimental results of previous studies.

  4. LINEAR MIXED MODEL TO DESCRIBE THE BASAL AREA INCREMENT FOR INDIVUDUAL CEDRO (Cedrela odorata L.TREES IN OCCIDENTAL AMAZON, BRAZIL

    Directory of Open Access Journals (Sweden)

    Thiago Augusto da Cunha

    2013-01-01

    Reliable growth data from trees are important for establishing rational forest management. Tree characteristics such as size, crown architecture and competition indices have been used to describe increment mathematically and efficiently when associated with it. However, the precise role of these effects in growth modeling for tropical trees needs to be further studied. Here, the basal area increment (BAI) of individual Cedrela odorata trees sampled in the Amazon forest is reconstructed to develop a growth model using potential predictors such as: (1) classical tree size; (2) morphometric data; (3) competition; and (4) social position, including liana loads. Despite the large variation in tree size and growth, we observed that these kinds of predictor variables described the BAI well at the level of the individual tree. The fitted mixed model achieved a high efficiency (R² = 92.7%) and predicted the 3-year BAI over bark for Cedrela odorata trees ranging from 10 to 110 cm in diameter at breast height. Tree height, stem slenderness and crown form had a strong influence in the BAI growth model and explained most of the growth variance (partial R² = 87.2%). Competition variables had a negative influence on the BAI; however, they explained about 7% of the total variation. The introduction of a random parameter into the regression model (mixed-model procedure) demonstrated a better fit to the observed data and showed more realistic predictions than the fixed model.

  5. Appropriateness of mechanistic and non-mechanistic models for the application of ultrafiltration to mixed waste

    International Nuclear Information System (INIS)

    Foust, Henry; Ghosehajra, Malay

    2007-01-01

    This study asks two questions: (1) How appropriate is the use of a basic filtration equation for the application of ultrafiltration to mixed waste, and (2) How appropriate are non-parametric models for permeate rates (volumes)? To answer these questions, mechanistic and non-mechanistic approaches are developed for permeate rates and volumes associated with an ultrafiltration/mixed waste system in dia-filtration mode. The mechanistic approach is based on a filtration equation which states that t/V vs. V is a linear relationship. The coefficients associated with this linear regression are composed of physical/chemical parameters of the system and are based on the mass balance equation associated with the membrane and the associated developing cake layer. For several sets of data, a high correlation is shown that supports the assertion that t/V vs. V is a linear relationship. It is also shown that non-mechanistic approaches, i.e., the use of regression models, are not appropriate. One model considered is Q(p) = a*ln(Cb) + b. Regression models are inappropriate because the scale-up from a bench scale (pilot scale) study to full scale for permeate rates (volumes) is not simply the ratio of the two membrane surface areas. (authors)
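
    The mechanistic check described (linearity of t/V versus V) reduces to an ordinary least-squares fit; the permeate volumes below are made up for illustration.

```python
import numpy as np

# Illustrative dia-filtration data: cumulative permeate volume V (L) at time t (min).
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0])
V = np.array([1.8, 3.2, 5.4, 8.6, 10.9, 13.6])

# Basic filtration equation: t/V = a*V + b should be linear in V if the
# cake-filtration mechanism applies; a and b bundle membrane/cake properties.
a, b = np.polyfit(V, t / V, 1)
pred = a * V + b
ss_res = np.sum((t / V - pred) ** 2)
ss_tot = np.sum((t / V - np.mean(t / V)) ** 2)
print(f"slope a = {a:.3f}, intercept b = {b:.3f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```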

  6. Validation of Effective Models for Simulation of Thermal Stratification and Mixing Induced by Steam Injection into a Large Pool of Water

    Directory of Open Access Journals (Sweden)

    Hua Li

    2014-01-01

    The Effective Heat Source (EHS) and Effective Momentum Source (EMS) models have been proposed to predict the development of thermal stratification and mixing during a steam injection into a large pool of water. These effective models are implemented in GOTHIC software and validated against the POOLEX STB-20 and STB-21 tests and the PPOOLEX MIX-01 test. First, the EHS model is validated against the STB-20 test, which shows the development of thermal stratification. Different numerical schemes and grid resolutions have been tested. A 48×114 grid with a second-order scheme is sufficient to capture the vertical temperature distribution in the pool. Next, the EHS and EMS models are validated against the STB-21 test. The effective momentum is estimated based on the water level oscillations in the blowdown pipe. An effective momentum selected within the experimental measurement uncertainty can reproduce the mixing details. Finally, the EHS-EMS models are validated against the MIX-01 test, which has improved space and time resolution of temperature measurements inside the blowdown pipe. Excellent agreement in averaged pool temperature and water level in the pool between the experiment and simulation has been achieved. The development of thermal stratification in the pool is also well captured in the simulation, as well as the thermal behavior of the pool during the mixing phase.

  7. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
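
    The IEM model that the new frequency model plugs into can be sketched in a few lines: each notional particle's scalar relaxes toward the ensemble mean at a rate set by the mixing frequency. The constant frequency and particle count below are placeholders; the paper's contribution is precisely to compute that frequency dynamically from Lagrangian dissipation information rather than fixing it.

```python
import numpy as np

rng = np.random.default_rng(3)

# IEM (interaction-by-exchange-with-the-mean) mixing of a scalar carried by
# notional PDF particles: dphi/dt = -0.5 * C_phi * omega * (phi - <phi>).
n_particles, C_phi = 1000, 2.0
phi = rng.choice([0.0, 1.0], size=n_particles)   # initially unmixed scalar
dt, n_steps = 1.0e-3, 2000

for _ in range(n_steps):
    omega = 50.0        # fixed mixing frequency here; the proposed model would
                        # evaluate it from Lagrangian scalar dissipation instead
    phi += -0.5 * C_phi * omega * (phi - phi.mean()) * dt

print("mean:", phi.mean(), "variance:", phi.var())   # variance decays, mean is conserved
```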

  8. Gyrofluid turbulence models with kinetic effects

    International Nuclear Information System (INIS)

    Dorland, W.; Hammett, G.W.

    1992-12-01

    Nonlinear gyrofluid equations are derived by taking moments of the nonlinear, electrostatic gyrokinetic equation. The principal model presented includes evolution equations for the guiding center n, u‖, T‖, and T⊥, along with an equation expressing the quasineutrality constraint. Additional evolution equations for higher moments are derived which may be used if greater accuracy is desired. The moment hierarchy is closed with a Landau-damping model which is equivalent to a multi-pole approximation to the plasma dispersion function, extended to include finite Larmor radius effects. In particular, new dissipative, nonlinear terms are found which model the perpendicular phase-mixing of the distribution function along contours of constant electrostatic potential. These ''FLR phase-mixing'' terms introduce a hyperviscosity-like damping ∝ k⊥² |Φ_k k × k′| which should provide a physics-based damping mechanism at high k⊥ρ which is potentially as important as the usual polarization drift nonlinearity. The moments are taken in guiding center space to pick up the correct nonlinear FLR terms and the gyroaveraging of the shear. The equations are solved with a nonlinear, three dimensional initial value code. Linear results are presented, showing excellent agreement with linear gyrokinetic theory.

  9. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four...... the basic problems in determining the dimension of linear models. Then each of the eight measures are treated. The results are illustrated by examples....... of these criteria are widely used ones, while the remaining four are ones derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review...

  10. Mixed-effects models for estimating stand volume by means of small footprint airborne laser scanner data.

    Science.gov (United States)

    J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch

    2008-01-01

    For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...

  11. Response Surface Method and Linear Programming in the development of mixed nectar of acceptability high and minimum cost

    Directory of Open Access Journals (Sweden)

    Enrique López Calderón

    2012-06-01

    The aim of this study was to develop a mixed nectar with high acceptability and minimum cost. To obtain the mixed nectar, different amounts of passion fruit, sweet pepino and sucrose were considered, completing 100% with water, following a two-stage design: screening (using a design of type 2³ + 4 center points) and optimization (using a design of type 2² + 2·2 + 4 center points); these stages allowed exploring a formulation of high acceptability. The technique of Linear Programming was then used to minimize the cost of the high-acceptability nectar. As a result of this process, a mixed nectar with optimal acceptability (score of 7) was obtained when the formulation contained between 9 and 14% passion fruit, 4 and 5% sucrose, 73.5% sweet pepino juice, and water to complete 100%. Linear Programming made it possible to reduce the cost of the mixed nectar with optimal acceptability to S/. 174 for a production of 1000 L/day.

  12. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  13. Linear and non-linear autoregressive models for short-term wind speed forecasting

    International Nuclear Information System (INIS)

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
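
    A bare-bones linear autoregressive forecaster of the type compared in the paper can be written with ordinary least squares; the synthetic wind-speed series, the AR order p = 3 and the absence of exogenous inputs (wind direction, shear, temperature) are all simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative 10-min wind speed series (m/s); a real application would use
# measured data such as the Sotavento or CWET series mentioned above.
n = 2000
ws = np.zeros(n)
for i in range(2, n):
    ws[i] = 0.8 * ws[i - 1] + 0.1 * ws[i - 2] + rng.normal(0, 0.5)
ws += 8.0

# Fit a linear AR(p) model by least squares and forecast one step ahead.
p = 3
X = np.column_stack([ws[p - k - 1:n - k - 1] for k in range(p)])   # lags 1..p
X = np.column_stack([np.ones(X.shape[0]), X])                      # intercept
y = ws[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

next_step = coef[0] + coef[1:] @ ws[-1:-p - 1:-1]   # uses the last p observations
print("one-step-ahead forecast:", next_step)
```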

  14. Speed Sensorless mixed sensitivity linear parameter variant H_inf control of the induction motor

    NARCIS (Netherlands)

    Toth, R.; Fodor, D.

    2004-01-01

    The paper shows the design of a robust control structure for the speed sensorless vector control of the IM, based on the mixed sensitivity (MS) linear parameter variant (LPV) H∞ control theory. The controller makes possible the direct control of the flux and speed of the motor with torque adaptation.

  15. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under......Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between...... certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models....

  16. Use of nonlinear dose-effect models to predict consequences

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has been used by analogy to assess the risk of chemical carcinogens also. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer in atomic bomb survivors. Even so, its proponents vigorously defend it, often using arguments that are not so much scientific as a mix of scientific, societal, and often political arguments. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: First, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models

  17. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    Science.gov (United States)

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided almost the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.

  18. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Vol. 40, No. 5 (2011), pp. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  19. Hierarchical linear modeling of longitudinal pedigree data for genetic association analysis

    DEFF Research Database (Denmark)

    Tan, Qihua; B Hjelmborg, Jacob V; Thomassen, Mads

    2014-01-01

    -effect models to explicitly model the genetic relationship. These have proved to be an efficient way of dealing with sample clustering in pedigree data. Although current algorithms implemented in popular statistical packages are useful for adjusting relatedness in the mixed modeling of genetic effects...... associated with blood pressure with estimated inflation factors of 0.99, suggesting that our modeling of random effects efficiently handles the genetic relatedness in pedigrees. Application to simulated data captures important variants specified in the simulation. Our results show that the method is useful......Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects. Approaches have been proposed to integrate kinship correlation into the mixed...

  20. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Science.gov (United States)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise- torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  1. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models

  2. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
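
    The attenuation phenomenon at the heart of the bias analysis can be demonstrated with a deliberately simplified simulation: a single covariate, independent classical error, no random effects or autocorrelation, and the error variance treated as known for the method-of-moments correction. None of these simplifications reflect the richer mixed-model setting handled in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# True linear relationship y = beta * x + noise, observed with classical error in x.
n, beta = 5000, 1.0
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 1.0, n)

sigma_u2 = 0.5
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)   # classical measurement error

beta_naive = np.polyfit(x_obs, y, 1)[0]                   # attenuated estimate
lam = np.var(x_true) / (np.var(x_true) + sigma_u2)        # reliability (attenuation) ratio
print(f"naive estimate {beta_naive:.2f} vs. attenuation factor {lam:.2f} x true {beta}")
print(f"method-of-moments corrected estimate: {beta_naive / lam:.2f}")
```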

  3. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    Science.gov (United States)

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  4. Applied mixed generalized additive model to assess the effect of temperature on the incidence of bacillary dysentery and its forecast.

    Directory of Open Access Journals (Sweden)

    Weiping Ma

    BACKGROUND: Associations between bacillary dysentery (BD) and temperature have been reported in some studies applying Poisson regression models; however, the effect estimates might be biased due to data autocorrelation. Furthermore, the distribution of the temperature effect over different lags has not been studied either. The purpose of this work was to obtain the association between BD counts and climatic factors such as temperature, in the form of weighted averages, while accounting for the autocorrelation pattern of the model residuals, and to make short-term predictions using the model. The data were collected in the city of Shanghai from 2004 to 2008. METHODS: We used a mixed generalized additive model (MGAM) to analyze data on bacillary dysentery, temperature and other covariates with an autoregressive random effect. Short-term predictions were made using the MGAM with the moving average of the BD counts. MAIN RESULTS: Our results showed that temperature was significantly and linearly associated with the logarithm of the BD count for temperatures in the range from 12°C to 22°C. Optimal weights for the temperature effect were obtained, in which the weight at a 1-day lag was close to 0 and the weight at a 2-day lag was the maximum (the p-value of the difference was less than 0.05). The predictive model showed good fit to the internal data, with an R² value of 0.875, and good short-term prediction on the external data, with a correlation coefficient of 0.859. CONCLUSION: According to the model estimates, the corresponding risk ratio for BD was close to 1.1 for a 1°C increase in temperature in the range from 12°C to 22°C, and a 1-day incubation period could be inferred from the model estimates. Good predictions have been made using the predictive MGAM.
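
    A sketch of this kind of model in R using the mgcv package (simulated data; the weighted temperature lags and the forecasting step of the paper are not reproduced here):

        library(mgcv)   # gamm() combines a penalized GAM with nlme correlation structures
        set.seed(2)
        time        <- 1:730
        temperature <- 20 + 8 * sin(2 * pi * time / 365) + rnorm(730)
        count       <- rpois(730, exp(1 + 0.05 * temperature))
        bd          <- data.frame(count, temperature, time)

        # Smooth temperature effect on log counts with AR(1) residual autocorrelation
        fit <- gamm(count ~ s(temperature), family = poisson,
                    correlation = corAR1(form = ~ time), data = bd)
        summary(fit$gam)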

  5. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange.

  6. A mixed integer linear programming approach for optimal DER portfolio, sizing, and placement in multi-energy microgrids

    International Nuclear Information System (INIS)

    Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel

    2017-01-01

    Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.

  7. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Abstract Background The modelling of the lactation curve provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-Cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R², root mean square error (RMSE), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC). Results In general, the highest milk yield was observed in the fourth lactation, whereas it was lowest in the first lactation. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit to the lactation curve in almost all lactations. Peak yield was highest and lowest in the fourth and first lactation, respectively. Further, the first lactation showed the highest persistency but took relatively longer to reach peak yield than the other lactations. Conclusion Lactation curve modelling using the gamma-type function may be helpful for setting management strategies at the farm level; however, the models must be re-optimized regularly before implementation to enhance productivity in Gir crossbred cows.
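
    For illustration, the gamma-type (Wood) function referred to above can be fitted to a single lactation with base R nls(); the test-day data below are invented:

        # Simulated test-day milk yields (kg) for one lactation
        set.seed(3)
        days  <- seq(7, 305, by = 14)                       # days in milk
        yield <- 12 * days^0.25 * exp(-0.004 * days) + rnorm(length(days), sd = 0.8)

        # Wood's gamma-type function: y = a * t^b * exp(-c * t)
        wood <- nls(yield ~ a * days^b * exp(-c * days),
                    start = list(a = 10, b = 0.2, c = 0.003))
        summary(wood)
        AIC(wood); BIC(wood)                                # goodness-of-fit criteria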

  8. Effects of mixing and stirring on the critical behaviour

    International Nuclear Information System (INIS)

    Antonov, N V; Hnatich, Michal; Honkonen, Juha

    2006-01-01

    Stochastic dynamics of a nonconserved scalar order parameter near its critical point, subject to random stirring and mixing, is studied using the field-theoretic renormalization group. The stirring and mixing are modelled by a random external Gaussian noise with the correlation function ∼ δ(t - t') k^(4-d-y) and the divergence-free (due to incompressibility) velocity field, governed by the stochastic Navier-Stokes equation with a random Gaussian force with the correlation function ∝ δ(t - t') k^(4-d-y'). Depending on the relations between the exponents y and y' and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics and linear passive scalar field advected by a random turbulent flow), but there are three new non-equilibrium regimes (universality classes) associated with new nontrivial fixed points of the renormalization group equations. The corresponding critical dimensions are calculated in the two-loop approximation (second order of the triple expansion in y, y' and ε = 4 - d).

  9. A linear model of ductile plastic damage

    International Nuclear Information System (INIS)

    Lemaitre, J.

    1983-01-01

    A three-dimensional model of isotropic ductile plastic damage, based on a continuum damage variable, the effective stress concept and thermodynamics, is derived. As shown by experiments on several metals and alloys, the model, integrated in the case of proportional loading, is linear with respect to the accumulated plastic strain and shows a large influence of stress triaxiality.

  10. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab), and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two variants: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelling (finite element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time-history of the displacements and efforts on the supports, but damping (probably greater than 2%) and fluid-structure interaction still have to be specified more precisely. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here.

  11. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    Science.gov (United States)

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete), as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  12. Linear matrix inequality approach for synchronization control of fuzzy cellular neural networks with mixed time delays

    International Nuclear Information System (INIS)

    Balasubramaniam, P.; Kalpana, M.; Rakkiyappan, R.

    2012-01-01

    Fuzzy cellular neural networks (FCNNs) are special kinds of cellular neural networks (CNNs). Each cell in an FCNN contains fuzzy operating abilities. The entire network is governed by cellular computing laws. The design of FCNNs is based on fuzzy local rules. In this paper, a linear matrix inequality (LMI) approach for synchronization control of FCNNs with mixed delays is investigated. Mixed delays include discrete time-varying delays and unbounded distributed delays. A dynamic control scheme is proposed to achieve the synchronization between a drive network and a response network. By constructing a Lyapunov-Krasovskii functional containing a triple-integral term and applying the free-weighting matrices method, an improved delay-dependent stability criterion is derived in terms of LMIs. The controller can be easily obtained by solving the derived LMIs. A numerical example and its simulations are presented to illustrate the effectiveness of the proposed method.

  13. Linear Logistic Test Modeling with R

    Science.gov (United States)

    Baghaei, Purya; Kubinger, Klaus D.

    2015-01-01

    The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…

  14. Perturbative estimates of lepton mixing angles in unified models

    International Nuclear Information System (INIS)

    Antusch, Stefan; King, Stephen F.; Malinsky, Michal

    2009-01-01

    Many unified models predict two large neutrino mixing angles, with the charged lepton mixing angles being small and quark-like, and the neutrino masses being hierarchical. Assuming this, we present simple approximate analytic formulae giving the lepton mixing angles in terms of the underlying high energy neutrino mixing angles together with small perturbations due to both charged lepton corrections and renormalisation group (RG) effects, including also the effects of third family canonical normalization (CN). We apply the perturbative formulae to the ubiquitous case of tri-bimaximal neutrino mixing at the unification scale, in order to predict the theoretical corrections to mixing angle predictions and sum rule relations, and give a general discussion of all limiting cases. We also discuss the implications for the sum rule relations of the measurement of a non-zero reactor angle, as hinted at by recent experimental measurements.

  15. Predicting the effect of ionising radiation on biological populations: testing of a non-linear Leslie model applied to a small mammal population

    International Nuclear Information System (INIS)

    Monte, Luigi

    2013-01-01

    The present work describes the application of a non-linear Leslie model for predicting the effects of ionising radiation on wild populations. The model assumes that, for protracted chronic irradiation, the effect-dose relationship is linear. In particular, the effects of radiation are modelled by relating the increase in the mortality rates of the individuals to the dose rates through a proportionality factor C. The model was tested using independent data and information from a series of experiments that were aimed at assessing the response to radiation of wild populations of meadow voles and whose results were described in the international literature. The comparison of the model results with the data selected from the above-mentioned experiments showed that the model overestimated the detrimental effects of radiation on the size of irradiated populations when the values of C were within the range derived from the median lethal dose (L50) for small mammals. The described non-linear model suggests that the non-expressed biotic potential of species whose growth is limited by processes of environmental resistance, such as competition among individuals of the same or of different species for the exploitation of available resources, can be a factor that determines a more effective response of the population to radiation effects. -- Highlights: • A model to assess radiation effects on wild populations is described. • The model is based on a non-linear Leslie matrix. • The model is applied to small mammals living in an irradiated meadow. • Model output is conservative if the effect-dose factor estimated from the L50 is used. • The systemic response to stress of populations in competitive conditions may be more effective
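
    The structure of such a model can be sketched with a small Leslie-matrix projection in R; the parameter values below are arbitrary and the density-dependent (non-linear) component of the published model is omitted:

        # Two-age-class Leslie projection with radiation-induced extra mortality
        fecundity <- c(0.0, 4.0)        # offspring per individual in each age class
        survival  <- 0.6                # baseline survival from class 1 to class 2
        C         <- 0.002              # proportionality factor (per unit dose rate)
        dose_rate <- 50                 # chronic dose rate (arbitrary units)

        s_irr <- max(survival - C * dose_rate, 0)   # survival reduced by radiation
        L     <- rbind(fecundity, c(s_irr, 0))      # Leslie matrix

        n <- c(100, 50)                             # initial age distribution
        for (t in 1:10) n <- L %*% n                # project ten time steps
        n                                           # population vector after projection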

  16. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  17. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully ... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable ...

  18. Standardizing effect size from linear regression models with log-transformed variables for meta-analysis.

    Science.gov (United States)

    Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José

    2017-03-17

    Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all, studies, the effects need to be homogenized. We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow including all results in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that differ significantly from the normal distribution. We illustrate this procedure with an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The proposed procedure has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
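
    For context, the textbook conversions that this kind of harmonization builds on can be written in a few lines of R; these are the standard interpretations of coefficients from log-transformed variables, not the specific formulae derived in the paper:

        beta_y <- 0.05                 # slope when the dependent variable is log-transformed
        100 * (exp(beta_y) - 1)        # ~5.1% relative change in y per one-unit change in x

        beta_x <- 2.3                  # slope when the independent variable is log-transformed
        beta_x * log(1.10)             # absolute change in y for a 10% increase in x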

  19. Modeling exposure–lag–response associations with distributed lag non-linear models

    Science.gov (United States)

    Gasparrini, Antonio

    2014-01-01

    In biomedical research, a health effect is frequently associated with protracted exposures of varying intensity sustained in the past. The main complexity of modeling and interpreting such phenomena lies in the additional temporal dimension needed to express the association, as the risk depends on both intensity and timing of past exposures. This type of dependency is defined here as exposure–lag–response association. In this contribution, I illustrate a general statistical framework for such associations, established through the extension of distributed lag non-linear models, originally developed in time series analysis. This modeling class is based on the definition of a cross-basis, obtained by the combination of two functions to flexibly model linear or nonlinear exposure-responses and the lag structure of the relationship, respectively. The methodology is illustrated with an example application to cohort data and validated through a simulation study. This modeling framework generalizes to various study designs and regression models, and can be applied to study the health effects of protracted exposures to environmental factors, drugs or carcinogenic agents, among others. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24027094
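
    A minimal sketch of the cross-basis idea with the dlnm package for R (the example data set chicagoNMMAPS is distributed with the package; argument names may vary slightly across package versions):

        library(dlnm); library(splines)

        # Cross-basis: natural spline in the exposure dimension, polynomial in the lag dimension
        cb <- crossbasis(chicagoNMMAPS$temp, lag = 30,
                         argvar = list(fun = "ns", df = 5),
                         arglag = list(fun = "poly", degree = 4))

        fit  <- glm(death ~ cb + ns(time, 7 * 14) + dow,
                    family = quasipoisson(), data = chicagoNMMAPS)
        pred <- crosspred(cb, fit, cen = 21)    # predictions centred at 21 degrees Celsius
        plot(pred, "overall")                   # overall cumulative exposure-response curve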

  20. CFD simulation on reactor flow mixing phenomena

    International Nuclear Information System (INIS)

    Kwon, T.S.; Kim, K.H.

    2016-01-01

    A pre-test calculation of multi-dimensional flow mixing in a reactor core and downcomer has been performed using a CFD code. To study the effects of the Reactor Coolant Pumps (RCPs) and the core zone on boron mixing behavior in the lower downcomer and at the core inlet, a 1/5-scale CFD model of the flow mixing test facility for the APR+ reference plant was simulated. The flow paths of the 1/5-scale model were scaled down by the linear scaling method. The aspect ratio (L/D) of all flow paths was preserved at 1. To preserve dynamic similarity, the ratio of Euler numbers was also preserved at 1. A single-phase water flow at low pressure and temperature conditions was considered in this calculation. The calculation shows that the asymmetric effect driven by the RCPs shifted the high-velocity field to the failed pump's flow zone. The borated water flow zone at the core inlet was also shifted to the failed RCP side.

  1. Latent log-linear models for handwritten digit classification.

    Science.gov (United States)

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  2. A componential model of human interaction with graphs: 1. Linear regression modeling

    Science.gov (United States)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications in the MA-P model, alternative models, and design implications from the MA-P model.

  3. Solving a mixed-integer linear programming model for a multi-skilled project scheduling problem by simulated annealing

    Directory of Open Access Journals (Sweden)

    H Kazemipoor

    2012-04-01

    A multi-skilled project scheduling problem (MSPSP) is generally presented as the problem of scheduling a project with staff members as resources. Each activity in the project network requires different skills, and staff members also possess different skills. This makes the MSPSP a special type of multi-mode resource-constrained project scheduling problem (MM-RCPSP) with a huge number of modes. Given the importance of this issue, a mixed integer linear programming model for the MSPSP is presented in this paper. Due to the complexity of the problem, a meta-heuristic algorithm is proposed in order to find near-optimal solutions. To validate the performance of the algorithm, results are compared against exact solutions obtained with the LINGO solver. The results are promising and show that optimal or near-optimal solutions are derived for small instances and good solutions for larger instances in reasonable time.

  4. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    Science.gov (United States)

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the results obtained in the simulation. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.

  5. A mixed model framework for teratology studies.

    Science.gov (United States)

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  6. Effects of socioeconomic position and social mobility on linear growth from early childhood until adolescence.

    Science.gov (United States)

    Muraro, Ana Paula; Souza, Rita Adriana Gomes de; Rodrigues, Paulo Rogério Melo; Ferreira, Márcia Gonçalves; Sichieri, Rosely

    2017-01-01

    To assess the effect of socioeconomic position (SEP) in childhood and social mobility on linear growth through adolescence in a population-based cohort. Children born in Cuiabá-MT, central-western Brazil, were evaluated during 1994-1999. They were first assessed during 1999-2000 (0-5 years) and again during 2009-2011 (10-17 years), and their height-for-age was evaluated during these two periods. A wealth index was used to classify the SEP of each child's family as low, medium, or high. Social mobility was categorized as upward mobility or no upward mobility. Linear mixed models were used. We evaluated 1,716 children (71.4% of baseline) after 10 years, and 60.6% of the families showed upward mobility, with a higher percentage among the lowest economic classes. A higher height-for-age was also observed among those from families with a high SEP in childhood (low SEP = -0.35 z-score; high SEP = 0.15 z-score). SEP in childhood and social mobility did not greatly influence linear growth through childhood in this central-western Brazilian cohort.

  7. The effect of workload constraints in linear programming models for production planning

    NARCIS (Netherlands)

    Jansen, M.M.; Kok, de A.G.; Adan, I.J.B.F.

    2011-01-01

    Linear programming (LP) models for production planning incorporate a model of the manufacturing system that is necessarily deterministic. Although these deterministic models are the current state-of-the-art, it should be recognized that they are used in an environment that is inherently stochastic.

  8. Geometric phase effects in excited state dynamics through a conical intersection in large molecules: N-dimensional linear vibronic coupling model study

    Science.gov (United States)

    Li, Jiaru; Joubert-Doriol, Loïc; Izmaylov, Artur F.

    2017-08-01

    We investigate geometric phase (GP) effects in nonadiabatic transitions through a conical intersection (CI) in an N-dimensional linear vibronic coupling (ND-LVC) model. This model allows for the coordinate transformation encompassing all nonadiabatic effects within a two-dimensional (2D) subsystem, while the other N - 2 dimensions form a system of uncoupled harmonic oscillators identical for both electronic states and coupled bi-linearly with the subsystem coordinates. The 2D subsystem governs ultra-fast nonadiabatic dynamics through the CI and provides a convenient model for studying GP effects. Parameters of the original ND-LVC model define the Hamiltonian of the transformed 2D subsystem and thus influence GP effects directly. Our analysis reveals what values of ND-LVC parameters can introduce symmetry breaking in the 2D subsystem that diminishes GP effects.

  9. A fuzzy Bi-linear management model in reverse logistic chains

    Directory of Open Access Journals (Sweden)

    Tadić Danijela

    2016-01-01

    The management of the electrical and electronic waste (WEEE) problem in an uncertain environment has a critical effect on the economy and environmental protection of each region. The considered problem can be stated as a fuzzy non-convex optimization problem with a linear objective function and a set of linear and non-linear constraints. The original problem is reformulated, using linear relaxation, into a fuzzy linear programming problem. The fuzzy ratings of collecting-point capacities and the fixed costs of recycling centers are modeled by triangular fuzzy numbers. The optimal solution of the reformulated model is found by using the optimality concept. The proposed model is verified through an illustrative example with real-life data. The obtained results represent an input for future research, which should include a good benchmark base for the tested reverse logistic chains and their continuous improvement. [Project of the Ministry of Science of the Republic of Serbia, no. 035033: Sustainable development technology and equipment for the recycling of motor vehicles]

  10. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  11. Linear Magnetoelectric Effect by Orbital Magnetism

    NARCIS (Netherlands)

    Scaramucci, A.; Bousquet, E.; Fechner, M.; Mostovoy, M.; Spaldin, N. A.

    2012-01-01

    We use symmetry analysis and first-principles calculations to show that the linear magnetoelectric effect can originate from the response of orbital magnetic moments to the polar distortions induced by an applied electric field. Using LiFePO4 as a model compound we show that spin-orbit coupling

  12. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.

  13. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Directory of Open Access Journals (Sweden)

    Asmau M. Ahmed

    2017-07-01

    Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at the macroscopic level, and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the spectral angular distance, covariance, and a non-linearity parameter. The endmembers were extracted using Vertex Component Analysis, while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares, the Polynomial Post-Nonlinear Mixing Model or the Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both a synthetic dataset and real hyperspectral images demonstrated that the proposed hybrid switch method is efficient for solving spectral unmixing of hyperspectral images compared to the individual algorithms.
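
    The linear mixing model that such methods fall back on can be written y = Ea + n with non-negative abundances a; a minimal R illustration using the nnls package (not the authors' neural-network switch or the constrained solvers they cite):

        library(nnls)
        set.seed(7)

        E <- matrix(runif(200 * 3), nrow = 200)       # 3 endmember spectra over 200 bands
        a <- c(0.6, 0.3, 0.1)                         # true abundances (sum to one)
        y <- as.vector(E %*% a) + rnorm(200, sd = 0.01)

        fit <- nnls(E, y)                             # non-negativity-constrained least squares
        fit$x / sum(fit$x)                            # renormalise to approximate sum-to-one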

  14. Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain

    Science.gov (United States)

    Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises

    2015-01-01

    Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156

  15. The Use of Mixed Effects Models for Obtaining Low-Cost Ecosystem Carbon Stock Estimates in Mangroves of the Asia-Pacific

    Science.gov (United States)

    Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.

    2016-12-01

    Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the SOC model may be most valuable for sites in which field measurements of soil C exist.
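
    A sketch of the biomass-C model form described above, using lme4 in R with simulated stand data (variable names and values are hypothetical):

        library(lme4)
        set.seed(8)
        site       <- factor(rep(1:20, each = 5))                     # plots nested in sites
        basal_area <- runif(100, 5, 40)                               # m2/ha
        latitude   <- rep(runif(20, 0, 25), each = 5)                 # degrees from the equator
        biomass_c  <- 3 * basal_area - 0.05 * basal_area * latitude +
                      rnorm(20, sd = 15)[site] + rnorm(100, sd = 10)  # site effect plus noise

        fit <- lmer(biomass_c ~ basal_area + basal_area:latitude + (1 | site))
        summary(fit)
        sqrt(mean(residuals(fit)^2))     # in-sample RMSE, cf. the ~24.6 Mg/ha reported above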

  16. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
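
    The contrast between a conventional and a robust fit can be sketched in R with MASS::rlm and heavy-tailed simulated noise (illustrative only, not the authors' analysis pipeline):

        library(MASS)
        set.seed(9)

        dosage <- rbinom(200, 2, 0.3)                # allelic dosage (0, 1, 2)
        expr   <- 0.3 * dosage + rt(200, df = 3)     # gene expression with heavy-tailed noise

        coef(summary(lm(expr ~ dosage)))["dosage", ]             # ordinary least squares
        summary(rlm(expr ~ dosage))$coefficients["dosage", ]     # robust M-estimation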

  17. Non-linear finite element modeling

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  18. Large eddy simulation of n-heptane spray combustion in partially premixed combustion regime with linear eddy model

    International Nuclear Information System (INIS)

    Xiao, Gang; Jia, Ming; Wang, Tianyou

    2016-01-01

    Spray combustion of n-heptane in a constant-volume vessel under engine-relevant conditions was investigated using linear eddy model in the framework of large eddy simulation. In this numerical approach, turbulent mixing was traced by an innovative stochastic approach instead of the conventional gradient diffusion model. Chemical reaction rates were calculated with the consideration of the sub-grid scale spatial fluctuations of reactive scalars. Turbulence-chemistry interactions were represented by the separated treatments of the underlying processes including turbulent stirring, chemical reaction, and molecular diffusion. The model was validated against the experimental data of ignition delay times, chemiluminescence images, and soot images from Sandia National Laboratories. Numerical results showed that the ignition process changed from the temperature-controlled regime to the mixing-controlled regime as the initial ambient temperature increased from 800 K to 1000 K. The premixed flame and the diffusion flame coexisted, while the gross heat release rate was found to be dominated by the premixed flame. The temperature fluctuation was mainly observed around the spray jet due to the cooling effect of the fuel vaporization. The fluctuations were more significantly smoothed out by the high-temperature flame than the low-temperature flame. The mean temperature would be overpredicted if the sub-grid temperature fluctuation was neglected. - Highlights: • Turbulent mixing is traced by stochastic method instead of gradient diffusion model. • Sub-grid scale fluctuations of reactive scalars are captured. • Ignition process varies from temperature-controlled to mixing-controlled regime. • Temperature fluctuation can be smoothed out by high-temperature flame. • The heat release rate is dominated by the premixed flame.

  19. THE EFFECT OF SOLAR RADIATION ON AUTOMOBILE ENVIRONMENT THROUGH NATURAL CONVECTION AND MIXED CONVECTION

    Directory of Open Access Journals (Sweden)

    MD. FAISAL KADER

    2012-10-01

    In the present paper, the effect of solar radiation on automobiles has been studied both experimentally and numerically. The numerical solution is obtained with an operation-friendly and fast CFD code, SC/Tetra, using a full-scale model of an SM3 car; turbulence is modeled by the standard k-ε model. Numerical analysis of the three-dimensional model provides a detailed description of fluid flow and temperature distribution in the passenger compartment during both natural convection due to the incoming solar radiation and mixed convection due to the flow from the defrost nozzle combined with radiation. It can be seen that solar radiation is an important factor in raising the compartment temperature above the ambient temperature during summer. During natural convection, the rate of heat transfer is fastest in the initial period. In the mixed convection analyses, it is found that the temperature drops to a comfortable range almost linearly in the initial stage. Experimental investigations are performed to determine the temperature contour on the windshield and the local temperature at a particular point for further validation of the numerical results.

  20. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...

  1. Inference of ICF Implosion Core Mix using Experimental Data and Theoretical Mix Modeling

    International Nuclear Information System (INIS)

    Welser-Sherrill, L.; Haynes, D.A.; Mancini, R.C.; Cooley, J.H.; Tommasini, R.; Golovkin, I.E.; Sherrill, M.E.; Haan, S.W.

    2009-01-01

    The mixing between fuel and shell materials in Inertial Confinement Fusion (ICF) implosion cores is a current topic of interest. The goal of this work was to design direct-drive ICF experiments which have varying levels of mix, and subsequently to extract information on mixing directly from the experimental data using spectroscopic techniques. The experimental design was accomplished using hydrodynamic simulations in conjunction with Haan's saturation model, which was used to predict the mix levels of candidate experimental configurations. These theoretical predictions were then compared to the mixing information which was extracted from the experimental data, and it was found that Haan's mix model performed well in predicting trends in the width of the mix layer. With these results, we have contributed to an assessment of the range of validity and predictive capability of the Haan saturation model, as well as increased our confidence in the methods used to extract mixing information from experimental data.

  2. Optimal placement of capacitors in a radial network using conic and mixed integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2008-06-15

    This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)

  3. Theoretical Models of Neutrino Mixing Recent Developments

    CERN Document Server

    Altarelli, Guido

    2009-01-01

    The data on neutrino mixing are at present compatible with Tri-Bimaximal (TB) mixing. If one takes this indication seriously then the models that lead to TB mixing in first approximation are particularly interesting and A4 models are prominent in this list. However, the agreement of TB mixing with the data could still be an accident. We discuss a recent model based on S4 where Bimaximal mixing is instead valid at leading order and the large corrections needed to reproduce the data arise from the diagonalization of charged leptons. The value of θ13 could distinguish between the two alternatives.

  4. Multicollinearity in hierarchical linear models.

    Science.gov (United States)

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
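
    A small R sketch of the kind of top-down check described above, using simulated two-level data and lme4 (this is an illustration of the general idea, not the authors' procedure):

        library(lme4)
        set.seed(10)
        n_school <- 30; n_pupil <- 20
        mean_ses <- rnorm(n_school)                                     # level-2 predictor 1
        size_z   <- 0.9 * mean_ses + sqrt(1 - 0.9^2) * rnorm(n_school)  # collinear level-2 predictor 2

        dat <- data.frame(
          school = factor(rep(1:n_school, each = n_pupil)),
          ses    = rnorm(n_school * n_pupil),                           # level-1 predictor
          mses   = rep(mean_ses, each = n_pupil),
          size   = rep(size_z, each = n_pupil)
        )
        dat$score <- 2 + 0.5 * dat$ses + 0.3 * dat$mses +
                     rnorm(n_school, sd = 0.5)[dat$school] + rnorm(nrow(dat))

        cor(mean_ses, size_z)                       # examine the contextual (level-2) predictors first
        fit <- lmer(score ~ ses + mses + size + (1 | school), data = dat)
        kappa(model.matrix(fit))                    # condition number of the fixed-effects design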

  5. Mixed-Effects Modeling of Neurofeedback Self-Regulation Performance: Moderators for Learning in Children with ADHD.

    Science.gov (United States)

    Zuberer, Agnieszka; Minder, Franziska; Brandeis, Daniel; Drechsler, Renate

    2018-01-01

    Neurofeedback (NF) has gained increasing popularity as a training method for children and adults with attention deficit hyperactivity disorder (ADHD). However, it is unclear to what extent children learn to regulate their brain activity and in what way NF learning may be affected by subject- and treatment-related factors. In total, 48 subjects with ADHD (age 8.5-16.5 years; 16 subjects on methylphenidate (MPH)) underwent 15 double training sessions of NF in either a clinical or a school setting. Four mixed-effects models were employed to analyze learning: training within sessions, across sessions, with continuous feedback, and with transfer, in which performance feedback is delayed. Age and MPH affected the NF performance in all models. Cross-session learning in the feedback condition was mainly moderated by age and MPH, whereas NF learning in the transfer condition was mainly boosted by MPH. Apart from IQ and task types, other subject-related or treatment-related effects were unrelated to NF learning. This first study analyzing moderators of NF learning in ADHD with a mixed-effects modeling approach shows that NF performance is moderated differentially by effects of age and MPH depending on the training task and time window. Future studies may benefit from using this approach to analyze NF learning and NF specificity. The trial, Neurofeedback and Computerized Cognitive Training in Different Settings for Children and Adolescents With ADHD, is registered under NCT02358941.

  6. Latent Fundamentals Arbitrage with a Mixed Effects Factor Model

    Directory of Open Access Journals (Sweden)

    Andrei Salem Gonçalves

    2012-09-01

    We propose a single-factor mixed effects panel data model to create an arbitrage portfolio that identifies differences in firm-level latent fundamentals. Furthermore, we show that even though the characteristics that affect returns are unknown variables, it is possible to identify the strength of the combination of these latent fundamentals for each stock by following a simple approach using historical data. As a result, a trading strategy that bought the stocks with the best fundamentals (strong fundamentals portfolio) and sold the stocks with the worst ones (weak fundamentals portfolio) realized significant risk-adjusted returns in the U.S. market for the period between July 1986 and June 2008. To ensure robustness, we performed subperiod and seasonal analyses and adjusted for trading costs; we found further empirical evidence that a simple investment rule that identifies these latent fundamentals from the structure of past returns can lead to profit.

  7. Modelling mixed forest growth : a review of models for forest management

    NARCIS (Netherlands)

    Porte, A.; Bartelink, H.H.

    2002-01-01

    Most forests today are multi-specific and heterogeneous forests (`mixed forests'). However, forest modelling has been focusing on mono-specific stands for a long time, only recently have models been developed for mixed forests. Previous reviews of mixed forest modelling were restricted to certain

  8. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    Science.gov (United States)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance for the sum of two and three independent mixed-gamma variables derived are tested using the monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the observed sums of rainfall amounts are not significantly different at the 5% significance level from the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
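
    The record does not reproduce the paper's notation, but under one common parameterization (an assumption made here purely for illustration) a mixed-gamma variable X equals zero with probability 1 - p and otherwise follows a Gamma(α, β) law with scale β, so that

$$
\mathbb{E}[X] = p\,\alpha\beta, \qquad
\operatorname{Var}(X) = p\,\alpha\beta^{2}\bigl(1 + \alpha(1-p)\bigr),
$$

    and, because means and variances of independent variables add, the sum of n independent mixed-gamma variables has

$$
\mathbb{E}\Bigl[\textstyle\sum_{i=1}^{n} X_i\Bigr] = \sum_{i=1}^{n} p_i\alpha_i\beta_i, \qquad
\operatorname{Var}\Bigl(\textstyle\sum_{i=1}^{n} X_i\Bigr) = \sum_{i=1}^{n} p_i\alpha_i\beta_i^{2}\bigl(1 + \alpha_i(1-p_i)\bigr).
$$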

  9. Mixed Hitting-Time Models

    NARCIS (Netherlands)

    Abbring, J.H.

    2009-01-01

    We study mixed hitting-time models, which specify durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they can be reduced from optimal-stopping models with

  10. A Detailed Analytical Study of Non-Linear Semiconductor Device Modelling

    Directory of Open Access Journals (Sweden)

    Umesh Kumar

    1995-01-01

    junction diode have been developed. The results of computer simulated examples have been presented in each case. The non-linear lumped model for the Gunn diode is a unified model, as it describes the diffusion effects as the domain travels from cathode to anode. An additional feature of this model is that it describes the domain extinction and nucleation phenomena in Gunn diodes with the help of a simple timing circuit. The non-linear lumped model for the SCR is general and is valid under any mode of operation in any circuit environment. The memristive circuit model for p-n junction diodes is capable of simulating realistically the diode's dynamic behavior under reverse, forward and sinusoidal operating modes. The model uses the memristor, a charge-controlled resistor, to mimic various second-order effects due to conductivity modulation. It is found that both the storage time and the fall time of the diode can be accurately predicted.

  11. Two-level mixed modeling of longitudinal pedigree data for genetic association analysis

    DEFF Research Database (Denmark)

    Tan, Q.

    2013-01-01

    of follow-up. Approaches have been proposed to integrate kinship correlation into the mixed effect models to explicitly model the genetic relationship which have been proven as an efficient way for dealing with sample clustering in pedigree data. Although useful for adjusting relatedness in the mixed...... assess the genetic associations with the mean level and the rate of change in a phenotype both with kinship correlation integrated in the mixed effect models. We apply our method to longitudinal pedigree data to estimate the genetic effects on systolic blood pressure measured over time in large pedigrees......Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees which could affect statistical assessment of the genetic effects on both the mean level of the phenotype and its rate of change over the time...

  12. Kovacs effect in the one-dimensional Ising model: A linear response analysis

    Science.gov (United States)

    Ruiz-García, M.; Prados, A.

    2014-01-01

    We analyze the so-called Kovacs effect in the one-dimensional Ising model with Glauber dynamics. We consider small enough temperature jumps, for which a linear response theory has been recently derived. Within this theory, the Kovacs hump is directly related to the monotonic relaxation function of the energy. The analytical results are compared with extensive Monte Carlo simulations, and an excellent agreement is found. Remarkably, the position of the maximum in the Kovacs hump depends on the fact that the true asymptotic behavior of the relaxation function is different from the stretched exponential describing the relevant part of the relaxation at low temperatures.

  13. Modeling Individual Differences in Within-Person Variation of Negative and Positive Affect in a Mixed Effects Location Scale Model Using BUGS/JAGS

    Science.gov (United States)

    Rast, Philippe; Hofer, Scott M.; Sparks, Catharine

    2012-01-01

    A mixed effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated at two consecutive years. Individual…

  14. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  15. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  16. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    Science.gov (United States)

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
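
    The authors' actual formulation is not given in this record; the toy sketch below only illustrates the general shape of such a problem (activation gains, binary on/off switches and per-channel current limits are all invented numbers) using SciPy's mixed integer linear programming interface.

```python
# Toy stimulation-style MILP (not the paper's formulation). x = channel
# currents, z = binary on/off switches; A_t and A_n map currents to activation
# at the target and at two non-target regions. All numbers are invented.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n = 6                                   # number of channels
A_t = rng.uniform(0.5, 1.0, n)          # activation gain at the target
A_n = rng.uniform(0.0, 0.3, (2, n))     # gains at two non-target regions
i_max, k_max = 2.0, 3                   # per-channel current limit, max active channels

# decision vector [x_1..x_n, z_1..z_n]; maximize A_t @ x  ->  minimize -A_t @ x
c = np.concatenate([-A_t, np.zeros(n)])
constraints = [
    LinearConstraint(np.hstack([A_n, np.zeros((2, n))]), -np.inf, 0.5),          # spare non-targets
    LinearConstraint(np.hstack([np.eye(n), -i_max * np.eye(n)]), -np.inf, 0.0),  # x_j <= i_max * z_j
    LinearConstraint(np.concatenate([np.zeros(n), np.ones(n)]), 0, k_max),       # limit active channels
]
integrality = np.concatenate([np.zeros(n), np.ones(n)])    # x continuous, z integer (0/1)
bounds = Bounds(np.zeros(2 * n), np.concatenate([np.full(n, i_max), np.ones(n)]))

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.status, res.x)
```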

  17. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for a primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales with prediction...... horizon length in a linear way rather than cubic, which would be the case if the structure was not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...
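
    The note's own equations are not reproduced in this record; as a generic illustration of (re-)linearizing a model at an arbitrary, possibly non-equilibrium, point, the sketch below builds the Jacobians of a made-up discrete-time system by central finite differences.

```python
# Generic sketch (not the note's actual scheme): linearize x+ = f(x, u) around
# a possibly non-equilibrium point (x0, u0), giving x+ ~ f(x0, u0) + A dx + B du.
import numpy as np

def f(x, u):                            # made-up nonlinear dynamics
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (-np.sin(x[0]) + u[0])])

def linearize(f, x0, u0, eps=1e-6):
    nx, nu = len(x0), len(u0)
    A, B = np.zeros((nx, nx)), np.zeros((nx, nu))
    for j in range(nx):                 # d f / d x by central differences
        dx = np.zeros(nx); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(nu):                 # d f / d u
        du = np.zeros(nu); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B, f(x0, u0)

A, B, f0 = linearize(f, np.array([0.3, -0.1]), np.array([0.05]))
print(A, B, f0, sep="\n")
```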

  18. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  19. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
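
    A small simulation (not taken from the paper, with arbitrary parameter values) makes the non-collapsibility point concrete: in a logistic outcome model with a single confounder, adjusting the outcome model for the propensity score instead of the confounder itself yields a biased conditional log odds ratio, whereas direct covariate adjustment recovers it.

```python
# Small illustration (not the authors' analysis): with a non-collapsible
# logistic outcome model, replacing the confounder by the propensity score in
# the outcome model biases the conditional log odds ratio. Values are arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, true_log_or = 200_000, 1.0
x = rng.normal(size=n)                                    # confounder
ps = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))                   # true propensity score
a = rng.binomial(1, ps)                                   # treatment assignment
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + true_log_or * a + 2.0 * x))))

fit_x = sm.Logit(y, sm.add_constant(np.column_stack([a, x]))).fit(disp=0)
fit_ps = sm.Logit(y, sm.add_constant(np.column_stack([a, ps]))).fit(disp=0)
print("covariate-adjusted log OR:", round(fit_x.params[1], 3))   # close to 1.0
print("PS-adjusted log OR:       ", round(fit_ps.params[1], 3))  # noticeably biased
```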

  20. Income and health-related quality of life among prostate cancer patients over a one-year period after radical prostatectomy: a linear mixed model analysis.

    Science.gov (United States)

    Klein, Jens; Lüdecke, Daniel; Hofreuter-Gätgens, Kerstin; Fisch, Margit; Graefen, Markus; von dem Knesebeck, Olaf

    2017-09-01

    To examine income-related disparities in health-related quality of life (HRQOL) over a one-year period after surgery (radical prostatectomy) and its contributory factors in a longitudinal perspective. Evidence of associations between income and HRQOL among patients with prostate cancer (PCa) is sparse and their explanations still remain unclear. 246 males from two German hospitals filled out a questionnaire at the time of acute treatment and again 6 and 12 months later. Age, partnership status, baseline disease and treatment factors, physical and psychological comorbidities, as well as treatment factors and adverse effects at follow-up were additionally included in the analyses to explain potential disparities. HRQOL was assessed with the EORTC (European Organisation for Research and Treatment of Cancer) QLQ-C30 core questionnaire and the prostate-specific QLQ-PR25. A linear mixed model for repeated measures was calculated. The fixed effects showed highly significant income-related inequalities regarding the majority of HRQOL scales. Less affluent PCa patients reported lower HRQOL in terms of global quality of life, all functional scales and urinary symptoms. After introducing relevant covariates, some associations became insignificant (physical, cognitive and sexual function), while others only showed reduced estimates (global quality of life, urinary symptoms, role, emotional and social function). In particular, mental disorders/psychological comorbidity played a relevant role in the explanation of income-related disparities. One year after surgery, income-related disparities in various dimensions of HRQOL persist. With respect to economically disadvantaged PCa patients, the findings emphasize the importance of continuous psychosocial screening and tailored interventions, of patients' empowerment and improved access to supportive care.
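
    The record does not give the exact covariate structure of the published model, but a generic random-intercept specification of the kind described (an assumption made here for illustration) can be written as

$$
\text{HRQOL}_{it} = \beta_0 + \beta_1\,\text{income}_i + \beta_2\, t
  + \beta_3\,(\text{income}_i \times t)
  + \boldsymbol{\gamma}^{\top}\mathbf{z}_{it} + u_i + \varepsilon_{it},
\qquad u_i \sim \mathcal{N}(0,\sigma_u^{2}),\quad
\varepsilon_{it} \sim \mathcal{N}(0,\sigma_\varepsilon^{2}),
$$

    where z_it collects the covariates listed above (age, partnership status, comorbidities, treatment factors and adverse effects) and u_i is a patient-level random intercept that accounts for the repeated measures.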

  1. Modeling of Volatility with Non-linear Time Series Model

    OpenAIRE

    Kim Song Yon; Kim Mun Chol

    2013-01-01

    In this paper, non-linear time series models are used to describe volatility in financial time series data. To describe volatility, two non-linear time series models are combined to form a TAR (Threshold Auto-Regressive) model with an AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term, and its parameter estimation is studied.

  2. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It also elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
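
    The Saclux Paint Company data are not reproduced in this record; the toy profit-maximization LP below (all coefficients invented) only shows how such a problem is set up and solved with SciPy, whose HiGHS backend implements simplex-type methods.

```python
# Toy profit-maximization LP (invented numbers, not the Saclux case data):
# maximize 30*x1 + 40*x2 subject to raw-material and labour constraints.
from scipy.optimize import linprog

c = [-30, -40]                          # negate profits: linprog minimizes
A_ub = [[2, 3],                         # raw material used per unit of product
        [4, 2]]                         # labour hours used per unit of product
b_ub = [120, 160]                       # available raw material and labour
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal production plan:", res.x, "maximum profit:", -res.fun)
```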

  3. Applicability of linear and non-linear potential flow models on a Wavestar float

    DEFF Research Database (Denmark)

    Bozonnet, Pauline; Dupin, Victor; Tona, Paolino

    2017-01-01

    as a model based on non-linear potential flow theory and the weak-scatterer hypothesis are successively considered. Simple tests, such as dip tests, decay tests and captive tests, make it possible to highlight the improvements obtained with the introduction of nonlinearities. Float motion under wave actions and without...... control action, limited to small amplitude motion with a single float, is well predicted by the numerical models, including the linear one. Still, float velocity is better predicted by accounting for non-linear hydrostatic and Froude-Krylov forces.......Numerical models based on potential flow theory, including different types of nonlinearities, are compared and validated against experimental data for the Wavestar wave energy converter technology. Exact resolution of the rotational motion, non-linear hydrostatic and Froude-Krylov forces as well...

  4. Population stochastic modelling (PSM)-An R package for mixed-effects models based on stochastic differential equations

    DEFF Research Database (Denmark)

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode

    2009-01-01

    are often partly ignored in PK/PD modelling although violating the hypothesis for many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE1 approximation......The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model...... development, J. Pharmacokinet. Pharmacodyn. 32 (February(l)) (2005) 109-141; C.W. Tornoe, R.V Overgaard, H. Agerso, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8...

  5. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Directory of Open Access Journals (Sweden)

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is essential for investors who purchase shares on the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among the linear models of stock index volatility, the moving average model ranks first using the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five nonlinear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH models. We find that the nonlinear models fail to dominate the linear models under the different error measurement criteria, and the moving average model again appears to be the best. We then forecast the next two months' stock index price volatility with the best (moving average) model.
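
    The DSE data are not available here; the sketch below (synthetic returns standing in for the index, arbitrary window length) only illustrates the moving-average volatility forecast that the study found best, scored by root mean square error against squared returns as a realized-volatility proxy.

```python
# Moving-average volatility forecast evaluated by RMSE against squared returns.
# Synthetic fat-tailed returns stand in for the index; window length is arbitrary.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
returns = pd.Series(rng.standard_t(df=5, size=1500) * 0.01)       # fake daily returns

window = 30
ma_forecast = (returns ** 2).rolling(window).mean().shift(1)      # one-step-ahead variance forecast
realized = returns ** 2                                           # realized-volatility proxy

valid = ma_forecast.dropna().index
rmse = np.sqrt(((ma_forecast[valid] - realized[valid]) ** 2).mean())
print(f"moving-average volatility forecast RMSE: {rmse:.6e}")
```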

  6. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    Science.gov (United States)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  7. Neutron stars in non-linear coupling models

    International Nuclear Information System (INIS)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z.; Malheiro, Manuel; Chiapparini, Marcelo

    2001-01-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  8. Neutron stars in non-linear coupling models

    Energy Technology Data Exchange (ETDEWEB)

    Taurines, Andre R.; Vasconcellos, Cesar A.Z. [Rio Grande do Sul Univ., Porto Alegre, RS (Brazil); Malheiro, Manuel [Universidade Federal Fluminense, Niteroi, RJ (Brazil); Chiapparini, Marcelo [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    2001-07-01

    We present a class of relativistic models for nuclear matter and neutron stars which exhibits a parameterization, through mathematical constants, of the non-linear meson-baryon couplings. For appropriate choices of the parameters, it recovers current QHD models found in the literature: Walecka, ZM and ZM3 models. We have found that the ZM3 model predicts a very small maximum neutron star mass, ∼ 0.72 M_sun. A strong similarity between the results of ZM-like models and those with exponential couplings is noted. Finally, we discuss the very intense scalar condensates found in the interior of neutron stars which may lead to negative effective masses. (author)

  9. Development and validation of effective models for simulation of stratification and mixing phenomena in a pool of water

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Kudinov, P.; Villanueva, W. (Royal Institute of Technology (KTH). Div. of Nuclear Power Safety (Sweden))

    2011-06-15

    This work pertains to the research program on Containment Thermal-Hydraulics at KTH. The objective is to evaluate and improve performance of methods, which are used to analyze thermal-hydraulics of steam suppression pools in a BWR plant under different abnormal transient and accident conditions. The pressure suppression pool was designed to have the capability as a heat sink to cool and condense steam released from the core vessel and/or main steam line during loss of coolant accident (LOCA) or opening of safety relief valve in normal operation of BWRs. For the case of small flow rates of steam influx, thermal stratification could develop on the part above the blowdown pipe exit and significantly impede the pool's pressure suppression capacity. Once steam flow rate increases significantly, momentum introduced by the steam injection and/or periodic expansion and collapse of large steam bubbles due to direct contact condensation can destroy stratified layers and lead to mixing of the pool water. We use CFD-like model of the general purpose thermal-hydraulic code GOTHIC for addressing the issues of stratification and mixing in the pool. In the previous works we have demonstrated that accurate and computationally efficient prediction of the pool thermal-hydraulics in the scenarios with transition between thermal stratification and mixing, presents a computational challenge. The reason is that direct contact condensation phenomena, which drive oscillatory motion of the water in the blowdown pipes, are difficult to simulate with original GOTHIC models because of appearance of artificial oscillations due to numerical disturbances. To resolve this problem we propose to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) model and the Effective Momentum Source (EMS) model. We use POOLEX/PPOOLEX experiment (Lappeenranta University of Technology in Finland), in order to (a) quantify errors due to GOTHIC

  10. Development and validation of effective models for simulation of stratification and mixing phenomena in a pool of water

    International Nuclear Information System (INIS)

    Li, H.; Kudinov, P.; Villanueva, W.

    2011-06-01

    This work pertains to the research program on Containment Thermal-Hydraulics at KTH. The objective is to evaluate and improve performance of methods, which are used to analyze thermal-hydraulics of steam suppression pools in a BWR plant under different abnormal transient and accident conditions. The pressure suppression pool was designed to have the capability as a heat sink to cool and condense steam released from the core vessel and/or main steam line during loss of coolant accident (LOCA) or opening of safety relief valve in normal operation of BWRs. For the case of small flow rates of steam influx, thermal stratification could develop on the part above the blowdown pipe exit and significantly impede the pool's pressure suppression capacity. Once steam flow rate increases significantly, momentum introduced by the steam injection and/or periodic expansion and collapse of large steam bubbles due to direct contact condensation can destroy stratified layers and lead to mixing of the pool water. We use CFD-like model of the general purpose thermal-hydraulic code GOTHIC for addressing the issues of stratification and mixing in the pool. In the previous works we have demonstrated that accurate and computationally efficient prediction of the pool thermal-hydraulics in the scenarios with transition between thermal stratification and mixing, presents a computational challenge. The reason is that direct contact condensation phenomena, which drive oscillatory motion of the water in the blowdown pipes, are difficult to simulate with original GOTHIC models because of appearance of artificial oscillations due to numerical disturbances. To resolve this problem we propose to model the effect of steam injection on the mixing and stratification with the Effective Heat Source (EHS) model and the Effective Momentum Source (EMS) model. We use POOLEX/PPOOLEX experiment (Lappeenranta University of Technology in Finland), in order to (a) quantify errors due to GOTHIC's physical models

  11. A linear ion optics model for extraction from a plasma ion source

    International Nuclear Information System (INIS)

    Dietrich, J.

    1987-01-01

    A linear ion optics model for ion extraction from a plasma ion source is presented, based on the paraxial equations which account for lens effects, space charge and finite source ion temperature. This model is applied to three- and four-electrode extraction systems with circular apertures. The results are compared with experimental data and numerical calculations in the literature. It is shown that the improved calculations of space charge effects and lens effects allow better agreement to be obtained than in earlier linear optics models. A principal result is that the model presented here describes the dependence of the optimum perveance on the aspect ratio in a manner similar to the nonlinear optics theory. (orig.)

  12. Mixed H2/H∞ output-feedback control of second-order neutral systems with time-varying state and input delays.

    Science.gov (United States)

    Karimi, Hamid Reza; Gao, Huijun

    2008-07-01

    A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
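
    The paper's delay-dependent LMI conditions are not reproduced in this record; as a much simpler illustration of how LMI feasibility problems of this kind are solved numerically, the sketch below checks Lyapunov stability of a made-up system with CVXPY (an SDP-capable solver such as SCS is required).

```python
# Much simpler than the paper's conditions: find P > 0 with A'P + PA < 0 to
# certify stability of dx/dt = A x. The matrix A is made up.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)
print("P =\n", P.value)
```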

  13. An A Posteriori Error Analysis of Mixed Finite Element Galerkin Approximations to Second Order Linear Parabolic Problems

    KAUST Repository

    Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar

    2012-01-01

    In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.

  14. Modeling the Non-Linear Response of Fiber-Reinforced Laminates Using a Combined Damage/Plasticity Model

    Science.gov (United States)

    Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.

    2008-01-01

    The present work is concerned with modeling the non-linear response of fiber reinforced polymer laminates. Recent experimental data suggests that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson's ratio of the laminates. The influence of residual curing stresses and in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data shows that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson's ratio, which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.

  15. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function.The paper introduces a test statistic which allows to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  16. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Kim, Dong Keon

    2016-01-01

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics

  17. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu [School of Architecture and Architectural Engineering, Hanyang University, Ansan (Korea, Republic of); Kim, Dong Keon [Dept. of Architectural Engineering, Dong A University, Busan (Korea, Republic of)

    2016-09-15

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.

  18. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    Science.gov (United States)

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  19. Lincx: A Linear Logical Framework with First-class Contexts

    DEFF Research Database (Denmark)

    Linn Georges, Aina; Murawska, Agata; Otis, Shawn

    2017-01-01

    Linear logic provides an elegant framework for modelling stateful, imperative and concurrent systems by viewing a context of assumptions as a set of resources. However, mechanizing the meta-theory of such systems remains a challenge, as we need to manage and reason about mixed contexts of linear...

  20. Nonlinear mixed effects modelling approach in investigating phenobarbital pharmacokinetic interactions in epileptic patients.

    Science.gov (United States)

    Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica

    2015-02-01

    The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), examining and quantifying the magnitude of PB interactions with other antiepileptic drugs used concomitantly, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analyses and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F in the final model was estimated at 0.314 L/h. Based on the final model, co-therapy with a usual VPA dose of 1000 mg/day resulted in an average decrease in PB CL/F of about 25%, while 2000 mg/day led to an average 50% decrease in PB CL/F. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied for individualizing the dosing regimen, taking into account the dose-dependent effect of concomitantly given VPA.
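
    The record quotes the typical CL/F and the approximate decreases at two VPA doses but not the exact covariate model; assuming, purely for illustration, a linear fractional dose effect consistent with those figures:

```python
# Illustration only: a linear fractional VPA dose effect reproducing the quoted
# figures (0.314 L/h typical CL/F, ~25% and ~50% decreases at 1000 and 2000
# mg/day of VPA). The published model's exact functional form may differ.
def pb_clearance(vpa_dose_mg_per_day, tv_cl=0.314, slope_per_mg=0.25 / 1000):
    """Apparent phenobarbital clearance (L/h) under the assumed covariate model."""
    return tv_cl * max(1.0 - slope_per_mg * vpa_dose_mg_per_day, 0.0)

for dose in (0, 1000, 2000):
    print(dose, "mg/day VPA ->", round(pb_clearance(dose), 3), "L/h")
```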

  1. System equivalent model mixing

    Science.gov (United States)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.

  2. An overview of solution methods for multi-objective mixed integer linear programming programs

    DEFF Research Database (Denmark)

    Andersen, Kim Allan; Stidsen, Thomas Riis

    Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. finding the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...

  3. Models of mixed irradiation with a 'reciprocal-time' pattern of the repair function

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Shozo; Miura, Yuri; Mizuno, Shoichi [Tokyo Metropolitan Inst. of Gerontology (Japan); Furusawa, Yoshiya [National Inst. of Radiological Sciences, Chiba (Japan)

    2002-09-01

    Suzuki presented models for mixed irradiation with two and multiple types of radiation by extending the Zaider and Rossi model, which is based on the theory of dual radiation action. In these models, the repair function was simply assumed to be semi-logarithmically linear (i.e., monoexponential), or a first-order process, which has been experimentally contradicted. Fowler, however, suggested that the repair of radiation damage might be largely a second-order process rather than a first-order one, and presented data in support of this hypothesis. In addition, a second-order repair function is preferred to an n-exponential repair function for the reason that only one parameter is used in the former instead of 2n-1 parameters for the latter, although both repair functions show a good fit to the experimental data. However, according to a second-order repair function, the repair rate depends on the dose, which is incompatible with the experimental data. We, therefore, revised the models for mixed irradiation by Zaider and Rossi and by Suzuki, by substituting a 'reciprocal-time' pattern of the repair function, which is derived from the assumption that the repair rate is independent of the dose in a second-order repair function, for a first-order one in the reduction and interaction factors of the models, although the underlying mechanism for this assumption cannot be well explained. The reduction factor, which reduces the contribution of the square of a dose to cell killing in the linear-quadratic model and its derivatives, and the interaction factor, which also reduces the contribution of the interaction of two or more doses of different types of radiation, were formulated by using a 'reciprocal-time' pattern of the repair function. Cell survivals calculated from the older and the newly modified models were compared in terms of the dose-rate by assuming various types of single and mixed irradiation. The result implies that the newly modified models for

  4. Non-linear effects of drought under shade: reconciling physiological and ecological models in plant communities.

    Science.gov (United States)

    Holmgren, Milena; Gómez-Aparicio, Lorena; Quero, José Luis; Valladares, Fernando

    2012-06-01

    The combined effects of shade and drought on plant performance and the implications for species interactions are highly debated in plant ecology. Empirical evidence for positive and negative effects of shade on the performance of plants under dry conditions supports two contrasting theoretical models about the role of shade under dry conditions: the trade-off and the facilitation hypotheses. We performed a meta-analysis of field and greenhouse studies evaluating the effects of drought at two or more irradiance levels on nine response variables describing plant physiological condition, growth, and survival. We explored differences in plant response across plant functional types, ecosystem types and methodological approaches. The data were best fit using quadratic models, indicating a hump-shaped response to drought along an irradiance gradient for survival, whole-plant biomass, maximum photosynthetic capacity, stomatal conductance and maximal photochemical efficiency. Drought effects were ameliorated at intermediate irradiance, becoming more severe at higher or lower light levels. This general pattern was maintained when controlling for potential variations in the strength of the drought treatment among light levels. Our quantitative meta-analysis indicates that dense shade ameliorates drought especially among drought-intolerant and shade-tolerant species. Wet tropical species showed larger negative effects of drought with increasing irradiance than semiarid and cold temperate species. Non-linear responses to irradiance were stronger under field conditions than under controlled greenhouse conditions. Non-linear responses to drought along the irradiance gradient reconcile opposing views in plant ecology, indicating that facilitation is more likely within a certain range of environmental conditions, fading under deep shade, especially for drought-tolerant species.

  5. Surface wind mixing in the Regional Ocean Modeling System (ROMS)

    Science.gov (United States)

    Robertson, Robin; Hartlipp, Paul

    2017-12-01

    Mixing at the ocean surface is key for atmosphere-ocean interactions and the distribution of heat, energy, and gases in the upper ocean. Winds are the primary force for surface mixing. To properly simulate upper ocean dynamics and the flux of these quantities within the upper ocean, models must reproduce mixing in the upper ocean. To evaluate the performance of the Regional Ocean Modeling System (ROMS) in replicating the surface mixing, the results of four different vertical mixing parameterizations were compared against observations, using the surface mixed layer depth, the temperature fields, and observed diffusivities for comparisons. The vertical mixing parameterizations investigated were the Mellor-Yamada 2.5 level turbulent closure (MY), Large-McWilliams-Doney KPP (LMD), Nakanishi-Niino (NN), and the generic length scale (GLS) schemes. This was done for one temperate site in deep water in the Eastern Pacific and three shallow water sites in the Baltic Sea. The model reproduced the surface mixed layer depth reasonably well for all sites; however, the temperature fields were reproduced well for the deep site, but not for the shallow Baltic Sea sites. In the Baltic Sea, the models overmixed the water column after a few days. Vertical temperature diffusivities were higher than those observed and did not show the temporal fluctuations present in the observations. The best performance was by NN and MY; however, MY became unstable in two of the shallow simulations with high winds. The performance of GLS was nearly as good as NN and MY. LMD had the poorest performance, as it generated temperature diffusivities that were too high and induced too much mixing. Further observational comparisons are needed to evaluate the effects of different stratification and wind conditions and the limitations on the vertical mixing parameterizations.
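
    The study's exact mixed-layer-depth criterion is not given in this record; a common threshold definition (the depth at which temperature first departs by more than 0.2 °C from its near-surface value) serves here only as an example of how such a comparison metric can be computed from a profile.

```python
# Example mixed-layer-depth calculation using a 0.2 degC threshold relative to
# the temperature at a 10 m reference depth; the profile below is synthetic.
import numpy as np

def mixed_layer_depth(depth, temp, dT=0.2, ref_depth=10.0):
    """Depth (m) where |T - T(ref_depth)| first exceeds dT; NaN if it never does."""
    depth, temp = np.asarray(depth), np.asarray(temp)
    t_ref = np.interp(ref_depth, depth, temp)
    below = depth >= ref_depth
    exceeds = np.abs(temp[below] - t_ref) > dT
    return depth[below][exceeds][0] if exceeds.any() else np.nan

z = np.arange(0.0, 200.0, 2.0)
T = 20.0 - 3.0 / (1.0 + np.exp(-(z - 60.0) / 5.0))   # mixed layer above a thermocline
print("MLD ~", mixed_layer_depth(z, T), "m")
```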

  6. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  7. A Closer Look on Spatiotemporal Variations of Dissolved Oxygen in Waste Stabilization Ponds Using Mixed Models

    Directory of Open Access Journals (Sweden)

    Long Ho

    2018-02-01

    Full Text Available Dissolved oxygen is an essential controlling factor in the performance of facultative and maturation ponds, since both take advantage of algal photosynthetic oxygenation. The rate of this photosynthesis strongly depends on the time during the day and the location in a pond system, whose roles have been overlooked in previous guidelines of pond operation and maintenance (O&M). To elucidate these influences, a linear mixed effect model (LMM) was built on the data collected from three intensive sampling campaigns in a waste stabilization pond in Cuenca, Ecuador. Within two parallel lines of facultative and maturation ponds, nine locations were sampled at two depths in each pond. In general, the output of the mixed model indicated high spatial autocorrelations of data and wide spatiotemporal variations of the oxygen level among and within the ponds. Particularly, different ponds showed different patterns of oxygen dynamics, which were associated with many factors including flow behavior, sludge accumulation, algal distribution, influent fluctuation, and pond function. Moreover, a substantial temporal change in the oxygen level between day and night, from zero to above 20 mg O2·L−1, was observed. Algal photosynthetic activity appeared to be the main reason for these variations in the model, as it was facilitated by intensive solar radiation at high altitude. Since these diurnal and spatial patterns can supply a large amount of useful information on pond performance, insightful recommendations on dissolved oxygen (DO) monitoring and regulations were delivered. More importantly, as the mixed model showed high predictive performance, i.e., a high goodness-of-fit (R2 of 0.94) and low values of mean absolute error, we recommended this advanced statistical technique as an effective tool for dealing with high autocorrelation of data in pond systems.

  8. Evaluation of scalar mixing and time scale models in PDF simulations of a turbulent premixed flame

    Energy Technology Data Exchange (ETDEWEB)

    Stoellinger, Michael; Heinz, Stefan [Department of Mathematics, University of Wyoming, Laramie, WY (United States)

    2010-09-15

    Numerical simulation results obtained with a transported scalar probability density function (PDF) method are presented for a piloted turbulent premixed flame. The accuracy of the PDF method depends on the scalar mixing model and the scalar time scale model. Three widely used scalar mixing models are evaluated: the interaction by exchange with the mean (IEM) model, the modified Curl's coalescence/dispersion (CD) model and the Euclidean minimum spanning tree (EMST) model. The three scalar mixing models are combined with a simple model for the scalar time scale which assumes a constant C_φ = 12 value. A comparison of the simulation results with available measurements shows that only the EMST model calculates accurately the mean and variance of the reaction progress variable. An evaluation of the structure of the PDFs of the reaction progress variable predicted by the three scalar mixing models confirms this conclusion: the IEM and CD models predict an unrealistic shape of the PDF. Simulations using various C_φ values ranging from 2 to 50 combined with the three scalar mixing models have been performed. The observed deficiencies of the IEM and CD models persisted for all C_φ values considered. The value C_φ = 12 combined with the EMST model was found to be an optimal choice. To avoid the ad hoc choice for C_φ, more sophisticated models for the scalar time scale have been used in simulations using the EMST model. A new model for the scalar time scale which is based on a linear blending between a model for flamelet combustion and a model for distributed combustion is developed. The new model has proven to be very promising as a scalar time scale model which can be applied from flamelet to distributed combustion. (author)
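
    As a bare-bones illustration of the simplest of the three mixing models compared above, the sketch below evolves a particle ensemble with the IEM closure under one common convention, dφ/dt = -½ C_φ (φ - ⟨φ⟩)/τ; conventions for C_φ and the time scale differ between authors, and this is not the paper's PDF solver.

```python
# Bare-bones particle IEM mixing: every particle scalar relaxes toward the
# ensemble mean, so the scalar variance decays roughly as exp(-C_phi * t / tau).
import numpy as np

rng = np.random.default_rng(3)
n_particles, c_phi, tau, dt, n_steps = 10_000, 2.0, 1.0e-3, 1.0e-5, 400

phi = rng.choice([0.0, 1.0], size=n_particles)    # initially unmixed scalar
for _ in range(n_steps):
    phi += -0.5 * c_phi * (phi - phi.mean()) / tau * dt

print("initial variance ~0.25, final variance:", round(phi.var(), 5))
```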

  9. A model for quasi parity-doublet spectra with strong coriolis mixing

    International Nuclear Information System (INIS)

    Minkov, N.; Drenska, S.; Strecker, M.

    2013-01-01

    The model of coherent quadrupole and octupole motion (CQOM) is combined with the reflection-asymmetric deformed shell model (DSM) in a way allowing fully microscopic description of the Coriolis decoupling and K-mixing effects in the quasi parity-doublet spectra of odd-mass nuclei. In this approach the even-even core is considered within the CQOM model, while the odd nucleon is described within DSM with pairing interaction. The Coriolis decoupling/mixing factors are calculated through a parity-projection of the single-particle wave function. Expressions for the Coriolis mixed quasi parity-doublet levels are obtained in the second order of perturbation theory, while the K-mixed core plus particle wave function is obtained in the first order. Expressions for the B(E1), B(E2) and B(E3) reduced probabilities for transitions within and between different quasi-doublets are obtained by using the total K-mixed wave function. The model scheme is elaborated in a form capable of describing the yrast and non-yrast quasi parity-doublet spectra in odd-mass nuclei. (author)

  10. Composite Linear Models | Division of Cancer Prevention

    Science.gov (United States)

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  11. Study of linear induction motor characteristics : the Oberretl model

    Science.gov (United States)

    1975-05-30

    The Oberretl theory of the double-sided linear induction motor (LIM) is examined, starting with the idealized model and accompanying assumptions, and ending with relations for predicted thrust, airgap power, and motor efficiency. The effect of varyin...

  12. The minimal linear σ model for the Goldstone Higgs

    International Nuclear Information System (INIS)

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian -including gauge bosons and fermions- is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass would allow to sweep from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  13. Compressibility effects on turbulent mixing

    Science.gov (United States)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence with a focus on the fundamental mechanisms that are responsible for such effects using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, like shear layers, mixing is reduced as Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  14. Non-linear characterisation of the physical model of an ancient masonry bridge

    International Nuclear Information System (INIS)

    Fragonara, L Zanotti; Ceravolo, R; Matta, E; Quattrone, A; De Stefano, A; Pecorelli, M

    2012-01-01

    This paper presents the non-linear investigations carried out on a scaled model of a two-span masonry arch bridge. The model has been built in order to study the effect of central pier settlement due to riverbank erosion. Progressive damage was induced in several steps by applying increasing settlements at the central pier. For each settlement step, harmonic shaker tests were conducted under different excitation levels, thus allowing for the non-linear identification of the progressively damaged system. The shaker tests were performed at resonance with the modal frequencies of the structure, which were determined from a previous linear identification. Estimated non-linearity parameters, which result from the systematic application of restoring force based identification algorithms, can corroborate models to be used in the reassessment of existing structures. The method used for non-linear identification allows monitoring the evolution of non-linear parameters or indicators which can be used in damage and safety assessment.

  15. Stochastic modeling of mode interactions via linear parabolized stability equations

    Science.gov (United States)

    Ran, Wei; Zare, Armin; Hack, M. J. Philipp; Jovanovic, Mihailo

    2017-11-01

    Low-complexity approximations of the Navier-Stokes equations have been widely used in the analysis of wall-bounded shear flows. In particular, the parabolized stability equations (PSE) and Floquet theory have been employed to capture the evolution of primary and secondary instabilities in spatially-evolving flows. We augment linear PSE with Floquet analysis to formally treat modal interactions and the evolution of secondary instabilities in the transitional boundary layer via a linear progression. To this end, we leverage Floquet theory by incorporating the primary instability into the base flow and accounting for different harmonics in the flow state. A stochastic forcing is introduced into the resulting linear dynamics to model the effect of nonlinear interactions on the evolution of modes. We examine the H-type transition scenario to demonstrate how our approach can be used to model nonlinear effects and capture the growth of the fundamental and subharmonic modes observed in direct numerical simulations and experiments.

  16. ADVANCED MIXING MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Dimenna, R; Tamburello, D

    2011-02-14

    height from zero to 10 ft. The sludge has been characterized and modeled as micron-sized solids, typically 1 to 5 microns, at weight fractions as high as 20 to 30 wt%, specific gravities to 1.4, and viscosities up to 64 cp during motion. The sludge is suspended and mixed through the use of submersible slurry jet pumps. To suspend settled sludge, water is added to the tank as a slurry medium and stirred with the jet pump. Although there is considerable technical literature on mixing and solid suspension in agitated tanks, very little literature has been published on jet mixing in a large-scale tank. One of the main objectives in the waste processing is to provide feed of a uniform slurry composition at a certain weight percentage (e.g. typically ≈13 wt% at SRS) over an extended period of time. In preparation of the sludge for slurrying, several important questions have been raised with regard to sludge suspension and mixing of the solid suspension in the bulk of the tank: (1) How much time is required to prepare a slurry with a uniform solid composition? (2) How long will it take to suspend and mix the sludge for uniform composition in any particular waste tank? (3) What are good mixing indicators to answer the questions concerning sludge mixing stated above in a general fashion applicable to any waste tank/slurry pump geometry and fluid/sludge combination?

  17. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    Science.gov (United States)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed with the aim of improving the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  18. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    Linearity and Non-linearity of Photorefractive effect in Materials using the Band transport ... For low light beam intensities the change in the refractive index is ... field is spatially phase shifted by π/2 relative to the interference fringe pattern, which ...

  19. Cluster Correlation in Mixed Models

    Science.gov (United States)

    Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.

    2000-10-01

    We evaluate the dependence of the cluster correlation length, r_c, on the mean intercluster separation, D_c, for three models with critical matter density, vanishing vacuum energy (Λ=0), and COBE normalization: a tilted cold dark matter (tCDM) model (n=0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh=0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ_8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ_8/σ_25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h⁻¹ Mpc) and with high mass and distance resolution, enabling us to build artificial samples of clusters, whose total number and mass range allow us to cover the same D_c interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n=1 critical density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.

  20. Development of two phase turbulent mixing model for subchannel analysis relevant to BWR

    International Nuclear Information System (INIS)

    Sharma, M.P.; Nayak, A.K.; Kannan, Umasankari

    2014-01-01

    A two-phase flow model is presented which predicts both the liquid-phase and gas-phase turbulent mixing rates between adjacent subchannels of reactor rod bundles. The model applies to the slug-churn flow regime, which dominates over the bubbly and annular flow regimes since the turbulent mixing rate is highest in slug-churn flow. In this paper we define new dimensionless parameters, a liquid mixing number and a gas mixing number, for two-phase turbulent mixing. The liquid mixing number is a function of the mixture Reynolds number, whereas the gas mixing number is a function of both the mixture Reynolds number and the volumetric fraction of gas. The effects of pressure and of subchannel geometry are also included in the model. The model has been tested against low-pressure, low-temperature air-water data and high-pressure, high-temperature steam-water data and shows good agreement with the available experiments. (author)

  1. Individual taper models for natural cedar and Taurus fir mixed stands of Bucak Region, Turkey

    Directory of Open Access Journals (Sweden)

    Ramazan Özçelik

    2017-11-01

    In this study, we assessed the performance of different types of taper equations for predicting tree diameters at specific heights and total stem volumes for mixed stands of Taurus cedar (Cedrus libani A. Rich.) and Taurus fir (Abies cilicica Carr.). We used data from mixed stands containing a total of 131 cedar and 124 Taurus fir trees. We evaluated six commonly used and well-known forestry taper functions developed by a variety of researchers (Biging (1984), Zakrzewski (1999), Muhairwe (1999), Fang et al. (2000), Kozak (2004), and Sharma and Zhang (2004)). To address problems related to autocorrelation and multicollinearity in the hierarchical data associated with the construction of taper models, we used appropriate statistical procedures for the model fitting. We compared model performances based on the analysis of three goodness-of-fit statistics and found the compatible segmented model of Fang et al. (2000) to be superior in describing the stem profile and stem volume of both tree species in mixed stands. The equation used by Zakrzewski (1999) exhibited the poorest fitting results of the evaluated taper equations. In general, we found segmented taper equations to provide more accurate predictions than variable-form models for both tree species. Results from the non-linear extra sum of squares method indicate that stem tapers differ among tree species in mixed stands. Therefore, a different taper function should be used for each tree species in mixed stands in the Bucak district. Using individual-specific taper equations yields more robust estimations and, therefore, will enhance the prediction accuracy of diameters at different heights and volumes in mixed stands.

  2. Unique heavy lepton signature at e+e- linear collider with polarized beams

    International Nuclear Information System (INIS)

    Moortgat-Pick, G.; Osland, P.; Pankov, A.A.; Tsytrinov, A.V.

    2013-03-01

    We explore the effects of neutrino and electron mixing with exotic heavy leptons in the process e⁺e⁻ → W⁺W⁻ within E₆ models. We examine the possibility of uniquely distinguishing and identifying such effects of heavy neutral lepton exchange from Z-Z′ mixing within the same class of models, and also from analogous ones due to competitor models with anomalous trilinear gauge couplings (AGC) that can lead to very similar experimental signatures at the e⁺e⁻ International Linear Collider (ILC) for √s = 350 GeV, 500 GeV and 1 TeV. Such clear identification of the model is possible by using a certain double polarization asymmetry. The availability of both beams being polarized plays a crucial role in identifying such exotic-lepton admixture. In addition, the sensitivity of the ILC for probing exotic-lepton admixture is substantially enhanced when the polarization of the produced W± bosons is considered.

  3. Calculation of mixed depth for some metal-Si systems

    International Nuclear Information System (INIS)

    Poker, D.B.

    1986-01-01

    The linearity of mixing during ion beam mixing of metals on Si has been found to depend critically upon the method by which the mixed depth is determined. For nonstoichiometric, diffuse mixing, several methods of calculating the mixed depth may be used, namely: integrated area, moment, error function, and 10%-90%. For stoichiometric mixing, the determination of the mixed depth is somewhat more straightforward, and several of the same methods may be used. Some of these methods suffer from an initial offset due to the finite detector resolution. An empirical method of removing the offset using a cubic correction is an improvement, but adds a nonlinear perturbation to the power-law dependence on dose, approaching 2/3 for small depths. The effect of detector resolution on the measured depth of mixing is given for several methods, using simulated data with a linear increase in depth as a function of dose. The resulting effect on the exponent of a power-law fit to the dose dependence is also given.
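
    To make the error-function method above concrete, the short sketch below fits an erf-shaped concentration-versus-depth profile and reads off its width as the mixed depth; the profile shape, depth grid and noise level are invented for illustration and are not data from this work.

        # Sketch: extract a mixed-depth parameter by fitting an error-function profile
        # to a concentration-vs-depth curve (illustrative synthetic data).
        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        def erf_profile(depth, interface, sigma):
            # Concentration falls from 1 to 0 across the interface;
            # sigma plays the role of the mixed depth (interface width).
            return 0.5 * (1.0 - erf((depth - interface) / (np.sqrt(2.0) * sigma)))

        depth = np.linspace(0.0, 200.0, 101)                     # nm, assumed grid
        true = erf_profile(depth, interface=100.0, sigma=12.0)
        noisy = true + np.random.default_rng(0).normal(0.0, 0.02, depth.size)

        popt, _ = curve_fit(erf_profile, depth, noisy, p0=[90.0, 5.0])
        print(f"fitted interface = {popt[0]:.1f} nm, mixed depth (sigma) = {popt[1]:.1f} nm")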

  4. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    Science.gov (United States)

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2012-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F₀) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…

  5. QCD mixing effects in a gauge invariant quark model for photo- and electroproduction of baryon resonances

    International Nuclear Information System (INIS)

    Zhenping Li; Close, F.E.

    1990-03-01

    The photo- and electroproduction of baryon resonances has been calculated using the Constituent Quark Model with chromodynamics consistent with O(v²/c²) for the quarks. We find that the successes of the nonrelativistic quark model are preserved, some problems are removed and that QCD mixing effects may become important with increasing q² in electroproduction. For the first time both spectroscopy and transitions receive a unified treatment with a single set of parameters. (author)

  6. Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods

    Science.gov (United States)

    Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.

    2017-06-01

    An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.

  7. Nonlinearity measure and internal model control based linearization in anti-windup design

    Energy Technology Data Exchange (ETDEWEB)

    Perev, Kamen [Systems and Control Department, Technical University of Sofia, 8 Cl. Ohridski Blvd., 1756 Sofia (Bulgaria)

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used to quantify the control system's degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system's local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.
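
    As a rough illustration of what a nonlinearity measure quantifies, the sketch below compares a mildly nonlinear input-output record against its best linear (FIR) approximation and reports the normalized residual. This is one simple empirical variant, not the specific measure or the internal model control structure used in the paper, and the signals are simulated.

        # Sketch: empirical nonlinearity measure = residual of the best linear (FIR) fit
        # to input/output data, normalized by the output norm. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        u = rng.normal(size=2000)                                # excitation signal (assumed)
        y = np.tanh(1.5 * u) + 0.1 * rng.normal(size=u.size)     # mildly nonlinear plant

        # Best linear approximation: least-squares FIR model with a few taps.
        taps = 5
        U = np.column_stack([np.roll(u, k) for k in range(taps)])
        U[:taps, :] = 0.0                                        # discard wrap-around samples
        coef, *_ = np.linalg.lstsq(U, y, rcond=None)
        y_lin = U @ coef

        measure = np.linalg.norm(y - y_lin) / np.linalg.norm(y)
        print(f"empirical nonlinearity measure ~ {measure:.2f}")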

  8. Estimation of non-linear continuous time models for the heat exchange dynamics of building integrated photovoltaic modules

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik; Bloem, J.J.

    2008-01-01

    This paper focuses on a method for linear or non-linear continuous time modelling of physical systems using discrete time data. This approach facilitates a more appropriate modelling of more realistic non-linear systems. Particularly concerning advanced building components, convective and radiative heat interchanges are non-linear effects and represent significant contributions in a variety of components such as photovoltaic integrated facades or roofs and those using these effects as passive cooling strategies, etc. Since models are approximations of the physical system and the data are encumbered …, a description of the non-linear heat transfer is found to be essential. The resulting model is a non-linear first order stochastic differential equation for the heat transfer of the PV component.
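
    A minimal numerical sketch of the kind of model described, a first-order stochastic differential equation for component temperature with a non-linear (fourth-power radiative) exchange term, integrated here with an Euler-Maruyama step; all coefficients, the forcing and the noise level are assumed values, not the model identified in the paper.

        # Sketch: grey-box heat balance dT = (1/C)*(Q + h*(Ta - T) + k_r*(Ta**4 - T**4)) dt + s dW
        # integrated with Euler-Maruyama using assumed coefficients (illustrative only).
        import numpy as np

        C, h, k_r, s = 5.0e4, 8.0, 3.0e-8, 0.05   # heat capacity, convective, radiative, noise
        dt, n = 60.0, 24 * 60                     # 1-minute steps over one day
        rng = np.random.default_rng(2)

        T = np.empty(n); T[0] = 293.0             # initial module temperature (K)
        for i in range(1, n):
            t = i * dt
            Ta = 288.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0)       # ambient temperature (K)
            Q = max(0.0, 600.0 * np.sin(2 * np.pi * t / 86400.0))    # absorbed solar flux (W)
            drift = (Q + h * (Ta - T[i-1]) + k_r * (Ta**4 - T[i-1]**4)) / C
            T[i] = T[i-1] + drift * dt + s * np.sqrt(dt) * rng.normal()

        print(f"simulated module temperature range: {T.min():.1f} K to {T.max():.1f} K")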

  9. Risk evaluations of aging phenomena: the linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1987-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging, as observed in the collected data, has significant effects on component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance.

  10. Risk evaluations of aging phenomena: The linear aging reliability model and its extensions

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1986-01-01

    A model for component failure rates due to aging mechanisms has been developed from basic phenomenological considerations. In the treatment, the occurrences of deterioration are modeled as following a Poisson process. The severity of damage is allowed to have any distribution; however, the damage is assumed to accumulate independently. Finally, the failure rate is modeled as being proportional to the accumulated damage. Using this treatment, the linear aging failure rate model is obtained. The applicability of the linear aging model to various mechanisms is discussed. The model can be extended to cover nonlinear and dependent aging phenomena. The implementability of the linear aging model is demonstrated by applying it to the aging data collected in NRC's Nuclear Plant Aging Research (NPAR) Program. The applications show that aging, as observed in the collected data, has significant effects on component failure probability and component reliability when aging is not effectively detected and controlled by testing and maintenance.
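
    The linear aging failure-rate model described above translates directly into component unreliability: with λ(t) = λ0 + a·t, the reliability is R(t) = exp(-(λ0·t + a·t²/2)). The sketch below evaluates this with assumed parameter values rather than NPAR data.

        # Sketch of the linear aging failure-rate model: lambda(t) = lambda0 + a*t,
        # giving reliability R(t) = exp(-(lambda0*t + 0.5*a*t**2)). Assumed parameters.
        import numpy as np

        lambda0 = 1.0e-6   # baseline failure rate per hour (assumed)
        a = 1.0e-10        # aging rate per hour^2 (assumed)

        years = np.array([1, 5, 10, 20])
        t = years * 8760.0                                   # hours
        reliability = np.exp(-(lambda0 * t + 0.5 * a * t**2))
        for yr, q in zip(years, 1.0 - reliability):
            print(f"{yr:2d} years: failure probability with aging = {q:.3f}")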

  11. Avoiding Boundary Estimates in Hierarchical Linear Models through Weakly Informative Priors

    Science.gov (United States)

    Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jinchen

    2012-01-01

    Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…

  12. Enriching an effect calculus with linear types

    DEFF Research Database (Denmark)

    Egger, Jeff; Møgelberg, Rasmus Ejlers; Simpson, Alex

    2009-01-01

    We define an "enriched effect calculus" by conservatively extending a type theory for computational effects with primitives from linear logic. By doing so, we obtain a generalisation of linear type theory, intended as a formalism for expressing linear aspects of effects. As a worked example, we formulate linearly-used continuations in the enriched effect calculus. These are captured by a fundamental translation of the enriched effect calculus into itself, which extends existing call-by-value and call-by-name linearly-used CPS translations. We show that our translation is involutive. Full completeness results for the various linearly-used CPS translations follow. Our main results, the conservativity of enriching the effect calculus with linear primitives, and the involution property of the fundamental translation, are proved using a category-theoretic semantics for the enriched effect calculus.

  13. Non-linear thermal engineering, chaotic advection and mixing; Thermique non-lineaire, melange et advection chaotique

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-12-31

    This conference day was jointly organized by the `university group of thermal engineering (GUT)` and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: `energy spectra of a passive scalar undergoing advection by a chaotic flow`; `analysis of chaotic behaviours: from topological characterization to modeling`; `temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach`; ` thermal instabilities in a mixed convection phenomenon: nonlinear dynamics`; `experimental characterization study of the 3-D Lagrangian chaos by thermal analogy`; `influence of coherent structures on the mixing of a passive scalar`; `evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers`. (J.S.)

  14. Non-linear thermal engineering, chaotic advection and mixing; Thermique non-lineaire, melange et advection chaotique

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    This conference day was jointly organized by the `university group of thermal engineering (GUT)` and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: `energy spectra of a passive scalar undergoing advection by a chaotic flow`; `analysis of chaotic behaviours: from topological characterization to modeling`; `temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach`; ` thermal instabilities in a mixed convection phenomenon: nonlinear dynamics`; `experimental characterization study of the 3-D Lagrangian chaos by thermal analogy`; `influence of coherent structures on the mixing of a passive scalar`; `evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers`. (J.S.)

  15. Effects of socioeconomic position and social mobility on linear growth from early childhood until adolescence

    Directory of Open Access Journals (Sweden)

    Ana Paula Muraro

    ABSTRACT: Objective: To assess the effect of socioeconomic position (SEP) in childhood and social mobility on linear growth through adolescence in a population-based cohort. Methods: Children born in Cuiabá-MT, central-western Brazil, were evaluated during 1994-1999. They were first assessed during 1999-2000 (0-5 years) and again during 2009-2011 (10-17 years), and their height-for-age was evaluated during these two periods. A wealth index was used to classify the SEP of each child's family as low, medium, or high. Social mobility was categorized as upward mobility or no upward mobility. Linear mixed models were used. Results: We evaluated 1,716 children (71.4% of baseline) after 10 years, and 60.6% of the families showed upward mobility, with a higher percentage among the lowest economic classes. A higher height-for-age was also observed among those from families with a high SEP both in childhood (low SEP = -0.35 z-score; high SEP = 0.15 z-score, p < 0.01) and adolescence (low SEP = -0.01 z-score; high SEP = 0.45 z-score, p < 0.01), whereas upward mobility did not affect their linear growth. Conclusion: Substantial social mobility was observed, but SEP in childhood and social mobility did not greatly influence linear growth through childhood in this central-western Brazilian cohort.
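
    Analyses of this kind are usually fit as linear mixed models with child-specific random effects; a minimal sketch using statsmodels is given below, where the file name and the column names ('haz', 'age', 'sep', 'child_id') are hypothetical placeholders rather than the cohort's actual variables.

        # Sketch: height-for-age z-score modelled with fixed effects for age and childhood SEP
        # and a random intercept and age slope per child. Column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.read_csv("cohort.csv")   # assumed long format: one row per child per visit

        model = smf.mixedlm("haz ~ age + C(sep) + age:C(sep)",
                            data,
                            groups=data["child_id"],
                            re_formula="~age")    # random intercept and age slope per child
        result = model.fit()
        print(result.summary())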

  16. Lagrangian mixed layer modeling of the western equatorial Pacific

    Science.gov (United States)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs in the western equatorial Pacific due to episodes of strong wind and light precipitation associated with the El Niño-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicates that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.

  17. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
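
    A minimal sketch of the idea (not the DDA framework itself): fit the regression in both candidate directions and compare residual heteroscedasticity, here with the Breusch-Pagan test from statsmodels. The data-generating process below is simulated so that x → y is the true direction, and the comments describe the expected, not guaranteed, outcome.

        # Sketch: under a true model x -> y with a skewed (non-normal) x, residuals of the
        # mis-specified reverse regression y -> x tend to show heteroscedasticity.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import het_breuschpagan

        rng = np.random.default_rng(3)
        x = rng.exponential(scale=1.0, size=1000)          # non-normal "true" predictor
        y = 0.8 * x + rng.normal(scale=1.0, size=1000)     # true direction: x -> y

        def bp_pvalue(response, predictor):
            X = sm.add_constant(predictor)
            fit = sm.OLS(response, X).fit()
            _, pval, _, _ = het_breuschpagan(fit.resid, X)
            return pval

        print(f"x -> y: Breusch-Pagan p = {bp_pvalue(y, x):.3f}")   # expected: little evidence
        print(f"y -> x: Breusch-Pagan p = {bp_pvalue(x, y):.3f}")   # expected: heteroscedastic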

  18. A Solvable Dynamic Principal-Agent Model with Linear Marginal Productivity

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2018-01-01

    We study how to design an optimal contract which provides incentives for the agent to put forth the desired effort in a continuous-time dynamic moral hazard model with linear marginal productivity. Using exponential utility and linear production, three different information structures, full information, hidden actions and hidden savings, are considered in the principal-agent model. Applying the stochastic maximum principle, we solve the model explicitly, where the agent's optimization problem becomes the principal's problem of choosing an optimal contract. The explicit solutions to our model allow us to analyze the distortion of allocations. The main effect of hidden actions is a reduction of effort, with a smaller effect on the consumption allocation. In the hidden savings case, the consumption distortion almost vanishes but the effort distortion is expanded. In our setting, the agent's optimal effort is also reduced with the decline of marginal productivity.

  19. Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing

    Science.gov (United States)

    Jeffery, C.; Reisner, J.

    2005-12-01

    Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud-resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process, in which ice crystals grow by vapor deposition at the expense of super-cooled droplets, is expected to be inhomogeneous in nature (some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]) and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low-probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on the Damköhler number (Da), the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as

  20. Relating masses and mixing angles. A model-independent model

    Energy Technology Data Exchange (ETDEWEB)

    Hollik, Wolfgang Gregor [DESY, Hamburg (Germany); Saldana-Salazar, Ulises Jesus [CINVESTAV (Mexico)

    2016-07-01

    In general, mixing angles and fermion masses are seen to be independent parameters of the Standard Model. However, exploiting the observed hierarchy in the masses, it is viable to construct the mixing matrices for both quarks and leptons in terms of the corresponding mass ratios only. A closer view on the symmetry properties leads to potential realizations of that approach in extensions of the Standard Model. We discuss the application in the context of flavored multi-Higgs models.
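
    A classic relation of this type expresses the Cabibbo angle through a down-type quark mass ratio, sin θ_C ≈ √(m_d/m_s); the quark masses below are rough illustrative inputs, and the relation is shown only as an example of building a mixing angle from a mass ratio, not as the construction used in this particular work.

        # Sketch: Cabibbo-angle estimate from a quark mass ratio, sin(theta_C) ~ sqrt(m_d/m_s).
        import math

        m_d, m_s = 4.7, 93.0      # MeV, rough running masses (illustrative inputs)
        sin_theta_c = math.sqrt(m_d / m_s)
        print(f"sqrt(m_d/m_s) = {sin_theta_c:.3f}  (measured |V_us| is about 0.225)")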

  1. Heterotic sigma models and non-linear strings

    International Nuclear Information System (INIS)

    Hull, C.M.

    1986-01-01

    The two-dimensional supersymmetric non-linear sigma models are examined with respect to the heterotic string. The paper was presented at the workshop on 'Supersymmetry and its applications', Cambridge, United Kingdom, 1985. The non-linear sigma model with Wess-Zumino-type term, the coupling of the fermionic superfields to the sigma model, super-conformal invariance, and the supersymmetric string are all discussed. (U.K.)

  2. Modeling and analysis of mover gaps in tubular moving-magnet linear oscillating motors

    Directory of Open Access Journals (Sweden)

    Xuesong LUO

    2018-05-01

    A tubular moving-magnet linear oscillating motor (TMMLOM) has merits of high efficiency and excellent dynamic capability. To enhance the thrust performance, quasi-Halbach permanent magnet (PM) arrays are arranged on its mover in the application of a linear electro-hydrostatic actuator in more electric aircraft. The arrays are assembled from several individual segments, which inevitably leads to gaps between them. To investigate the effects of the gaps on the radial magnetic flux density and the machine thrust, an analytical model is built in this paper considering both axial and radial gaps. The model is validated by finite element simulations and experimental results. Distributions of the magnetic flux are described for different sizes of radial and axial gaps. Besides, the output force is also discussed for normal and end windings. Finally, the model demonstrates that both kinds of gaps have a negative effect on the thrust, and that the linear motor is more sensitive to radial ones. Keywords: Air-gap flux density, Linear motor, Mover gaps, Quasi-Halbach array, Thrust output, Tubular moving-magnet linear oscillating motor (TMMLOM)

  3. Scotogenic model for co-bimaximal mixing

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, P.M. [Instituto Superior de Engenharia de Lisboa - ISEL, 1959-007 Lisboa (Portugal); Centro de Física Teórica e Computacional - FCUL, Universidade de Lisboa, R. Ernesto de Vasconcelos, 1749-016 Lisboa (Portugal); Grimus, W. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Wien (Austria); Jurčiukonis, D. [Institute of Theoretical Physics and Astronomy, Vilnius University, Saulėtekio ave. 3, LT-10222 Vilnius (Lithuania); Lavoura, L. [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2016-07-04

    We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ_23 = 45° and to a CP-violating phase δ = ±π/2, while the mixing angle θ_13 remains arbitrary. The symmetries consist of softly broken lepton numbers L_α (α = e, μ, τ), a non-standard CP symmetry, and three Z_2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.

  4. Models of neutrino mass and mixing

    International Nuclear Information System (INIS)

    Ma, Ernest

    2000-01-01

    There are two basic theoretical approaches to obtaining neutrino mass and mixing. In the minimalist approach, one adds just enough new stuff to the Minimal Standard Model to get m_ν ≠ 0 and U_αi ≠ 1. In the holistic approach, one uses a general framework or principle to enlarge the Minimal Standard Model such that, among other things, m_ν ≠ 0 and U_αi ≠ 1. In both cases, there are important side effects besides neutrino oscillations. I discuss a number of examples, including the possibility of leptogenesis from R-parity nonconservation in supersymmetry.

  5. Personalized prediction of chronic wound healing: an exponential mixed effects model using stereophotogrammetric measurement.

    Science.gov (United States)

    Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M

    2014-05-01

    Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases; this is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of its initial size, and t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ. The current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
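
    Once a wound's personalized exponential trajectory w_i(t) = w_i(0)·exp(-k_i·t) has been estimated, the r-fold reduction time follows as t(r-fold) = ln(r)/k_i. The sketch below evaluates this for two assumed per-wound decay rates, not the study's fitted values.

        # Sketch: personalized exponential healing curve w_i(t) = w_i(0)*exp(-k_i*t);
        # the r-fold reduction time is then t_rfold = ln(r)/k_i. Assumed parameters.
        import numpy as np

        def t_rfold(k, r):
            # time at which wound size reaches 1/r of its initial value
            return np.log(r) / k

        wound_rates = {"wound_A": 0.045, "wound_B": 0.012}   # per-day decay rates (assumed)
        for name, k in wound_rates.items():
            print(f"{name}: half-size in {t_rfold(k, 2):.0f} d, "
                  f"4-fold reduction in {t_rfold(k, 4):.0f} d")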

  6. Modeling and verifying non-linearities in heterodyne displacement interferometry

    NARCIS (Netherlands)

    Cosijns, S.J.A.G.; Haitjema, H.; Schellekens, P.H.J.

    2002-01-01

    The non-linearities in a heterodyne laser interferometer system occurring from the phase measurement system of the interferometer and from non-ideal polarization effects of the optics are modeled into one analytical expression which includes the initial polarization state of the laser source, the

  7. Ion beam mixing of titanium films on stainless steel

    International Nuclear Information System (INIS)

    Bolse, W.; Weber, T.

    1990-01-01

    The ion mixing of Ti-steel bilayers with N⁺, Ar⁺, Ti⁺, Kr⁺ and Xe⁺ ions was investigated by means of Rutherford backscattering spectroscopy (RBS). The mixing rates exhibit a linear scaling with the deposited damage energy F_D. No correlation between the properties of the mixing ion and the mixing efficiency was found. The results are compared with the predictions of ballistic and thermal-spike models. (orig.)

  8. Non linear viscoelastic models

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid (SLS) model are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness, while the Kelvin-Voigt version does not.
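
    For reference, the standard linear solid in its Maxwell (Zener) form obeys the constitutive relation sketched below; a nonlinear variant of the kind investigated can be obtained, for example, by letting a stiffness depend on displacement. The displacement-dependent form shown is an illustrative assumption, not the specific nonlinearity used in the paper.

        % Standard linear solid (Zener) relation between force F and displacement x,
        % with parallel spring k_1 and a Maxwell branch (spring k_2 in series with dashpot c):
        F(t) + \frac{c}{k_2}\,\dot{F}(t) = k_1\,x(t) + c\left(1 + \frac{k_1}{k_2}\right)\dot{x}(t)
        % Illustrative nonlinear variant: k_1 \to k_1(x) = k_1\,(1 + \beta\,x), with \beta an
        % assumed nonlinearity coefficient.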

  9. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

    Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov Model; Multi-state model

  10. Large Spatial and Temporal Separations of Cause and Effect in Policy Making - Dealing with Non-linear Effects

    Science.gov (United States)

    McCaskill, John

    There can be large spatial and temporal separation of cause and effect in policy making. Determining the correct linkage between policy inputs and outcomes can be highly impractical in the complex environments faced by policy makers. In attempting to see and plan for the probable outcomes, standard linear models often overlook, ignore, or are unable to predict catastrophic events that only seem improbable due to the issue of multiple feedback loops. There are several issues with the makeup and behaviors of complex systems that explain the difficulty many mathematical models (factor analysis/structural equation modeling) have in dealing with non-linear effects in complex systems. This chapter highlights those problem issues and offers insights into the usefulness of agent-based modeling (ABM) in dealing with non-linear effects in complex policy-making environments.

  11. Linear causal modeling with structural equations

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to perceive causal

  12. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  13. An Efficient Technique for Bayesian Modelling of Family Data Using the BUGS software

    Directory of Open Access Journals (Sweden)

    Harold T Bae

    2014-11-01

    Linear mixed models have become a popular tool to analyze continuous data from family-based designs by using random effects that model the correlation of subjects from the same family. However, mixed models for family data are challenging to implement with the BUGS (Bayesian inference Using Gibbs Sampling) software because of the high-dimensional covariance matrix of the random effects. This paper describes an efficient parameterization that utilizes the singular value decomposition of the covariance matrix of random effects, includes the BUGS code for such implementation, and extends the parameterization to generalized linear mixed models. The implementation is evaluated using simulated data, and an example from a large family-based study is presented with a comparison to other existing methods.
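
    The reparameterization can be sketched in a few lines of linear algebra: with the random-effects covariance Σ = U·diag(d)·Uᵀ from a singular value decomposition, correlated family effects are generated as b = U·diag(√d)·z with independent standard-normal z, which is what keeps the sampler implementation cheap. The covariance matrix below is an assumed toy example, not from the paper, and the sketch uses numpy rather than BUGS.

        # Sketch: draw correlated random effects b ~ N(0, Sigma) from independent normals
        # via the singular value decomposition Sigma = U diag(d) U^T, b = U * sqrt(d) * z.
        import numpy as np

        Sigma = np.array([[1.0, 0.5, 0.5],      # assumed 3-member family covariance
                          [0.5, 1.0, 0.25],
                          [0.5, 0.25, 1.0]])

        U, d, _ = np.linalg.svd(Sigma)          # symmetric PSD, so Sigma = U diag(d) U^T
        rng = np.random.default_rng(4)
        z = rng.standard_normal((3, 10000))     # independent N(0,1) effects
        b = U @ (np.sqrt(d)[:, None] * z)       # correlated family random effects

        print(np.round(np.cov(b), 2))           # empirical covariance approximately Sigma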

  14. Probability of atrial fibrillation after ablation: Using a parametric nonlinear temporal decomposition mixed effects model.

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H; Ehrlinger, John; Li, Liang; Ishwaran, Hemant; Parides, Michael K

    2018-01-01

    Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and their determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.

  15. A marketing mix model for a complex and turbulent environment

    Directory of Open Access Journals (Sweden)

    R. B. Mason

    2007-12-01

    Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company’s external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

  16. A detailed aerosol mixing state model for investigating interactions between mixing state, semivolatile partitioning, and coagulation

    Directory of Open Access Journals (Sweden)

    J. Lu

    2010-04-01

    A new method for describing externally mixed particles, the Detailed Aerosol Mixing State (DAMS) representation, is presented in this study. This novel method classifies aerosols by both composition and size, using a user-specified mixing criterion to define boundaries between compositional populations. Interactions between aerosol mixing state, semivolatile partitioning, and coagulation are investigated with a Lagrangian box model that incorporates the DAMS approach. Model results predict that mixing state affects the amount and types of semivolatile organics that partition to available aerosol phases, causing external mixtures to produce a more size-varying composition than internal mixtures. Both coagulation and condensation contribute to the mixing of emitted particles, producing a collection of multiple compositionally distinct aerosol populations that exists somewhere between the extremes of a strictly external or internal mixture. The selection of mixing criteria has a significant impact on the size and type of individual populations that compose the modeled aerosol mixture. Computational demands for external mixture modeling are significant and can be controlled by limiting the number of aerosol populations used in the model.

  17. On D-branes from gauged linear sigma models

    International Nuclear Information System (INIS)

    Govindarajan, S.; Jayaraman, T.; Sarkar, T.

    2001-01-01

    We study both A-type and B-type D-branes in the gauged linear sigma model by considering worldsheets with boundary. The boundary conditions on the matter and vector multiplet fields are first considered in the large-volume phase/non-linear sigma model limit of the corresponding Calabi-Yau manifold, where we find that we need to add a contact term on the boundary. These considerations enable us to derive the boundary conditions in the full gauged linear sigma model, including the addition of the appropriate boundary contact terms, such that these boundary conditions have the correct non-linear sigma model limit. Most of the analysis is for the case of Calabi-Yau manifolds with one Kaehler modulus (including those corresponding to hypersurfaces in weighted projective space), though we comment on possible generalisations.

  18. Non-Linear Impact of the Marketing Mix on Brand Sales Performance

    Directory of Open Access Journals (Sweden)

    Rafael Barreiros Porto

    2015-01-01

    The pattern of impact that marketing activities exert on sales has not been well established in the literature. Many studies adopt narrow linear perspectives and disregard the empirical evidence. This work investigated the non-linear impact of the marketing mix on sales volume and on the number of consumers and purchases per consumer. A longitudinal panel study of brands and simultaneous consumers was carried out: 121 brands were analysed over 13 months, with 793 purchases per month made by the consumers, using three generalized estimating equations. The results indicate that the marketing mix, especially branding and pricing, strongly impacts all the dependent variables in a non-linear fashion, with good parameter fits. The joint effect generates economies of scale for brands, while, for each consumer, it gradually stimulates the purchase of larger quantities. The research identifies eight impact patterns of the marketing mix on the investigated indicators, with changes in their order and weight for brands and consumers.

  19. A mathematical model for turbulent incompressible flows through mixing grids

    International Nuclear Information System (INIS)

    Allaire, G.

    1989-01-01

    A mathematical model is proposed for the computation of turbulent incompressible flows through mixing grids. This model is obtained as follows: in a three-dimensional domain we represent a mixing grid by small identical wings of size ε² periodically distributed at the nodes of a plane regular mesh of size ε, and we consider the incompressible Navier-Stokes equations with a no-slip condition on the wings. Using an appropriate homogenization process we pass to the limit as ε tends to zero and obtain a Brinkman equation, i.e. a Navier-Stokes equation plus a zero-order term for the velocity, in a homogeneous domain without any wings. The interest of this model is that the spatial discretization is simpler in a homogeneous domain and, moreover, the new term, which expresses the grid's mixing effect, can be evaluated with a local computation around a single wing.
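
    For reference, the homogenized model described is of Brinkman type: in the limit the grid survives only as a zero-order velocity term added to the incompressible Navier-Stokes equations. The notation below is generic; M denotes the matrix obtained from the local computation around a single wing.

        % Brinkman-type homogenized model (generic notation):
        \partial_t u + (u \cdot \nabla)u - \nu\,\Delta u + M\,u + \nabla p = f ,
        \qquad \nabla \cdot u = 0 .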

  20. Effects of patient safety auditing in hospital care: results of a mixed-method evaluation (part 1).

    Science.gov (United States)

    Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Westert, Gert P; Boeijen, Wilma; Teerenstra, Steven; van Gurp, Petra J; Wollersheim, Hub

    2018-06-15

    To evaluate the effectiveness of internal auditing in hospital care focussed on improving patient safety. A before-and-after mixed-method evaluation study was carried out in eight departments of a university medical center in the Netherlands. Internal auditing and feedback focussed on improving patient safety. The effect of internal auditing was assessed 15 months after the audit, using linear mixed models, on the patient, professional, team and departmental levels. The measurement methods were patient record review on adverse events (AEs), surveys regarding patient experiences, safety culture and team climate, analysis of administrative hospital data (standardized mortality rate, SMR) and safety walk rounds (SWRs) to observe frontline care processes on safety. The AE rate decreased from 36.1% to 31.3% and the preventable AE rate from 5.5% to 3.6%; however, the differences before and after auditing were not statistically significant. The patient-reported experience measures regarding patient safety improved slightly over time (P …) after the audit. The SWRs showed that medication safety and information security were improved (P …). Internal auditing was associated with improved patient experiences and observed safety on wards. No effects were found on adverse outcomes, safety culture and team climate 15 months after the internal audit.

  1. Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region

    Science.gov (United States)

    Mazurek, Grzegorz; Iwański, Marek

    2017-10-01

    Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers’ model, in comparison with the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10⁵ …) was tested in the controlled strain mode. The fixed strain level was set at 25 με to guarantee that the stiffness modulus of the asphalt concrete would be tested in the linear viscoelasticity range. The master curve was formed using the time-temperature superposition principle (TTSP). The stiffness modulus of the asphalt concrete was determined at temperatures of 10°C, 20°C and 40°C and at loading frequencies of 0.1, 0.3, 1, 3, 10 and 20 Hz. The model parameters were fitted to the rheological models using original programs based on the nonlinear least-squares method. All the rheological models under analysis were found to be capable of predicting changes in the stiffness modulus of the reference asphalt concrete to satisfactory accuracy. In the cases of the fractional model and the generalized Maxwell model, their accuracy depends on the number of elements in series. The best fit was registered for Bahia and co-workers’ model, the generalized Maxwell model and the fractional model. As for predicting the phase angle parameter, the largest discrepancies between experimental and modelled results were obtained using the fractional model. Except for the Burgers model, the model matching quality was
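
    The sigmoidal (MEPDG) master-curve model mentioned above has the standard form log|E*| = δ + α / (1 + e^(β + γ·log t_r)); the sketch below fits it with scipy to invented stiffness data, since the measured moduli of the AC16W mix are not reproduced here.

        # Sketch: fit the MEPDG sigmoidal master curve
        # log10|E*| = delta + alpha / (1 + exp(beta + gamma * log10(tr)))
        # to reduced-time/stiffness pairs. Data values are invented for illustration.
        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(log_tr, delta, alpha, beta, gamma):
            return delta + alpha / (1.0 + np.exp(beta + gamma * log_tr))

        log_tr = np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])  # log reduced time
        log_E = np.array([4.35, 4.30, 4.18, 3.95, 3.60, 3.15, 2.70, 2.35, 2.15])  # log10 |E*| (MPa)

        popt, _ = curve_fit(sigmoid, log_tr, log_E, p0=[2.0, 2.5, -1.0, 0.5])
        delta, alpha, beta, gamma = popt
        print(f"delta={delta:.2f}, alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma:.2f}")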

  2. Mathematical modeling of the crack growth in linear elastic isotropic materials by conventional fracture mechanics approaches and by molecular dynamics method: crack propagation direction angle under mixed mode loading

    Science.gov (United States)

    Stepanova, Larisa; Bronnikov, Sergej

    2018-03-01

    The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400000 atoms. The crack propagation direction angles for different values of the mixity parameter, ranging from pure tensile loading to pure shear loading and over a wide range of temperatures (from 0.1 K to 800 K), are obtained and analyzed. It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.

  3. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H]⁺ or [M - H]⁻) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
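
    As a toy illustration of the kind of optimization involved (not the RAMSI implementation), the sketch below casts chemical formula calculation for a single neutral mass as a small mixed integer linear program with SciPy's milp solver; the minimal-atom-count objective and the element set are assumed stand-ins, and without the chemical rules that RAMSI adds as constraints the returned formula need not be chemically sensible.

      # Illustrative only (not the RAMSI implementation): chemical-formula calculation
      # for one neutral mass posed as a small mixed integer linear program.
      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      elements = ["C", "H", "N", "O"]
      masses   = np.array([12.0, 1.007825, 14.003074, 15.994915])  # monoisotopic masses [Da]

      target_mass = 180.06339   # illustrative measured neutral mass [Da]
      tol = 0.005               # mass tolerance [Da]

      # Assumed surrogate objective: prefer formulas with few atoms. RAMSI instead encodes
      # chemical rules among related ions (adducts, fragments, multimers) as constraints.
      c = np.ones(len(elements))

      mass_window = LinearConstraint(masses.reshape(1, -1),
                                     target_mass - tol, target_mass + tol)
      res = milp(c=c,
                 constraints=[mass_window],
                 integrality=np.ones(len(elements)),   # all atom counts are integers
                 bounds=Bounds(0, 100))                # 0 <= atom counts <= 100

      if res.success:
          counts = np.round(res.x).astype(int)
          print("".join(f"{el}{n}" for el, n in zip(elements, counts) if n > 0))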

  4. NON-LINEAR FINITE ELEMENT MODELING OF DEEP DRAWING PROCESS

    Directory of Open Access Journals (Sweden)

    Hasan YILDIZ

    2004-03-01

    The deep drawing process is one of the main procedures used in different branches of industry. Finding numerical solutions for the determination of the mechanical behaviour of this process saves time and money. For die surfaces with complex geometries, it is hard to determine the effects of the parameters of sheet metal forming, such as wrinkling, tearing, the flow of the thin sheet metal in the die, and thickness change. The most difficult, however, is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before the dies are produced. The explicit non-linear finite element method was chosen for the analysis. The numerical results obtained for the non-linear material and contact models are also compared with experiments. Good agreement between the numerical and experimental results is obtained. The results obtained for the models are given in detail.

  5. Heterotic non-linear sigma models with anti-de Sitter target spaces

    International Nuclear Information System (INIS)

    Michalogiorgakis, Georgios; Gubser, Steven S.

    2006-01-01

    We calculate the beta function of non-linear sigma models with S^{D+1} and AdS_{D+1} target spaces in a 1/D expansion up to order 1/D^2 and to all orders in α'. This beta function encodes partial information about the spacetime effective action for the heterotic string to all orders in α'. We argue that a zero of the beta function, corresponding to a worldsheet CFT with an AdS_{D+1} target space, arises from competition between the one-loop and higher-loop terms, similarly to the bosonic and supersymmetric cases studied previously in [J.J. Friess, S.S. Gubser, Non-linear sigma models with anti-de Sitter target spaces, Nucl. Phys. B 750 (2006) 111-141]. Various critical exponents of the non-linear sigma model are calculated, and checks of the calculation are presented.

  6. Decomposable log-linear models

    DEFF Research Database (Denmark)

    Eriksen, Poul Svante

    The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence and models for level merging. Also, they have a very natural interpretation as they may be formulated by a structured set of conditional independencies between two events given some other event. In relation to contingency
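
    As a minimal, hedged sketch of the model class being discussed (made-up counts, not the paper's examples), the code below fits a log-linear model encoding the conditional independence of X and Y given Z to a 2x2x2 contingency table as a Poisson GLM and reports its deviance against the saturated model.

      # Hedged sketch: a log-linear model with X:Z and Y:Z terms but no X:Y term,
      # i.e. X independent of Y given Z, fitted as a Poisson GLM. Counts are invented.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from itertools import product

      cells = list(product([0, 1], repeat=3))
      counts = [20, 15, 12, 9, 8, 30, 6, 25]          # illustrative cell counts
      df = pd.DataFrame(cells, columns=["X", "Y", "Z"])
      df["count"] = counts

      fit = smf.glm("count ~ C(X)*C(Z) + C(Y)*C(Z)", data=df,
                    family=sm.families.Poisson()).fit()

      # The residual deviance is the likelihood-ratio statistic against the saturated
      # model; compare it to a chi-square distribution with the matching degrees of freedom.
      print(f"deviance = {fit.deviance:.2f} on {fit.df_resid} df")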

  7. Sensitivity studies of different aerosol indirect effects in mixed-phase clouds

    Science.gov (United States)

    Lohmann, U.; Hoose, C.

    2009-11-01

    Aerosols affect the climate system by changing cloud characteristics. Using the global climate model ECHAM5-HAM, we investigate different aerosol effects on mixed-phase clouds: the glaciation effect, which refers to more frequent glaciation due to anthropogenic aerosols, versus the de-activation effect, which suggests that ice nuclei become less effective because of an anthropogenic sulfate coating. The glaciation effect can partly offset the indirect aerosol effect on warm clouds and thus causes the total anthropogenic aerosol effect to be smaller. It is investigated by varying the parameterization for the Bergeron-Findeisen process and the threshold coating thickness of sulfate (SO4-crit), which is required to convert an externally mixed aerosol particle into an internally mixed particle. Differences in the net radiation at the top of the atmosphere due to anthropogenic aerosols between the different sensitivity studies amount to up to 0.5 W m⁻². This suggests that the investigated mixed-phase processes have a major effect on the total anthropogenic aerosol effect.

  8. Linear summation of outputs in a balanced network model of motor cortex.

    Science.gov (United States)

    Capaday, Charles; van Vreeswijk, Carl

    2015-01-01

    Given the non-linearities of the neural circuitry's elements, we would expect cortical circuits to respond non-linearly when activated. Surprisingly, when two points in the motor cortex are activated simultaneously, the EMG responses are the linear sum of the responses evoked by each of the points activated separately. Additionally, the corticospinal transfer function is close to linear, implying that the synaptic interactions in motor cortex must be effectively linear. To account for this, here we develop a model of motor cortex composed of multiple interconnected points, each comprised of reciprocally connected excitatory and inhibitory neurons. We show how non-linearities in neuronal transfer functions are eschewed by strong synaptic interactions within each point. Consequently, the simultaneous activation of multiple points results in a linear summation of their respective outputs. We also consider the effects of reduction of inhibition at a cortical point when one or more surrounding points are active. The network response in this condition is linear over an approximately two- to three-fold decrease of inhibitory feedback strength. This result supports the idea that focal disinhibition allows linear coupling of motor cortical points to generate movement related muscle activation patterns; albeit with a limitation on gain control. The model also explains why neural activity does not spread as far out as the axonal connectivity allows, whilst also explaining why distant cortical points can be, nonetheless, functionally coupled by focal disinhibition. Finally, we discuss the advantages that linear interactions at the cortical level afford to motor command synthesis.
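
    A reduced caricature of the mechanism described above, not the authors' multi-point model: a single cortical point with reciprocally connected excitatory and inhibitory rate units and a strongly non-linear transfer function. With a large coupling gain the steady-state output tracks the linear "balanced" prediction, which is the linearization that underlies the summation result. All parameter values below are assumed.

      # Sketch of how strong, balanced E-I coupling linearizes the input-output relation
      # of one cortical point despite a non-linear unit transfer phi(x) = max(x, 0)**2.
      import numpy as np
      from scipy.optimize import root

      g = 100.0                                   # overall synaptic gain (assumed large)
      W = np.array([[1.0, -1.5],                  # recurrent weights: [E<-E, E<-I]
                    [1.0, -1.0]])                 #                    [I<-E, I<-I]
      phi = lambda x: np.maximum(x, 0.0) ** 2     # non-linear neuronal transfer function

      def steady_state(i_ext):
          """Solve u = W*phi(g*u) + i_ext for the net inputs u; return rates phi(g*u)."""
          r_balanced = np.linalg.solve(W, -i_ext)          # linear prediction: W r + i = 0
          u0 = np.sqrt(np.maximum(r_balanced, 0.0)) / g    # consistent initial guess
          sol = root(lambda u: u - W @ phi(g * u) - i_ext, u0)
          return phi(g * sol.x), r_balanced

      for drive in (1.0, 1.5, 2.0):
          rates, linear_pred = steady_state(np.array([drive, 0.5]))
          print(f"drive to E = {drive:.1f}:  r_E = {rates[0]:.3f}  "
                f"(linear prediction {linear_pred[0]:.3f})")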

  9. Jet mixing long horizontal storage tanks

    International Nuclear Information System (INIS)

    Perona, J.J.; Hylton, T.D.; Youngblood, E.L.; Cummins, R.L.

    1994-12-01

    Large storage tanks may require mixing to achieve homogeneity of contents for several reasons: prior to sampling for mass balance purposes, for blending in reagents, for suspending settled solids for removal, or for use as a feed tank to a process. At ORNL, mixed waste evaporator concentrates are stored in 50,000-gal tanks, about 12 ft in diameter and 60 ft long. This tank configuration has the advantage of permitting transport by truck and therefore fabrication in the shop rather than in the field. Jet mixing experiments were carried out on two model tanks: a 230-gal (1/6-linear-scale) Plexiglas tank and a 25,000-gal tank (about 2/3 linear scale). Mixing times were measured using sodium chloride tracer and several conductivity probes distributed through the tanks. Several jet sizes and configurations were tested. One-directional and two-directional jets were tested in both tanks. Mixing times for each tank were correlated with the jet Reynolds number. Mixing times were correlated for the two tank sizes using the recirculation time for the developed jet. When the recirculation times were calculated using the distance from the nozzle to the end of the tank as the length of the developed jet, the correlation was only marginally successful. Data for the two tank sizes were correlated empirically using a modified effective jet length expressed as a function of the Reynolds number raised to the 1/3 power. Mixing experiments were simulated using the TEMTEST computer program. The simulations predicted trends correctly and were within the scatter of the experimental data with the lower jet Reynolds numbers. Agreement was not as good at high Reynolds numbers except for single nozzles in the 25,000-gal tank, where agreement was excellent over the entire range
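
    A back-of-the-envelope sketch of the quantities the correlation above is built on (not the ORNL analysis): the jet Reynolds number and a recirculation-time estimate for a nozzle discharging along the tank, using the Ricou-Spalding entrainment law Q(x)/Q0 ≈ 0.32·x/d for a round turbulent jet of the same density as the tank fluid. The geometry and jet conditions below are illustrative stand-ins for the 25,000-gal tank.

      # Illustrative estimate only; nozzle size, jet velocity and fluid properties are assumed.
      import math

      d = 0.05                      # nozzle diameter [m]
      u_jet = 6.0                   # jet exit velocity [m/s]
      rho, mu = 1000.0, 1.0e-3      # water-like density [kg/m^3] and viscosity [Pa*s]
      V_tank = 95.0                 # tank volume [m^3], roughly 25,000 gal
      L = 18.0                      # nozzle-to-far-end distance [m], roughly 60 ft

      Re_jet = rho * u_jet * d / mu
      Q0 = u_jet * math.pi * d**2 / 4.0            # nozzle volumetric flow [m^3/s]
      Q_developed = 0.32 * (L / d) * Q0            # entrained flow at the end of the developed jet
      t_recirc = V_tank / Q_developed              # one recirculation pass [s]

      print(f"jet Reynolds number ~ {Re_jet:.2e}")
      print(f"recirculation time  ~ {t_recirc / 60:.1f} min")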

  10. [Spatial heterogeneity in body condition of small yellow croaker in Yellow Sea and East China Sea based on mixed-effects model and quantile regression analysis].

    Science.gov (United States)

    Liu, Zun-Lei; Yuan, Xing-Wei; Yan, Li-Ping; Yang, Lin-Lin; Cheng, Jia-Hua

    2013-09-01

    By using the 2008-2010 investigation data about the body condition of small yellow croaker in the offshore waters of southern Yellow Sea (SYS), open waters of northern East China Sea (NECS), and offshore waters of middle East China Sea (MECS), this paper analyzed the spatial heterogeneity of the body length-body mass relationships of juvenile and adult small yellow croakers using mean regression and quantile regression models. The results showed that the residual standard errors from the analysis of covariance (ANCOVA) and the linear mixed-effects model were similar, and those from the simple linear regression were the highest. For the juvenile small yellow croakers, their mean body mass in SYS and NECS estimated by the mixed-effects mean regression model was higher than the overall average mass across the three regions, while the mean body mass in MECS was below the overall average. For the adult small yellow croakers, their mean body mass in NECS was higher than the overall average, while the mean body mass in SYS and MECS was below the overall average. The results from quantile regression indicated substantial differences in the allometric relationships of juvenile small yellow croakers between SYS, NECS, and MECS, with the estimated mean exponent of the allometric relationship in SYS being 2.85, and the interquartile range being from 2.63 to 2.96, which indicated the heterogeneity of body form. The results from ANCOVA showed that the allometric body length-body mass relationships were significantly different between the 25th and 75th percentile exponent values (F=6.38, df=1737, P<0.01) and the 25th percentile and median exponent values (F=2.35, df=1737, P=0.039). The relationship was marginally different between the median and 75th percentile exponent values (F=2.21, df=1737, P=0.051). The estimated body length-body mass exponent of adult small yellow croakers in SYS was 3.01 (10th and 95th percentiles = 2.77 and 3.1, respectively). The
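
    The two modelling approaches named above, a linear mixed-effects mean regression with a regional random effect and quantile regression of the allometric relationship log W = log a + b·log L, can be sketched as follows; the data are simulated and the regional offsets are assumptions, so the output is purely illustrative.

      # Hedged illustration (simulated data, not the paper's measurements) of a mixed-effects
      # mean regression and quantile regressions of a length-mass allometry on log scales.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      regions = ["SYS", "NECS", "MECS"]
      n = 300
      df = pd.DataFrame({
          "region": rng.choice(regions, size=n),
          "logL": rng.uniform(np.log(8), np.log(20), size=n),    # made-up body lengths
      })
      region_offset = {"SYS": 0.05, "NECS": 0.03, "MECS": -0.06}  # assumed regional deviations
      df["logW"] = (-4.5 + df["region"].map(region_offset)
                    + 2.9 * df["logL"] + rng.normal(0, 0.1, size=n))

      # Mixed-effects mean regression: common allometric slope, random intercept per region
      # (only three regions here, so the random-effect variance is weakly identified).
      mixed = smf.mixedlm("logW ~ logL", df, groups=df["region"]).fit()
      print(f"mixed-model allometric exponent b = {mixed.params['logL']:.2f}")

      # Quantile regression: allometric exponent at the 25th, 50th and 75th percentiles.
      for q in (0.25, 0.5, 0.75):
          fit = smf.quantreg("logW ~ logL", df).fit(q=q)
          print(f"q = {q:.2f}: exponent b = {fit.params['logL']:.2f}")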

  11. Modeling digital switching circuits with linear algebra

    CERN Document Server

    Thornton, Mitchell A

    2014-01-01

    Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transf
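
    A small sketch of the vector/matrix view of switching circuits summarized above: logic values become basis vectors and gates become transfer matrices acting on Kronecker products of their inputs. The particular matrix encodings below are an illustrative choice consistent with that idea, not necessarily the book's exact conventions.

      # Bits as vectors, gates as linear transformation matrices (illustrative encoding).
      import numpy as np

      ZERO = np.array([1, 0])          # logic 0 as a basis vector
      ONE  = np.array([0, 1])          # logic 1 as a basis vector

      NOT = np.array([[0, 1],
                      [1, 0]])         # swaps the two basis vectors

      AND = np.array([[1, 1, 1, 0],    # maps the 4-dim joint input space to the 2-dim output
                      [0, 0, 0, 1]])

      OR  = np.array([[1, 0, 0, 0],
                      [0, 1, 1, 1]])

      def joint(a, b):
          """Joint state of two input bits as a Kronecker product."""
          return np.kron(a, b)

      # Truth-table check: AND yields ONE only for input (1, 1).
      for a, name_a in [(ZERO, "0"), (ONE, "1")]:
          for b, name_b in [(ZERO, "0"), (ONE, "1")]:
              out = AND @ joint(a, b)
              print(f"{name_a} AND {name_b} -> {out}   (ONE? {np.array_equal(out, ONE)})")

      # Gates compose by matrix multiplication, e.g. NAND = NOT composed with AND:
      NAND = NOT @ AND
      print("NAND(1,1) =", NAND @ joint(ONE, ONE))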

  12. On the chiral phase transition in the linear sigma model

    International Nuclear Information System (INIS)

    Tran Huu Phat; Nguyen Tuan Anh; Le Viet Hoa

    2003-01-01

    The Cornwall-Jackiw-Tomboulis (CJT) effective action for composite operators at finite temperature is used to investigate the chiral phase transition within the framework of the linear sigma model as the low-energy effective model of quantum chromodynamics (QCD). A new renormalization prescription for the CJT effective action in the Hartree-Fock (HF) approximation is proposed. A numerical study, which incorporates both thermal and quantum effects, shows that in this approximation the phase transition is of first order. However, when the contribution of higher-loop diagrams is taken into account, the order of the phase transition is unchanged. (author)

  13. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  14. Variable-property effects in laminar aiding and opposing mixed convection of air in vertical tubes

    International Nuclear Information System (INIS)

    Nesreddine, H.; Galanis, N.; Nguyen, C.T.

    1997-01-01

    Mixed convection flow in tubes is encountered in many engineering applications, such as solar collectors, nuclear reactors, and compact heat exchangers. Here, a numerical investigation has been conducted in order to determine the effects of variable properties on the flow pattern and heat transfer performance in laminar developing ascending flow with mixed convection for two cases: in case 1 the fluid is heated, and in case 2 it is cooled. Calculations are performed for air at various Grashof numbers with a fixed entrance Reynolds number of 500 using both the Boussinesq approximation (constant-property model) and a variable-property model. In the latter case, the fluid viscosity and thermal conductivity are allowed to vary with absolute temperature according to simple power laws, while the density varies linearly with the temperature, and the heat capacity is assumed to be constant. The comparison between constant- and variable-property models shows a substantial difference in the temperature and velocity fields when the Grashof number |Gr| is increased. The friction factor is seen to be underpredicted by the Boussinesq approximation when the fluid is heated (case 1), while it is overpredicted for the cooling case (case 2). However, the effects on the heat transfer performance remain negligible except for cases with reverse flow. On the whole, the variable-property model predicts flow reversal at lower values of |Gr|, especially for flows with opposing buoyancy forces. The deviation in the results is associated with the difference between the fluid bulk and wall temperatures.
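
    The property laws described above can be written down compactly; the sketch below is only illustrative, with assumed exponents and reference values rather than those used in the study.

      # Assumed variable-property model for air: power laws for viscosity and conductivity,
      # density linear in temperature, constant heat capacity. Exponents and reference
      # values are illustrative, not the study's.
      def air_properties(T, T_ref=300.0, mu_ref=1.85e-5, k_ref=0.026,
                         rho_ref=1.18, beta=1.0 / 300.0):
          """Return (mu, k, rho) of air at absolute temperature T [K]."""
          mu = mu_ref * (T / T_ref) ** 0.7              # viscosity power law (assumed exponent)
          k = k_ref * (T / T_ref) ** 0.8                # conductivity power law (assumed exponent)
          rho = rho_ref * (1.0 - beta * (T - T_ref))    # density linear in temperature
          return mu, k, rho

      for T in (300.0, 350.0, 400.0):
          mu, k, rho = air_properties(T)
          print(f"T = {T:5.1f} K:  mu = {mu:.3e} Pa*s,  k = {k:.4f} W/m-K,  rho = {rho:.3f} kg/m^3")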

  15. Effect of Mixing Process on Polypropylene Modified Bituminous Concrete Mix Properties

    OpenAIRE

    Noor Zainab Habib; Ibrahim Kamaruddin; Madzalan Napiah; Isa Mohd Tan

    2011-01-01

    This paper presents research conducted to investigate the effect of the mixing process on polypropylene (PP) modified bitumen mixed with well-graded aggregate to form a modified bituminous concrete mix. Two modes of mixing, namely dry and wet, with different concentrations of polypropylene were used with 80/100 pen bitumen to evaluate the bituminous concrete mix properties. Three percentages of polymer, varying from 1-3% by weight of bitumen, were used in this study. Three mixes, namely cont...

  16. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  17. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhien [Univ. of Wyoming, Laramie, WY (United States)

    2016-12-13

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainty in the overall cloud feedback. Thus, improving mixed-phase cloud parameterizations in climate models is critical to reducing climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and at improving mixed-phase cloud simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-sensor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profiles for the liquid phase, and IWC, Dge profiles and ice concentration for the ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides the necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Because of the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of the liquid mass partitions are significantly different, owing to much higher ice concentrations in convective mixed-phase clouds. 6) Systematic evaluations

  18. Effect of Mixing Condition on Rheological Behavior of Epoxy-Clay Nanocomposites

    Directory of Open Access Journals (Sweden)

    Gholamhossein Sodeifian

    2012-12-01

    The effect of mixing on the rheological behavior of 6 wt% epoxy-clay nanocomposites was studied. The mixing processes were carried out with a low-shear mixer, a homogenizer and an ultrasonic mixer, and with combinations of these techniques at medium and maximum power. All these methods led to an intercalated structure. The XRD results showed that ultrasonic mixing has the best effect on dispersion, while the low-shear mixer has the least positive effect. In contrast to ultrasonic mixing, homogenization at maximum power does not change the dispersion state significantly. The best condition would be to use an ultrasonic mixer after a homogenizer; the reverse order may result in lower dispersion. Small-amplitude oscillatory measurements were carried out in the linear regime over 0.1-100 Hz. Because rheological responses are very sensitive to polymer-particle interactions and the accessible surface area, the slope of the storage modulus and the shear-thinning exponent of the viscosity are proportional to the level of dispersion. This implies that a larger increase in intergallery height may lead to a smaller terminal slope. The continuous relaxation profile and the zero-shear viscosity were generated from the experimental data via computer software based on a neural network approach. To check the validity of the software, the experimental data were recovered with very low deviation using the relaxation spectrum. The experimental observations showed that solid-like behavior, as a result of better dispersion, can prevent the profile from falling, especially at longer times.
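
    The two rheological dispersion indicators mentioned above, the terminal slope of the storage modulus and the shear-thinning exponent of the complex viscosity, are simply slopes of log-log linear fits over the low-frequency window; the sketch below computes them for made-up data (not the article's measurements).

      # Minimal sketch with illustrative data: log-log slopes as dispersion indicators.
      import numpy as np

      freq = np.array([0.1, 0.16, 0.25, 0.4, 0.63, 1.0])                 # low-frequency window [Hz]
      G_prime = np.array([35., 52., 80., 118., 170., 250.])              # storage modulus [Pa], made up
      eta_star = np.array([4.0e3, 2.9e3, 2.1e3, 1.5e3, 1.1e3, 8.0e2])    # complex viscosity [Pa*s], made up

      terminal_slope = np.polyfit(np.log10(freq), np.log10(G_prime), 1)[0]
      shear_thinning = np.polyfit(np.log10(freq), np.log10(eta_star), 1)[0]

      # For a well-dispersed, more solid-like nanocomposite the terminal slope of G' drops
      # well below the liquid-like value of 2, and the viscosity exponent approaches -1.
      print(f"terminal slope of G'    : {terminal_slope:.2f}")
      print(f"shear-thinning exponent : {shear_thinning:.2f}")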

  19. Cold and hot model investigation of flow and mixing in a multi-jet flare

    Energy Technology Data Exchange (ETDEWEB)

    Pagot, P.R. [Petrobras Petroleo Brasileiro S.A., Rio de Janeiro (Brazil)]; Sobiesiak, A. [Windsor Univ., ON (Canada)]; Grandmaison, E.W. [Queen's Univ., Kingston, ON (Canada). Centre for Advanced Gas Combustion Technology]

    2003-07-01

    The oil and gas industry commonly disposes of hydrocarbon wastes by flaring. This study simulated several features of industrial offshore flares in a multi-jet burner. Cold and hot flow experiments were performed. Twenty-four nozzles mounted on radial arms originating from a central fuel plenum were used in the burner design. In an effort to improve the mixing and radiation characteristics of this type of burner, an examination of the effect of various mixing-altering devices on the nozzle exit ports was performed. Flow visualization studies of the cold and hot flow systems were presented, along with details concerning temperature, gas composition and radiation levels from the burner models. The complex flow pattern resulting when multiple jets are injected into a cross flow stream was demonstrated with the flow visualization studies from the cold model. The trajectory followed by the leading edge jet for the reference case and the ring attachments was higher than, but similar to, that of a simple round jet in a cross flow. The precessing jets and the cone attachments were more strongly deflected by the cross flow with a higher degree of mixing between the jets in the nozzle region. For different firing rates, flow visualization, gas temperature, gas composition and radiative heat flux measurements were performed in the hot model studies. Flame trajectories, projected side view areas and volumes increased with firing rates for all nozzle configurations and the ring attachment flare had the smallest flame volume. The gas temperatures reached maximum values at close to 30 per cent of the flame length and the lowest gas temperature was observed for the flare model with precessing jets. For the reference case nozzle, nitrogen oxide (NOx) concentrations were in the 30 to 45 parts per million (ppm) range. The precessing jet model yielded NOx concentrations in the 22 to 24 ppm range, the lowest obtained. There was a linear dependence between the radiative heat flux from the flames

  20. Modeling molecular mixing in a spatially inhomogeneous turbulent flow

    Science.gov (United States)

    Meyer, Daniel W.; Deb, Rajdeep

    2012-02-01

    Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
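
    For reference, the IEM model named above has a very simple particle form, sketched below with assumed constants and a three-stream initial condition loosely mimicking the ternary mixing scenario; the IECM variant would replace the global mean by a velocity-conditioned mean, and neither sketch reproduces the cited simulations.

      # Hedged sketch of the IEM (interaction by exchange with the mean) mixing model for
      # notional particles carrying one scalar; constants and time scales are assumed.
      import numpy as np

      C_phi = 2.0                  # standard mixing-model constant
      tau = 0.05                   # turbulence (mixing) time scale [s], held constant here
      dt, n_steps = 1.0e-3, 500

      # Three-stream initial condition, loosely mimicking a ternary mixing problem.
      phi = np.concatenate([np.zeros(4000), np.full(3000, 0.5), np.ones(3000)])

      print(f"initial mean = {phi.mean():.3f}, variance = {phi.var():.4f}")
      for _ in range(n_steps):
          # IEM: every particle relaxes toward the mean scalar,
          #   d(phi)/dt = -0.5 * (C_phi / tau) * (phi - <phi>).
          phi += -0.5 * (C_phi / tau) * (phi - phi.mean()) * dt

      print(f"final   mean = {phi.mean():.3f}, variance = {phi.var():.4f}")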