Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models...
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested...
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models: The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
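The Gaussian quadrature mentioned above can be illustrated in a few lines. The sketch below (plain Python with hypothetical function names; it shows the plain, non-adaptive rule rather than the adaptive variant) marginalizes a logistic random-intercept probability using a hardcoded 5-point Gauss-Hermite rule:

```python
import math

# 5-point Gauss-Hermite rule for the integral of exp(-x^2) * f(x) over the real line
GH_NODES = [-2.0201828705, -0.9585724646, 0.0, 0.9585724646, 2.0201828705]
GH_WEIGHTS = [0.0199532421, 0.3936193232, 0.9453087205, 0.3936193232, 0.0199532421]

def gauss_hermite_expect(f):
    """E[f(U)] for U ~ N(0, 1), via the change of variable u = sqrt(2) * x."""
    return sum(w * f(math.sqrt(2.0) * x)
               for x, w in zip(GH_NODES, GH_WEIGHTS)) / math.sqrt(math.pi)

def marginal_bernoulli_prob(beta0, sigma_u):
    """Marginal P(y = 1) in a logistic model with a normal random intercept:
    the integral of logistic(beta0 + sigma_u * u) against the N(0,1) density."""
    return gauss_hermite_expect(
        lambda u: 1.0 / (1.0 + math.exp(-(beta0 + sigma_u * u))))
```

Adaptive quadrature refines this by re-centering and rescaling the nodes around the mode of each cluster's integrand before summing, which matters when the random-effect variance is large or cluster sizes are small.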
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics...
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap data are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
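The Laplace step can be sketched in its simplest form, a one-dimensional random intercept in a Poisson model (pure Python; function and variable names are our own, and this illustrates the generic technique, not the authors' algorithm): Newton-Raphson finds the mode of the integrand, and the random-effect integral is replaced by a Gaussian approximation around that mode.

```python
import math

def cluster_loglik_laplace(y, beta, sigma, tol=1e-10):
    """Laplace approximation to the marginal log-likelihood of one cluster in a
    Poisson GLMM with a single random intercept u ~ N(0, sigma^2):
      log L = log integral of exp(h(u)) du - 0.5*log(2*pi*sigma^2),
    where h(u) = sum_j [y_j*(beta+u) - exp(beta+u) - log(y_j!)] - u^2/(2*sigma^2)."""
    n, s = len(y), sum(y)
    const = -sum(math.lgamma(yj + 1) for yj in y)
    def h(u):
        return s * (beta + u) - n * math.exp(beta + u) + const - u * u / (2 * sigma**2)
    # Newton-Raphson for the mode of h (h is strictly concave here)
    u = 0.0
    for _ in range(100):
        g = s - n * math.exp(beta + u) - u / sigma**2    # h'(u)
        H = -n * math.exp(beta + u) - 1.0 / sigma**2     # h''(u), always negative
        step = g / H
        u -= step
        if abs(step) < tol:
            break
    H = -n * math.exp(beta + u) - 1.0 / sigma**2
    # log integral of exp(h) is approximately h(u_hat) + 0.5*log(2*pi / -h''(u_hat))
    log_integral = h(u) + 0.5 * math.log(2 * math.pi / -H)
    return log_integral - 0.5 * math.log(2 * math.pi * sigma**2)
```

For spatial models the random effect is a high-dimensional Gaussian field, so the scalar Newton step becomes a sparse multivariate one, but the structure of the approximation is the same.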
Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data...
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
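The "explosion" step in (2) is mechanical and easy to sketch. The helper below (plain Python, our own naming, not the %PCFrailty macro) splits one survival record at the hazard cut points into pseudo-Poisson rows, each carrying the 0/1 event indicator as the response and the log of the time at risk as the offset:

```python
import math

def explode(time, event, cuts):
    """Split one survival record at the piecewise-constant-hazard cut points.
    `cuts` are interior boundaries, e.g. [1.0, 2.0] gives pieces
    [0,1), [1,2), [2,inf).  Returns one pseudo-observation per piece entered:
    (piece_index, d, log_exposure), where d is the 0/1 Poisson response and
    log_exposure enters the model as an offset."""
    bounds = [0.0] + list(cuts) + [math.inf]
    rows = []
    for k in range(len(bounds) - 1):
        start, stop = bounds[k], bounds[k + 1]
        if time <= start:          # subject never reached this piece
            break
        exposure = min(time, stop) - start
        d = 1 if (event == 1 and time <= stop) else 0   # event falls in this piece
        rows.append((k, d, math.log(exposure)))
    return rows
```

An event at t = 2.5 with cuts at 1 and 2 becomes three rows with exposures 1, 1 and 0.5; fitting a Poisson GLMM with a piece-specific intercept, a cluster random effect and this offset then reproduces the piecewise-exponential frailty likelihood.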
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
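To convey the flavor of the partitioning step, here is a deliberately simplified split search (plain Python; the actual GLMM tree algorithm instead uses score-based parameter-instability tests and refits a GLMM at each node, so treat this as a toy stand-in): each candidate node model is a constant-probability Bernoulli fit, and the split maximizing the log-likelihood gain is chosen.

```python
import math

def bernoulli_loglik(y):
    """Log-likelihood of 0/1 outcomes under their own ML success probability."""
    n, s = len(y), sum(y)
    if s == 0 or s == n:
        return 0.0
    p = s / n
    return s * math.log(p) + (n - s) * math.log(1 - p)

def best_split(x, y, min_size=2):
    """Exhaustive search for the covariate threshold whose two-node Bernoulli
    fit most improves the log-likelihood -- the split step of a model-based
    recursive-partitioning sketch.  Returns (threshold, gain)."""
    base = bernoulli_loglik(y)
    best = (None, 0.0)
    for cut in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= cut]
        right = [yi for xi, yi in zip(x, y) if xi > cut]
        if len(left) < min_size or len(right) < min_size:
            continue
        gain = bernoulli_loglik(left) + bernoulli_loglik(right) - base
        if gain > best[1]:
            best = (cut, gain)
    return best
```

In the real algorithm the node model would be a GLM whose coefficients capture the treatment effect, the random effects would be estimated globally, and splitting would recurse until no significant parameter instability remains.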
Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models
Directory of Open Access Journals (Sweden)
A. Cabezas
2010-08-01
Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15-year return interval) were examined at one reach of the Middle Ebro River (NE Spain) to elucidate spatial patterns. To achieve this goal, four areas with different geomorphological features located within the study reach were examined using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although the importance of other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
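The BLUP-ridge equivalence invoked here is a standard identity for the working-normal case; sketched below in our own notation (not the paper's), with the penalty determined by the variance components:

```latex
% Random SNP coefficients b ~ N(0, \tau^2 I) in the linear predictor
% \eta = X\beta + Zb.  Conditional on the data, the BLUP solves a
% penalized least-squares problem:
\hat{b}
  = \arg\min_{b}\;
    \lVert y - X\hat{\beta} - Zb \rVert^{2}
    + \frac{\sigma^{2}}{\tau^{2}} \lVert b \rVert^{2}
  = \bigl(Z^{\top} Z + \lambda I\bigr)^{-1} Z^{\top}\bigl(y - X\hat{\beta}\bigr),
\qquad \lambda = \frac{\sigma^{2}}{\tau^{2}} .
```

That is, ridge regression with a penalty estimated from the variance components rather than by cross-validation, which is what makes the approach computationally cheap.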
Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms
International Nuclear Information System (INIS)
Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.
2008-01-01
This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 km × 3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach, and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events.
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
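The role of empirical (sandwich) standard errors can be shown with the simplest possible clustered estimator, a grand mean (plain Python, hypothetical names; real RSF fits apply the same sandwich idea to regression coefficients): the naive SE treats every relocation as independent, while the empirical version aggregates residuals within each animal first.

```python
def robust_se_of_mean(clusters):
    """Empirical ('sandwich') standard error of a grand mean with clustered
    observations, treating each cluster's residual sum as the independent unit.
    Returns (mean, naive_se, robust_se)."""
    all_obs = [x for c in clusters for x in c]
    n = len(all_obs)
    mean = sum(all_obs) / n
    # naive SE: pretends all n observations are independent
    naive_se = (sum((x - mean) ** 2 for x in all_obs) / (n - 1) / n) ** 0.5
    # empirical SE: only the cluster-level residual sums are assumed independent
    cluster_sums = [sum(x - mean for x in c) for c in clusters]
    robust_se = sum(s * s for s in cluster_sums) ** 0.5 / n
    return mean, naive_se, robust_se
```

With strongly within-cluster-correlated data the empirical SE is noticeably larger than the naive one, which is exactly why model-based standard errors understate uncertainty for telemetry data.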
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DEFF Research Database (Denmark)
Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.
2017-01-01
Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...
Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W
2018-03-05
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The encoded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-09-27
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data, the latest CHIS data on CRC screening. The weighted generalized linear mixed-model (WGLIMM) was used to deal with this hierarchical structure data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups (urban, second city, suburban, and town/rural) was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect; regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in second city and suburban groups. Infrequent binge drinking was associated with CRC screening in urban and suburban groups, while current smoking was a protective factor in urban and town/rural groups. Conclusions: Mixed models are useful to deal with clustered survey data. Social and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors...
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection to the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods, including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
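To make the censoring issue concrete, here is a minimal left-censored maximum-likelihood estimator for the simplest possible case (a normal biomarker with known standard deviation, plain Python with stdlib only; an illustration of the censored-likelihood idea, not the MCNR algorithm of the paper): each non-detect contributes the probability of falling below the limit rather than an imputed value.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def censored_ml_mean(observed, n_below, limit, sigma):
    """ML estimate of mu for x ~ N(mu, sigma^2) when, in addition to
    `observed`, `n_below` values fell below the detection `limit`
    (left-censoring).  Each non-detect contributes log Phi((limit - mu)/sigma)
    to the log-likelihood; the concave score equation is solved by bisection."""
    def score(mu):
        z = (limit - mu) / sigma
        den = max(norm_cdf(z), 1e-300)      # guard against underflow
        return (sum(x - mu for x in observed) / sigma**2
                - n_below * norm_pdf(z) / (sigma * den))
    lo = min(observed) - 5.0 * sigma
    hi = max(observed) + 5.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:    # log-likelihood still increasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Unlike substituting half the detection limit, the likelihood uses the exact probability of a non-detect, so the estimate does not depend on an arbitrary fill-in value however far below the limit the true values fall.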
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model with an ordinal response variable and several covariates. The paper has three main parts: the parameter estimation procedure, a simulation, and an application of the model to real data. In the estimation part, the concepts of thresholds, nested random effects, and the computational algorithm are described. Simulated data are built for three conditions to assess the effect of different parameter values of the random effect distributions. The last part is the application of the model to data on poverty in nine districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad-nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation is the sub-district (kecamatan) nested in the district, and districts (kabupaten) are nested in provinces. The simulation results are assessed with the absolute relative bias (ARB) and the relative root mean square error (RRMSE). They show that the province parameters have the highest bias but the most stable RRMSE across all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the application of the model to the data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
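The two simulation metrics, ARB and RRMSE, can be sketched directly from their definitions; the replicated estimates and the true parameter value below are hypothetical:

```python
import math

# Absolute relative bias (ARB) and relative root mean square error (RRMSE)
# of a parameter estimate over simulation replicates. Values are hypothetical.

def arb(estimates, true_value):
    mean_est = sum(estimates) / len(estimates)
    return abs((mean_est - true_value) / true_value)

def rrmse(estimates, true_value):
    mse = sum((e - true_value) ** 2 for e in estimates) / len(estimates)
    return math.sqrt(mse) / abs(true_value)

est = [1.1, 0.9, 1.2, 1.0]   # replicated estimates of one parameter
true = 1.0
print(arb(est, true))        # |1.05 - 1| / 1 = 0.05
print(rrmse(est, true))
```

ARB measures systematic deviation of the average estimate, while RRMSE also folds in the sampling variability, which is why a parameter can show high bias yet stable RRMSE, as reported above.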
Directory of Open Access Journals (Sweden)
Crasmareanu Mircea
2017-12-01
We consider the paracomplex version of the notion of mixed linear spaces introduced by M. Jurchescu in [4], obtained by replacing the complex unit i with the paracomplex unit j, j^2 = 1. The linear algebra of these spaces is studied with a special view towards their morphisms.
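The defining relation j^2 = 1 gives paracomplex (split-complex) numbers a simple multiplication rule; a minimal sketch (the class name and representation are illustrative, not from the paper):

```python
# Paracomplex (split-complex) arithmetic: numbers a + b*j with j*j = +1,
# in contrast to the complex unit i with i*i = -1.
class Paracomplex:
    def __init__(self, a, b):
        self.a, self.b = a, b  # real part a, paracomplex part b

    def __mul__(self, other):
        # (a + b j)(c + d j) = (ac + bd) + (ad + bc) j, since j^2 = 1
        return Paracomplex(self.a * other.a + self.b * other.b,
                           self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}j"

j = Paracomplex(0, 1)
print(j * j)                                   # 1 + 0j: the relation j^2 = +1
print(Paracomplex(1, 2) * Paracomplex(3, 4))   # (3+8) + (4+6)j = 11 + 10j
```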
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data; the second involves response variables of mixed types, combined with repeated measures… The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions…
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models…
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Linear mixed models in sensometrics
DEFF Research Database (Denmark)
Kuznetsova, Alexandra
Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective is the collection of data to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products… Standard statistical software packages can be used for some of these purposes… The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers, as well as the ANOVA modelling part of the Consumer… An open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone. It will represent a major step forward concerning this important problem in modern consumer-driven product development, improving the quality of decision making in Danish as well as international food companies and other companies using the same methods.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence (count) of that particular disease are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated, in such a way that diseases that occur rarely could have shorter survival times, or vice versa. Because of this, joint modelling of these two variables can provide better results than modelling them separately. The authors previously proposed a methodology using generalized linear mixed models (GLMMs), joining the discrete-time hazard model with the Poisson regression model to jointly model survival and count. As the artificial neural network (ANN) has become a most powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and to compare the results with the previously developed GLMM. As the response variables are continuous in nature, a generalized regression neural network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME), and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM.
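A GRNN prediction is a kernel-weighted average of the training responses (the Nadaraya-Watson form); a minimal sketch with hypothetical training pairs and smoothing parameter:

```python
import math

# GRNN point prediction: each training response is weighted by a Gaussian
# kernel on the distance between the query and the training input.
# Training data and the smoothing parameter sigma are hypothetical.

def grnn_predict(x, train_x, train_y, sigma=1.0):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]   # noiseless y = x^2, for illustration only
print(grnn_predict(1.0, xs, ys, sigma=0.3))
```

With a small sigma the prediction at a training point stays close to that point's response; larger sigma smooths more aggressively across neighbours.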
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
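Jensen's inequality, highlighted above, can be illustrated numerically; this sketch uses simulated standard-normal data and the convex function exp (an illustration only, not the authors' method):

```python
import math
import random

# Jensen's inequality for a convex function: E[exp(X)] >= exp(E[X]).
# This is why back-transforming a GLMM's linear predictor through the
# inverse link does not give the mean on the response scale.
# Simulated standard-normal data; illustration only.
random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean_x = sum(x) / len(x)
mean_exp = sum(math.exp(v) for v in x) / len(x)

print(mean_exp >= math.exp(mean_x))   # True: Jensen's inequality
# For X ~ Normal(mu, s^2), E[exp(X)] = exp(mu + s^2/2); here that is exp(0.5)
print(mean_exp)
```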
Linear mixed models for longitudinal data
Molenberghs, Geert
2000-01-01
This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations on the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc…
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Linear versus non-linear supersymmetry, in general
Energy Technology Data Exchange (ETDEWEB)
Ferrara, Sergio [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, UniversityC.L.A.,Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University,Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)
2016-04-12
We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized; its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LMs: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with a spin.
Linear versus non-linear supersymmetry, in general
International Nuclear Information System (INIS)
Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm
2016-01-01
We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized; its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LMs: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with a spin.
Generalized Cross-Gramian for Linear Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
2012-01-01
The cross-gramian is a well-known matrix with embedded controllability and observability information. The cross-gramian is related to the Hankel operator and the Hankel singular values of a linear square system, and it has several interesting properties. These properties make the cross-gramian useful in several applications… For systems that are not square and symmetric, the ordinary cross-gramian does not exist. To cope with this problem, a new generalized cross-gramian is introduced in this paper. In contrast to the ordinary cross-gramian, the generalized cross-gramian can be easily obtained for general linear systems and can therefore be used…
Using generalized linear (mixed) models in HCI
Kaptein, M.C.; Robertson, J; Kaptein, M
2016-01-01
In HCI we often encounter dependent variables which are not (conditionally) normally distributed: we measure response times, mouse clicks, or the number of dialog steps it took a user to complete a task. Furthermore, we often encounter nested or grouped data; users are grouped within companies or…
Generalized non-linear Schroedinger hierarchy
International Nuclear Information System (INIS)
Aratyn, H.; Gomes, J.F.; Zimerman, A.H.
1994-01-01
The importance of studying completely integrable models has become evident in recent years, due to the fact that these models possess an extremely rich algebraic structure, providing the natural setting for the description of solitons. Such models can be described through non-linear differential equations, pseudo-linear operators (Lax formulation), or a matrix formulation. Integrability implies the existence of a conservation law associated with each degree of freedom. Each conserved charge Q_i can be associated with a Hamiltonian, defining a time evolution with respect to a time t_i through the Hamilton equation ∂A/∂t_i = [A, Q_i]. In particular, for a two-dimensional field theory, infinitely many degrees of freedom exist, and consequently infinitely many conservation laws describing the evolution in a space of infinitely many times. The Hamilton equation defines a hierarchy of models which possess an infinite set of conservation laws. This paper studies the generalized non-linear Schroedinger hierarchy.
General solution of linear vector supersymmetry
International Nuclear Information System (INIS)
Blasi, Alberto; Maggiore, Nicola
2007-01-01
We give the general solution of the Ward identity for the linear vector supersymmetry which characterizes all topological models. Such a solution, whose expression is quite compact and simple, greatly simplifies the study of theories displaying a supersymmetric algebraic structure, reducing to a few lines the proof of their possible finiteness. In particular, the cohomology technology, usually involved for the quantum extension of these theories, is completely bypassed. The case of Chern-Simons theory is taken as an example
Identification of general linear mechanical systems
Sirlin, S. W.; Longman, R. W.; Juang, J. N.
1983-01-01
Previous work in identification theory has been concerned with the general first-order time-derivative form. Linear mechanical systems, a large and important class, naturally have a second-order form. This paper utilizes this additional structural information for the purpose of identification. A realization is obtained from input-output data, and then knowledge of the system input, output, and inertia matrices is used to determine a set of linear equations whereby we identify the remaining unknown system matrices. Necessary and sufficient conditions on the number, type, and placement of sensors and actuators are given which guarantee identifiability, and less stringent conditions are given which guarantee generic identifiability. Both a priori and a posteriori identifiability are considered, i.e., identifiability being ensured prior to obtaining data, and identifiability being assured with a given data set.
A property of assignment type mixed integer linear programming problems
Benders, J.F.; van Nunen, J.A.E.E.
1982-01-01
In this paper we prove that rather tight upper bounds can be given for the number of non-unique assignments obtained after solving the linear programming relaxation of some types of mixed integer linear assignment problems. Since in these cases the number of split assignments is…
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi- and conjugate-gradient-based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than with the new program. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
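The core of the solver discussed above is the conjugate gradient iteration itself; a minimal unpreconditioned sketch on a small dense symmetric positive-definite system (the real program works matrix-free via iteration on data; this example is illustrative only):

```python
# Conjugate gradient iteration for the SPD system A x = b, in pure Python.
# The small dense example at the bottom is hypothetical.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]          # residual b - A x, with x = 0 initially
    p = r[:]          # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(conjugate_gradient(A, b))   # approx [1/11, 7/11]
```

Preconditioning replaces the residual update with a preconditioned residual to speed convergence; the iteration-on-data trick in the paper amounts to forming each `matvec` product directly from the records instead of storing the mixed model equations.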
Gravitational Wave in Linear General Relativity
Cubillos, D. J.
2017-07-01
General relativity is the best theory currently available to describe the gravitational interaction. In Albert Einstein's field equations this interaction is described by means of the spacetime curvature generated by the matter-energy content of the universe. Weyl worked on the existence of perturbations of the curvature of spacetime that propagate at the speed of light, which are known as gravitational waves, obtained to first approximation through the linearization of Einstein's field equations. Weyl's solution consists of taking the field equations in a vacuum and perturbing the metric: the Minkowski metric is slightly perturbed by a factor ε greater than zero but much smaller than one. If the feedback effect of the field is neglected, this can be considered a weak-field solution. After introducing the perturbed metric and ignoring terms of order higher than one in ε, we can find the linearized field equations in terms of the perturbation, which can then be expressed as the d'Alembertian operator of the perturbation set equal to zero. This is analogous to the linear wave equation of classical mechanics, and it can be interpreted as saying that gravitational effects propagate as waves at the speed of light. In addition, by studying the motion of a particle affected by this perturbation through the geodesic equation, one can show the transverse character of the gravitational wave and its two possible polarization states. It can be shown that the energy carried by the wave is of the order of 1/c^5, where c is the speed of light, which explains why its effects on matter are very small and very difficult to detect.
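The linearization outlined above leads to the standard wave equation for the metric perturbation; a sketch in conventional notation (the trace-reversed perturbation and the Lorenz gauge are standard assumptions, not stated explicitly in the abstract):

```latex
% Weak-field expansion: g_{\mu\nu} = \eta_{\mu\nu} + \epsilon h_{\mu\nu},
% with 0 < \epsilon \ll 1. Keeping only first-order terms in \epsilon and
% choosing the Lorenz gauge \partial^\mu \bar{h}_{\mu\nu} = 0, the vacuum
% field equations reduce to a wave equation for the trace-reversed
% perturbation \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h:
\Box \bar{h}_{\mu\nu}
  = \left( \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \right) \bar{h}_{\mu\nu}
  = 0
```

This is the d'Alembertian-equals-zero form referred to in the abstract, formally identical to the classical wave equation with propagation speed c.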
Aspects of general linear modelling of migration.
Congdon, P
1992-01-01
"This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data-analytic results from three regression…
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2006-01-01
Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
A mixed integer linear program for an integrated fishery | Hasan ...
African Journals Online (AJOL)
... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...
Model Selection with the Linear Mixed Model for Longitudinal Data
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Generalized Linear Models in Vehicle Insurance
Directory of Open Access Journals (Sweden)
Silvie Kafková
2014-01-01
Actuaries in insurance companies try to find the best model for the estimation of the insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to predict the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. Models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model giving the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for estimating the parameters of a GLM and for the analysis of deviance.
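The AIC comparison mentioned above balances fit against model size; a minimal sketch with hypothetical log-likelihoods and parameter counts, not values from the insurance data:

```python
# Akaike information criterion: AIC = 2k - 2 ln(L), where k is the number of
# estimated parameters and ln(L) the maximized log-likelihood. Lower is better.
# The two "models" below are hypothetical.

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

model_small = aic(log_likelihood=-1510.0, n_params=3)
model_large = aic(log_likelihood=-1500.0, n_params=8)
print(model_small, model_large)   # 3026.0 3016.0: the larger model wins here
```

The penalty term 2k means a richer model is preferred only when its likelihood gain outweighs the extra parameters, which is exactly the trade-off used to select predictors above.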
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can result from the accumulation of different types of genetic mutations, such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. In order to model cancer progression, we propose progression networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with conjunctive Bayesian networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to mixed integer linear programming (MILP). MILP is a non-deterministic polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
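One simple, if computationally naive, alternative to approximating the double Poisson normalizing constant is direct truncation of the infinite sum. This is a hedged sketch of Efron's double Poisson pmf, not the approximation method proposed in the paper; mu and theta values are hypothetical:

```python
import math

# Efron's double Poisson: P(Y=y) proportional to
#   theta^(1/2) * exp(-theta*mu) * (exp(-y) y^y / y!) * (e*mu/y)^(theta*y),
# with the normalizing constant obtained here by truncating the sum.
# mu and theta below are hypothetical; theta < 1 gives over-dispersion.

def dp_log_unnormalized(y, mu, theta):
    base = 0.5 * math.log(theta) - theta * mu
    if y == 0:
        return base
    return (base - y + y * math.log(y) - math.lgamma(y + 1)
            + theta * y * (1.0 + math.log(mu) - math.log(y)))

def dp_pmf(y, mu, theta, truncation=200):
    total = sum(math.exp(dp_log_unnormalized(k, mu, theta))
                for k in range(truncation))
    return math.exp(dp_log_unnormalized(y, mu, theta)) / total

mu, theta = 4.0, 0.7
probs = [dp_pmf(y, mu, theta) for y in range(200)]
print(sum(probs))                            # 1.0 by construction
print(sum(y * p for y, p in enumerate(probs)))   # mean, approximately mu
```

Truncated summation is accurate but expensive inside an estimation loop, which is why a fast closed-form approximation of the constant, as developed in the paper, matters in practice.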
Estimation and Inference for Very Large Linear Mixed Effects Models
Gao, K.; Owen, A. B.
2016-01-01
Linear mixed models with large imbalanced crossed random-effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The cost can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, N can be quite large. Methods that do not account for the correlation structure can...
DEFF Research Database (Denmark)
Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises
2017-01-01
-side strategy, formulated as a general mixed-integer linear programming problem that takes into account two stages for proper charging of the storage units. The model is treated as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour-ahead forecast data...
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to handle many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that LME modeling keeps a balance between the control for false positives and the sensitivity
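As a point of reference for the ICC estimation mentioned in the abstract, the intraclass correlation in the simplest mixed-model layout (one subject-level random intercept, no crossed effects, no confounders) can be estimated from one-way ANOVA mean squares. A hedged numpy sketch on simulated data, not the FMRI pipeline itself; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_rep = 100, 5
sigma_u, sigma_e = 1.0, 1.0       # equal variances -> true ICC = 0.5

# Simulate repeated measurements: y_ij = mu + u_i + e_ij
u = rng.normal(0.0, sigma_u, size=n_subj)
y = 10.0 + u[:, None] + rng.normal(0.0, sigma_e, size=(n_subj, n_rep))

# One-way random-effects ANOVA mean squares
grand = y.mean()
msb = n_rep * ((y.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_rep - 1))

# Method-of-moments variance components and ICC
var_u = (msb - msw) / n_rep
icc = var_u / (var_u + msw)
```

In the LME formulation the same quantity falls out of the fitted variance components (subject variance over total variance), which is why the abstract emphasizes that ICC values come "for free" from the model fit.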
Multipliers on Generalized Mixed Norm Sequence Spaces
Directory of Open Access Journals (Sweden)
Oscar Blasco
2014-01-01
Given $1\le p,q\le\infty$ and sequences of integers $(n_k)_k$ and $(n'_k)_k$ such that $n_k\le n'_k\le n_{k+1}$, the generalized mixed norm space $\ell^{\mathcal{I}}(p,q)$ is defined as the set of sequences $(a_j)_j$ such that $\big(\big(\sum_{j\in I_k}|a_j|^p\big)^{1/p}\big)_k\in\ell^q$, where $I_k=\{j\in\mathbb{N}_0 : n_k\le j$
Generalized 2-vector spaces and general linear 2-groups
Elgueta, Josep
2008-01-01
In this paper a notion of *generalized 2-vector space* is introduced which includes Kapranov and Voevodsky 2-vector spaces. Various kinds of generalized 2-vector spaces are considered and examples are given. The existence of non-free generalized 2-vector spaces and of generalized 2-vector spaces which are non-Karoubian (hence, non-abelian) categories is discussed, and it is shown how any generalized 2-vector space can be identified with a full subcategory of an (abelian) functor category ...
Linear mixing model applied to AVHRR LAC data
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
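The constrained least squares unmixing step described above can be sketched in a few lines. The three endmember spectra below are made-up illustrative numbers, not AVHRR values; the sum-to-one constraint is enforced with a heavily weighted extra equation, and non-negativity through a non-negative least squares solve:

```python
import numpy as np
from scipy.optimize import nnls

# Rows: bands; columns: made-up endmember reflectance spectra
# (vegetation, soil, shade) -- illustrative values only.
E = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02]])

true_frac = np.array([0.6, 0.3, 0.1])   # fractions to recover
pixel = E @ true_frac                   # noise-free mixed pixel

# Augment the system so the fractions are pushed to sum to one,
# then solve subject to non-negativity.
w = 1e3
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(pixel, w)
frac, _ = nnls(A, b)
```

Applied per pixel, `frac` gives the vegetation, soil, and shade fraction images; with noise-free synthetic data the true fractions are recovered exactly.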
Systematic analysis of the impact of mixing locality on Mixing-DAC linearity for multicarrier GSM
Bechthum, E.; Radulov, G.I.; Briaire, J.; Geelen, G.; Roermund, van A.H.M.
2012-01-01
In an RF transmitter, the functions of the mixer and the DAC can be combined in a single block: the Mixing-DAC. For the generation of multicarrier GSM signals in a basestation, high dynamic linearity is required, i.e. SFDR > 85 dBc, at a high output signal frequency, i.e. f_out ≈ 4 GHz. This represents a
Smooth generalized linear models for aggregated data
Ayma Anza, Diego Armando
2016-01-01
Aggregated data commonly appear in areas such as epidemiology, demography, and public health. Generally, the aggregation process is performed to protect the privacy of patients, to facilitate compact presentation, or to make the data comparable with other, coarser datasets. However, this process may hinder the visualization of the distribution underlying the data. It also prohibits the direct analysis of relationships between ag...
Generalized local homology and cohomology for linearly compact modules
International Nuclear Information System (INIS)
Tran Tuan Nam
2006-07-01
We study generalized local homology for linearly compact modules. By duality, we get some properties of generalized local cohomology modules and extend well-known properties of local cohomology of A. Grothendieck. (author)
Linear mixed models a practical guide using statistical software
West, Brady T; Galecki, Andrzej T
2014-01-01
Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...
Double generalized linear compound poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....
Generalized Multicarrier CDMA: Unification and Linear Equalization
Directory of Open Access Journals (Sweden)
Wang Zhengdao
2005-01-01
Relying on block-symbol spreading and judicious design of user codes, this paper builds on the generalized multicarrier (GMC) quasisynchronous CDMA system that is capable of multiuser interference (MUI) elimination and intersymbol interference (ISI) suppression with guaranteed symbol recovery, regardless of the wireless frequency-selective channels. GMC-CDMA affords an all-digital unifying framework, which encompasses single-carrier and several multicarrier (MC) CDMA systems. Besides the unifying framework, it is shown that GMC-CDMA offers flexibility both in full load (maximum number of users allowed by the available bandwidth) and in reduced load settings. A novel blind channel estimation algorithm is also derived. Analytical evaluation and simulations illustrate the superior error performance and flexibility of uncoded GMC-CDMA over competing MC-CDMA alternatives especially in the presence of uplink multipath channels.
Linear models for sound from supersonic reacting mixing layers
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations; this is achieved here by solving the linear parabolized stability equations. As the flow parameters are varied, the instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to have a pronounced effect on the slow-mode radiation by reducing its modal growth.
A novel mixed-synchronization phenomenon in coupled Chua's circuits via non-fragile linear control
International Nuclear Information System (INIS)
Wang Jun-Wei; Ma Qing-Hua; Zeng Li
2011-01-01
Dynamical variables of coupled nonlinear oscillators can exhibit different synchronization patterns depending on the designed coupling scheme. In this paper, a non-fragile linear feedback control strategy with multiplicative controller gain uncertainties is proposed for realizing the mixed-synchronization of Chua's circuits connected in a drive-response configuration. In particular, in the mixed-synchronization regime, different state variables of the response system can evolve into complete synchronization, anti-synchronization and even amplitude death simultaneously with the drive variables for an appropriate choice of scaling matrix. Using Lyapunov stability theory, we derive some sufficient criteria for achieving global mixed-synchronization. It is shown that the desired non-fragile state feedback controller can be constructed by solving a set of linear matrix inequalities (LMIs). Numerical simulations are also provided to demonstrate the effectiveness of the proposed control approach. (general)
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Linear mixing model applied to coarse resolution satellite data
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of the unmixing techniques when applied to coarse resolution data for global studies.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Generalized Linear Models with Applications in Engineering and the Sciences
Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J
2012-01-01
Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma
Generalization of mixed multiscale finite element methods with applications
Energy Technology Data Exchange (ETDEWEB)
Lee, C S [Texas A & M Univ., College Station, TX (United States)
2016-08-01
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted from the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analyses, and exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.
Minimal solution of general dual fuzzy linear systems
International Nuclear Information System (INIS)
Abbasbandy, S.; Otadi, M.; Mosleh, M.
2008-01-01
Fuzzy linear systems of equations play a major role in several applications in various areas such as engineering, physics and economics. In this paper, we investigate the existence of a minimal solution of general dual fuzzy linear equation systems. Two necessary and sufficient conditions for the existence of a minimal solution are given. Also, some examples in engineering and economics are considered.
About one non linear generalization of the compression reflection ...
African Journals Online (AJOL)
Both cases of stage and spiral iterations are considered. A geometrical interpretation of the convergence of the generalized method of iteration is given. The formula for the non-linear generalized compression reflection operator as a function of one variable is obtained.
McDonald Generalized Linear Failure Rate Distribution
Directory of Open Access Journals (Sweden)
Ibrahim Elbatal
2014-10-01
We introduce in this paper a new six-parameter generalized version of the generalized linear failure rate (GLFR) distribution, called the McDonald generalized linear failure rate (McGLFR) distribution. The new distribution is quite flexible and can be used effectively in modeling survival data and reliability problems. It can have constant, decreasing, increasing, upside-down bathtub- and bathtub-shaped failure rate functions depending on its parameters. It includes some well-known lifetime distributions as special sub-models. Some structural properties of the new distribution are studied. Moreover, we discuss maximum likelihood estimation of the unknown parameters of the new model.
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Directory of Open Access Journals (Sweden)
Daniel T. L. Shek
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
Shek, Daniel T L; Ma, Cecilia M S
2011-01-05
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
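A core feature that separates LMM from the GLM approach criticized above is that subject-level effects are shrunken toward the grand mean rather than estimated independently per subject. For a balanced random-intercept model the empirical BLUP has a closed form; a hedged numpy sketch on simulated data (using the known simulation variances in place of REML estimates, and nothing from SPSS or the P.A.T.H.S. data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_wave = 50, 6             # e.g., six waves of longitudinal data
sigma_u, sigma_e = 0.8, 1.2        # subject and residual SDs (illustrative)

# Simulate y_ij = mu + u_i + e_ij
u = rng.normal(0.0, sigma_u, size=n_subj)
y = 5.0 + u[:, None] + rng.normal(0.0, sigma_e, size=(n_subj, n_wave))

# Raw subject deviations from the grand mean
raw_dev = y.mean(axis=1) - y.mean()

# Empirical BLUP for a balanced random-intercept model:
# shrink each subject mean toward the grand mean by the reliability ratio.
shrink = (n_wave * sigma_u**2) / (n_wave * sigma_u**2 + sigma_e**2)
blup = shrink * raw_dev
```

Every predicted random intercept is strictly closer to zero than the corresponding raw deviation, which is the "borrowing strength" behavior that an LMM fit (in SPSS or elsewhere) exhibits.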
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
DEFF Research Database (Denmark)
Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik
2004-01-01
The standard software for non-linear mixed-effects analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed-effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme...
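The kind of ODE system that nlmeODE hands to the mixed-effects machinery can be illustrated with a one-compartment oral-absorption model of the sort used for theophylline. The parameter values below are made-up illustrative numbers, not estimates from the paper; the numerical solution is checked against the closed-form Bateman solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, V, dose = 1.5, 0.08, 30.0, 320.0   # illustrative PK parameters

def pk(t, x):
    """One-compartment model with first-order absorption.

    x[0]: drug amount in the gut, x[1]: plasma concentration.
    """
    a_gut, conc = x
    return [-ka * a_gut, ka * a_gut / V - ke * conc]

t_eval = np.linspace(0.0, 24.0, 97)
sol = solve_ivp(pk, (0.0, 24.0), [dose, 0.0], t_eval=t_eval,
                rtol=1e-8, atol=1e-10)

# Closed-form (Bateman) solution for the same model, for verification
conc_exact = (dose * ka / (V * (ka - ke))) * (np.exp(-ke * t_eval)
                                              - np.exp(-ka * t_eval))
```

In a mixed-effects setting, parameters such as ka, ke, and V would then be given subject-level random effects and estimated jointly across individuals.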
An implicit spectral formula for generalized linear Schroedinger equations
International Nuclear Information System (INIS)
Schulze-Halberg, A.; Garcia-Ravelo, J.; Pena Gil, Jose Juan
2009-01-01
We generalize the semiclassical Bohr–Sommerfeld quantization rule to an exact, implicit spectral formula for linear, generalized Schroedinger equations admitting a discrete spectrum. Special cases include the position-dependent mass Schroedinger equation or the Schroedinger equation for weighted energy. Requiring knowledge of the potential and the solution associated with the lowest spectral value, our formula predicts the complete spectrum in its exact form. (author)
Solution of generalized shifted linear systems with complex symmetric matrices
International Nuclear Information System (INIS)
Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo
2012-01-01
We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green’s function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1–9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126–140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
New Implicit General Linear Method | Ibrahim | Journal of the ...
African Journals Online (AJOL)
A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single-implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Reliability assessment of competing risks with generalized mixed shock models
International Nuclear Information System (INIS)
Rafiee, Koosha; Feng, Qianmei; Coit, David W.
2017-01-01
This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
Neural Generalized Predictive Control of a non-linear Process
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which in the present case has to be done numerically. Therefore, we develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models; a logarithmic scale was used for racing time and a fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, ranging from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale: 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except that correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of the high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
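The scale transformations described above are simple to apply; the sketch below (illustrative Python with made-up numbers, not the study's data or its genetic-evaluation machinery) shows the log and fourth-root maps, one practical advantage of the fourth root over the log being that it is defined at zero earnings.

```python
import math

# Variance-stabilizing transforms applied before fitting linear models:
# a log scale for racing times and a fourth-root scale for earnings.
def log_transform(time_s: float) -> float:
    return math.log(time_s)

def fourth_root(earnings: float) -> float:
    # Unlike the log, the fourth root is defined at zero earnings.
    return earnings ** 0.25

# Made-up, heavily right-skewed single-race earnings.
earnings = [0.0, 16.0, 81.0, 625.0, 10000.0]
transformed = [fourth_root(e) for e in earnings]
print([round(t, 3) for t in transformed])  # [0.0, 2.0, 3.0, 5.0, 10.0]
```

The transformed values span one order of magnitude rather than four, which is the point of the correction for nonnormality.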
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Testing for one Generalized Linear Single Order Parameter
DEFF Research Database (Denmark)
Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe
We examine a linear single-order-parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior work, the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus of the properties of viscous liquids in two ways: (i) The model predicts that following a linear isobaric temperature step, the normalized volume and enthalpy relaxation... responses or extrapolate from measurements of a glassy state away from equilibrium. Starting from a master equation description of inherent dynamics, we calculate the complex thermodynamic response functions. We devise a way of testing for the generalized single-order-parameter model by measuring 3 complex...
Dynamic generalized linear models for monitoring endemic diseases
DEFF Research Database (Denmark)
Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq
2016-01-01
The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...
Linear relativistic gyrokinetic equation in general magnetically confined plasmas
International Nuclear Information System (INIS)
Tsai, S.T.; Van Dam, J.W.; Chen, L.
1983-08-01
The gyrokinetic formalism for linear electromagnetic waves of arbitrary frequency in general magnetic-field configurations is extended to include full relativistic effects. The derivation employs the small adiabaticity parameter ρ/L₀, where ρ is the Larmor radius and L₀ is the equilibrium scale length. The effects of plasma and magnetic-field inhomogeneities and finite Larmor radii are also included.
A general method for enclosing solutions of interval linear equations
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2012-01-01
Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012
General treatment of a non-linear gauge condition
International Nuclear Information System (INIS)
Malleville, C.
1982-06-01
A non-linear gauge condition is presented in the framework of a non-abelian gauge theory broken by the Higgs mechanism. It is shown that this condition, already introduced for the standard SU(2) x U(1) model, can be generalized to any gauge model with the same type of simplification, namely the suppression of any coupling of the form massless gauge boson × massive gauge boson × unphysical Higgs.
Canonical perturbation theory in linearized general relativity theory
International Nuclear Information System (INIS)
Gonzales, R.; Pavlenko, Yu.G.
1986-01-01
Canonical perturbation theory in linearized general relativity is developed. It is shown that the evolution of an arbitrary dynamical quantity, governed by the interaction of particles with gravitational and electromagnetic fields, can be represented as a series, each term of which corresponds to the contribution of a certain spontaneous or induced process. The main concepts of the approach are presented in the weak-gravitational-field approximation.
Electromagnetic axial anomaly in a generalized linear sigma model
Fariborz, Amir H.; Jora, Renata
2017-06-01
We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the two-photon decays of six pseudoscalars: π0(137), π0(1300), η(547), η′(958), η(1295) and η(1760). Our results agree well with the available experimental data.
Solvability of Extended General Strongly Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Balwant Singh Thakur
2013-10-01
In this paper, a new class of extended general strongly mixed variational inequalities is introduced and studied in Hilbert spaces. An existence theorem for solutions is established and, using the resolvent operator technique, a new iterative algorithm for solving the extended general strongly mixed variational inequality is suggested. A convergence result for the iterative sequence generated by the new algorithm is also established.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
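The GRAMMAR strategy referenced above first absorbs relatedness with a mixed model and then tests markers on the decorrelated residuals. A deliberately simplified two-stage sketch in that spirit (family-mean centring stands in for the full kinship variance-covariance structure; all data and names are hypothetical toy values):

```python
from statistics import mean

def grammar_style_residuals(phenotype, family):
    """Stage 1 (simplified): remove a per-family random intercept by
    centring each observation on its family mean."""
    fam_means = {f: mean(p for p, g in zip(phenotype, family) if g == f)
                 for f in set(family)}
    return [p - fam_means[f] for p, f in zip(phenotype, family)]

def marker_effect(residuals, genotype):
    """Stage 2: slope of the residuals on allele count (simple regression)."""
    gbar, rbar = mean(genotype), mean(residuals)
    num = sum((g - gbar) * (r - rbar) for g, r in zip(genotype, residuals))
    den = sum((g - gbar) ** 2 for g in genotype)
    return num / den

# Hypothetical toy data: two families, genotype coded as allele count 0/1/2.
bp = [118.0, 122.0, 135.0, 139.0]   # e.g. systolic blood pressure
fam = ["a", "a", "b", "b"]
snp = [0, 2, 0, 2]

resid = grammar_style_residuals(bp, fam)
print(marker_effect(resid, snp))  # 2.0 per allele on this toy data
```

In the real method the decorrelation comes from a fitted mixed model with a kinship-based random effect, not from family means; the sketch only illustrates the two-stage logic.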
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de; Kühn, Oliver [Institute of Physics, Rostock University, Universitätsplatz 3, 18055 Rostock (Germany)
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. Particular attention in this respect is attracted by the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for a particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
Mixed-Integer Conic Linear Programming: Challenges and Perspectives
2013-10-01
The novel DCCs for MISOCO may be used in branch-and-cut algorithms when solving MISOCO problems. The experimental software CICLO was developed to perform limited, but rigorous computational experiments. The CICLO solver utilizes continuous SOCO solvers, MOSEK, CPLEX or SeDuMi, and builds on the open... submitted Fall 2013. Software: 1. CICLO: Integer conic linear optimization package. Authors: J.C. Góez, T.K. Ralphs, Y. Fu, and T. Terlaky
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
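Methods in this family iterate a projection-based fixed-point map. The sketch below shows only the most basic fixed-step projection iteration for a variational inequality on C = [0, ∞), purely to fix ideas; it is not the paper's self-adaptive scheme, and the operator F and step size are hypothetical choices:

```python
def project(x, lo=0.0):
    """Projection onto the closed convex set C = [lo, infinity)."""
    return max(x, lo)

def solve_vi(F, x0=0.0, step=0.5, iters=200):
    """Fixed-point projection iteration x <- P_C(x - step*F(x)) for the
    variational inequality VI(F, C): find x in C with F(x)*(y - x) >= 0
    for all y in C. Converges for strongly monotone, Lipschitz F and a
    sufficiently small step."""
    x = x0
    for _ in range(iters):
        x = project(x - step * F(x))
    return x

# F strongly monotone: F(x) = x - 2; the VI solution on [0, inf) is x = 2.
print(round(solve_vi(lambda x: x - 2.0), 6))  # 2.0
```

Self-adaptive variants adjust `step` from iteration data instead of fixing it in advance, which is where the cited improvements lie.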
Delta-tilde interpretation of standard linear mixed model results
DEFF Research Database (Denmark)
Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra
2016-01-01
effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights so that they depict what can be seen as approximately the average pairwise... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...
lmerTest Package: Tests in Linear Mixed Effects Models
DEFF Research Database (Denmark)
Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2017-01-01
One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions, providing p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
A Linear Mixed-Effects Model of Wireless Spectrum Occupancy
Directory of Open Access Journals (Sweden)
Pagadarai Srikanth
2010-01-01
We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model and the estimate d′ and its standard error become the "usual" output of the statistical analysis. The d′ for the monadic A-NOT A method is shown to appear as a standard...
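For the 2-AFC protocol the Thurstonian link is available in closed form, pc = Φ(δ/√2), so a point estimate d′ = √2·Φ⁻¹(pc) can be computed by inverting the link directly. A minimal sketch (the GLM formulation described above additionally yields a standard error, which this snippet does not):

```python
import math
from statistics import NormalDist

def dprime_2afc(n_correct: int, n_trials: int) -> float:
    """Invert the 2-AFC psychometric function pc = Phi(d'/sqrt(2))."""
    pc = n_correct / n_trials
    if not 0.5 < pc < 1.0:
        raise ValueError("proportion correct must lie in (0.5, 1) for a finite d'")
    return math.sqrt(2.0) * NormalDist().inv_cdf(pc)

# 75 correct answers out of 100 2-AFC trials.
print(round(dprime_2afc(75, 100), 3))  # 0.954
```

The triangle and 3-AFC protocols have their own (more involved) psychometric functions, which is exactly why casting all four as GLMs with protocol-specific links is convenient.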
A Graphical User Interface to Generalized Linear Models in MATLAB
Directory of Open Access Journals (Sweden)
Peter Dunn
1999-07-01
Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capabilities are also utilized in providing a number of simple residual diagnostic plots.
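GLMLAB itself is MATLAB software; as a language-neutral illustration of what a GLM fitter does internally, the sketch below runs one-predictor logistic regression by Newton-Raphson (equivalently, IRLS for the canonical logit link of the binomial GLM) in plain Python, on hypothetical toy data:

```python
import math

def fit_logistic(x, y, iters=25):
    """One-predictor logistic regression fitted by Newton-Raphson,
    i.e. IRLS for the canonical logit link of the binomial GLM."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # fitted mean
            w = mu * (1.0 - mu)                           # IRLS weight
            g0 += yi - mu                                 # score vector
            g1 += (yi - mu) * xi
            h00 += w                                      # information matrix
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det                 # Newton update
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical toy data: success becomes more likely as x grows.
b0, b1 = fit_logistic([0, 1, 2, 3, 4, 5], [0, 0, 1, 0, 1, 1])
print(b1 > 0)  # True
```

Other GLM families differ only in the weight and working-response formulas; a full package like GLMLAB additionally handles offsets, prior weights and user-defined links.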
International Nuclear Information System (INIS)
Suryawan, Herry P.; Gunarso, Boby
2017-01-01
The generalized mixed fractional Brownian motion is defined by taking linear combinations of a finite number of independent fractional Brownian motions with different Hurst parameters. It is a Gaussian process with stationary increments, possesses the self-similarity property, and, in general, is neither a Markov process nor a martingale. In this paper we study the generalized mixed fractional Brownian motion within the white noise analysis framework. As a main result, we prove that for any spatial dimension and for arbitrary Hurst parameter the self-intersection local times of the generalized mixed fractional Brownian motions, after a suitable renormalization, are well-defined as Hida white noise distributions. The chaos expansions of the self-intersection local times in terms of Wick powers of white noises are also presented.
Renormalization in general theories with inter-generation mixing
International Nuclear Information System (INIS)
Kniehl, Bernd A.; Sirlin, Alberto
2011-11-01
We derive general and explicit expressions for the unrenormalized and renormalized dressed propagators of fermions in parity-nonconserving theories with inter-generation mixing. The mass eigenvalues, the corresponding mass counterterms, and the effect of inter-generation mixing on their determination are discussed. Invoking the Aoki-Hioki-Kawabe-Konuma-Muta renormalization conditions and employing a number of very useful relations from Matrix Algebra, we show explicitly that the renormalized dressed propagators satisfy important physical properties. (orig.)
General mirror pairs for gauged linear sigma models
Energy Technology Data Exchange (ETDEWEB)
Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)
2015-11-05
We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.
Lepton mixing in A_5 family symmetry and generalized CP
International Nuclear Information System (INIS)
Li, Cai-Chang; Ding, Gui-Jun
2015-01-01
We study lepton mixing patterns which can be derived from the A_5 family symmetry and generalized CP. We find five phenomenologically interesting mixing patterns for which one column of the PMNS matrix is (√((5+√5)/10), 1/√(5+√5), 1/√(5+√5))^T (the first column of the golden ratio mixing), (√((5−√5)/10), 1/√(5−√5), 1/√(5−√5))^T (the second column of the golden ratio mixing), (1,1,1)^T/√3 or (√5+1, −2, √5−1)^T/4. The three lepton mixing angles are determined in terms of a single real parameter θ, and agreement with experimental data can be achieved for certain values of θ. The Dirac CP-violating phase is predicted to be trivial or maximal while the Majorana phases are trivial. We construct a supersymmetric model based on the A_5 family symmetry and generalized CP. The lepton mixing is exactly the golden ratio pattern at leading order, and the mixing patterns of case III and case IV are reproduced after higher-order corrections are considered.
Polymorphic Uncertain Linear Programming for Generalized Production Planning Problems
Directory of Open Access Journals (Sweden)
Xinbo Zhang
2014-01-01
Full Text Available A polymorphic uncertain linear programming (PULP model is constructed to formulate a class of generalized production planning problems. In accordance with the practical environment, some factors such as the consumption of raw material, the limitation of resource and the demand of product are incorporated into the model as parameters of interval and fuzzy subsets, respectively. Based on the theory of fuzzy interval program and the modified possibility degree for the order of interval numbers, a deterministic equivalent formulation for this model is derived such that a robust solution for the uncertain optimization problem is obtained. Case study indicates that the constructed model and the proposed solution are useful to search for an optimal production plan for the polymorphic uncertain generalized production planning problems.
Generalized space and linear momentum operators in quantum mechanics
International Nuclear Information System (INIS)
Costa, Bruno G. da; Borges, Ernesto P.
2014-01-01
We propose a modification of a recently introduced generalized translation operator, by including a q-exponential factor, which implies the definition of a Hermitian deformed linear momentum operator p̂_q and its canonically conjugate deformed position operator x̂_q. A canonical transformation leads the Hamiltonian of a position-dependent-mass particle to another Hamiltonian of a particle with constant mass in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual q-derivative. A position-dependent mass confined in an infinite square potential well is shown as an instance. Uncertainty and correspondence principles are analyzed.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
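The ILPs above attack the hard part of maximum parsimony, searching over tree topologies; scoring one fixed tree, by contrast, is solvable exactly in linear time by Fitch's algorithm. A sketch for a single binary character on a toy tree (not from the paper, which solves the much harder tree-search problem):

```python
def fitch_score(tree, leaf_states):
    """Minimum number of state changes needed on a fixed rooted binary tree
    (Fitch's small-parsimony algorithm). `tree` maps each internal node to
    its two children; `leaf_states` maps each leaf to its observed state."""
    changes = 0

    def states(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        s1, s2 = states(left), states(right)
        if s1 & s2:
            return s1 & s2  # children agree: no extra change here
        changes += 1        # children disagree: one mutation is forced
        return s1 | s2

    states("root")
    return changes

# Toy tree ((A,B),(C,D)) with one binary site per leaf.
tree = {"root": ("x", "y"), "x": ("A", "B"), "y": ("C", "D")}
print(fitch_score(tree, {"A": 0, "B": 0, "C": 1, "D": 0}))  # 1
```

Summing this score over all sites gives the parsimony length of a candidate tree, which is the objective the ILP formulations minimize over all trees.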
Directory of Open Access Journals (Sweden)
Hongchun Sun
2012-01-01
For the extended mixed linear complementarity problem (EMLCP), we first present a characterization of the solution set of the EMLCP. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of results for classical linear complementarity problems.
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2014-01-01
In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation … differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used … as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation …
Directory of Open Access Journals (Sweden)
Maoyuan Feng
2014-01-01
This study proposes a mixed integer linear programming (MILP) model to optimize the scheduling of spillways for reservoir flood control. Unlike the conventional reservoir operation model, the proposed MILP model specifies the spillway status (including the number of spillways to be open and the degree to which each spillway is opened) instead of the reservoir release, since the release is in practice controlled through the spillways. A piecewise linear approximation is used to formulate the relationship between the reservoir storage and the water release for a spillway, which is open or closed with a status depicted by a binary variable. The control order and symmetry rules of spillways are described and incorporated into the constraints to meet practical demands. Thus, a MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution of the proposed MILP model. China's Three Gorges Reservoir, which has 80 spillways of five types, is selected as the case study. It is shown that the proposed model decreases the flood risk compared with the conventional operation and makes the operation more practical by specifying the spillway status directly.
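The core idea of the abstract above — choosing binary spillway statuses over time so that the peak reservoir storage is minimised — can be illustrated with a deliberately tiny sketch. All numbers here are invented, and brute-force enumeration stands in for the MILP solver (GAMS/CPLEX in the study):

```python
from itertools import product

# Toy spillway-scheduling problem (all numbers hypothetical): choose how
# many of 3 identical spillways to open in each of 4 periods so that the
# peak reservoir storage is minimised. A real MILP solver replaces this
# brute-force enumeration; the decision variables are the same.
inflow = [50.0, 80.0, 60.0, 30.0]   # inflow volume per period
capacity_per_gate = 20.0            # release per open spillway per period
n_gates, n_periods = 3, 4
initial_storage = 100.0

def peak_storage(schedule):
    """schedule[t] = number of gates open in period t; returns peak storage."""
    storage, peak = initial_storage, initial_storage
    for t in range(n_periods):
        release = min(schedule[t] * capacity_per_gate, storage + inflow[t])
        storage = storage + inflow[t] - release
        peak = max(peak, storage)
    return peak

# Enumerate every integer gate schedule and keep the best one
best = min(product(range(n_gates + 1), repeat=n_periods), key=peak_storage)
print(best, peak_storage(best))
```

With these numbers, the flood peak in period 2 forces all gates open early; the minimiser reaches a peak storage of 110, which no schedule can beat because the inflow of 80 exceeds the maximum release of 60 in that period.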
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
International Nuclear Information System (INIS)
Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo
2014-01-01
A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the retrieved and observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of retrieved component temperatures to fractional cover, the linear mixing model retrieves both soil and vegetation temperatures more accurately under intermediate fractional cover conditions.
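The two-component inversion described above reduces to a small linear least-squares problem: each view angle sees a different vegetation fraction, so multi-angular brightness temperatures over-determine the two unknown component temperatures. A minimal sketch with invented temperatures and fractions:

```python
import numpy as np

# Sketch of the two-component linear mixing idea (illustrative numbers):
# the directional brightness temperature is a cover-weighted sum of the
# soil and vegetation temperatures. With observations at several view
# angles, the two component temperatures follow from least squares.
T_soil_true, T_veg_true = 305.0, 295.0           # K, assumed "truth"
f_veg = np.array([0.3, 0.5, 0.7, 0.8])           # vegetation fraction per view angle

# Forward (mixing) model: T_b = f_veg * T_veg + (1 - f_veg) * T_soil
T_b = f_veg * T_veg_true + (1.0 - f_veg) * T_soil_true

# Inversion: solve [1 - f, f] @ [T_soil, T_veg] = T_b in the least-squares sense
A = np.column_stack([1.0 - f_veg, f_veg])
(T_soil_hat, T_veg_hat), *_ = np.linalg.lstsq(A, T_b, rcond=None)
print(round(T_soil_hat, 1), round(T_veg_hat, 1))  # recovers 305.0, 295.0
```

The abstract's sensitivity finding is visible in this formulation: when the fractions cluster near 0 or 1, the two columns of `A` become nearly collinear and the retrieval degrades, whereas intermediate cover keeps the system well conditioned.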
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
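The one-to-one F-to-R² mapping mentioned in the abstract has a familiar univariate analogue, which this short sketch demonstrates; the mixed-model statistic applies an analogous mapping to an appropriate F statistic for the fixed effects (the degrees of freedom and F value below are illustrative, not from the paper):

```python
# Classical identity behind such an R^2: in the univariate linear model
# the overall F test and R^2 are one-to-one,
#   F = (R^2 / nu1) / ((1 - R^2) / nu2)  <=>  R^2 = nu1*F / (nu1*F + nu2),
# with nu1, nu2 the numerator and denominator degrees of freedom.
def r2_from_f(F, nu1, nu2):
    """Map an F statistic with (nu1, nu2) df to the corresponding R^2."""
    return nu1 * F / (nu1 * F + nu2)

def f_from_r2(r2, nu1, nu2):
    """Inverse mapping, confirming the relationship is one-to-one."""
    return (r2 / nu1) / ((1.0 - r2) / nu2)

F = 4.2                                    # hypothetical F for all fixed effects
r2 = r2_from_f(F, nu1=3, nu2=96)
print(round(r2, 3))                        # small R^2 despite a significant F
```

This is exactly the phenomenon the blood-pressure example illustrates: a highly significant F (tiny p-value) can coexist with a very small R², because significance and strength of association are different quantities.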
Valid statistical approaches for analyzing Sholl data: mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
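The downward bias described above can be demonstrated with simulated clustered data. This sketch (invented parameter values) shows why treating multiple neurons per animal as independent understates the variance of the mean: under intra-class correlation rho with m measurements per cluster, the naive variance formula is off by the design effect 1 + (m − 1)·rho.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "Sholl-like" clustered data: n_animals animals, m neurons
# each, intra-class correlation rho (parameter values are illustrative).
n_animals, m, rho, sigma2 = 20, 10, 0.5, 1.0
animal_effect = rng.normal(0.0, np.sqrt(rho * sigma2), size=(n_animals, 1))
noise = rng.normal(0.0, np.sqrt((1 - rho) * sigma2), size=(n_animals, m))
y = animal_effect + noise                      # clustered measurements

naive_var = y.var(ddof=1) / y.size             # treats all 200 values as independent
deff = 1 + (m - 1) * rho                       # design effect
corrected_var = naive_var * deff               # clustering-aware approximation
cluster_var = y.mean(axis=1).var(ddof=1) / n_animals  # variance via animal means
print(naive_var < cluster_var)                 # naive SE is biased downward
```

The cluster-aware variance is several times the naive one here, which is precisely why simple linear models on such data yield too-small p-values and spurious rejections, while a mixed effects model recovers the correct uncertainty.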
Multivariate statistical modelling based on generalized linear models
Fahrmeir, Ludwig
1994-01-01
This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua
2010-06-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali
2016-09-06
The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher-order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability of the proposed method to localize subsurface scatterers in their true positions and to delineate a vertical fault plane. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua; Wang, Naisyin; Carroll, Raymond J.
2010-01-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
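For the Poisson/log-link case, which the abstract notes is a special case with known formulas, the latent-to-observed-scale mapping is available in closed form via lognormal moments. This sketch (illustrative latent-scale parameters, not from the paper, and independent of the authors' QGglmm package) checks the closed forms by Monte Carlo:

```python
import numpy as np

# Poisson GLMM with log link: latent value l ~ Normal(mu, s2) and
# y | l ~ Poisson(exp(l)). Observed-scale moments follow in closed form:
#   E[y]   = exp(mu + s2 / 2)                   (lognormal mean)
#   Var[y] = E[y] + E[y]^2 * (exp(s2) - 1)      (Poisson + lognormal parts)
mu, s2 = 1.0, 0.25                              # illustrative latent parameters
mean_obs = np.exp(mu + s2 / 2)
var_obs = mean_obs + mean_obs**2 * (np.exp(s2) - 1)

# Monte Carlo check of the closed-form observed-scale mean
rng = np.random.default_rng(1)
latent = rng.normal(mu, np.sqrt(s2), size=1_000_000)
y = rng.poisson(np.exp(latent))
print(abs(y.mean() - mean_obs) < 0.05)
```

The same "integrate the link function over the latent distribution" logic generalises to cases without closed forms, where (as the abstract says) the integrals must be evaluated numerically.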
Solving Fully Fuzzy Linear System of Equations in General Form
Directory of Open Access Journals (Sweden)
A. Yousefzadeh
2012-06-01
In this work, we propose an approach for computing the positive solution of a fully fuzzy linear system where the coefficient matrix is a fuzzy $n\times n$ matrix. To do this, we use the arithmetic operations on fuzzy numbers introduced by Kaufmann and convert the fully fuzzy linear system into two $n\times n$ and $2n\times 2n$ crisp linear systems. If the solutions of these linear systems do not satisfy the positive fuzzy solution condition, we introduce a constrained least squares problem to obtain an optimal fuzzy vector solution by applying the ranking function to the given fully fuzzy linear system. Using our proposed method, the fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
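For the simplest of the uncertainty sets discussed above, the interval (box) set, the robust counterpart has a well-known closed form, which this sketch verifies by sampling. The candidate point and coefficient ranges are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Interval ("box") uncertainty: each left-hand-side coefficient lies in
# [a_nom - a_dev, a_nom + a_dev]. For x >= 0 the robust counterpart of
# a^T x <= b is the worst-case constraint  a_nom^T x + a_dev^T x <= b.
a_nom = np.array([2.0, 3.0])                   # nominal coefficients
a_dev = np.array([0.5, 1.0])                   # maximum deviations
b = 10.0
x = np.array([1.5, 1.0])                       # candidate solution, x >= 0

robust_lhs = a_nom @ x + a_dev @ x             # worst case over the box
samples = a_nom + a_dev * rng.uniform(-1, 1, size=(10_000, 2))
print(robust_lhs <= b, bool(np.all(samples @ x <= robust_lhs + 1e-12)))
```

Because `robust_lhs` is feasible, `x` remains feasible for every coefficient realisation in the box; the tighter sets in the paper (ellipsoidal, polyhedral, and combinations) trade some of this worst-case protection for less conservative solutions.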
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830
Speed sensorless mixed sensitivity linear parameter variant H∞ control of the induction motor
Toth, R.; Fodor, D.
2004-01-01
The paper shows the design of a robust control structure for the speed sensorless vector control of the IM, based on the mixed sensitivity (MS) linear parameter variant (LPV) H∞ control theory. The controller makes possible the direct control of the flux and speed of the motor with torque adaptation
Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets
Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.
2014-01-01
A simple linear mixing model of heterogeneous soil-vegetation system and retrieval of component temperatures from directional remote sensing measurements by inverting this model is evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
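The "implicitly constrained" reparameterization idea above can be seen in miniature with a truncated power basis: continuity (and smoothness up to degree − 1) at the knots holds by construction of the basis, so no explicit side conditions are needed. The knots, degree, and coefficients below are arbitrary illustrations:

```python
import numpy as np

# Truncated power basis for a piecewise polynomial: continuity and smooth
# first derivative at the knots are built into the basis itself, mirroring
# the "implicitly constrained" formulation of fixed-knot regression splines.
def truncated_power_basis(x, knots, degree=2):
    """Columns: 1, x, ..., x^degree, then (x - k)_+^degree for each knot k."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 201)
X = truncated_power_basis(x, knots=[3.0, 7.0])
beta = np.array([1.0, 0.5, -0.1, 0.3, -0.2])   # arbitrary coefficients
y = X @ beta                                   # a piecewise-quadratic curve

# Continuity check at a knot: left and right limits agree automatically
left = truncated_power_basis(np.array([3.0 - 1e-9]), [3.0, 7.0]) @ beta
right = truncated_power_basis(np.array([3.0 + 1e-9]), [3.0, 7.0]) @ beta
print(abs(left[0] - right[0]) < 1e-6)
```

In a mixed-model fit, columns of such a basis would enter the fixed and/or random effects design matrices, which is what makes the approach straightforward to program in standard software.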
Lemmen-Gerdessen, van J.C.; Souverein, O.W.; Veer, van 't P.; Vries, de J.H.M.
2015-01-01
Objective To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Design Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Cavity characterization for general use in linear electron accelerators
International Nuclear Information System (INIS)
Souza Neto, M.V. de.
1985-01-01
The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure, using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. The methods developed and their estimated accuracy were compared with those of other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author) [pt
Mixed Generalized Multiscale Finite Element Methods and Applications
Chung, Eric T.
2015-03-03
In this paper, we present a mixed generalized multiscale finite element method (GMsFEM) for solving flow in heterogeneous media. Our approach constructs multiscale basis functions following a GMsFEM framework and couples these basis functions using a mixed finite element method, which allows us to obtain a mass conservative velocity field. To construct multiscale basis functions for each coarse edge, we design a snapshot space that consists of fine-scale velocity fields supported in a union of two coarse regions that share the common interface. The snapshot vectors have zero Neumann boundary conditions on the outer boundaries, and we prescribe their values on the common interface. We describe several spectral decompositions in the snapshot space motivated by the analysis. In the paper, we also study oversampling approaches that enhance the accuracy of mixed GMsFEM. A main idea of oversampling techniques is to introduce a small dimensional snapshot space. We present numerical results for two-phase flow and transport, without updating basis functions in time. Our numerical results show that one can achieve good accuracy with a few basis functions per coarse edge if one selects appropriate offline spaces. © 2015 Society for Industrial and Applied Mathematics.
Generalization of the linear algebraic method to three dimensions
International Nuclear Information System (INIS)
Lynch, D.L.; Schneider, B.I.
1991-01-01
We present a numerical method for the solution of the Lippmann-Schwinger equation for electron-molecule collisions. By performing a three-dimensional numerical quadrature, this approach avoids both a basis-set representation of the wave function and a partial-wave expansion of the scattering potential. The resulting linear equations, analogous in form to the one-dimensional linear algebraic method, are solved with the direct iteration-variation method. Several numerical examples are presented. The prospect for using this numerical quadrature scheme for electron-polyatomic molecules is discussed
Directory of Open Access Journals (Sweden)
Shangli Zhang
2009-01-01
By using the methods of linear algebra and matrix inequality theory, we obtain a characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions for the estimators of the regression coefficient function to be admissible are established.
Generalizing a Categorization of Students' Interpretations of Linear Kinematics Graphs
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque…
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss
General guidelines solution for linear programming with fuzzy coefficients
Directory of Open Access Journals (Sweden)
Sergio Gerardo de los Cobos Silva
2013-08-01
This work introduces possibilistic programming and fuzzy programming as paradigms for solving linear programming problems in which the coefficients of the model or of its constraints are given as fuzzy numbers rather than exact (crisp) numbers. Some examples based on [1] are presented.
A General Linear Method for Equating with Small Samples
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
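The trade-off the abstract describes — fewer estimated quantities, less sampling error — is easy to see in the two classical linear equating methods, sketched here with invented score samples. Mean-sigma linear equating estimates four moments; mean equating fixes the slope at 1 and estimates only two:

```python
import statistics

# Mean-sigma linear equating: scores x on form X map to the scale of
# form Y via the line matching means and standard deviations,
#   eq(x) = mu_Y + (sigma_Y / sigma_X) * (x - mu_X).
# Mean equating makes the stronger assumption slope = 1, estimating
# fewer quantities, which is why it can win with small samples.
form_x = [12, 15, 18, 20, 22, 25, 28]          # hypothetical small samples
form_y = [14, 16, 20, 21, 24, 26, 31]

mu_x, mu_y = statistics.mean(form_x), statistics.mean(form_y)
sd_x, sd_y = statistics.stdev(form_x), statistics.stdev(form_y)

def equate_linear(x):
    return mu_y + (sd_y / sd_x) * (x - mu_x)

def equate_mean(x):                             # stronger assumption: slope = 1
    return mu_y + (x - mu_x)

print(round(equate_linear(25), 2), round(equate_mean(25), 2))
```

The two methods agree at the mean and diverge in the tails; with small samples the extra variance of the estimated slope `sd_y / sd_x` can outweigh the bias of fixing it at 1.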
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J
2017-09-29
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
Linear and Weakly Nonlinear Instability of Shallow Mixing Layers with Variable Friction
Directory of Open Access Journals (Sweden)
Irina Eglite
2018-01-01
Linear and weakly nonlinear instability of shallow mixing layers is analysed in the present paper. It is assumed that the resistance force varies in the transverse direction. The linear stability problem is solved numerically using a collocation method. It is shown that an increase in the ratio of the friction coefficient in the main channel to that in the floodplain has a stabilizing influence on the flow. The amplitude evolution equation for the most unstable mode (the complex Ginzburg–Landau equation) is derived from the shallow water equations under the rigid-lid assumption. Results of numerical calculations are presented.
Dark energy cosmology with generalized linear equation of state
International Nuclear Information System (INIS)
Babichev, E; Dokuchaev, V; Eroshenko, Yu
2005-01-01
Dark energy with the usually used equation of state p = wρ, where w = const < 0, is hydrodynamically unstable. To overcome this problem, the model is extended to the more general linear equation of state p = α(ρ − ρ₀), where the constants α and ρ₀ are free parameters. This non-homogeneous linear equation of state provides the description of both hydrodynamically stable (α > 0) and unstable (α < 0) fluids. In particular, the considered cosmological model describes the hydrodynamically stable dark (and phantom) energy. The possible types of cosmological scenarios in this model are determined and classified in terms of attractors and unstable points by using phase trajectory analysis. For the dark energy case, some distinctive types of cosmological scenarios are possible: (i) the universe with the de Sitter attractor at late times, (ii) the bouncing universe, (iii) the universe with the big rip and with the anti-big rip. In the framework of a linear equation of state the universe filled with a phantom energy, w < −1, may have either the de Sitter attractor or the big rip
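The de Sitter attractor behaviour for this linear equation of state follows directly from the continuity equation, and can be checked numerically. The sketch below uses illustrative values of α and ρ₀ (not tied to the paper's cases):

```python
import math

# Background evolution for the linear equation of state
# p = alpha * (rho - rho_0): the continuity equation
#   d rho / d ln a = -3 * (rho + p)
# has the closed-form solution
#   rho(a) = alpha*rho_0/(1+alpha) + C * a**(-3*(1+alpha)),
# so for alpha > -1 the density relaxes to a constant (de Sitter-like)
# value alpha*rho_0/(1+alpha). Parameter values are illustrative.
alpha, rho0 = 0.5, 1.0
rho_attractor = alpha * rho0 / (1.0 + alpha)

def rho_exact(a, rho_init):                    # closed form, normalised a_init = 1
    C = rho_init - rho_attractor
    return rho_attractor + C * a ** (-3.0 * (1.0 + alpha))

# Numerical check: Euler-integrate d rho/dN = -3*(rho + p), N = ln a
rho, N, dN = 2.0, 0.0, 1e-4
while N < 5.0:                                 # evolve to a = e^5
    p = alpha * (rho - rho0)
    rho += -3.0 * (rho + p) * dN
    N += dN
print(abs(rho - rho_attractor) < 1e-3)         # density has reached the attractor
```

The same equation with α < −1 changes the sign of the exponent, so perturbations grow instead of decay, which is the route to the big rip scenarios classified in the paper's phase-trajectory analysis.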
Linear dose dependence of ion beam mixing of metals on Si
International Nuclear Information System (INIS)
Poker, D.B.; Appleton, B.R.
1985-01-01
These experiments were conducted to determine the dose dependences of ion beam mixing of various metal-silicon couples. V/Si and Cr/Si were included because these couples were previously suspected of exhibiting a linear dose dependence. Pd/Si was chosen because it had been reported as exhibiting only the square root dependence. Samples were cut from wafers of (100) n-type Si. The samples were cleaned in organic solvents, etched in hydrofluoric acid, and rinsed with methanol before mounting in an oil-free vacuum system for thin-film deposition. Films of Au, V, Cr, or Pd were evaporated onto the Si samples at a nominal deposition rate of 10 Å/s. The thicknesses were large compared with those usually used to measure ion beam mixing and were used to ensure that conditions of unlimited supply were met. Samples were mixed with Si ions ranging in energy from 300 to 375 keV, chosen to produce ion ranges that significantly exceeded the metal film depth. Si was used as the mixing ion to prevent impurity doping of the Si substrate and to exclude a background signal from the Rutherford backscattering (RBS) spectra. Samples were mixed at room temperature, with the exception of the Au/Si samples, which were mixed at liquid nitrogen temperature. The samples were alternately mixed and analyzed in situ without exposure to atmosphere between mixing doses. The compositional distributions after mixing were measured using RBS of 2.5 MeV ⁴He atoms.
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas
2017-10-30
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
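A minimal numerical illustration of the method analysed above: SGD with a fixed stepsize plus a heavy ball momentum term, applied to a quadratic expected loss defined by a consistent linear system (so the iterates can converge linearly, matching the paper's setting). Stepsize, momentum, and problem size are illustrative, not the paper's tuned constants:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stochastic heavy ball on the quadratic expected loss
# f(x) = 0.5 * E[(a^T x - b)^2] with b = a^T x_star (consistent system):
# SGD steps with fixed stepsize, amended by momentum beta * (x_k - x_{k-1}).
d, stepsize, beta = 5, 0.05, 0.5
x_star = np.ones(d)                            # minimiser of the expected loss
x_prev = x = np.zeros(d)

for _ in range(2000):
    a = rng.normal(size=d)                     # fresh stochastic sample each step
    b = a @ x_star
    grad = (a @ x - b) * a                     # stochastic gradient of 0.5*(a@x - b)^2
    x, x_prev = x - stepsize * grad + beta * (x - x_prev), x

print(np.linalg.norm(x - x_star) < 1e-2)       # iterates have reached x_star
```

Because each sampled gradient vanishes at `x_star`, there is no noise floor and the error contracts geometrically, which is the behaviour the linear convergence result formalises.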
Generalizing a categorization of students' interpretations of linear kinematics graphs
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-01-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and ...
Generalizing a categorization of students’ interpretations of linear kinematics graphs
Directory of Open Access Journals (Sweden)
Laurens Bollen
2016-02-01
Full Text Available We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
Generalizing a categorization of students' interpretations of linear kinematics graphs
Bollen, Laurens; De Cock, Mieke; Zuza, Kristina; Guisasola, Jenaro; van Kampen, Paul
2016-06-01
We have investigated whether and how a categorization of responses to questions on linear distance-time graphs, based on a study of Irish students enrolled in an algebra-based course, could be adopted and adapted to responses from students enrolled in calculus-based physics courses at universities in Flanders, Belgium (KU Leuven) and the Basque Country, Spain (University of the Basque Country). We discuss how we adapted the categorization to accommodate a much more diverse student cohort and explain how the prior knowledge of students may account for many differences in the prevalence of approaches and success rates. Although calculus-based physics students make fewer mistakes than algebra-based physics students, they encounter similar difficulties that are often related to incorrectly dividing two coordinates. We verified that a qualitative understanding of kinematics is an important but not sufficient condition for students to determine a correct value for the speed. When comparing responses to questions on linear distance-time graphs with responses to isomorphic questions on linear water level versus time graphs, we observed that the context of a question influences the approach students use. Neither qualitative understanding nor an ability to find the slope of a context-free graph proved to be a reliable predictor for the approach students use when they determine the instantaneous speed.
General solutions of second-order linear difference equations of Euler type
Directory of Open Access Journals (Sweden)
Akane Hongyo
2017-01-01
Full Text Available The purpose of this paper is to give general solutions of linear difference equations which are related to the Euler-Cauchy differential equation \(y^{\prime\prime}+(\lambda/t^2)y=0\) or more general linear differential equations. We also show that the asymptotic behavior of solutions of the linear difference equations is similar to that of solutions of the linear differential equations.
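For the Euler-Cauchy equation in this abstract, power solutions y = t^r exist exactly when r satisfies the indicial equation r(r-1) + λ = 0. A small numeric check (the value of λ below is hypothetical, chosen only for illustration):

```python
import math

def euler_exponents(lam):
    """Roots of the indicial equation r*(r - 1) + lam = 0, which governs
    power solutions y = t**r of y'' + (lam/t**2) * y = 0 (for lam <= 1/4)."""
    s = math.sqrt(1.0 - 4.0 * lam)
    return (1.0 + s) / 2.0, (1.0 - s) / 2.0

def residual(lam, r, t):
    """Evaluate y'' + (lam/t**2) * y at y = t**r using analytic derivatives:
    the result is (r*(r-1) + lam) * t**(r-2), i.e. zero at an indicial root."""
    return r * (r - 1) * t ** (r - 2) + (lam / t ** 2) * t ** r
```

With λ = 0.1875 the roots are 0.75 and 0.25, and the residual vanishes for every t > 0, confirming both power solutions.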
Mixed integer linear programming model for dynamic supplier selection problem considering discounts
Directory of Open Access Journals (Sweden)
Adi Wicaksono Purnawan
2018-01-01
Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, a supplier identified for one period is not necessarily the same for the next period to supply the same product. So, mixed integer linear programming (MILP) was developed to overcome the dynamic supplier selection problem (DSSP). In this paper, a mixed integer linear programming model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved using Lingo 16.
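The paper solves its model with Lingo; as a rough stand-in for the MILP search, a brute-force enumeration over a quantity grid shows how a quantity discount changes the optimal split across suppliers. All numbers and the all-units discount rule below are hypothetical illustrations, not data from the paper:

```python
from itertools import product

def order_cost(demand, caps, prices, discounts, order):
    """Cost of one period's order; each supplier offers an all-units
    discount (threshold, rate) once the quantity reaches its threshold."""
    if sum(order) < demand:
        return float("inf")                      # demand must be covered
    cost = 0.0
    for q, cap, price, (thresh, rate) in zip(order, caps, prices, discounts):
        if q > cap:
            return float("inf")                  # supplier capacity violated
        unit = price * (1.0 - rate) if q >= thresh else price
        cost += unit * q
    return cost

def best_order(demand, caps, prices, discounts, step=10):
    """Brute-force stand-in for the MILP: enumerate quantities on a grid."""
    best_cost, best = float("inf"), None
    for order in product(*(range(0, c + 1, step) for c in caps)):
        c = order_cost(demand, caps, prices, discounts, order)
        if c < best_cost:
            best_cost, best = c, order
    return best_cost, best
```

Note how the discount makes it cheaper to over-order from the discounted supplier than to split the demand proportionally; this non-convexity is exactly why binary variables are needed in the MILP formulation.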
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Mixed problems for linear symmetric hyperbolic systems with characteristic boundary conditions
International Nuclear Information System (INIS)
Secchi, P.
1994-01-01
We consider the initial-boundary value problem for symmetric hyperbolic systems with characteristic boundary of constant multiplicity. In the linear case we give some results about the existence of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary. We also consider the equations of ideal magneto-hydrodynamics under perfectly conducting wall boundary conditions and give some results about the solvability of such a mixed problem. (author). 16 refs
Linear mixed-effects models for central statistical monitoring of multicenter clinical trials
Desmet, L.; Venet, D.; Doffagne, E.; Timmermans, C.; BURZYKOWSKI, Tomasz; LEGRAND, Catherine; BUYSE, Marc
2014-01-01
Multicenter studies are widely used to meet accrual targets in clinical trials. Clinical data monitoring is required to ensure the quality and validity of the data gathered across centers. One approach to this end is central statistical monitoring, which aims at detecting atypical patterns in the data by means of statistical methods. In this context, we consider the simple case of a continuous variable, and we propose a detection procedure based on a linear mixed-effects model to detect locat...
A Mixed Integer Linear Programming Model for the North Atlantic Aircraft Trajectory Planning
Sbihi , Mohammed; Rodionova , Olga; Delahaye , Daniel; Mongeau , Marcel
2015-01-01
International audience; This paper discusses the trajectory planning problem for flights in the North Atlantic oceanic airspace (NAT). We develop a mathematical optimization framework in view of better utilizing available capacity by re-routing aircraft. The model is constructed by discretizing the problem parameters. A mixed integer linear program (MILP) is proposed. Based on the MILP, a heuristic to solve real-size instances is also introduced.
DEFF Research Database (Denmark)
Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero
2015-01-01
In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...
Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril
Temperature sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear chains of ELP (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and at a transition temperature (Tt) will form swollen micelles with the hydrophobic linear chains hidden inside. This system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. The mixed micelles in equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of the H20F, varying the H40L concentration from 5 to 80 μM resulted in a linear growth of the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. Lastly, the VH signal can indicate elongation in the geometry of the particle or could possibly result from anisotropic properties of the micelle core. SLS was used to study the molecular weight and the radius of gyration of the micelles to help identify the structure and morphology of mixed micelles and the tangible cause of the VH signal.
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, accounting for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity between subjects in both the intercept and the slope. Residual autocorrelation was adequately modeled with a first order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
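The fixed-effects backbone of the model above is a cubic regression spline in a truncated power basis. The following is a minimal sketch of that component only (knot placement and coefficients are hypothetical, and a real analysis would add the random effects and autocorrelation via a mixed-model package such as nlme or lme4 rather than this bare least-squares fit):

```python
def spline_basis(t, knots):
    """Truncated power basis of a cubic regression spline:
    1, t, t^2, t^3, then (t - k)_+^3 for each knot k."""
    return [1.0, t, t * t, t ** 3] + [max(0.0, t - k) ** 3 for k in knots]

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_spline(ts, ys, knots):
    """Least-squares fit of the spline coefficients via normal equations."""
    X = [spline_basis(t, knots) for t in ts]
    p = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    return solve(xtx, xty)
```

Fitting noiseless data generated from a known spline recovers the coefficients, which is a useful sanity check before layering on the random-effects structure.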
Mixed H∞ and passive control for linear switched systems via hybrid control approach
Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin
2018-03-01
This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as the mixed weighted H∞ and passivity performance. Then, the hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instant. The design of state updating controllers not only depends on the pre-switching subsystem and the post-switching subsystem, but also depends on the measurable output signal. The hybrid controllers proposed in this paper can include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.
Mixed hyperbolic-second-order-parabolic formulations of general relativity
International Nuclear Information System (INIS)
Paschalidis, Vasileios
2008-01-01
Two new formulations of general relativity are introduced. The first one is a parabolization of the Arnowitt-Deser-Misner formulation and is derived by the addition of combinations of the constraints and their derivatives to the right-hand side of the Arnowitt-Deser-Misner evolution equations. The desirable property of this modification is that it turns the surface of constraints into a local attractor because the constraint propagation equations become second-order parabolic independently of the gauge conditions employed. This system may be classified as mixed hyperbolic--second-order parabolic. The second formulation is a parabolization of the Kidder-Scheel-Teukolsky formulation and is a manifestly mixed strongly hyperbolic--second-order-parabolic set of equations, bearing thus resemblance to the compressible Navier-Stokes equations. As a first test, a stability analysis of flat space is carried out and it is shown that the first modification exponentially damps and smoothes all constraint-violating modes. These systems provide a new basis for constructing schemes for long-term and stable numerical integration of the Einstein field equations.
A general algorithm for computing distance transforms in linear time
Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS
2000-01-01
A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the
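The separable algorithm in the paper handles Euclidean and other metrics; a minimal cousin of the same forward-and-backward scan idea, exact for the city-block (L1) metric, can be sketched as follows (an illustration of the two-scan principle, not the paper's algorithm):

```python
def l1_distance_transform(grid):
    """Exact city-block (L1) distance to the nearest feature cell (value 1),
    computed with one forward and one backward raster scan."""
    INF = 10 ** 9
    h, w = len(grid), len(grid[0])
    d = [[0 if grid[y][x] else INF for x in range(w)] for y in range(h)]
    # forward scan: propagate distances from the top and left neighbours
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward scan: propagate distances from the bottom and right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Both scans together visit each pixel twice, so the transform runs in linear time in the number of pixels, which is the property the paper generalizes to the Euclidean metric.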
Generalized Heisenberg algebra and (non linear) pseudo-bosons
Bagarello, F.; Curado, E. M. F.; Gazeau, J. P.
2018-04-01
We propose a deformed version of the generalized Heisenberg algebra by using techniques borrowed from the theory of pseudo-bosons. In particular, this analysis is relevant when non self-adjoint Hamiltonians are needed to describe a given physical system. We also discuss relations with nonlinear pseudo-bosons. Several examples are discussed.
Generalizing Pooling Functions in CNNs: Mixed, Gated, and Tree.
Lee, Chen-Yu; Gallagher, Patrick; Tu, Zhuowen
2018-04-01
In this paper, we seek to improve deep neural networks by generalizing the pooling operations that play a central role in the current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in: (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets. These benefits come with only a light increase in computational overhead during training (ranging from additional 5 to 15 percent in time complexity) and a very modest increase in the number of model parameters (e.g., additional 1, 9, and 27 parameters for mixed, gated, and 2-level tree pooling operators, respectively). To gain more insights about our proposed pooling methods, we also visualize the learned pooling masks and the embeddings of the internal feature responses for different pooling operations. Our proposed pooling operations are easy to implement and can be applied within various deep neural network architectures.
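The "mixed" operation, the simplest of the three pooling generalizations described above, is a convex combination of max and average pooling. A toy forward pass follows; note that the mixing weight a is fixed by hand here, whereas in the paper it is learned:

```python
def pool2x2(x, mode="mixed", a=0.6):
    """2x2, stride-2 pooling over a 2D feature map; 'mixed' blends the two
    classic operators: out = a * max + (1 - a) * avg."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            win = [x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1]]
            mx, avg = max(win), sum(win) / 4.0
            row.append({"max": mx, "avg": avg,
                        "mixed": a * mx + (1 - a) * avg}[mode])
        out.append(row)
    return out
```

Since the mixed output is differentiable in a, the weight can be trained by backpropagation alongside the network's other parameters, which is the "learning to pool" idea of the paper.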
Energy Technology Data Exchange (ETDEWEB)
Yavari, M., E-mail: yavari@iaukashan.ac.ir [Islamic Azad University, Kashan Branch (Iran, Islamic Republic of)
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
Linear-time general decoding algorithm for the surface code
Darmawan, Andrew S.; Poulin, David
2018-05-01
A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.
The Morava E-theories of finite general linear groups
Mattafirri, Sara
block detector few centimeters in size is used. The resolution significantly improves with increasing energy of the photons and it degrades roughly linearly with increasing distance from the detector; Larger detection efficiency can be obtained at the expenses of resolution or via targeted configurations of the detector. Results pave the way for image reconstruction of practical gamma-ray emitting sources.
Walker, Jeffrey A
2016-01-01
Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation [Formula: see text]-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in
Directory of Open Access Journals (Sweden)
Jeffrey A. Walker
2016-10-01
Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS
International Nuclear Information System (INIS)
Balasubramaniam, P.; Kalpana, M.; Rakkiyappan, R.
2012-01-01
Fuzzy cellular neural networks (FCNNs) are special kinds of cellular neural networks (CNNs). Each cell in an FCNN contains fuzzy operating abilities. The entire network is governed by cellular computing laws. The design of FCNNs is based on fuzzy local rules. In this paper, a linear matrix inequality (LMI) approach for synchronization control of FCNNs with mixed delays is investigated. Mixed delays include discrete time-varying delays and unbounded distributed delays. A dynamic control scheme is proposed to achieve the synchronization between a drive network and a response network. By constructing a Lyapunov-Krasovskii functional that contains a triple-integral term and using the free-weighting matrices method, an improved delay-dependent stability criterion is derived in terms of LMIs. The controller can be easily obtained by solving the derived LMIs. A numerical example and its simulations are presented to illustrate the effectiveness of the proposed method. (interdisciplinary physics and related areas of science and technology)
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, commonly assumed symmetric distributions for model errors are substituted by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning
Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana
2018-01-01
The production process of crude palm oil (CPO) can be defined as the milling process of raw materials, called fresh fruit bunches (FFB), into the end product palm oil. The process usually goes through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have an oil palm plantation; therefore the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations would be appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
An overview of solution methods for multi-objective mixed integer linear programming programs
DEFF Research Database (Denmark)
Andersen, Kim Allan; Stidsen, Thomas Riis
Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. finding the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
International Nuclear Information System (INIS)
Kim, Jin Kyu; Kim, Dong Keon
2016-01-01
A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to a time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.
Stability Criterion of Linear Stochastic Systems Subject to Mixed H2/Passivity Performance
Directory of Open Access Journals (Sweden)
Cheung-Chieh Ku
2015-01-01
Full Text Available The H2 control scheme and passivity theory are applied to investigate the stability criterion of a continuous-time linear stochastic system subject to mixed performance. Based on the stochastic differential equation, the stochastic behaviors can be described as multiplicative noise terms. For the considered system, the H2 control scheme is applied to deal with the problem of minimizing output energy, and the asymptotic stability of the system can be guaranteed under desired initial conditions. Besides, the passivity theory is employed to constrain the effect of external disturbance on the system. Moreover, the Itô formula and a Lyapunov function are used to derive the sufficient conditions, which are converted into linear matrix inequality (LMI) form for applying a convex optimization algorithm. Via solving the sufficient conditions, the state feedback controller can be established such that the asymptotic stability and mixed performance of the system are achieved in the mean square. Finally, the synchronous generator system is used to verify the effectiveness and applicability of the proposed design method.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for the competing risks process and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Canepa, Edward S.
2013-01-01
Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for some decision variable. We use this fact to pose the problem of detecting spoofing cyber-attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber-attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © 2013 IEEE.
Canepa, Edward S.
2013-09-01
Traffic sensing systems rely more and more on user-generated (insecure) data, which can pose a security risk whenever the data are used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first-order scalar conservation law with concave flux function. Given a set of traffic flow data generated by multiple sensors of different types, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for a specific decision variable. We use this fact to pose the problem of detecting spoofing cyber-attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber-attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © American Institute of Mathematical Sciences.
Optimal placement of capacitors in a radial network using conic and mixed integer linear programming
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2008-06-15
This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)
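The phase-II step can be sketched as follows. The bank catalogue and continuous sizes below are invented figures, and exhaustive enumeration stands in for the mixed integer linear program that a real network would require:

```python
import itertools

# Hypothetical phase-II sketch: phase-I returns ideal continuous capacitor
# sizes per candidate node; choose one discrete bank size per node that
# minimizes the L1-norm of deviations from the continuous solution.
catalogue = (0, 150, 300, 450, 600)        # available bank sizes (kvar), invented
continuous_opt = (210.0, 95.0, 480.0)      # phase-I continuous solution, invented

def best_discrete(cont, sizes):
    """Exhaustively minimize sum_i |cont_i - choice_i| over discrete sizes."""
    best, best_cost = None, float("inf")
    for combo in itertools.product(sizes, repeat=len(cont)):
        cost = sum(abs(c - s) for c, s in zip(cont, combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

choice, cost = best_discrete(continuous_opt, catalogue)
```

For three nodes and five sizes the search space is only 125 combinations; the paper's MILP handles the realistically sized version of the same objective.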
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Kyu [School of Architecture and Architectural Engineering, Hanyang University, Ansan (Korea, Republic of); Kim, Dong Keon [Dept. of Architectural Engineering, Dong A University, Busan (Korea, Republic of)
2016-09-15
A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systematically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to the time step for the underlying conservative system. For forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.
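The claimed hallmark of a symplectic scheme for a conservative system can be illustrated with a generic integrator. The implicit midpoint rule below is NOT the paper's mixed convolved action method, just a minimal stand-in showing exact conservation of the quadratic energy of an undamped oscillator over many steps:

```python
# One implicit midpoint step for u'' + omega2 * u = 0, solved in closed
# form for this linear system.  Implicit midpoint is symplectic and
# conserves quadratic invariants such as the oscillator energy exactly.
def midpoint_step(u, v, dt, omega2):
    a = dt / 2.0
    denom = 1.0 + a * a * omega2
    u_new = ((1.0 - a * a * omega2) * u + dt * v) / denom
    v_new = (-dt * omega2 * u + (1.0 - a * a * omega2) * v) / denom
    return u_new, v_new

omega2, dt = 4.0, 0.1            # illustrative stiffness and time step
u, v = 1.0, 0.0                  # initial displacement and velocity
e0 = 0.5 * v**2 + 0.5 * omega2 * u**2
for _ in range(10_000):
    u, v = midpoint_step(u, v, dt, omega2)
energy = 0.5 * v**2 + 0.5 * omega2 * u**2   # equals e0 up to round-off
```

A non-symplectic scheme such as explicit Euler would show the energy drifting monotonically over the same 10,000 steps.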
Model and measurements of linear mixing in thermal IR ground leaving radiance spectra
Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna
2007-10-01
Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
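The mixing model described, subpixel radiances weighted by abundance and emissivity with Planck's law supplying the temperature dependence, can be sketched directly. The fractions, emissivities and temperatures below are illustrative values, not the paper's field targets:

```python
import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI physical constants

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance (Planck's law), W sr^-1 m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

def mixed_radiance(wavelengths, fractions, emissivities, temps):
    """Pixel ground-leaving radiance as an abundance-weighted sum of
    emissivity * Planck(T) per subpixel material (linear mixing sketch)."""
    total = np.zeros_like(wavelengths)
    for f, eps, t in zip(fractions, emissivities, temps):
        total += f * eps * planck(wavelengths, t)
    return total

wl = np.array([10e-6])                                   # 10 um, thermal IR
pure_a = mixed_radiance(wl, [1.0], [0.95], [300.0])      # homogeneous pixel
mix = mixed_radiance(wl, [0.6, 0.4], [0.95, 0.80], [300.0, 310.0])
```

Because Planck radiance is strongly wavelength- and temperature-dependent, the mixed spectrum is not a fixed-weight combination of emissivity spectra, which is exactly why library matching against mixed-pixel radiance is indirect.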
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as the better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
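A minimal simulation in the spirit of the first experiment, with invented coefficients and ordinary least squares standing in for the population pharmacokinetic model: a proxy covariate correlated at roughly r = 0.99 with the true covariate produces an AIC so close that selection can plausibly pick either one across replicates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
weight = rng.normal(70.0, 10.0, n)
# A near-perfectly correlated proxy covariate (plays the role of BSA).
bsa = 0.98 * weight / 70.0 + rng.normal(0.0, 0.02, n)
# The data-generating mechanism uses ONLY weight.
y = 2.0 + 0.05 * weight + rng.normal(0.0, 1.0, n)

def aic_ols(x, y):
    """AIC of the regression y ~ 1 + x under a Gaussian likelihood
    (k = 2 coefficients + 1 variance parameter)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return 2 * 3 - 2 * loglik

aic_weight, aic_bsa = aic_ols(weight, y), aic_ols(bsa, y)
# With correlation this high the two AICs differ by only a few units at
# most, so the "wrong" covariate can win by sampling noise alone.
```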
Non-Linear Impact of the Marketing Mix on Brand Sales Performance
Directory of Open Access Journals (Sweden)
Rafael Barreiros Porto
2015-01-01
The pattern of impact that marketing activities exert on sales has not been well established in the literature. Many studies adopt restricted linear perspectives, disregarding the empirical evidence. This work investigated the non-linear impact of the marketing mix on sales volume and on the numbers of consumers and of purchases per consumer. A longitudinal panel study of brands and their simultaneous consumers was carried out. A total of 121 brands were analyzed over 13 months, with 793 purchases/month made by consumers, by means of three generalized estimating equations. The results indicate that the marketing mix, especially branding and pricing, strongly impacts all the dependent variables in a non-linear fashion, with good parameter fits. The joint effect generates economies of scale for brands, while for each consumer it gradually encourages the purchase of larger quantities. The research demonstrates eight impact patterns of the marketing mix on the investigated indicators, with changes in their order and weight for brands and consumers.
Directory of Open Access Journals (Sweden)
H Kazemipoor
2012-04-01
A multi-skilled project scheduling problem (MSPSP) has generally been presented as scheduling a project with staff members as resources. Each activity in the project network requires different skills, and staff members possess different skills as well. This makes the MSPSP a special type of multi-mode resource-constrained project scheduling problem (MM-RCPSP) with a huge number of modes. Given the importance of this issue, in this paper a mixed integer linear programming formulation for the MSPSP is presented. Due to the complexity of the problem, a meta-heuristic algorithm is proposed in order to find near-optimal solutions. To validate the performance of the algorithm, results are compared against exact solutions obtained with the LINGO solver. The results are promising and show that optimal or near-optimal solutions are derived for small instances, and good solutions for larger instances, in reasonable time.
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
On-line validation of linear process models using generalized likelihood ratios
International Nuclear Information System (INIS)
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
Sensitivity theory for general non-linear algebraic equations with constraints
International Nuclear Information System (INIS)
Oblow, E.M.
1977-04-01
Sensitivity theory has been developed to a high state of sophistication for applications involving solutions of the linear Boltzmann equation or approximations to it. The success of this theory in the field of radiation transport has prompted study of possible extensions of the method to more general systems of non-linear equations. Initial work in the U.S. and in Europe on the reactor fuel cycle shows that the sensitivity methodology works equally well for those non-linear problems studied to date. The general non-linear theory for algebraic equations is summarized and applied to a class of problems whose solutions are characterized by constrained extrema. Such equations form the basis of much work on energy systems modelling and the econometrics of power production and distribution. It is valuable to have a sensitivity theory available for these problem areas since it is difficult to repeatedly solve complex non-linear equations to find out the effects of alternative input assumptions or the uncertainties associated with predictions of system behavior. The sensitivity theory for a linear system of algebraic equations with constraints, which can be solved using linear programming techniques, is discussed. The role of the constraints in simplifying the problem so that sensitivity methodology can be applied is highlighted. The general non-linear method is summarized and applied to a non-linear programming problem in particular. Conclusions are drawn about the applicability of the method for practical problems.
Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.
2014-01-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Martini, Ruud; Kersten, P.H.M.
1983-01-01
Using 1-1 mappings, the complete symmetry groups of contact transformations of general linear second-order ordinary differential equations are determined from two independent solutions of those equations, and applied to the harmonic oscillator with and without damping.
Short communication: Alteration of priors for random effects in Gaussian linear mixed model
DEFF Research Database (Denmark)
Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas
2014-01-01
Several applications (e.g., multiple-trait predictions of lactation yields and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions of random effects and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed that the proposed method can alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images show the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
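The constrained least squares step can be sketched as follows. A common trick, appending a heavily weighted row to impose the sum-to-one constraint softly, is used here; the endmember reflectances and fractions are made up, and the paper's exact constraint handling may differ:

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Least-squares fractions f solving pixel ~ endmembers.T @ f with
    sum(f) = 1 enforced softly by a heavily weighted extra row."""
    E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    d = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(E, d, rcond=None)
    return f

# Two endmembers (rows) observed in three bands; reflectances are invented.
E = np.array([[0.10, 0.20, 0.40],    # vegetation-like spectrum
              [0.30, 0.30, 0.30]])   # soil-like spectrum
true_f = np.array([0.7, 0.3])
pixel = E.T @ true_f                 # noiseless mixed-pixel observation
fractions = unmix(pixel, E)          # recovers ~ (0.7, 0.3)
```

With noisy data the same solve returns the least-squares fractions; a fully constrained model would additionally clip or enforce nonnegativity.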
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
Warped linear mixed models for the genetic analysis of transformed phenotypes.
Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver
2014-09-19
Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
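The warped LMM learns the transformation jointly with the model. As a simpler grounding of the idea it generalizes, the sketch below selects a Box-Cox transformation by profile likelihood on iid data (simulated log-normal, so the optimal exponent is near zero). This is an assumption-laden stand-in, not the authors' estimation procedure:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform for y > 0 (log when lambda = 0)."""
    return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    """Gaussian profile log-likelihood of the transformed data, including
    the Jacobian term (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(y)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(1)
y = np.exp(rng.normal(0.0, 0.5, 500))        # log-normal: true lambda ~ 0
grid = np.linspace(-1.0, 1.5, 26)
best_lam = grid[np.argmax([profile_loglik(y, l) for l in grid])]
```

The warped LMM replaces this fixed grid search with a flexible monotone warping whose parameters are fitted together with the variance components.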
Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures
International Nuclear Information System (INIS)
Liberatore, Laura; Tocci, Cesare; Masiani, Renato
2008-01-01
In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of non-linear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between masonry and r.c. elements, the local and global behavior of the structure, the failure of the connections, and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to adaptive Gaussian quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo
2015-01-01
To analyze the axial displacement of external and internal implant-abutment connection after cyclic loading. Three groups of external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group) were prepared. Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.
Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach
Thomas, C.; Lark, R. M.
2013-12-01
Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
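A minimal sketch of the simulation side of such a model: draw a Gaussian series whose covariance is the sum of an iid (nugget) component and an exponentially correlated component, via a Cholesky factor of the covariance matrix. The parameter values are illustrative, not fitted to the Waverider buoy data:

```python
import numpy as np

def simulate_series(n, mean, sill, range_, nugget, rng):
    """One realization of a Gaussian series with covariance
    nugget * I + sill * exp(-|t_i - t_j| / range_), i.e. an iid random
    component plus an exponentially correlated one (linear mixed model
    structure).  Parameters here are invented, not fitted."""
    t = np.arange(n, dtype=float)
    cov = (nugget * np.eye(n)
           + sill * np.exp(-np.abs(t[:, None] - t[None, :]) / range_))
    L = np.linalg.cholesky(cov)          # cov = L @ L.T
    return mean + L @ rng.standard_normal(n)

rng = np.random.default_rng(42)
# A year of daily synthetic significant wave heights around a 1.8 m mean.
hs = simulate_series(365, mean=1.8, sill=0.4, range_=10.0, nugget=0.05, rng=rng)
```

A periodic (e.g. 12-month) fixed-effect mean and a Box-Cox back-transform would be layered on top of this draw to match the fitted model.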
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
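The distinction at the heart of the abstract, the response at the mean covariate versus the mean response, is easy to demonstrate with a logistic model and invented coefficients:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(7)
x = rng.normal(0.0, 2.0, 10_000)     # a spread-out baseline covariate
beta0, beta1 = -1.0, 1.0             # hypothetical logistic coefficients

# "Model-based mean" as commonly reported: response AT the mean covariate.
resp_at_mean_x = sigmoid(beta0 + beta1 * x.mean())
# Population group mean: MEAN of the responses over the covariate values.
mean_response = sigmoid(beta0 + beta1 * x).mean()
# Because the inverse link is non-linear, the two differ (Jensen's
# inequality); for a linear model (identity link) they would coincide.
```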
Linear Optimization Techniques for Product-Mix of Paints Production in Nigeria
Directory of Open Access Journals (Sweden)
Sulaimon Olanrewaju Adebiyi
2014-02-01
Many paint producers in Nigeria do not lend themselves to flexible production processes, which are important for managing the use of resources for effective optimal production. These goals can be achieved through the application of optimization models in their resource allocation and utilisation. This research focuses on linear optimization for achieving product-mix optimization, in terms of identifying the products and the right quantities, in paint production in Nigeria for better profit and optimum firm performance. The computational experiments in this research contain data and information on unit item costs, unit contribution margins, maximum resource capacities, individual products' absorption rates and other constraints that are particular to each of the five products produced in the company employed as a case study. In the data analysis, a linear programming model was employed with the aid of LINDO 11 software. The results showed that only two of the five products under consideration are profitable. The analysis also revealed the rate at which the company needs to reduce the costs incurred on the three other products before making them profitable for production.
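A toy version of the product-mix analysis, with invented margins and resource capacities rather than the case company's LINDO data: for two products, the linear program can be solved by enumerating constraint-boundary intersections, since the optimum lies at a feasible vertex:

```python
from itertools import combinations

# Hypothetical two-product mix: maximize contribution margin 5x + 4y
# subject to two resource constraints and nonnegativity, written ax+by <= c.
constraints = [(6.0, 4.0, 24.0),     # resource 1 capacity
               (1.0, 2.0, 6.0),      # resource 2 capacity
               (-1.0, 0.0, 0.0),     # x >= 0
               (0.0, -1.0, 0.0)]     # y >= 0

def vertices(cons):
    """Intersect constraint boundaries pairwise and keep feasible points;
    for a 2-variable LP the optimum is attained at one of these vertices."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                  # parallel boundaries: no vertex
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(vertices(constraints), key=lambda p: 5.0 * p[0] + 4.0 * p[1])
```

For the five-product problem in the study, a simplex-based solver such as LINDO replaces this enumeration, but the geometry of the solution is the same.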
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
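The linear dependency cohort = period − age, which is what defeats fixed-effects identification, can be checked numerically on a small design matrix:

```python
import numpy as np

# Toy design: two periods observed at three ages each.
age = np.array([20, 30, 40, 20, 30, 40], dtype=float)
period = np.array([1990, 1990, 1990, 2000, 2000, 2000], dtype=float)
cohort = period - age                      # the exact linear dependency

# Fixed-effects APC design matrix: intercept, age, period, cohort.
X_apc = np.column_stack([np.ones_like(age), age, period, cohort])
X_ap = np.column_stack([np.ones_like(age), age, period])

rank_apc = np.linalg.matrix_rank(X_apc)    # 3, not 4: rank deficient
rank_ap = np.linalg.matrix_rank(X_ap)      # 3: full rank once one term drops
```

Any single linear constraint on the coefficients restores full rank, which is exactly why the estimates then hinge on which constraint was chosen.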
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
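One of the chemical rules such a formulation encodes as linear constraints, that two features explainable as [M+H]+ and [M+Na]+ of the same neutral molecule must be spaced by the Na − H mass difference, can be sketched on its own (the feature masses below are hypothetical):

```python
# Adduct masses in Da (proton; sodium cation, i.e. Na minus one electron).
PROTON, SODIUM = 1.007276, 22.989218

def consistent_pair(mz_a, mz_b, tol=0.005):
    """True if two singly charged features are explainable as [M+H]+ and
    [M+Na]+ of the same neutral molecule M, i.e. spaced by Na - H.
    A full MILP formulation like RAMSI expresses many such rules jointly
    as linear constraints over assignment variables."""
    return abs((mz_b - mz_a) - (SODIUM - PROTON)) < tol

# Hypothetical feature pair for a neutral mass of 180.063 Da:
mz_h = 180.063 + PROTON      # [M+H]+
mz_na = 180.063 + SODIUM     # [M+Na]+
```

The MILP's advantage over such pairwise checks is that it resolves all adduct, fragment and multimer assignments and the chemical formula in one consistent optimization.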
Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2017-04-01
The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which contribute significantly to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and, mainly, at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] labels radar sampling volumes using as a criterion the Euclidean distance to five-dimensional centroids depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied to a selected representative set of radar observations and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1] in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability p_i that the observation corresponds to the hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance d_i of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated-measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
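The modified t-test of Crawford and colleagues (Crawford-Howell), which the article shows to be equivalent to testing a dummy-coded predictor in a linear regression, can be sketched in a few lines; the scores below are hypothetical:

```python
import math
from statistics import mean, stdev

def modified_t(case, controls):
    """Crawford-Howell modified t-test: compare a single case with a
    small control sample. Returns (t statistic, degrees of freedom)."""
    n = len(controls)
    t = (case - mean(controls)) / (stdev(controls) * math.sqrt((n + 1) / n))
    return t, n - 1

# hypothetical data: control sample of n = 5, one single case
t, df = modified_t(20.0, [10, 12, 11, 13, 14])
print(f"t = {t:.3f} on {df} df")  # t = 8/sqrt(3) ≈ 4.619 on 4 df
```

The case score is treated as a draw from the control population, hence the extra sqrt((n+1)/n) inflation relative to a one-sample t-test.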
Abramov, R. V.
2011-12-01
Chaotic multiscale dynamical systems are common in many areas of science, one example being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through rapid mixing at the fast variables, both theoretically and through numerical simulations. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts common intuition: naturally, one would think that coupling a slow, weakly chaotic system with a much faster and much more strongly chaotic system would result in a general increase of chaos at the slow variables.
International Nuclear Information System (INIS)
Shokair, I.R.
1991-01-01
Phase mixing of transverse oscillations changes the nature of the ion hose instability from an absolute to a convective instability. The stronger the phase mixing, the faster an electron beam reaches equilibrium with the guiding ion channel. This is important for long-distance propagation of relativistic electron beams, where it is desired that transverse oscillations phase mix within a few betatron wavelengths of injection and that an equilibrium is subsequently reached with no further beam emittance growth. In the linear regime, phase mixing is well understood and results in asymptotic decay of transverse oscillations as 1/Z² for a Gaussian beam and channel system, Z being the axial distance measured in betatron wavelengths. In the nonlinear regime (the likely mode of propagation for long-pulse beams), results of the spread mass model indicate that phase mixing is considerably weaker than in the linear regime. In this paper we consider the problem of phase mixing in the nonlinear regime. Results of the spread mass model will be shown, along with a simple analysis of phase mixing for multiple-oscillator models. Particle simulations also indicate that phase mixing is weaker in the nonlinear regime than in the linear regime. These results will also be shown. 3 refs., 4 figs
Directory of Open Access Journals (Sweden)
Enrique López Calderón
2012-06-01
Full Text Available The aim of this study was to develop a mixed nectar of high acceptability and low cost. To obtain the mixed nectar, different amounts of passion fruit, sweet pepino and sucrose were considered, completing 100% with water, following a two-stage design: screening (using a 2³ design with 4 center points) and optimization (using a 2² + 2*2 design with 4 center points), stages that allowed exploring a high-acceptability formulation. The technique of Linear Programming was then used to minimize the cost of the high-acceptability nectar. As a result of this process, a mixed nectar of optimal acceptability (score of 7) was obtained when the formulation is between 9 and 14% passion fruit, 4 and 5% sucrose, 73.5% sweet pepino juice, filling with water to 100%. Linear Programming made it possible to reduce the cost of the mixed nectar with optimal acceptability to S/.174 for a production of 1000 L/day.
Mixing height determination using remote sensing systems. General remarks
Energy Technology Data Exchange (ETDEWEB)
Beyrich, F. [BTU Cottbus, LS Umweltmeteorologie, Cottbus (Germany)
1997-10-01
Remote sensing systems can today be considered a real alternative to classical soundings for MH (mixing height) determination. They have the basic advantage of allowing continuous monitoring of the ABL (atmospheric boundary layer). Some technical issues which limit their operational use at present should be solved in the near future (frequency allocation, eye safety, costs). Taking into account specific operating conditions and the requirements formulated above for a sounding system to be used for MH determination, it becomes obvious that none of the available systems meets all of them, i.e., the 'mixing-height meter' does not exist. Therefore, reliable MH determination under a wide variety of conditions can be achieved only by integrating different instruments into a complex sounding system. The S-profiles provide a suitable data base for MH estimation from all types of remote sensing instruments. The criteria used to deduce MH values from these profiles should consider the structure type and the evolution stage of the ABL as well as the shape of the profiles. A certain degree of harmonization concerning these criteria should be achieved. MH values derived automatically from remote sensing data do not yet appear reliable enough for direct operational use; they should in any case be critically examined by a trained analyst. Contemporary mathematical methods (wavelet transforms, fuzzy logic) are expected to allow considerable progress in this field in the near future. (au) 19 refs.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
Optimal Airport Surface Traffic Planning Using Mixed-Integer Linear Programming
Directory of Open Access Journals (Sweden)
P. C. Roling
2008-01-01
Full Text Available We describe an ongoing research effort pertaining to the development of a surface traffic automation system that will help controllers to better coordinate surface traffic movements related to arrival and departure traffic. More specifically, we describe the concept for a taxi-planning support tool that aims to optimize the routing and scheduling of airport surface traffic in such a way as to deconflict the taxi plans while optimizing delay, total taxi-time, or some other airport efficiency metric. Certain input parameters related to resource demand, such as the expected landing times and the expected pushback times, are rather difficult to predict accurately. Due to uncertainty in the input data driving the taxi-planning process, the taxi-planning tool is designed such that it produces solutions that are robust to uncertainty. The taxi-planning concept presented herein, which is based on mixed-integer linear programming, is designed such that it is able to adapt to perturbations in these input conditions, as well as to account for failure in the actual execution of surface trajectories. The capabilities of the tool are illustrated in a simple hypothetical airport.
Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.
Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm
2015-01-01
To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40% shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with an increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
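The selection problem has the flavor of a 0/1 knapsack: each candidate item either enters the food list or not, subject to a length budget, maximising an information objective. A toy sketch with hypothetical items and additive scores (a simplification of the paper's R²/coverage objective), solved by exhaustive enumeration rather than an MILP solver to keep it dependency-free:

```python
from itertools import combinations

# hypothetical candidate items: (name, information score contributed)
items = [("bread", 0.30), ("milk", 0.25), ("apple", 0.10),
         ("cheese", 0.20), ("fish", 0.15)]
max_len = 3  # food-list length budget

# enumerate every subset within the budget; an MILP solver performs the
# same search implicitly and scales to realistic list sizes
best = max((subset
            for r in range(max_len + 1)
            for subset in combinations(items, r)),
           key=lambda s: sum(score for _, score in s))

print([name for name, _ in best])          # ['bread', 'milk', 'cheese']
print(round(sum(s for _, s in best), 2))   # 0.75
```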
Spatiotemporal chaos in mixed linear-nonlinear two-dimensional coupled logistic map lattice
Zhang, Ying-Qian; He, Yi; Wang, Xing-Yuan
2018-01-01
We investigate a new spatiotemporal dynamics that mixes degrees of nonlinear chaotic maps in the spatial coupling connections, based on the 2DCML. The coupling methods include linear neighborhood coupling and nonlinear chaotic-map coupling of lattices, and the former 2DCML system is only a special case of the proposed system. In this paper, criteria such as the Kolmogorov-Sinai entropy density and universality, bifurcation diagrams, space-amplitude diagrams and snapshot pattern diagrams are provided in order to investigate the chaotic behaviors of the proposed system. Furthermore, we also investigate the parameter ranges over which the proposed system holds these features, in comparison with the 2DCML and MLNCML systems. Theoretical analysis and computer simulation indicate that the proposed system exhibits a higher percentage of lattices in chaotic behavior for most parameters, fewer periodic windows in bifurcation diagrams, and a larger range of parameters with chaotic behavior, which makes it more suitable for cryptography.
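A minimal sketch of the baseline system these variants build on, a two-dimensional coupled map lattice with logistic local map f(x) = μx(1−x) and linear four-neighbour coupling (parameter values are illustrative; the paper's nonlinear variant replaces the linear neighbour coupling with a chaotic map of it):

```python
# 2D coupled map lattice, periodic boundaries:
#   x'(i,j) = (1 - eps)*f(x(i,j)) + (eps/4) * sum of f over the 4 neighbours
MU, EPS, N = 3.99, 0.1, 8  # illustrative parameters; MU near 4 is chaotic

def f(x):
    return MU * x * (1.0 - x)  # logistic local map

def step(lat):
    n = len(lat)
    return [[(1 - EPS) * f(lat[i][j]) + (EPS / 4) * (
                f(lat[(i - 1) % n][j]) + f(lat[(i + 1) % n][j]) +
                f(lat[i][(j - 1) % n]) + f(lat[i][(j + 1) % n]))
             for j in range(n)] for i in range(n)]

# deterministic initial condition with values in (0, 1)
lat = [[((3 * i + 5 * j) % 11 + 1) / 12 for j in range(N)] for i in range(N)]
for _ in range(100):
    lat = step(lat)

# each update is a convex combination of f-values, so states stay in [0, 1]
assert all(0.0 <= x <= 1.0 for row in lat for x in row)
```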
A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions
International Nuclear Information System (INIS)
Griffond, J; Soulard, O; Souffland, D
2010-01-01
To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer
2016-06-02
Understanding telomere length maintenance mechanisms is central in cancer biology, as their dysregulation is one of the hallmarks of the immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We packaged our machine learning method into a user-friendly R package that can straightforwardly be applied to similar problems integrating gene-regulator binding information and expression profiles of samples from, e.g., different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Application of mixed-integer linear programming in a car seats assembling process
Directory of Open Access Journals (Sweden)
Jorge Iván Perez Rave
2011-12-01
Full Text Available In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new ways of distributing work that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of scheduling 19 workers under a configuration that merges two workplaces and separates some operations from some workplaces. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
A multiple objective mixed integer linear programming model for power generation expansion planning
Energy Technology Data Exchange (ETDEWEB)
Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)
2004-03-01
Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process (Author)
Further Improvements to Linear Mixed Models for Genome-Wide Association Studies
Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David
2014-11-01
We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
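The GSM in this setting is typically estimated as K = ZZᵀ/m, where Z is the n×m matrix of column-standardized genotypes. A minimal sketch with hypothetical 0/1/2 minor-allele counts (plain Python; real GWAS code would use optimized linear algebra over many thousands of SNPs):

```python
from statistics import mean, pstdev

# hypothetical genotype matrix: rows = individuals, columns = SNPs,
# entries = minor-allele counts (0, 1 or 2)
G = [[0, 2, 1, 2],
     [1, 1, 0, 2],
     [2, 0, 1, 0]]

def gsm(G):
    """Genetic similarity matrix K = Z Z^T / m from column-standardized genotypes."""
    n, m = len(G), len(G[0])
    cols = list(zip(*G))
    Z = [[(G[i][j] - mean(cols[j])) / pstdev(cols[j]) for j in range(m)]
         for i in range(n)]
    return [[sum(Z[a][k] * Z[b][k] for k in range(m)) / m
             for b in range(n)] for a in range(n)]

K = gsm(G)
# K is symmetric; with this standardization its trace equals n
assert all(abs(K[a][b] - K[b][a]) < 1e-12 for a in range(3) for b in range(3))
```

The variants described in the abstract change only which SNP columns enter G (all SNPs vs. phenotype-predictive SNPs, or a mixture of two such GSMs), not this basic construction.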
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best-fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model best captured the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Numerically pricing American options under the generalized mixed fractional Brownian motion model
Chen, Wenting; Yan, Bowen; Lian, Guanghua; Zhang, Ying
2016-06-01
In this paper, we introduce a robust numerical method, based on the upwind scheme, for the pricing of American puts under the generalized mixed fractional Brownian motion (GMFBM) model. By using portfolio analysis and applying the Wick-Itô formula, a partial differential equation (PDE) governing the prices of vanilla options under the GMFBM is successfully derived for the first time. Based on this, we formulate the pricing of American puts under the current model as a linear complementarity problem (LCP). Unlike the classical Black-Scholes (B-S) model or the generalized B-S model discussed in Cen and Le (2011), the newly obtained LCP under the GMFBM model is difficult to solve accurately because of the numerical instability which results from the degeneration of the governing PDE as time approaches zero. To overcome this difficulty, a numerical approach based on the upwind scheme is adopted. It is shown that the coefficient matrix of the current method is an M-matrix, which ensures its stability in the maximum-norm sense. Remarkably, we have managed to provide a sharp theoretical error estimate for the current method, which is further verified numerically. The results of various numerical experiments also suggest that this new approach is quite accurate, and can be easily extended to price other types of financial derivatives with an American-style exercise feature under the GMFBM model.
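The upwind idea the authors build on can be illustrated on the simplest model problem; this generic sketch (illustrative parameters, for the model advection equation u_t + a u_x = 0 rather than the GMFBM pricing PDE) shows why upwinding gives the maximum-norm stability the abstract invokes:

```python
# First-order upwind scheme for u_t + a u_x = 0 with a > 0:
#   u_i^{n+1} = u_i^n - c*(u_i^n - u_{i-1}^n),  c = a*dt/dx  (CFL: c <= 1)
a, dx, dt, nx, steps = 1.0, 0.02, 0.01, 50, 40
c = a * dt / dx  # Courant number, 0.5 here

u = [1.0 if 10 <= i < 20 else 0.0 for i in range(nx)]  # square pulse
for _ in range(steps):
    # u[i - 1] wraps to u[-1] at i = 0, giving periodic boundaries
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(nx)]

# with 0 <= c <= 1 each new value is a convex combination of old values,
# so the scheme is monotone and stable in the maximum norm
assert max(u) <= 1.0 + 1e-12 and min(u) >= -1e-12
```

Differencing against the flow direction (u_i − u_{i−1} for a > 0) is what makes each update a convex combination; a centered difference here would oscillate and break the maximum principle.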
Czech Academy of Sciences Publication Activity Database
Náhlík, Luboš; Šestáková, L.; Hutař, Pavel; Knésl, Zdeněk
2011-01-01
Roč. 452-453, - (2011), s. 445-448 ISSN 1013-9826 R&D Projects: GA AV ČR(CZ) KJB200410803; GA ČR GA101/09/1821 Institutional research plan: CEZ:AV0Z20410507 Keywords : generalized stress intensity factor * bimaterial interface * composite materials * strain energy density factor * fracture criterion * generalized linear elastic fracture mechanics Subject RIV: JL - Materials Fatigue, Friction Mechanics
International Nuclear Information System (INIS)
Frank, T.D.
2002-01-01
We study many particle systems in the context of mean field forces, concentration-dependent diffusion coefficients, generalized equilibrium distributions, and quantum statistics. Using kinetic transport theory and linear nonequilibrium thermodynamics we derive for these systems a generalized multivariate Fokker-Planck equation. It is shown that this Fokker-Planck equation describes relaxation processes, has stationary maximum entropy distributions, can have multiple stationary solutions and stationary solutions that differ from Boltzmann distributions
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.
2008-01-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,
The theory of a general quantum system interacting with a linear dissipative system
International Nuclear Information System (INIS)
Feynman, R.P.; Vernon, F.L.
2000-01-01
A formalism has been developed, using Feynman's space-time formulation of nonrelativistic quantum mechanics whereby the behavior of a system of interest, which is coupled to other external quantum systems, may be calculated in terms of its own variables only. It is shown that the effect of the external systems in such a formalism can always be included in a general class of functionals (influence functionals) of the coordinates of the system only. The properties of influence functionals for general systems are examined. Then, specific forms of influence functionals representing the effect of definite and random classical forces, linear dissipative systems at finite temperatures, and combinations of these are analyzed in detail. The linear system analysis is first done for perfectly linear systems composed of combinations of harmonic oscillators, loss being introduced by continuous distributions of oscillators. Then approximately linear systems and restrictions necessary for the linear behavior are considered. Influence functionals for all linear systems are shown to have the same form in terms of their classical response functions. In addition, a fluctuation-dissipation theorem is derived relating temperature and dissipation of the linear system to a fluctuating classical potential acting on the system of interest which reduces to the Nyquist-Johnson relation for noise in the case of electric circuits. Sample calculations of transition probabilities for the spontaneous emission of an atom in free space and in a cavity are made. Finally, a theorem is proved showing that within the requirements of linearity all sources of noise or quantum fluctuation introduced by maser-type amplification devices are accounted for by a classical calculation of the characteristics of the maser
On Extended Exponential General Linear Methods PSQ with S>Q ...
African Journals Online (AJOL)
This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to other methods in the literature, consider methods with step number S greater than the stage order Q (S>Q). Numerical experiments in this study indicate that Extended Exponential ...
Directory of Open Access Journals (Sweden)
Jen-Yuan Chen
2014-01-01
Full Text Available Continuing from the works of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present more generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods such as GMRES, MGMRES, MINRES, LQ-MINRES, QR MINRES, MMINRES, MGRES, and others.
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
A generalized variational algebra and conserved densities for linear evolution equations
International Nuclear Information System (INIS)
Abellanas, L.; Galindo, A.
1978-01-01
The symbolic algebra of Gel'fand and Dikii is generalized to the case of n variables. Using this algebraic approach a rigorous characterization of the polynomial kernel of the variational derivative is given. This is applied to classify all the conservation laws for linear polynomial evolution equations of arbitrary order. (Auth.)
A differential-geometric approach to generalized linear models with grouped predictors
Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.
We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
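The subsumption claimed in this abstract can be checked directly: a pooled two-sample t-test and an OLS regression on a 0/1 group indicator yield the same test statistic. A minimal sketch with simulated scores (all numbers are illustrative, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)
g1 = rng.normal(10.0, 2.0, 30)   # simulated group A scores
g2 = rng.normal(11.5, 2.0, 30)   # simulated group B scores

# Classical pooled two-sample t statistic
n1, n2 = len(g1), len(g2)
sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
t_classic = (g1.mean() - g2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# The same test cast as a regression: y = b0 + b1 * group_dummy
y = np.concatenate([g1, g2])
X = np.column_stack([np.ones(n1 + n2), np.r_[np.zeros(n1), np.ones(n2)]])
beta, res_ss, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res_ss[0] / (n1 + n2 - 2)                      # residual variance
se_b1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])  # std. error of slope
t_regression = beta[1] / se_b1

print(abs(t_classic), abs(t_regression))  # equal up to sign and rounding
```

The slope coefficient is the difference of group means, so its t statistic reproduces the classical test; ANOVA generalizes the same construction to more dummies.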
A general theory of two-wave mixing in nonlinear media
DEFF Research Database (Denmark)
Chi, Mingjun; Huignard, Jean-Pierre; Petersen, Paul Michael
2009-01-01
A general theory of two-wave mixing in nonlinear media is presented. Assuming a gain (or absorption) grating and a refractive index grating are generated because of the nonlinear process in a nonlinear medium, the coupled-wave equations of two-wave mixing are derived based on the Maxwell’s wave e...
Energy Technology Data Exchange (ETDEWEB)
Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.
2016-07-01
Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibrating. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility to determine the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)
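The local-fitting idea behind GWR can be sketched in a few lines: each target location gets its own weighted least-squares fit, with weights given by a Gaussian kernel on distance, so coefficients vary smoothly in space. The toy example below uses simulated coordinates and a spatially drifting slope (not the SNFI data) and recovers the west-to-east trend:

```python
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, (200, 2))            # simulated plot coordinates
d = rng.uniform(10, 40, 200)                  # simulated diameters
slope_true = 0.5 + 0.005 * xy[:, 0]           # slope drifts west-to-east
h = 5.0 + slope_true * d + rng.normal(0, 1.0, 200)  # simulated heights

def gwr_coef(target, bandwidth=25.0):
    """Local weighted least squares at one target location."""
    w = np.exp(-0.5 * (np.linalg.norm(xy - target, axis=1) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(d), d])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ h)  # [intercept, slope]

west = gwr_coef(np.array([0.0, 50.0]))
east = gwr_coef(np.array([100.0, 50.0]))
print(west[1], east[1])   # the locally estimated slope increases eastwards
```

The bandwidth plays the role discussed in the abstract: small bandwidths track local variation closely, large ones approach a single global regression.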
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
General solution to the E-B mixing problem
International Nuclear Information System (INIS)
Smith, Kendrick M.; Zaldarriaga, Matias
2007-01-01
We derive a general ansatz for optimizing pseudo-Cℓ estimators used to measure cosmic microwave background anisotropy power spectra, and apply it to the recently proposed pure pseudo-Cℓ formalism, to obtain an estimator which achieves near-optimal B-mode power spectrum errors for any specified noise distribution while minimizing leakage from ambiguous modes. Our technique should be relevant for upcoming cosmic microwave background polarization experiments searching for B-mode polarization. We compare our technique both to the theoretical limits based on a full Fisher matrix calculation and to the standard pseudo-Cℓ technique. We demonstrate it by applying it to a fiducial survey with realistic inhomogeneous noise, complex boundaries, point source masking, and a noise level comparable to what is expected for next-generation experiments (∼5.75 μK-arcmin). For such an experiment our technique could improve the constraints on the amplitude of a gravity wave background by over a factor of 10 compared to what could be obtained using ordinary pseudo-Cℓ, coming quite close to saturating the theoretical limit. Constraints on the amplitude of the lensing B-modes are improved by about a factor of 3
International Nuclear Information System (INIS)
Chang, H.; Weinberg, W.H.
1977-01-01
A generalized expression is developed that relates the 'reaction product vector', ε exp(−iφ), to the kinetic parameters of a linear system. The formalism is appropriate for the analysis of modulated molecular beam mass spectrometry data and facilitates the correlation of experimental results to (proposed) linear models. A study of stability criteria appropriate for modulated molecular beam mass spectrometry experiments is also presented. This investigation has led to interesting inherent limitations which have not heretofore been emphasized, as well as a delineation of the conditions under which stable chemical oscillations may occur in the reacting system
Directory of Open Access Journals (Sweden)
Yingwei Li
2013-01-01
Full Text Available The global exponential stability issues are considered for the almost periodic solution of neural networks with mixed time-varying delays and discontinuous neuron activations. Some sufficient conditions for the existence, uniqueness, and global exponential stability of the almost periodic solution are achieved in terms of certain linear matrix inequalities (LMIs), by applying differential inclusion theory, matrix inequality analysis techniques, and a generalized Lyapunov functional approach. In addition, the existence and asymptotically almost periodic behavior of the solution of the neural networks are also investigated under the framework of solutions in the sense of Filippov. Two simulation examples are given to illustrate the validity of the theoretical results.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
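As an illustration of the model-comparison step described above, the sketch below fits two of the named curve shapes to simulated test-day data by ordinary non-linear least squares and compares them by AIC. This is a fixed-effects simplification of the paper's mixed-model fits (PROC NLMIXED), with all data values assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):       # Wood curve: y = a * t**b * exp(-c*t)
    return a * t**b * np.exp(-c * t)

def brody(t, a, b, k):      # Brody-type curve: y = a - b * exp(-k*t)
    return a - b * np.exp(-k * t)

def aic(model, p0, t, y):
    """Gaussian AIC from a non-linear least-squares fit."""
    popt, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    rss = float(np.sum((y - model(t, *popt)) ** 2))
    k = len(popt) + 1                    # curve parameters + error variance
    return t.size * np.log(rss / t.size) + 2 * k

rng = np.random.default_rng(1)
t = np.arange(1.0, 11.0)                 # ten monthly test-day records
y = wood(t, 1.3, 0.15, 0.05) + rng.normal(0.0, 0.02, t.size)

aic_wood = aic(wood, [1.0, 0.1, 0.05], t, y)
aic_brody = aic(brody, [1.5, 0.5, 0.1], t, y)
print(aic_wood, aic_brody)               # lower AIC indicates the better fit
```

Because the simulated data rise to a peak and then decline, the monotone Brody-type curve cannot match the Wood shape, and AIC reflects that.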
Suarez, R
2001-01-01
In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, where price paths are generated by reshuffling disturbances for S&P 500 Index returns over the period 1950 - 1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-test for the period 1999 - 2008 was performed to verify this technique, providing higher accuracy return levels under upper ...
An analogue of Morse theory for planar linear networks and the generalized Steiner problem
International Nuclear Information System (INIS)
Karpunin, G A
2000-01-01
A study is made of the generalized Steiner problem: the problem of finding all the locally minimal networks spanning a given boundary set (terminal set). It is proposed to solve this problem by using an analogue of Morse theory developed here for planar linear networks. The space K of all planar linear networks spanning a given boundary set is constructed. The concept of a critical point and its index is defined for the length function l of a planar linear network. It is shown that locally minimal networks are local minima of l on K and are critical points of index 1. The theorem is proved that the sum of the indices of all the critical points is equal to χ(K)=1. This theorem is used to find estimates for the number of locally minimal networks spanning a given boundary set
Energy Technology Data Exchange (ETDEWEB)
Escane, J.M. [Ecole Superieure d' Electricite, 91 - Gif-sur-Yvette (France)
2005-04-01
The first part of this article defines the different elements of an electrical network and the models used to represent them. Each model involves the current and the voltage as functions of time. Models involving time functions are simple but their use is not always easy. The Laplace transformation leads to a more convenient form in which the variable is no longer directly the time. This transformation also leads to the notion of transfer function, which is the object of the second part. The third part aims at defining the fundamental operation rules of linear networks, commonly named 'general theorems': the linearity principle and superposition theorem, the duality principle, Thevenin's theorem, Norton's theorem, Millman's theorem, and the triangle-star and star-triangle transformations. These theorems allow the study of complex power networks and simplify the calculations. They rest on hypotheses, the first of which is that all networks considered in this article are linear. (J.S.)
Mixed models for predictive modeling in actuarial science
Antonio, K.; Zhang, Y.
2012-01-01
We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques
A general digital computer procedure for synthesizing linear automatic control systems
International Nuclear Information System (INIS)
Cummins, J.D.
1961-10-01
The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
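The Cholesky device this abstract builds on (Pourahmadi, 2000) can be made concrete: writing T Σ Tᵀ = D with T unit lower triangular and D diagonal, any unconstrained real values for the below-diagonal entries of T and for log D yield a valid covariance matrix, which is what lets covariates be regressed on these quantities without a positive-definiteness constraint. A minimal numpy sketch with randomly drawn unconstrained parameters:

```python
import numpy as np

def covariance_from_cholesky(phi, log_d):
    """Rebuild Sigma from unconstrained modified-Cholesky parameters.

    phi: generalized autoregressive parameters (any real values),
    log_d: log innovation variances (any real values).
    """
    m = len(log_d)
    T = np.eye(m)
    idx = 0
    for j in range(1, m):
        for k in range(j):
            T[j, k] = -phi[idx]   # below-diagonal AR coefficients
            idx += 1
    D = np.diag(np.exp(log_d))    # innovation variances, always positive
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T      # Sigma = T^{-1} D T^{-T}

rng = np.random.default_rng(2)
m = 4
Sigma = covariance_from_cholesky(rng.normal(size=m * (m - 1) // 2),
                                 rng.normal(size=m))
print(np.linalg.eigvalsh(Sigma).min() > 0)  # positive definite by construction
```

The unbalanced-data difficulty the abstract addresses arises because each subject observes a different submatrix of such a Σ; the embedding-plus-EM step is not shown here.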
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Linear relations in microbial reaction systems: a general overview of their origin, form, and use.
Noorman, H J; Heijnen, J J; Ch A M Luyben, K
1991-09-01
In microbial reaction systems, there are a number of linear relations among net conversion rates. These can be very useful in the analysis of experimental data. This article provides a general approach for the formation and application of the linear relations. Two types of system description, one considering the biomass as a black box and the other based on metabolic pathways, are encountered. These are defined in a linear vector and matrix algebra framework. A correct a priori description can be obtained by three useful tests: the independency, consistency, and observability tests. The relations provided by the two descriptions are different. The black box approach provides only conservation relations. They are derived from element, electrical charge, energy, and Gibbs energy balances. The metabolic approach provides, in addition to the conservation relations, metabolic and reaction relations. These result from component, energy, and Gibbs energy balances. Thus it is more attractive to use the metabolic description than the black box approach. A number of different types of linear relations given in the literature are reviewed. They are classified according to the different categories that result from the black box or the metabolic system description. Validation of hypotheses related to metabolic pathways can be supported by experimental validation of the linear metabolic relations. However, definite proof from biochemical evidence remains indispensable.
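The kind of linear relation described above can be illustrated with element balances for an assumed black-box growth stoichiometry: measuring two rates lets the four element-conservation equations determine the remaining four. All stoichiometric coefficients and rate values below are illustrative, not from the article:

```python
import numpy as np

# Black-box element balances for an assumed growth reaction.
# Species (columns): glucose C6H12O6, O2, NH3, biomass CH1.8O0.5N0.2, CO2, H2O
# Elements (rows): C, H, O, N
E = np.array([[6.0, 0, 0, 1.0, 1, 0],
              [12.0, 0, 3, 1.8, 0, 2],
              [6.0, 2, 0, 0.5, 2, 1],
              [0.0, 0, 1, 0.2, 0, 0]])

# Suppose two rates are measured (glucose uptake, biomass formation);
# the four balances E @ r = 0 then fix the other four rates linearly.
r_glc, r_x = -1.0, 3.0                  # hypothetical measured rates
known = E[:, [0, 3]] @ np.array([r_glc, r_x])
r_rest = np.linalg.solve(E[:, [1, 2, 4, 5]], -known)  # O2, NH3, CO2, H2O

r = np.array([r_glc, r_rest[0], r_rest[1], r_x, r_rest[2], r_rest[3]])
print(np.round(r, 3))         # negative entries are consumed species
print(np.allclose(E @ r, 0))  # conservation relations hold exactly
```

This is the "conservation relation" case from the black-box description; the metabolic description adds analogous relations from pathway stoichiometry.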
ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose
2002-01-01
In medical studies, categorical endpoints occur quite often. Even though some models for handling these multicategorical variables have nowadays been developed, their use is not common. This work shows an application of Multivariate Generalized Linear Models to the analysis of Clinical Trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
Synthesis of general linear networks using causal and J-isometric dilations
International Nuclear Information System (INIS)
D'Attellis, C.E.
1977-06-01
The problem of the synthesis of linear systems characterized by their scattering operator is studied. This problem is considered solved once an adequate dilation of the operator is obtained, from which the synthesis is performed following the method of Saeks (35) and Levan (19). Known results appear systematized and generalized in this paper, yielding a unique method of synthesis for different categories of operators. (Author) [es
A General Construction of Linear Differential Equations with Solutions of Prescribed Properties
Czech Academy of Sciences Publication Activity Database
Neuman, František
2004-01-01
Roč. 17, č. 1 (2004), s. 71-76 ISSN 0893-9659 R&D Projects: GA AV ČR IAA1019902; GA ČR GA201/99/0295 Institutional research plan: CEZ:AV0Z1019905 Keywords : construction of linear differential equations * prescribed qualitative properties of solutions Subject RIV: BA - General Mathematics Impact factor: 0.414, year: 2004
Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT
International Nuclear Information System (INIS)
Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.
2009-01-01
In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. Different from the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created with the purpose of providing the interpreting physician a single set of images that have an appearance similar to that in single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low-and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images for the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
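The linear-mixing step described above can be sketched with a noise-only optimization criterion (the paper optimizes iodine CNR, which additionally weights the contrast difference between the two scans; the noise levels below are assumed for illustration):

```python
import numpy as np

# Linearly mixed image: M = w * L + (1 - w) * H, for low- and high-kV images.
# For independent noise, var(M) = w^2 * sL^2 + (1 - w)^2 * sH^2,
# minimized analytically at w* = sH^2 / (sL^2 + sH^2).
sL, sH = 20.0, 10.0                       # assumed noise (HU) at 80 / 140 kV
w_star = sH**2 / (sL**2 + sH**2)

w = np.linspace(0.0, 1.0, 10001)          # grid search as a cross-check
noise = np.sqrt(w**2 * sL**2 + (1 - w)**2 * sH**2)
print(w_star, w[np.argmin(noise)])        # analytic and grid minima agree
```

With these assumed values the optimum weights the noisier 80 kV image at only 0.2, matching the intuition that the dose partitioning between the two scans drives the achievable mixed-image quality.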
Parameterization of general Z-γ-Z' mixing in an electroweak chiral theory
International Nuclear Information System (INIS)
Zhang Ying; Wang Qing
2012-01-01
A new general parameterization with eight mixing parameters among Z, γ and an extra neutral gauge boson Z′ is proposed and subjected to phenomenological analysis. We show that in addition to the conventional Weinberg angle θ_W, there are seven other phenomenological parameters, G′, ξ, η, θ_l, θ_r, r and l, for the most general Z-γ-Z′ mixings, in which the parameter G′ arises due to the presence of an extra Stueckelberg-type mass coupling. Combined with the conventional Z-Z′ mass mixing angle θ′, the remaining six parameters, ξ, η, θ_l − θ′, θ_r − θ′, r and l, are caused by general kinetic mixing. From all eight phenomenological parameters, θ_W, G′, ξ, η, θ_l, θ_r, r and l, we can determine the Z-Z′ mass mixing angle θ′ and the mass ratio M_Z/M_Z′. The Z-γ-Z′ mixings that we discuss are based on the model-independent description of the extended electroweak chiral Lagrangian (EWCL) previously proposed by us. In addition, we show that there are eight corresponding independent theoretical coefficients in our EWCL, which are fully fixed by our eight phenomenological mixing parameters. We further find that the experimental measurability of these eight parameters does not rely on the extended neutral current for Z′, but depends on the Z-Z′ mass ratio. (authors)
Directory of Open Access Journals (Sweden)
Tsung-han Tsai
2013-05-01
Full Text Available There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Often these terms are casually thrown into a model specification without observing that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, defining the necessarily different interpretation required by generalized linear models. Methodological issues are illustrated with an application to voter information structured by electoral systems and resulting legislative behavior and democratic representation in comparative politics.
International Nuclear Information System (INIS)
LaChapelle, J.
2004-01-01
A path integral is presented that solves a general class of linear second order partial differential equations with Dirichlet/Neumann boundary conditions. Elementary kernels are constructed for both Dirichlet and Neumann boundary conditions. The general solution can be specialized to solve elliptic, parabolic, and hyperbolic partial differential equations with boundary conditions. This extends the well-known path integral solution of the Schroedinger/diffusion equation in unbounded space. The construction is based on a framework for functional integration introduced by Cartier/DeWitt-Morette
Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data
Directory of Open Access Journals (Sweden)
Mingwu Jin
2012-01-01
Full Text Available Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
Personnel planning in general practices: development and testing of a skill mix analysis method.
Eitzen-Strassel, J. von; Vrijhoef, H.J.M.; Derckx, E.W.C.C.; Bakker, D.H. de
2014-01-01
Background: General practitioners (GPs) have to match patients’ demands with the mix of their practice staff’s competencies. However, apart from some general principles, there is little guidance on recruiting new staff. The purpose of this study was to develop and test a method which would allow GPs
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Edwards, Lloyd J.; Simpson, Sean L.
2010-01-01
The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, it is important to develop alternative data analysis approaches. The linear mixed mo...
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
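As a minimal illustration of the within-participant analysis described in this record, here is a sketch using Python's statsmodels rather than the article's R/lme4 toolchain (an assumption; the data, effect sizes, and variable names below are simulated for illustration, not taken from the article):

```python
# Sketch of a within-participant linear mixed-effects analysis, analogous to
# what LMMgui drives via lme4 in R. A random intercept per participant absorbs
# between-participant variability; the fixed effect is the condition contrast.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 30, 20
rows = []
for p in range(n_participants):
    intercept_p = 500 + rng.normal(0, 30)  # participant-specific baseline (ms)
    for cond in (0, 1):
        for _ in range(n_trials):
            rt = intercept_p + 50 * cond + rng.normal(0, 25)  # true effect: +50 ms
            rows.append({"participant": p, "condition": cond, "rt": rt})
df = pd.DataFrame(rows)

# Random-intercept model: rt ~ condition, grouped by participant
model = smf.mixedlm("rt ~ condition", df, groups=df["participant"])
result = model.fit()
print(result.params["condition"])  # estimate of the condition effect, near 50
```

The key design choice is grouping by participant, which is what distinguishes the mixed model from an ordinary regression that would ignore the repeated-measures structure.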
Linear C32H66 hydrocarbon in the mixed state with C10H22 ...
Indian Academy of Sciences (India)
Unknown
S R Research Laboratory for Studies in Crystallization Phenomena, 10-1-96, ... mixed state with certain shorter chain length homologues (SMOLLENCs), estimated ... Methods. Five hydrocarbons of even carbon numbers, C10, C12, C14, C16 ...
Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar
2012-01-01
In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.
Study on sampling of continuous linear system based on generalized Fourier transform
Li, Huiguang
2003-09-01
In the study of signals and systems, a signal's spectrum and a system's frequency characteristics can be examined through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals, such as the impulse function and the signum signal, are neither Riemann nor Lebesgue integrable; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform and the Laplace Transform within a unified framework. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling. The results also have practical significance for research on "discretization of continuous linear systems" and "non-Nyquist sampling of signals and systems." In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is simply an application example of this paper.
Topping, Annie; Nkosana Nyawata, Idah; Stephenson, John; Featherstone, Valerie A.
2012-01-01
Title: Rethinking workforce boundaries: roles, responsibilities, skill mix and readiness for change in general practice. The Problem: The last 10 years have seen major changes in the way services are delivered in primary care. Skill mix has offered many practices real opportunities for doing things differently, as the introduction of advanced nurse practitioners (ANPs) and health care assistants (HCAs) into the primary care workforce demonstrates. While workforce redesign has its crit...
Right-handed quark mixings in minimal left-right symmetric model with general CP violation
International Nuclear Information System (INIS)
Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.
2007-01-01
We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model which generally has both explicit and spontaneous CP violations. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W_R and the CP phase of the Higgs vacuum expectation values
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
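The article's point can be demonstrated in a few lines of simulation (Python here, purely as an illustration; the 8-standard-deviation group shift is an arbitrary choice): the raw response from a two-group linear model is strongly bimodal, and would fail any pre-analysis normality test, yet the residuals are perfectly well behaved.

```python
# Minimal simulation: the response can be clearly non-normal while the model
# residuals are normal. Two groups differ by a large mean shift, making the
# raw response bimodal (strongly negative excess kurtosis).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
n = 2000
group = np.repeat([0, 1], n)
y = 10 + 8 * group + rng.normal(0, 1, size=2 * n)  # ANOVA-type linear model

# Fit the one-way model (here just the group means) and take residuals
fitted = np.where(group == 0, y[group == 0].mean(), y[group == 1].mean())
residuals = y - fitted

print(kurtosis(y))          # strongly negative: bimodal raw response
print(kurtosis(residuals))  # near 0: residuals look normal
```

Testing normality on `y` before the analysis would wrongly push an analyst toward transformations or nonparametric tests; testing it on `residuals` shows the t/F-test assumptions are met.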
The potential in general linear electrodynamics. Causal structure, propagators and quantization
Energy Technology Data Exchange (ETDEWEB)
Siemssen, Daniel [Department of Mathematical Methods in Physics, Faculty of Physics, University of Warsaw (Poland); Pfeifer, Christian [Institute for Theoretical Physics, Leibniz Universitaet Hannover (Germany); Center of Applied Space Technology and Microgravity (ZARM), Universitaet Bremen (Germany)
2016-07-01
From an axiomatic point of view, the fundamental inputs for a theory of electrodynamics are Maxwell's equations dF=0 (or F=dA) and dH=J, and a constitutive law H = #F, which relates the field strength 2-form F and the excitation 2-form H. In this talk we consider general linear electrodynamics, the theory of electrodynamics defined by a linear constitutive law. The best known application of this theory is the effective description of electrodynamics inside (linear) media (e.g. birefringence). We analyze the classical theory of the electromagnetic potential A before we use methods familiar from mathematical quantum field theory in curved spacetimes to quantize it. Our analysis of the classical theory contains the derivation of retarded and advanced propagators, the analysis of the causal structure on the basis of the constitutive law (instead of a metric) and a discussion of the classical phase space. This classical analysis sets the stage for the construction of the quantum field algebra and quantum states, including a (generalized) microlocal spectrum condition.
Non-cooperative stochastic differential game theory of generalized Markov jump linear systems
Zhang, Cheng-ke; Zhou, Hai-ying; Bin, Ning
2017-01-01
This book systematically studies the stochastic non-cooperative differential game theory of generalized linear Markov jump systems and its application in the fields of finance and insurance. The book is an in-depth study of continuous-time and discrete-time linear quadratic stochastic differential games, with the aim of establishing a relatively complete framework of dynamic non-cooperative differential game theory. Using the dynamic programming principle and the Riccati equation, it derives existence conditions and calculation methods for the equilibrium strategies of dynamic non-cooperative differential games. Based on game-theoretic methods, the book studies the corresponding robust control problem, especially the existence condition and design method for the optimal robust control strategy. The book discusses the theoretical results and their applications in risk control, option pricing, and the optimal investment problem in the fields of finance and insurance, enriching the...
International Nuclear Information System (INIS)
Speliotopoulos, A.D.; Chiao, Raymond Y.
2004-01-01
The coupling of gravity to matter is explored in the linearized gravity limit. The usual derivation of gravity-matter couplings within the quantum-field-theoretic framework is reviewed. A number of inconsistencies between this derivation of the couplings and the known results of tidal effects on test particles according to classical general relativity are pointed out. As a step towards resolving these inconsistencies, a general laboratory frame fixed on the worldline of an observer is constructed. In this frame, the dynamics of nonrelativistic test particles in the linearized gravity limit is studied, and their Hamiltonian dynamics is derived. It is shown that for stationary metrics this Hamiltonian reduces to the usual Hamiltonian for nonrelativistic particles undergoing geodesic motion. For nonstationary metrics with long-wavelength gravitational waves (GWs) present, it reduces to the Hamiltonian for a nonrelativistic particle undergoing geodesic deviation motion. Arbitrary-wavelength GWs couple to the test particle through a vector-potential-like field N_a, the net result of the tidal forces that the GW induces in the system, namely, a local velocity field on the system induced by tidal effects, as seen by an observer in the general laboratory frame. Effective electric and magnetic fields, which are related to the electric and magnetic parts of the Weyl tensor, are constructed from N_a that obey equations of the same form as Maxwell's equations. A gedanken gravitational Aharonov-Bohm-type experiment using N_a to measure the interference of quantum test particles is presented.
Analysis of dental caries using generalized linear and count regression models
Directory of Open Access Journals (Sweden)
Javali M. Phil
2013-11-01
Generalized linear models (GLMs) are generalizations of linear regression models, which allow fitting regression models to response data in all the sciences, especially the medical and dental sciences, that follow a general exponential family. These are a flexible and widely used class of models that can accommodate various response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present a framework to evaluate the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
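A sketch of the zero-inflated Poisson mechanism described above, in NumPy (the parameter values are illustrative, not those of the dental caries data): a point mass at zero is mixed with a Poisson process, producing excess zeros and overdispersion, and the two parameters can be recovered by matching the first two moments.

```python
# Zero-inflated Poisson (ZIP): with zero-inflation probability pi and Poisson
# rate lam, E[Y] = (1-pi)*lam and Var[Y] = (1-pi)*lam*(1+pi*lam), so the
# variance always exceeds the mean (overdispersion relative to Poisson).
import numpy as np

rng = np.random.default_rng(1)
pi_true, lam_true, n = 0.3, 2.0, 100_000

structural_zero = rng.random(n) < pi_true
counts = np.where(structural_zero, 0, rng.poisson(lam_true, size=n))

m, v = counts.mean(), counts.var()
print(v > m)                 # overdispersion: variance exceeds the mean

# Method-of-moments estimates: v/m - 1 = pi*lam, and m = (1-pi)*lam
s = v / m - 1
lam_hat = m + s
pi_hat = s / lam_hat
print(lam_hat, pi_hat)       # near the true values 2.0 and 0.3
```

In practice ZIP/ZINB models are fitted by maximum likelihood as the abstract describes; the moment matching here just makes the two-process structure concrete.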
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
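The hyper-Poisson pmf underlying this model can be written down directly. The sketch below (Python with SciPy; parameter values are illustrative, and the parameterization P(X=x) = λ^x / ((γ)_x · ₁F₁(1; γ; λ)) is one common form, with (γ)_x the Pochhammer symbol and ₁F₁ the confluent hypergeometric function) checks that it is a proper distribution and that γ = 1 recovers the ordinary Poisson.

```python
# Hyper-Poisson pmf: the normalizing constant is the confluent hypergeometric
# function 1F1(1; gamma; lambda); gamma > 1 gives overdispersion and
# gamma < 1 underdispersion, which is why the GLM can handle both.
import numpy as np
from scipy.special import hyp1f1, poch
from scipy.stats import poisson

def hyper_poisson_pmf(x, lam, gamma):
    """pmf of the hyper-Poisson distribution; gamma=1 recovers the Poisson."""
    return lam ** x / (poch(gamma, x) * hyp1f1(1.0, gamma, lam))

xs = np.arange(100)
total = hyper_poisson_pmf(xs, lam=3.0, gamma=2.0).sum()
print(total)  # sums to 1: a proper probability distribution

# gamma = 1: (1)_x = x! and 1F1(1;1;lam) = e^lam, i.e. the ordinary Poisson
print(np.allclose(hyper_poisson_pmf(xs, 3.0, 1.0), poisson.pmf(xs, 3.0)))
```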
A generalization of Dirac non-linear electrodynamics, and spinning charged particles
International Nuclear Information System (INIS)
Rodrigues Junior, W.A.; Vaz Junior, J.; Recami, E.
1992-08-01
Dirac's non-linear electrodynamics is generalized by introducing two potentials (namely, the vector potential A and the pseudo-vector potential γ^5 B of the electromagnetic theory with charges and magnetic monopoles), and by imposing the pseudoscalar part of the product W W^* to be zero, with W = A + γ^5 B. It is also demonstrated that the field equations of such a theory possess a soliton-like solution which can a priori represent a charged particle. (L.C.J.A.)
Analysis of positron lifetime spectra using quantified maximum entropy and a general linear filter
International Nuclear Information System (INIS)
Shukla, A.; Peter, M.; Hoffmann, L.
1993-01-01
Two new approaches are used to analyze positron annihilation lifetime spectra. A general linear filter is designed to filter the noise from lifetime data. The quantified maximum entropy method is used to solve the inverse problem of finding the lifetimes and intensities present in data. We determine optimal values of parameters needed for fitting using Bayesian methods. Estimates of errors are provided. We present results on simulated and experimental data with extensive tests to show the utility of this method and compare it with other existing methods. (orig.)
General formulae for polarization observables in deuteron electrodisintegration and linear relations
International Nuclear Information System (INIS)
Arenhoevel, H.; Leidemann, W.; Tomusiak, E.L.
1993-01-01
Formal expressions are derived for all possible polarization observables in deuteron electrodisintegration with longitudinally polarized incoming electrons, oriented deuteron targets and polarization analysis of the outgoing nucleons. They are given in terms of general structure functions which can be determined experimentally. These structure functions are Hermitian forms of the T-matrix elements which, in principle, allow the determination of all T-matrix elements up to an arbitrary common phase. Since the set of structure functions is overcomplete, linear relations among the various structure functions exist, which are derived explicitly
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
International Nuclear Information System (INIS)
Sanchez, Richard.
1975-11-01
The integral transform method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. The method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial spherical-harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated by a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrices for plane and spherical geometries, and the N+1 isotropic matrices for cylindrical geometries. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was defined by applying the observations of Mika and Stankiewicz: isotropic matrix elements were computed by Hembd series and anisotropic matrix elements were then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks
Vector generalized linear and additive models with an implementation in R
Yee, Thomas W
2015-01-01
This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable. The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...
Prospects of measuring general Higgs couplings at e⁺e⁻ linear colliders
Energy Technology Data Exchange (ETDEWEB)
Hagiwara, K. [KEK, Ibaraki (Japan). Theory Group; Ishihara, S. [KEK, Ibaraki (Japan). Theory Group; Department of Physics, Hyogo University of Education, 941-1 Shimokume, Yashiro, Kato, Hyogo 673-1494 (Japan); Kamoshita, J. [Department of Physics, Ochanomizu University, 2-1-1 Otsuka, Bunkyo, Tokyo 112-8610 (Japan); Kniehl, B.A. [II. Institut fuer Theoretische Physik, Universitaet Hamburg, Luruper Chaussee 149, 22761 Hamburg (Germany)
2000-06-01
We examine how accurately the general HZV couplings, with V = Z, γ, may be determined by studying e⁺e⁻ → Hf f̄ processes at future e⁺e⁻ linear colliders. By using the optimal-observable method, which makes use of all available experimental information, we find out which combinations of the various HZV coupling terms may be constrained most efficiently with high luminosity. We also assess the benefits of measuring the tau-lepton helicities, identifying the bottom-hadron charges, polarizing the electron beam and running at two different collider energies. The HZZ couplings are generally found to be well constrained, even without these options, while the HZγ couplings are not. The constraints on the latter may be significantly improved by beam polarization. (orig.)
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently on consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Robust-BD Estimation and Inference for General Partially Linear Models
Directory of Open Access Journals (Sweden)
Chunming Zhang
2017-11-01
The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component β₀. For inference procedures of β₀ in the GPLM, we show that the Wald-type test statistic W_n constructed from the "robust-BD" estimators is asymptotically distribution free under the null, whereas the likelihood ratio-type test statistic Λ_n is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between W_n and Λ_n in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and the robust Wald-type test in the presence of outlying observations.
LETTER TO THE EDITOR: Mixed population Minority Game with generalized strategies
Jefferies, P.; Hart, M.; Johnson, N. F.; Hui, P. M.
2000-11-01
We present a quantitative theory, based on crowd effects, for the market volatility in a Minority Game played by a mixed population. Below a critical concentration of generalized strategy players, we find that the volatility in the crowded regime remains above the random coin-toss value regardless of the `temperature' controlling strategy use. Our theory yields good agreement with numerical simulations.
A branch-and-cut-and-price algorithm for the mixed capacitated general routing problem
DEFF Research Database (Denmark)
Bach, Lukas; Wøhlk, Sanne; Lysgaard, Jens
2016-01-01
In this paper, we consider the Mixed Capacitated General Routing Problem which is a combination of the Capacitated Vehicle Routing Problem and the Capacitated Arc Routing Problem. The problem is also known as the Node, Edge, and Arc Routing Problem. We propose a Branch-and-Cut-and-Price algorithm...
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models and nine linear mixed-effects models that applied the effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed-effects model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917 with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, while the effects from years were shown to be much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to the reference years 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; length-weight relationships could therefore serve as an indicator of environmental quality.
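The fixed-effects core of such length-weight models is the power curve W = a·L^b, fitted as a straight line on log scales: log W = log a + b·log L. A small simulated sketch in Python (the lengths, error level, and units are invented; only the point estimates a = 0.0192 and b = 2.917 come from the abstract):

```python
# Length-weight relationship W = a * L^b fitted by log-log linear regression.
# Multiplicative lognormal error on W becomes additive normal error on log W.
import numpy as np

rng = np.random.default_rng(7)
a_true, b_true = 0.0192, 2.917      # point estimates reported in the abstract
L = rng.uniform(5, 30, size=5000)   # simulated lengths (assumed cm)
W = a_true * L ** b_true * np.exp(rng.normal(0, 0.1, size=L.size))

b_hat, log_a_hat = np.polyfit(np.log(L), np.log(W), 1)
a_hat = np.exp(log_a_hat)
print(a_hat, b_hat)  # close to 0.0192 and 2.917
```

The mixed-effects versions in the study then let log a (the intercept) and b (the slope) vary randomly by region and year around these fixed values.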
Directory of Open Access Journals (Sweden)
Andrea Nobili
2015-01-01
Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton's principle. It is shown that both the Timoshenko and the (possibly modified) couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable) terms in the Lagrangian. Despite their differences, all models bring into a beam-plane theory only one microstructural material parameter. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.
An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems
Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri
2018-01-01
The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.
DEFF Research Database (Denmark)
Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab
2017-01-01
of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...
A Mixed-Integer Linear Programming approach to wind farm layout and inter-array cable routing
DEFF Research Database (Denmark)
Fischetti, Martina; Leth, John-Josef; Borchersen, Anders Bech
2015-01-01
A Mixed-Integer Linear Programming (MILP) approach is proposed to optimize the turbine allocation and inter-array offshore cable routing. The two problems are considered with a two steps strategy, solving the layout problem first and then the cable problem. We give an introduction to both problems...... and present the MILP models we developed to solve them. To deal with interference in the onshore cases, we propose an adaptation of the standard Jensen’s model, suitable for 3D cases. A simple Stochastic Programming variant of our model allows us to consider different wind scenarios in the optimization...
Deviation from bimaximal mixing and leptonic CP phases in S4 family symmetry and generalized CP
International Nuclear Information System (INIS)
Li, Cai-Chang; Ding, Gui-Jun
2015-01-01
The lepton flavor mixing matrix having one row or one column in common with the bimaximal mixing up to permutations is still compatible with the present neutrino oscillation data. We provide a thorough exploration of generating such a mixing matrix from an S_4 family symmetry and a generalized CP symmetry H_CP. Supposing that S_4 ⋊ H_CP is broken down to Z_2^{ST^2SU} × H_CP^ν in the neutrino sector and Z_4^{TST^2U} ⋊ H_CP^l in the charged lepton sector, one column of the PMNS matrix would be of the form (1/2, 1/√2, 1/2)^T up to permutations, and both the Dirac CP phase and the Majorana CP phases must be trivial to accommodate the observed lepton mixing angles. The phenomenological implications of the remnant symmetry K_4^{(TST^2, T^2U)} × H_CP^ν in the neutrino sector and Z_2^{SU} × H_CP^l in the charged lepton sector are studied. One row of the PMNS matrix is determined to be (1/2, 1/2, −i/√2), and all three leptonic CP phases can only be trivial to fit the measured values of the mixing angles. Two models based on the S_4 family symmetry and generalized CP are constructed to implement these model-independent predictions enforced by the remnant symmetry. The correct mass hierarchy among the charged leptons is achieved. The vacuum alignment and higher-order corrections are discussed.
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
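As a concrete illustration of the third-order Runge-Kutta family discussed above, the sketch below implements Kutta's classic third-order method and checks its order of convergence empirically on y' = y. This is a generic illustrative scheme, not one of the five examples derived in the report; all function names are invented.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classic third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Test problem y' = y, y(0) = 1, exact solution e at t = 1.
f = lambda t, y: y
errs = [abs(integrate(f, 0.0, 1.0, 1.0, n) - np.e) for n in (20, 40)]
order = np.log2(errs[0] / errs[1])
print(f"observed order ~ {order:.2f}")  # close to 3
```

Halving the step size should shrink the error by roughly a factor of eight, which is how the third-order accuracy shows up numerically.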
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model
Musekiwa, Alfred; Manda, Samuel O. M.; Mwambi, Henry G.; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results. PMID:27798661
Thermodynamics versus Kinetics Dichotomy in the Linear Self-Assembly of Mixed Nanoblocks.
Ruiz, L; Keten, S
2014-06-05
We report classical and replica exchange molecular dynamics simulations that establish the mechanisms underpinning the growth kinetics of a binary mix of nanorings that form striped nanotubes via self-assembly. A step-growth coalescence model captures the growth process of the nanotubes, which suggests that high aspect ratio nanostructures can grow by obeying the universal laws of self-similar coarsening, contrary to systems that grow through nucleation and elongation. Notably, striped patterns do not depend on specific growth mechanisms, but are governed by tempering conditions that control the likelihood of depropagation and fragmentation.
Jonrinaldi, Hadiguna, Rika Ampuh; Salastino, Rades
2017-11-01
Environmental consciousness has received much attention nowadays. It concerns not only how to recycle, remanufacture or reuse used end products, but also how to optimize the operations of the reverse system. A previous study proposed a design of a reverse supply chain network for biodiesel from used cooking oil. However, that research focused on the supply chain strategy, not the operations of the supply chain: it only decided how to structure the supply chain over the next few years and which processes each stage of the supply chain would carry out in general. The supply chain system did not consider the operational policies to be followed by the companies in the chain. Companies need a policy for each stage of supply chain operations so as to produce an optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network from used cooking oil. A mixed integer linear programming model is developed for the operational planning of the biodiesel supply chain in order to minimize its total operational cost. In the implementation of the developed model, the total operational cost incurred by the system over a seven-day planning horizon is lower than that based on the previous research by about 2,743,470.00, or 0.186%. Production costs contributed 74.6% of the total operational cost and the cost of purchasing used cooking oil contributed 24.1%, so the system should pay most attention to these two aspects, as changes in their values cause significant changes in the total operational cost of the supply chain.
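The abstract does not give the full formulation, but the flavor of a mixed integer linear program minimizing operational cost can be sketched with SciPy's `milp` solver (SciPy >= 1.9). The numbers below (two hypothetical production processes with integer batch counts meeting a demand) are invented for illustration and are not the paper's supply chain model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Invented data: cost per batch of two processes; process 1 yields 2 units
# per batch, process 2 yields 4 units; total demand is 10 units.
cost = np.array([3.0, 5.0])
demand = LinearConstraint(np.array([[2.0, 4.0]]), lb=10.0, ub=np.inf)

res = milp(c=cost, constraints=[demand],
           integrality=np.ones(2),           # both batch counts are integer
           bounds=Bounds(lb=0.0, ub=np.inf))
print(res.x, res.fun)  # optimal batch plan and minimal cost
```

Relaxing the integrality requirement would allow fractional batches and a lower cost; the integer restriction is what makes the problem a MILP rather than an ordinary LP.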
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
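The first two model types above can be sketched on synthetic data: a constant-rate linear fit, and a power fit between daily and initial volume, which becomes linear in log-log coordinates. The decay rate, initial volume, and noise level below are invented for illustration; this is not the clinical data set.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(1, 31, dtype=float)
v0 = 20.0                                  # invented initial volume
true = v0 * days ** -0.3                   # invented power-law shrinkage
vols = true * rng.lognormal(0.0, 0.02, days.size)

# (1) linear model: V(t) = a + b*t, volume changes at a constant rate
A = np.vstack([np.ones_like(days), days]).T
a, b = np.linalg.lstsq(A, vols, rcond=None)[0]
lin_pred = A @ [a, b]

# (2) power fit: log V = log c + p*log t, a linear fit in log-log space
P = np.vstack([np.ones_like(days), np.log(days)]).T
logc, p = np.linalg.lstsq(P, np.log(vols), rcond=None)[0]
pow_pred = np.exp(logc) * days ** p

rmse = {name: np.sqrt(np.mean((pred - vols) ** 2))
        for name, pred in [("linear", lin_pred), ("power", pow_pred)]}
print(rmse)  # the power fit tracks this synthetic data more closely
```

Because the synthetic volumes follow a power law, the power fit naturally wins here; on real data the comparison is the empirical question the paper addresses.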
Galaxy bias and non-linear structure formation in general relativity
International Nuclear Information System (INIS)
Baldauf, Tobias; Seljak, Uroš; Senatore, Leonardo; Zaldarriaga, Matias
2011-01-01
Length scales probed by the large scale structure surveys are becoming closer and closer to the horizon scale. Further, it has been recently understood that non-Gaussianity in the initial conditions could show up in a scale dependence of the bias of galaxies at the largest possible distances. It is therefore important to take General Relativistic effects into account. Here we provide a General Relativistic generalization of the bias that is valid both for Gaussian and for non-Gaussian initial conditions. The collapse of objects happens on very small scales, while long-wavelength modes are always in the quasi linear regime. Around every small collapsing region, it is therefore possible to find a reference frame that is valid for arbitrary times and where the space time is almost flat: the Fermi frame. Here the Newtonian approximation is applicable and the equations of motion are the ones of the standard N-body codes. The effects of long-wavelength modes are encoded in the mapping from the cosmological frame to the local Fermi frame. At the level of the linear bias, the effect of the long-wavelength modes on the dynamics of the short scales is all encoded in the local curvature of the Universe, which allows us to define a General Relativistic generalization of the bias in the standard Newtonian setting. We show that the bias due to this effect goes to zero as the square of the ratio between the physical wavenumber and the Hubble scale for modes longer than the horizon, confirming the intuitive picture that modes longer than the horizon do not have any dynamical effect. On the other hand, the bias due to non-Gaussianities does not need to vanish for modes longer than the Hubble scale, and for non-Gaussianities of the local kind it goes to a constant. As a further application of our setup, we show that it is not necessary to perform large N-body simulations to extract information about long-wavelength modes: N-body simulations can be done on small scales and long
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Li, Yanning; Canepa, Edward S.; Claudel, Christian G.
2013-01-01
This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.
Diagnostics for generalized linear hierarchical models in network meta-analysis.
Zhao, Hong; Hodges, James S; Carlin, Bradley P
2017-09-01
Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
N. Mielenz
2015-01-01
Full Text Available Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMM, repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on a conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming a normal distribution and a logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson "log-gamma intercept", the Poisson "normal intercept" and the "normal intercept" model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Emanating from the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
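For the Poisson "normal intercept" model described above, the repeatability of counts has a closed form on the observed scale. The minimal simulation below (parameter values invented for illustration) compares that closed form with a moment-based empirical estimate; it is a sketch of the model structure, not of the SAS NLMIXED analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_obs = 2000, 8
mu, sigma = 1.0, 0.5                    # fixed intercept, SD of animal effect

b = rng.normal(0.0, sigma, n_animals)   # random animal effects on log scale
lam = np.exp(mu + b)                    # conditional Poisson means
y = rng.poisson(lam[:, None], (n_animals, n_obs))

# Closed-form repeatability for the Poisson "normal intercept" model:
m = np.exp(mu + sigma**2 / 2)           # marginal mean E[Y]
var_b = m**2 * (np.exp(sigma**2) - 1)   # between-animal variance
R_theory = var_b / (var_b + m)          # within-animal variance equals E[Y]

# Moment-based empirical estimate from the simulated counts:
within = y.var(axis=1, ddof=1).mean()
between = y.mean(axis=1).var(ddof=1) - within / n_obs
R_emp = between / (between + within)
print(f"theory: {R_theory:.3f}, empirical: {R_emp:.3f}")
```

The within-animal variance equals the marginal mean because, conditionally on the animal effect, the counts are Poisson; that is what distinguishes this repeatability formula from the Gaussian case.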
Bamia, Christina; White, Ian R; Kenward, Michael G
2013-07-10
Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.
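The weighted-average result above can be reproduced numerically: when the time-specific treatment effect estimates have covariance proportional to the within-subject covariance matrix Σ, the GLS estimator of a constant effect applies weights Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses hypothetical variance components (not the Parkinson's trial data) and shows equal weights under random intercepts (compound symmetry) but a negative weight under random intercepts and slopes.

```python
import numpy as np

t = np.arange(4.0)                    # four post-baseline visits
ones = np.ones(4)

def weights(sigma):
    """GLS weights on time-specific effects for a constant treatment effect."""
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

# Random intercepts only (compound symmetry): unit intercept variance,
# residual variance 0.1 (invented values).
cs = np.outer(ones, ones) + 0.1 * np.eye(4)
# Random intercepts and slopes, unit variances, zero correlation (invented).
ris = np.outer(ones, ones) + np.outer(t, t) + 0.1 * np.eye(4)

print(weights(cs))   # equal weights: 0.25 each
print(weights(ris))  # unequal; the weight at the last visit is negative
```

With a negative weight, a constant-effect estimate can come out negative even when every time-specific effect is positive, which is exactly the caution the paper raises.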
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang
2015-11-17
Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office allocation, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with more highly populated districts than GPs do. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. If regional disparities are to be addressed by policy action, the focus should be on counteracting the parameters representing physicians' preferences in over- and undersupplied regions.
Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience
Directory of Open Access Journals (Sweden)
Yan Chen
2017-01-01
Full Text Available This paper considers a d-dimensional stochastic optimization problem in neuroscience. Supposing the arm's movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space, the optimal trajectory, velocity, and variance are explicitly obtained using the stochastic control method, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement, the optimal velocity is bell-shaped, and the optimal variance is consistent with the experimentally observed Fitts law; that is, the longer the duration of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.
Wu, Jiayang; Cao, Pan; Hu, Xiaofeng; Jiang, Xinhong; Pan, Ting; Yang, Yuxing; Qiu, Ciyuan; Tremblay, Christine; Su, Yikai
2014-10-20
We propose and experimentally demonstrate an all-optical temporal differential-equation solver that can be used to solve ordinary differential equations (ODEs) characterizing general linear time-invariant (LTI) systems. The photonic device implemented by an add-drop microring resonator (MRR) with two tunable interferometric couplers is monolithically integrated on a silicon-on-insulator (SOI) wafer with a compact footprint of ~60 μm × 120 μm. By thermally tuning the phase shifts along the bus arms of the two interferometric couplers, the proposed device is capable of solving first-order ODEs with two variable coefficients. The operation principle is theoretically analyzed, and system testing of solving ODE with tunable coefficients is carried out for 10-Gb/s optical Gaussian-like pulses. The experimental results verify the effectiveness of the fabricated device as a tunable photonic ODE solver.
DEFF Research Database (Denmark)
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
...... are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-05-01
We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.
DEFF Research Database (Denmark)
Dlugosz, Stephan; Mammen, Enno; Wilke, Ralf
2017-01-01
Large data sets that originate from administrative or operational activity are increasingly used for statistical analysis as they often contain very precise information and a large number of observations. But there is evidence that some variables can be subject to severe misclassification...... or contain missing values. Given the size of the data, a flexible semiparametric misclassification model would be a good choice, but their use in practice is scarce. To close this gap, a semiparametric model for the probability of observing labour market transitions is estimated using a sample of 20 m...... observations from Germany. It is shown that estimated marginal effects of a number of covariates are sizeably affected by misclassification and missing values in the analysis data. The proposed generalized partially linear regression extends existing models by allowing a misclassified discrete covariate...
Directory of Open Access Journals (Sweden)
Nurdan Cetin
2014-01-01
Full Text Available We consider a multiobjective linear fractional transportation problem (MLFTP) with several fractional criteria, such as the maximization of transport profitability measures like profit/cost or profit/time, subject to source and destination constraints. Our aim is to introduce the MLFTP, which has not been studied in the literature before, and to provide a fuzzy approach which obtains a compromise Pareto-optimal solution for this problem. To do this, we first present a theorem which shows that the MLFTP is always solvable. Then, reducing the MLFTP to Zimmermann's "min" operator model, which is a max-min problem, we construct a Generalized Dinkelbach Algorithm for solving the resulting problem. Furthermore, we provide an illustrative numerical example to explain this fuzzy approach.
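The core of a Dinkelbach-type method is reducing a ratio objective to a sequence of linear programs: at each step, maximize numerator minus λ times denominator, then update λ to the current ratio. The sketch below is a minimal single-ratio version (not the paper's multiobjective fuzzy algorithm), with an invented feasible region on which the denominator stays positive.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: maximize (x + y + 1) / (2x + y + 2)
# subject to x + y <= 3, x, y >= 0.
c_num, a_num = np.array([1.0, 1.0]), 1.0   # numerator coefficients
c_den, a_den = np.array([2.0, 1.0]), 2.0   # denominator coefficients
A_ub, b_ub = [[1.0, 1.0]], [3.0]

lam = 0.0
for _ in range(50):
    # Parametric LP: maximize (c_num - lam * c_den)' x over the polytope.
    res = linprog(-(c_num - lam * c_den), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    x = res.x
    F = c_num @ x + a_num - lam * (c_den @ x + a_den)
    lam = (c_num @ x + a_num) / (c_den @ x + a_den)
    if abs(F) < 1e-9:          # F(lam) = 0 at the optimal ratio
        break

print(f"optimal ratio ~ {lam:.3f}")
```

The iteration stops when the parametric objective F(λ) hits zero, which is Dinkelbach's optimality condition for the ratio problem.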
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in literature, the algorithm does not require the assumptions on quasi-concavity or low-rank of the objective function to problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Wen-Min Zhou
2013-01-01
Full Text Available This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with a weighted graph is proposed to address the packet dropout phenomenon. By introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for the MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusable conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
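The effect of Bernoulli packet dropout on a consensus protocol can be illustrated with a minimal simulation. This sketch uses single-integrator agents on a ring with symmetric per-edge dropout, all parameters invented; it is far simpler than the general linear agent dynamics analyzed in the paper, but shows the same qualitative behavior: despite random losses, the disagreement shrinks toward consensus.

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps, eps, p_drop = 5, 200, 0.2, 0.3      # invented parameters
edges = [(i, (i + 1) % n) for i in range(n)]  # ring topology

x = rng.normal(0.0, 1.0, n)                   # initial agent states
mean0 = x.mean()
for _ in range(steps):
    dx = np.zeros(n)
    for i, j in edges:
        if rng.random() > p_drop:             # edge delivers this step
            dx[i] += eps * (x[j] - x[i])
            dx[j] += eps * (x[i] - x[j])
    x += dx

print(x.max() - x.min())  # disagreement shrinks toward zero
```

Dropping each edge symmetrically (both directions together) preserves the state average, so the agents converge to the mean of the initial states; asymmetric dropout would generally destroy this average-preserving property.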
Hennelly, Bryan M.; Sheridan, John T.
2005-05-01
By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.
A distributed-memory hierarchical solver for general sparse linear systems
Energy Technology Data Exchange (ETDEWEB)
Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering
2017-12-20
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
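The structural idea the abstract relies on, that fill-in blocks coupling well-separated subdomains are numerically low-rank, can be sketched in a few lines. The kernel, cluster geometry, and truncation tolerance below are assumptions chosen only to produce a smooth (hence compressible) off-diagonal block; a real hierarchical solver applies this compression recursively inside a sparse factorization.

```python
import numpy as np

def truncate_svd(B, tol):
    """Return factors U, V with B ~= U @ V and spectral error at most tol."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(s > tol))
    return U[:, :r] * s[:r], Vt[:r, :]

# Smooth interaction between two well-separated point clusters.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
B = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

U, V = truncate_svd(B, tol=1e-10)
rank = U.shape[1]
err = np.linalg.norm(B - U @ V, 2)
print(f"rank {rank} of 200, spectral error {err:.1e}")
```

Storing the two thin factors costs O(nr) instead of O(n^2), which is the source of the memory savings claimed for the hierarchical solver.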
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model sets the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with that of the generalized beta of the second kind and lognormal distributions.
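The mixture construction mentioned above admits a compact simulation sketch. In Reed and Jorgensen's formulation, if Z is standard normal and E1, E2 are independent unit exponentials, then log X = nu + tau*Z + E1/alpha - E2/beta is normal plus asymmetric Laplace, so X follows the DPLN law. The parameter values here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdpln(n, nu=0.0, tau=0.5, alpha=3.0, beta=2.0):
    """Draw n samples from a double Pareto lognormal distribution."""
    z = rng.standard_normal(n)
    e1 = rng.exponential(size=n)
    e2 = rng.exponential(size=n)
    return np.exp(nu + tau * z + e1 / alpha - e2 / beta)

x = rdpln(100_000)
# E[log X] = nu + 1/alpha - 1/beta, i.e. about -0.167 for these parameters.
print(np.log(x).mean())
```

The exponential terms produce Pareto-type behavior in both tails of X, which is exactly the feature that makes the DPLN attractive for heavy-tailed claim amounts.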
A general mixed boundary model reduction method for component mode synthesis
International Nuclear Information System (INIS)
Voormeeren, S N; Van der Valk, P L C; Rixen, D J
2010-01-01
A classic issue in component mode synthesis (CMS) methods is the choice of fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the component reduction basis. In this paper, a novel mixed boundary CMS method called the 'Mixed Craig-Bampton' method is proposed. The method is derived by dividing the substructure DoF into a set of internal DoF, free interface DoF and fixed interface DoF. To this end a simple but effective scheme is introduced that, for every pair of interface DoF, selects a free or fixed boundary condition for each DoF individually. Based on this selection, a reduction basis is computed consisting of vibration modes, static constraint modes and static residual flexibility modes. In order to assemble the reduced substructures a novel mixed assembly procedure is developed. It is shown that this approach leads to relatively sparse reduced matrices, whereas other mixed boundary methods often lead to full matrices. As such, the Mixed Craig-Bampton method forms a natural generalization of the classic Craig-Bampton and more recent Dual Craig-Bampton methods. Finally, the method is applied to a finite element test model. Analysis reveals that the proposed method has comparable or better accuracy and superior versatility with respect to the existing methods.
A mixed integer linear programming model applied in barge planning for Omya
Directory of Open Access Journals (Sweden)
David Bredström
2015-12-01
This article presents a mathematical model for barge transport planning on the river Rhine, which is part of a decision support system (DSS) recently taken into use by the Swiss company Omya. The system is operated by Omya's regional office in Cologne, Germany, responsible for distribution planning at the regional distribution center (RDC) in Moerdijk, the Netherlands. The distribution planning is a vital part of supply chain management for Omya's production of Norwegian high-quality calcium carbonate slurry, supplied to European paper manufacturers. The DSS operates within a vendor managed inventory (VMI) setting, where the customer inventories are monitored by Omya, who decides upon the refilling days and the quantities delivered by barges. The barge planning problem falls into the category of inventory routing problems (IRP) and is further characterized by multiple products, a heterogeneous fleet with availability restrictions (the fleet is owned by a third party), vehicle compartments, dependency of barge capacity on water level, multiple customer visits, bounded customer inventories and a rolling planning horizon. There are additional modelling details which had to be considered to make it possible to employ the model in practice at a sufficient level of detail. To the best of our knowledge, we have not been able to find similar models covering all these aspects in barge planning. This article presents the developed mixed-integer programming model and discusses practical experience with its solution. Briefly, it also puts the model into the context of the entire business case of value chain optimization at Omya.
Non-linear general instability of ring-stiffened conical shells under external hydrostatic pressure
International Nuclear Information System (INIS)
Ross, C T F; Kubelt, C; McLaughlin, I; Etheridge, A; Turner, K; Paraskevaides, D; Little, A P F
2011-01-01
The paper presents the experimental results for 15 ring-stiffened circular steel conical shells, which failed by non-linear general instability. The results of these investigations were compared with various theoretical analyses, including an ANSYS eigen-buckling analysis and a second ANSYS analysis that involved a step-by-step method until collapse, in which both material and geometrical nonlinearity were considered. The investigation also involved an analysis using BS5500 (PD 5500), together with the method of Ross of the University of Portsmouth. The ANSYS eigen-buckling analysis tended to overestimate the predicted buckling pressures, whereas the ANSYS nonlinear results compared favourably with the experimental results. The PD 5500 analysis was very time consuming and tended to grossly underestimate the experimental buckling pressures, although in some cases it overestimated them. In contrast to PD 5500 and ANSYS, the design charts of Ross of the University of Portsmouth were the easiest of all these methods to use and generally only slightly underestimated the experimental collapse pressures. The ANSYS analyses also produced some excellent graphical displays.
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
Naval Surface Warfare Center, Panama City Division, Panama City, Florida 32407-7001. This paper is outlined as follows: in Section 2, we discuss the general setup of the MCM scheduling problem, including the definition of the...
The prevalence and morbidity of sensitization to fragrance mix I in the general population
DEFF Research Database (Denmark)
Thyssen, J P; Linneberg, A; Menné, T
2009-01-01
BACKGROUND: The prevalence of sensitization to fragrance mix (FM) I and Myroxylon pereirae (MP, balsam of Peru) has decreased in recent years among Danish women with dermatitis. OBJECTIVES: This study investigated whether the decrease could be confirmed among women in the general population. Furt...... supported a recent decrease in the prevalence of FM I and MP sensitization in Denmark. The study also showed that fragrance sensitization was associated with self-reported cosmetic dermatitis and use of health care related to cosmetic dermatitis....
Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L; Roland, Martin
2012-08-01
Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would 'cream-skim' by not enrolling patients from vulnerable socio-demographic groups.
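The adjustment logic described above can be sketched on toy data. This is not the paper's mixed-effects specification; it is a minimal regression-based illustration, with all data simulated, of why raw practice means are confounded when case mix (here, a single age covariate) varies across practices, and how removing the fitted covariate effect before averaging recovers the practice-level signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n_practices, n_per = 50, 200
practice = np.repeat(np.arange(n_practices), n_per)
practice_effect = rng.normal(0, 1, n_practices)          # true quality signal
age = rng.normal(50 + 10 * (practice % 2), 8, n_practices * n_per)  # mix varies
score = 60 + 0.2 * age + practice_effect[practice] + rng.normal(0, 5, len(age))

# Fit score ~ 1 + age by least squares, then average residuals by practice.
X = np.column_stack([np.ones_like(age), age])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
adjusted = score - X @ coef                              # age effect removed
raw_means = np.array([score[practice == p].mean() for p in range(n_practices)])
adj_means = np.array([adjusted[practice == p].mean() for p in range(n_practices)])

corr_raw = np.corrcoef(raw_means, practice_effect)[0, 1]
corr_adj = np.corrcoef(adj_means, practice_effect)[0, 1]
print(f"corr with true effect: raw={corr_raw:.3f}, adjusted={corr_adj:.3f}")
```

Practices whose patients are systematically older score higher for reasons unrelated to quality; adjustment strips that component, which mirrors the paper's finding that adjustment mainly helps practices serving atypical populations.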
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for the logistic GLM. Its properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLMs with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC; in this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.
INVESTIGATION OF SECONDARY MIXED RADIATION FIELD AROUND A MEDICAL LINEAR ACCELERATOR.
Tulik, Piotr; Tulik, Monika; Maciak, Maciej; Golnik, Natalia; Kabat, Damian; Byrski, Tomasz; Lesiak, Jan
2017-09-29
The aim of this study is to investigate the secondary mixed radiation field around a linac, as the first part of an overall assessment of the out-of-field contribution of neutron dose for new advanced radiation dose delivery techniques. All measurements were performed around a Varian Clinac 2300 C/D accelerator at the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Krakow Branch. Recombination chambers REM-2 and GW2 were used for determination of the recombination index of radiation quality Q4 (as an estimate of the quality factor Q), measurement of the total tissue dose Dt, and calculation of the gamma and neutron components of Dt. Estimation of Dt and Q4 allowed calculation of the ambient dose equivalent H*(10) per monitor unit (MU). Measurements around the linac were performed at the height of the middle of the linac's head (three positions) and at the height of the linac's isocentre (five positions). Estimation of the secondary radiation level was carried out for seven different configurations of the upper and lower jaw positions, with the multileaf collimator set open or closed in each position. The study included two photon beam modes: 6 and 18 MV. The spatial distribution of ambient dose equivalent H*(10) per MU at the height of the linac's head and at the standard couch height for patients during routine treatment, as well as the relative contributions of gamma and neutron secondary radiation inside the treatment room, were evaluated. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Ana Calabrese
2011-01-01
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
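The generative side of the model described above (stimulus filter plus post-spike filter driving a conditional intensity) can be sketched as a simulation; fitting by penalized likelihood is a separate step not shown here. The filter shapes, baseline rate, and one-dimensional stimulus below are assumptions standing in for the spectrogram-based filters of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.001, 5.0                             # 1 ms bins, 5 s of data
n = int(T / dt)
stim = rng.standard_normal(n)

k = 0.3 * np.exp(-np.arange(30) / 10.0)        # stimulus filter (30 ms)
h = -4.0 * np.exp(-np.arange(20) / 5.0)        # post-spike filter (refractory)
b = np.log(20.0)                               # baseline log-rate, 20 spikes/s

spikes = np.zeros(n)
for t in range(n):
    drive = b
    s0 = max(0, t - len(k))
    drive += stim[s0:t][::-1] @ k[: t - s0]    # recent stimulus, filtered
    sp0 = max(0, t - len(h))
    drive += spikes[sp0:t][::-1] @ h[: t - sp0]  # recent spiking, filtered
    rate = np.exp(drive)                       # conditional intensity (Hz)
    spikes[t] = float(rng.random() < min(rate * dt, 1.0))

print(f"{spikes.sum():.0f} spikes in {T:.0f} s")
```

The negative post-spike filter suppresses the intensity right after each spike, giving the refractory-like history dependence that distinguishes this GLM from a plain linear-nonlinear model.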
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
International Nuclear Information System (INIS)
Maeda, Atsushi; Sasayama, Tatsuo; Iwai, Takashi; Aizawa, Sakuei; Ohwada, Isao; Aizawa, Masao; Ohmichi, Toshihiko; Handa, Muneo
1988-11-01
Two pins containing uranium-plutonium carbide fuels differing in stoichiometry, i.e. (U,Pu)C1.0 and (U,Pu)C1.1, were assembled into a capsule, ICF-37H, and were irradiated in JRR-2 up to 1.0 at% burnup at a linear heat rate of 420 W/cm. After being cooled for about one year, the irradiated capsule was transferred to the Reactor Fuel Examination Facility, where non-destructive examinations of the fuel pins in the β-γ cells and destructive ones in two α-γ inert-gas-atmosphere cells were carried out. The release rates of fission gas were low, 0.44% from the (U,Pu)C1.0 fuel pin and 0.09% from the (U,Pu)C1.1 fuel pin, which is reasonable because of the low central temperature of the fuel pellets, about 1000 deg C; it is estimated that the release is mainly governed by recoil and knock-out mechanisms. Volume swelling of the fuels was observed to be in the range of 1.3-1.6% for carbide fuels below 1000 deg C. The respective open porosities of the (U,Pu)C1.0 and (U,Pu)C1.1 fuels were 1.3% and 0.45%, in accordance with the release behavior of fission gas. Metallographic observation of the radial sections of pellets showed an increase of pore size and crystal grain size in the center and middle regions of the (U,Pu)C1.0 pellets. The chemical interaction between fuel pellets and claddings in the carbide fuels is the penetration of carbon from the fuels into the stainless steel tubes. The depth of the corrosion layer on the inner sides of the cladding tubes ranged from 10 to 15 μm in the (U,Pu)C1.0 fuel and from 15 to 25 μm in the (U,Pu)C1.1 fuel, which correlates with the carbon potential of the fuels, possibly affecting the amount of carbon penetration. (author)
Directory of Open Access Journals (Sweden)
Mbarek Elbounjimi
2015-11-01
Closed-loop supply chain network design is a critical issue due to its impact on both the economic and environmental performance of the supply chain. In this paper, we address the problem of designing a multi-echelon, multi-product and capacitated closed-loop supply chain network. First, a mixed-integer linear programming formulation is developed to maximize the total profit. The main contribution of the proposed model is addressing two economic viability issues of the closed-loop supply chain. The first is ensuring that a sufficient quantity of end-of-life products is collected by retailers against an acquisition price. The second is exploiting the benefits of colocating forward facilities and reverse facilities. The presented model is solved by LINGO for some test problems. Computational results and sensitivity analysis are conducted to show the performance of the proposed model.
Energy Technology Data Exchange (ETDEWEB)
Ramos, Daniel, E-mail: daniel.ramos@csic.es; Frank, Ian W.; Deotare, Parag B.; Bulu, Irfan; Lončar, Marko [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States)
2014-11-03
We investigate the coupling between mechanical and optical modes supported by coupled, freestanding, photonic crystal nanobeam cavities. We show that localized cavity modes for a given gap between the nanobeams provide weak optomechanical coupling with out-of-plane mechanical modes. However, we show that the coupling can be significantly increased, by more than an order of magnitude for the symmetric mechanical mode, due to optical resonances that arise from the interaction of the localized cavity modes with standing waves formed by reflection from the substrate. Finally, amplification of motion for the symmetric mode has been observed and attributed to the strong optomechanical interaction of our hybrid system. The amplitude of these self-sustained oscillations is large enough to put the system into a non-linear oscillation regime where mixing between the mechanical modes is experimentally observed and theoretically explained.
Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis
2005-04-01
Few scientific contributions have made significant impact unless there was a champion who had the vision to see the potential for its use in seemingly disparate areas-and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.
International Nuclear Information System (INIS)
Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta
2012-01-01
Purpose: It has conventionally been assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case; rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown would suggest that an exponential repair pattern does not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat foot skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating a reciprocal time pattern of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
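The role of the repair factor G can be sketched for the simplest scheme, two equal fractions separated by a gap t, using the Thames-style incomplete-repair form SF = exp(-(2αd + 2βd²(1 + θ(t)))). For exponential repair θ(t) = exp(-μt); as a stand-in for the paper's reciprocal pattern we try θ(t) = 1/(1 + t/T_r). Both the exact reciprocal form and all parameter values here are assumptions for illustration, not the paper's derived closed form.

```python
import numpy as np

def sf_split(d, t, alpha, beta, theta):
    """Surviving fraction for two fractions of dose d separated by time t."""
    return np.exp(-(2 * alpha * d + 2 * beta * d**2 * (1 + theta(t))))

alpha, beta, Tr = 0.3, 0.03, 1.5          # Gy^-1, Gy^-2, hours (assumed)
mu = np.log(2) / Tr                       # exponential repair rate
exp_theta = lambda t: np.exp(-mu * t)     # classical exponential repair
rec_theta = lambda t: 1.0 / (1.0 + t / Tr)  # illustrative reciprocal pattern

for t in [0.0, 2.0, 8.0]:
    print(t, sf_split(2.0, t, alpha, beta, exp_theta),
          sf_split(2.0, t, alpha, beta, rec_theta))
```

At t = 0 both patterns give θ = 1 and the split course collapses to a single dose 2d; as the gap grows the reciprocal θ decays more slowly at long times, which is the slowed-repair behavior the paper argues fits the animal data better.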
Furlotte, Nicholas A; Eskin, Eleazar
2015-05-01
Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.
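The computational trick underlying such speedups can be sketched in the single-trait case (the paper extends the idea to the matrix-variate, multi-trait setting). With cov(y) = sg2*K + se2*I and K = U diag(s) Uᵀ, rotating the data by Uᵀ makes the covariance diagonal, so each likelihood evaluation costs O(n) after a single O(n³) eigendecomposition. The "kinship" matrix and variance components below are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
A = rng.standard_normal((n, n))
K = A @ A.T / n                       # stand-in positive semidefinite kinship
s, U = np.linalg.eigh(K)

sg2, se2 = 2.0, 1.0                   # genetic and environmental variances
V = sg2 * K + se2 * np.eye(n)
y = np.linalg.cholesky(V) @ rng.standard_normal(n)

def nll_direct(y, V):
    """Gaussian negative log-likelihood kernel via a dense solve, O(n^3)."""
    sign, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + y @ np.linalg.solve(V, y))

def nll_rotated(y_rot, s, sg2, se2):
    """Same quantity after rotation: diagonal covariance, O(n) per call."""
    d = sg2 * s + se2
    return 0.5 * (np.log(d).sum() + ((y_rot**2) / d).sum())

y_rot = U.T @ y
n1 = nll_direct(y, V)
n2 = nll_rotated(y_rot, s, sg2, se2)
print(n1, n2)                         # agree up to rounding
```

Because the rotation is computed once while the variance components are re-evaluated many times during maximum-likelihood search, the per-iteration cost drops dramatically, which is the source of the reported speedup.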
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation (MLE) method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
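The flexibility claimed above comes from the COM-Poisson pmf P(Y = k) ∝ λᵏ / (k!)ᵛ, where ν = 1 recovers the Poisson, ν < 1 gives overdispersion, and ν > 1 gives underdispersion. A minimal sketch, assuming a 200-term truncation of the normalizing series is sufficient for the parameter values shown:

```python
import numpy as np

def com_poisson_pmf(lam, nu, terms=200):
    """Truncated COM-Poisson pmf: P(k) proportional to lam^k / (k!)^nu."""
    k = np.arange(terms)
    logfact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1.0, terms)))])
    log_w = k * np.log(lam) - nu * logfact
    w = np.exp(log_w - log_w.max())     # stabilized unnormalized weights
    return w / w.sum()

def moments(p):
    k = np.arange(len(p))
    m = (k * p).sum()
    return m, (k * k * p).sum() - m * m

m1, v1 = moments(com_poisson_pmf(4.0, 1.0))   # nu = 1 recovers Poisson(4)
m0, v0 = moments(com_poisson_pmf(4.0, 0.5))   # nu < 1: overdispersed
print(m1, v1, v0 / m0)
```

In the GLM version assessed by the article, λ (or the distribution's mean) is linked to covariates while ν controls the dispersion, which is what lets one family cover under-, equi-, and overdispersed counts.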
Directory of Open Access Journals (Sweden)
Miguel Flores
2016-11-01
This work aims to classify DNA sequences from healthy tissue and from malignant cancers. For this, supervised and unsupervised classification methods from a functional context are used; i.e., each strand of DNA is an observation. The observations are discretized; for that reason, different ways to represent these observations with functions are evaluated. In addition, an exploratory study is done, estimating the mean and variance functions of each type of cancer. For the unsupervised classification method, hierarchical clustering with different measures of functional distance is used. For the supervised classification method, a functional generalized linear model is used, in which the first and second derivatives are included as discriminating variables. It has been verified that one of the advantages of working in the functional context is obtaining a model that classifies the cancers 100% correctly. The methods were implemented with the fda.usc R package, which includes all the techniques of functional data analysis used in this work, as well as others developed in recent decades. For more details of these techniques, see Ramsay and Silverman (2005) and Ferraty et al. (2006).
Population decoding of motor cortical activity using a generalized linear model with hidden states.
Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam
2010-06-15
Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
A new approach in simulating RF linacs using a general, linear real-time signal processor
International Nuclear Information System (INIS)
Young, A.; Jachim, S.P.
1991-01-01
Strict requirements on the tolerances of the amplitude and phase of the radio frequency (RF) cavity field are necessary to advance the field of accelerator technology. Due to these stringent requirements on modern accelerators, a new approach to modeling and simulation is essential for developing and understanding their characteristics. This paper describes the implementation of a general, linear model of an RF cavity which is used to develop a real-time signal processor. This device fully emulates the response of an RF cavity upon receiving characteristic parameters (Q₀, ω₀, Δω, R_s, Z₀). Simulating an RF cavity with a real-time signal processor is beneficial to an accelerator designer because the device allows one to answer fundamental questions on the response of the cavity to a particular stimulus without operating the accelerator. In particular, the complex interactions between the RF power and the control systems, the beam, and the cavity fields can simply be observed in the real-time domain. The signal processor can also be used upon initialization of the accelerator as a diagnostic device and as a dummy load for determining the closed-loop error of the control system. In essence, the signal processor is capable of providing information that allows an operator to determine whether the control systems and peripheral devices are operating properly without going through the tedious procedure of running the beam through a cavity.
Directory of Open Access Journals (Sweden)
Tülin Acar
2012-01-01
The aim of this research is to compare the results of differential item functioning (DIF) detection using the hierarchical generalized linear model (HGLM) technique with the results of DIF detection using the logistic regression (LR) and item response theory-likelihood ratio (IRT-LR) techniques on the same test items. To this end, it is first determined whether students encounter DIF, as assessed with the HGLM, LR, and IRT-LR techniques according to socioeconomic status (SES), in the Turkish, Social Sciences, and Science subtest items of the Secondary School Institutions Examination. When inspecting the agreement among the techniques in identifying items exhibiting DIF, a significant correlation was found between the results of the IRT-LR and LR techniques in all subtests; only in the Science subtest was the correlation between the HGLM and IRT-LR results found significant. DIF analyses can also be conducted on test items with other DIF techniques that were outside the scope of this research, and results obtained with these techniques in different sample sizes can likewise be compared.
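The LR technique compared above has a compact core: for one item, fit a logistic model with an ability measure alone, then add a group term; a significant group coefficient (e.g., by a likelihood-ratio test) flags uniform DIF. The sketch below simulates data in which the item is genuinely harder for one group at equal ability; using the true latent ability in place of the observed total score, and all parameter values, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_logistic(X, y, iters=30):
    """Newton-Raphson (IRLS) for logistic regression; returns coef, loglik."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ b))
    return b, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

n = 4000
theta = rng.standard_normal(n)                 # ability
group = rng.integers(0, 2, n)
logit = 1.2 * theta - 0.8 * group              # uniform DIF against group 1
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X0 = np.column_stack([np.ones(n), theta])            # ability only
X1 = np.column_stack([np.ones(n), theta, group])     # ability + group
_, ll0 = fit_logistic(X0, y)
b1, ll1 = fit_logistic(X1, y)
lr_stat = 2 * (ll1 - ll0)                      # ~ chi-square(1) under no DIF
print(f"group coefficient {b1[2]:.2f}, LR statistic {lr_stat:.1f}")
```

A nonuniform-DIF check would add an ability-by-group interaction term in the same way; the HGLM approach nests items within persons instead but tests an analogous group effect.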
Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States
Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam
2010-01-01
Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations of the relationship between neural activity and motor behavior. However, they lack a description of movement-related terms that are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework, where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an expectation-maximization algorithm. We tested this new method on two datasets in which spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model fit over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements in representation and decoding over the classical GLM suggest that this new approach could serve as a useful tool for motor cortical decoding and prosthetic applications. PMID:20359500
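A minimal version of the classical Poisson GLM encoder that the paper extends (without the hidden state or spike history) can be sketched as follows; the velocity-tuning covariates and coefficient values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_poisson_glm(X, y, iters=50):
    """Poisson GLM with log link, fitted by Newton-Raphson (a minimal sketch)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ b, -20, 20))
        H = X.T @ (X * mu[:, None]) + 1e-8 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - mu))
    return b

# Hypothetical tuning: the spike count in each bin depends log-linearly on
# the hand velocity components (vx, vy), as in a classical GLM encoder.
T = 3000
vx, vy = rng.normal(0, 1, T), rng.normal(0, 1, T)
X = np.column_stack([np.ones(T), vx, vy])
true_b = np.array([0.5, 0.8, -0.4])
y = rng.poisson(np.exp(X @ true_b))       # simulated spike counts

b_hat = fit_poisson_glm(X, y)             # recovers true_b from the counts
```

The paper's contribution is to augment the covariate vector with a latent state and fit the whole model by EM; the Newton step above then becomes the M-step for the observation model.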
International Nuclear Information System (INIS)
Manrique, John Peter O.; Costa, Alessandro M.
2016-01-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used; these rely on convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to compute three-dimensional dose distributions, thereby ensuring better accuracy in tumor control probabilities while keeping normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. To validate the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam using Monte Carlo simulation with the PENELOPE code, and from the PDD we then calculated the beam quality index TPR_20/10. (author)
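The stochastic reconstruction idea can be illustrated with plain simulated annealing on a toy transmission-curve problem: recover spectral bin weights w_i from T(t) = Σ_i w_i exp(-μ_i t). Note the paper uses the generalized (Tsallis) variant of annealing; the classical Metropolis sketch below, with hypothetical attenuation coefficients and weights, only illustrates the stochastic search.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy spectrum reconstruction from aluminum transmission data (hypothetical).
mu = np.array([0.20, 0.10, 0.05])           # attenuation coefficients [1/mm]
w_true = np.array([0.2, 0.5, 0.3])          # true (unknown) spectral weights
t = np.linspace(0, 40, 30)                  # absorber thicknesses [mm]
T_meas = np.exp(-np.outer(t, mu)) @ w_true  # noiseless "measured" transmission

def misfit(w):
    return np.sum((np.exp(-np.outer(t, mu)) @ w - T_meas) ** 2)

w = np.full(3, 1 / 3)                        # initial guess: flat spectrum
best, best_cost = w.copy(), misfit(w)
temp = 1e-3
for step in range(20000):
    cand = np.abs(w + rng.normal(0, 0.02, 3))
    cand /= cand.sum()                       # keep the weights normalized
    d = misfit(cand) - misfit(w)
    if d < 0 or rng.random() < np.exp(-d / temp):   # Metropolis acceptance
        w = cand
        if misfit(w) < best_cost:
            best, best_cost = w.copy(), misfit(w)
    temp *= 0.9995                           # geometric cooling schedule
```

The GSA variant replaces the Gaussian proposal and Boltzmann acceptance with Tsallis-statistics analogues, which allow occasional long jumps and typically faster escape from local minima.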
Systems of general nonlinear set-valued mixed variational inequalities problems in Hilbert spaces
Directory of Open Access Journals (Sweden)
Cho Yeol
2011-01-01
In this paper, existence theorems and methods for finding solutions of systems of general nonlinear set-valued mixed variational inequality problems in Hilbert spaces are studied. To overcome the difficulties due to the presence of a proper convex lower semi-continuous function φ and a mapping g in the considered problem, we use applications of the resolvent operator technique. We would like to point out that although many authors have proved results for finding solutions of systems of nonlinear set-valued (mixed) variational inequality problems, those results cannot be applied directly to the problem considered in this paper because of φ and g. 2000 AMS Subject Classification: 47H05; 47H09; 47J25; 65J15.
An Optimal Order Nonnested Mixed Multigrid Method for Generalized Stokes Problems
Deng, Qingping
1996-01-01
A multigrid algorithm is developed and analyzed for generalized Stokes problems discretized by various nonnested mixed finite elements within a unified framework. It is proved abstractly, by an element-independent analysis, that the multigrid algorithm converges with optimal order if there exists a 'good' prolongation operator. A technique to construct a 'good' prolongation operator for nonnested multilevel finite element spaces is proposed. Its basic idea is to introduce a sequence of auxiliary nested multilevel finite element spaces and to define a prolongation operator as a composite of two single-grid-level operators. This not only makes the construction of a prolongation operator much easier (the final explicit forms of such prolongation operators are fairly simple), but also simplifies the verification of the approximation properties of prolongation operators. Finally, as an application, the framework and technique are applied to seven typical nonnested mixed finite elements.
International Nuclear Information System (INIS)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.
2014-01-01
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography
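The leave-one-out comparison of a pooled volume-change model against a static reference can be sketched as follows, on simulated (hypothetical) daily tumor volumes rather than the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Leave-one-out comparison of a static model (volume stays at its initial
# value) against a linear shrinkage model whose per-day fractional rate is
# pooled across the training patients. All volumes below are simulated.
n_pat, n_days = 12, 30
days = np.arange(n_days)
v0 = rng.uniform(10, 40, n_pat)                         # initial volumes [cm^3]
vols = v0[:, None] * (1 - 0.015 * days) + rng.normal(0, 0.3, (n_pat, n_days))

errs_static, errs_linear = [], []
for i in range(n_pat):                                  # leave patient i out
    train = np.delete(np.arange(n_pat), i)
    frac = vols[train] / vols[train, 0][:, None]        # relative volume
    rate = np.polyfit(np.tile(days, len(train)), frac.ravel(), 1)[0]
    pred_linear = vols[i, 0] * (1 + rate * days)
    pred_static = np.full(n_days, vols[i, 0])
    errs_linear.append(np.mean((pred_linear - vols[i]) ** 2))
    errs_static.append(np.mean((pred_static - vols[i]) ** 2))

rmse_static = np.sqrt(np.mean(errs_static))
rmse_linear = np.sqrt(np.mean(errs_linear))             # should be smaller
```

The paper's general linear and functional models replace the single pooled rate with a power-fit relationship and with principal modes of the volume time series, but the cross-validation scaffolding is the same.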
Optimization of light quality from color mixing light-emitting diode systems for general lighting
Thorseth, Anders
2012-03-01
Given the problem of metamerism inherent in color mixing in light-emitting diode (LED) systems with more than three distinct colors, a method has been developed for optimizing the spectral output of a multicolor LED system with regard to standardized light quality parameters. The composite spectral power distribution from the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other nonlinear effects in electrical power vs. light output. The method uses the electrical input powers as parameters in a randomized steepest-descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature, and chromaticity distance. The results indicate Pareto-optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
Generalized Projective Synchronization between Two Different Neural Networks with Mixed Time Delays
Directory of Open Access Journals (Sweden)
Xuefei Wu
2012-01-01
The generalized projective synchronization (GPS) between two different neural networks with nonlinear coupling and mixed time delays is considered. Several kinds of nonlinear feedback controllers are designed to achieve GPS between two such neural networks. Some results for GPS of these neural networks are proved theoretically by using the Lyapunov stability theory and the LaSalle invariance principle. Moreover, by comparison, we determine an optimal nonlinear controller from among several and provide an adaptive update law for it. Computer simulations are provided to show the effectiveness and feasibility of the proposed methods.
Nontrauma emergency surgery: optimal case mix for general surgery and acute care surgery training.
Cherry-Bukowiec, Jill R; Miller, Barbra S; Doherty, Gerard M; Brunsvold, Melissa E; Hemmila, Mark R; Park, Pauline K; Raghavendran, Krishnan; Sihler, Kristen C; Wahl, Wendy L; Wang, Stewart C; Napolitano, Lena M
2011-11-01
To examine the case mix, patient characteristics, and outcomes of the nontrauma emergency (NTE) service in an academic Division of Acute Care Surgery. An NTE service (attending, chief resident, postgraduate year-3 and postgraduate year-2 residents, and two physician assistants) was created in July 2005 for all urgent and emergent inpatient and emergency department general surgery patient consults and admissions. An NTE database was created with prospective data collection of all NTE admissions initiated from November 1, 2007. Prospective data were collected daily by a dedicated trauma registrar and an Acute Physiology and Chronic Health Evaluation-intensive care unit (ICU) coordinator. NTE case mix and ICU characteristics were reviewed for the 2-year period January 1, 2008, through December 31, 2009. During the same period, trauma operative cases and procedures were examined and compared with the NTE case mix. A total of 1708 patients were admitted to the NTE service during this period (789 in 2008 and 910 in 2009). Surgical intervention was required in 70% of patients admitted to the NTE service. Exploratory laparotomy or laparoscopy was performed in 449 NTE patients, comprising 37% of all surgical procedures. In comparison, only 118 trauma patients (5.9% of admissions) required a major laparotomy or thoracotomy during the same period. Acuity of illness of NTE patients was high, with a significant portion (13%) of NTE patients requiring ICU admission. NTE patients had higher admission Acute Physiology and Chronic Health Evaluation III scores [61.2 vs. 58.8 (2008); 58.2 vs. 55.8 (2009)], increased mortality [9.71% vs. 4.89% (2008); 6.78% vs. 5.16% (2009)], and increased readmission rates (15.5% vs. 7.4%) compared with total surgical ICU (SICU) admissions. In an era of declining operative caseload in trauma, the NTE service provides ample opportunity for complex general surgery decision making and operative procedures for
Radiating spheres in general relativity with a mixed transport energy flow
International Nuclear Information System (INIS)
Barreto, W.; Nunez, L.A.
1989-10-01
A seminumeric method of Herrera, Jimenez and Ruggeri is extended to handle the evolution of general relativistic spheres in which diffusion and free-streaming radiation processes coexist. It is shown that when mixed-mode radiation is present, a hydrodynamic picture emerges that is very different from the models previously considered in either radiation limit. Characteristic times for free-streaming, hydrodynamic, and diffusion processes are considered comparable. Hydrodynamics and radiation are strongly coupled, and the particular equation of state of the model emerges as a very important element in the dynamics of the matter distribution. (author). 16 refs, 5 figs
Predicting stem borer density in maize using RapidEye data and generalized linear models
Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le
2017-05-01
Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared to a global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were used to predict per-field maize stem borer larva densities using generalized linear models (GLMs) assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance with a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, at the two study sites respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
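The RMSE and RPD evaluation under leave-one-out cross-validation can be sketched on simulated overdispersed counts; the single hypothetical vegetation index and the crude log-linear fitter below stand in for the paper's SVIs and count GLMs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Leave-one-out RMSE and RPD (standard deviation of the observations over
# the RMSE) for a count regression. The "vegetation index" and the negative
# binomial larva counts are simulated, hypothetical data.
n = 60
svi = rng.uniform(0.2, 0.8, n)                   # hypothetical vegetation index
mu = np.exp(0.5 + 4.0 * svi)                     # mean larva density per field
y = rng.negative_binomial(5, 5 / (5 + mu))       # overdispersed counts, mean mu

def fit_loglinear(x, y):
    """Least squares on log(y+1): a crude stand-in for the paper's GLMs."""
    return np.polyfit(x, np.log1p(y), 1)         # returns (slope, intercept)

preds = np.empty(n)
for i in range(n):                               # leave-one-out cross-validation
    idx = np.delete(np.arange(n), i)
    b1, b0 = fit_loglinear(svi[idx], y[idx])
    preds[i] = np.expm1(b0 + b1 * svi[i])

rmse = np.sqrt(np.mean((preds - y) ** 2))
rpd = np.std(y) / rmse                           # >1 means skill beyond the mean
```

In the paper the fitter is a Poisson, NB, or zero-inflated NB GLM, and the predictor set is the 5 bands plus 30 SVIs, but the RMSE/RPD scaffolding is identical.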
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, the limited number of samples, and the small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for the coefficients that induces weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates attractive features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors and expression data for 4919 genes, and the ovarian cancer data set from TCGA, with 362 tumors and expression data for 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in the freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M
2017-06-01
Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
Directory of Open Access Journals (Sweden)
Pau Baya
2011-05-01
"Remenat" ("mixed" in Catalan; "revoltillo", "scrambled", in Spanish) is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. "Scrambled" refers to the action of mixing the beaten egg with the other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform, dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, and adapting its shape to the commands of one's hands or of the tool being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next consecutively, like a tongue of incandescent lava advancing slowly but with unstoppable inertia.
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
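The linearity property at issue, S(u+v) = S(u) + S(v) for an advection step S, can be checked directly. Below, first-order upwind (linear) is contrasted with a minmod-limited second-order scheme standing in for flux-limited PPM; the 1-D periodic domain and the test fields are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def upwind_step(u, c):
    """First-order upwind step (periodic, Courant number c): linear in u."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_step(u, c):
    """Second-order upwind with a minmod flux limiter (nonlinear near shocks)."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1 - c) * du             # reconstructed face values
    return u - c * (face - np.roll(face, 1))

# Superposition test: a square-wave train (many shocks) plus a rough field.
u = ((np.arange(50) // 5) % 2).astype(float)
v = 0.3 * rng.normal(size=50)
c = 0.5

err_upwind = np.max(np.abs(upwind_step(u + v, c) - upwind_step(u, c) - upwind_step(v, c)))
err_limited = np.max(np.abs(limited_step(u + v, c) - limited_step(u, c) - limited_step(v, c)))
```

`err_upwind` is zero to rounding, while `err_limited` is not: the limiter switches branches at the shocks, which is exactly the nonlinearity that corrupts a tangent linear or adjoint model built from such a scheme.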
Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Newton, Kenneth W. (Inventor); Cunningham, Thomas J. (Inventor)
2014-01-01
An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image sensor into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
Mixed multiplicities for arbitrary ideals and generalized Buchsbaum-Rim multiplicities
International Nuclear Information System (INIS)
Callejas-Bedregal, R.; Jorge Perez, V.H.
2005-12-01
We first introduce the notion of mixed multiplicities for arbitrary ideals in a d-dimensional Noetherian local ring (A, m), which, in some sense, generalizes the concept of mixed multiplicities for m-primary ideals. We also generalize Teissier's product formula to a set of arbitrary ideals. We also extend the notion of the Buchsbaum-Rim multiplicity (in short, BR-multiplicity) of a submodule of a free module to the case where the submodule no longer has finite colength. For a submodule M of A^p we introduce a sequence e^BR_k(M), k = 0, ..., d + p − 1, which in the ideal case coincides with the multiplicity sequence c_0(I, A), ..., c_d(I, A) defined for an arbitrary ideal I of A by Achilles and Manaresi in [AM]. In the case where M has finite colength in A^p and is totally decomposable, we prove that our BR-multiplicity sequence essentially reduces to the standard BR-multiplicity of M. (author)
Directory of Open Access Journals (Sweden)
Ana Milstein
1979-01-01
The vertical distribution of each developmental stage of Paracalanus crassirostris was studied at a shallow-water station at Ubatuba, SP, Brazil (23º30'S, 45º07'W). Samples were collected monthly at the surface, at 2 m, and near the bottom. Salinity, temperature, dissolved oxygen, tide height, light penetration and solar radiation were also recorded. Data were analysed with the general linear model. It showed that the abundance of individuals at any developmental stage is affected diversely by hour, depth, hour-depth interaction, and environmental factors throughout the year, and that these effects are stronger in summer. All developmental stages were spread through the water column, showing no regular vertical migrations. On the other hand, the number of organisms caught at a particular hour seemed to depend more on the tide than on the animals' behaviour. The results of the present paper showed, as observed by other authors, the lack of vertical migration in a coastal copepod which is a grazer of fine particles throughout its life. The vertical distribution of the different developmental stages of P. crassirostris was studied for one year (June 1976 - May 1977) at a shallow station (5 m) in Ubatuba. Samples were collected monthly, at three depths, every four hours, with a 9 l van Dorn bottle, and environmental data were recorded. The data were processed with the least-squares technique, in the form of a regression analysis of a linear model including covariates. The model was constructed a priori, considering the density of organisms per sample, environmental factors, differences between samples from different depths and hours, as well as interactions between hour and depth. For each stage of P. crassirostris the model was run nine times, each time with two months of data, in order to obtain the variation of the responses over the year. The model results indicated that the number of indiv
International Nuclear Information System (INIS)
Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel
2017-01-01
Highlights: • This paper presents a MILP model for the optimal design of multi-energy microgrids. • Our microgrid design includes the optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows the advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model, formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence offers the ability to perform optimal siting considering the physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node and multi-node optimal design for an example microgrid show the importance of multi-node modeling. It is shown that single-node approaches are not only incapable of optimal DER placement, but may also result in a sub-optimal DER portfolio as well as an underestimation of investment costs.
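The structure of such a design problem (binary technology-purchase decisions coupled to a linear dispatch) can be shown on a toy instance small enough to enumerate. This is not DER-CAM; all costs and capacities are hypothetical, with capital costs read as per-day amortized values.

```python
import itertools

# Toy technology-portfolio problem: choose which technologies to install
# (binary variables) and how much load each serves (continuous dispatch),
# minimizing amortized capital plus operating cost. A real MILP solver
# handles thousands of such variables; here 3 binaries are enumerated.
techs = {            # name: (capital cost/day, operating cost per kWh, capacity kWh)
    "CHP":     (5.0, 0.05, 60.0),
    "PV":      (3.0, 0.00, 30.0),
    "battery": (2.5, 0.02, 20.0),
}
GRID_PRICE = 0.30    # per kWh bought from the utility
LOAD = 80.0          # daily load to be served [kWh]

def total_cost(installed):
    cost = sum(techs[t][0] for t in installed)
    remaining = LOAD
    # Serve load with the cheapest installed resource first. This greedy
    # dispatch is optimal here because marginal costs are constant.
    for t in sorted(installed, key=lambda t: techs[t][1]):
        use = min(remaining, techs[t][2])
        cost += use * techs[t][1]
        remaining -= use
    return cost + remaining * GRID_PRICE     # residual load bought from grid

best_cost, best_mix = min(
    ((total_cost(set(s)), set(s))
     for r in range(len(techs) + 1)
     for s in itertools.combinations(techs, r)),
    key=lambda t: t[0])
```

The paper's contribution lies in what this toy omits: placement of each technology on a network node, with linearized power flow and heat transfer constraints tying the dispatch variables together.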
General methods for determining the linear stability of coronal magnetic fields
Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.
1988-01-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
Directory of Open Access Journals (Sweden)
Dauda GuliburYAKUBU
2012-12-01
Accurate solutions to initial value systems of ordinary differential equations may be approximated efficiently by Runge-Kutta methods or linear multistep methods, each of which has limitations of one sort or another. In this paper we consider, as a middle ground, the derivation of continuous general linear methods for the solution of stiff systems of initial value problems in ordinary differential equations. These methods are designed to combine the advantages of both Runge-Kutta and linear multistep methods. In particular, methods possessing the property of A-stability are identified as promising methods within this large class of general linear methods. We show that the continuous general linear methods are self-starting and are better able to solve stiff systems of ordinary differential equations than the discrete ones. The initial value systems are solved, for instance, without recourse to any other method to start the integration process. This desirable feature of the proposed approach leads to very high accuracy in the solution of the given problem. Illustrative examples are given to demonstrate the novelty and reliability of the methods.
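The A-stability property singled out above can be demonstrated on the standard stiff test equation y' = λy: explicit Euler diverges once the step exceeds its stability limit, while the A-stable implicit (backward) Euler decays for any step size. This is a minimal sketch of the stability issue, not of the paper's methods.

```python
# Stiff test problem y' = lam * y with lam = -1000. Explicit Euler is stable
# only for h < 2/|lam| = 0.002; implicit Euler's stability region contains
# the entire left half-plane, so it decays for every h.
lam, h, n = -1000.0, 0.01, 50       # h is 5x past explicit Euler's limit

y_exp, y_imp = 1.0, 1.0
for _ in range(n):
    y_exp = y_exp + h * lam * y_exp  # explicit Euler: y *= (1 + h*lam) = -9
    y_imp = y_imp / (1 - h * lam)    # implicit Euler: y /= (1 - h*lam) = 11

# after 50 steps the explicit iterate has exploded to |(-9)^50|,
# while the implicit iterate has decayed toward zero
```

A-stable continuous general linear methods inherit the implicit-Euler behavior: the step size can be chosen for accuracy rather than dictated by the stiffest eigenvalue.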
Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.
2017-12-01
Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, due to the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most Amazon floodplains, and little is known regarding the spatio-temporal behavior of flooding in these environments. Our study presents a new approach for the spatial modeling of flood duration, using synthetic aperture radar (SAR) and generalized linear modeling. Our focal study site was the Mamirauá Sustainable Development Reserve, in the central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped flooded area in each image and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were the accumulated precipitation in the 15 days prior to each image acquisition date, the water stage height recorded at the Mamirauá lake gauging station on each acquisition date, the Euclidean distance from the nearest drainage, and the slope, terrain curvature, profile curvature, planform curvature, and height above the nearest drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated against water levels recorded by ten pressure transducers installed within the floodplains from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding an RMSE of ±38.73 days of flooding per year when compared to the ground validation sites. The largest disagreements were 57 days and 83 days at two validation sites, while the remaining locations achieved absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; PrasannaKumar, S.; Muneyama, K.; Frouin, R.
, embedded in the ocean isopycnal general circulation model (OPYC). A higher abundance of chlorophyll in October than in April in the Arabian Sea increases absorption of solar irradiance and heating rate in the upper ocean, resulting in decreasing the mixed...
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.
-chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Ishizaka, J.; Muneyama, K.; Frouin, R.
The influence of phytoplankton on the upper ocean dynamics and thermodynamics in the equatorial Pacific is investigated using an isopycnal ocean general circulation model (OPYC) coupled with a mixed layer model and remotely sensed chlorophyll...
de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G
2008-11-01
In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Directory of Open Access Journals (Sweden)
Akyene Tetteh
2017-04-01
Full Text Available Background: Although the Internet boosts business profitability, without certain activities like efficient transportation and scheduling, products ordered via the Internet may reach their destination very late. The environmental problems (vehicle part disposal, carbon monoxide [CO], nitrogen oxide [NOx] and hydrocarbons [HC]) associated with transportation are mostly not accounted for by industries. Objectives: The main objective of this article is to minimise negative externality costs in e-commerce environments. Method: The 0-1 mixed integer linear programming (0-1 MILP) model was used to model the problem statement. The result was further analysed using the externality percentage impact factor (EPIF). Results: The simulation results suggest that (1) the mode of ordering refined petroleum products does not affect the cost of distribution, (2) an increase in private cost is directly proportional to the externality cost, (3) externality cost is largely controlled by the government and the number of vehicles used in the distribution, and is in no way influenced by the mode of request (i.e. Internet or otherwise), and (4) externality cost may be reduced by using a more eco-friendly fuel system.
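A 0-1 formulation of this kind can be illustrated on a toy instance. The costs below are invented; for a problem this small, exhaustively enumerating the binary assignment variables stands in for a MILP solver.

```python
from itertools import product

# Invented instance: each order must be assigned to exactly one vehicle;
# private[i][j] and extern[i][j] are the private and externality costs of
# serving order i with vehicle j.
orders, vehicles = 3, 2
private = [[10, 12], [8, 15], [9, 9]]
extern = [[4, 2], [5, 1], [3, 3]]

best_cost, best_assign = float("inf"), None
# Enumerating every 0-1 assignment plays the role of the MILP solver here.
for assign in product(range(vehicles), repeat=orders):
    cost = sum(private[i][j] + extern[i][j] for i, j in enumerate(assign))
    if cost < best_cost:
        best_cost, best_assign = cost, assign

# Externality share of the optimal total cost (a quantity in the spirit of,
# though not necessarily identical to, the article's EPIF).
ext_share = sum(extern[i][j] for i, j in enumerate(best_assign)) / best_cost
print(best_cost, best_assign, round(ext_share, 3))
```

For realistic instance sizes one would hand the same objective and binary constraints to an actual MILP solver rather than enumerate.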
Raymond L. Czaplewski
1973-01-01
A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, the approach can be seen as a generalization of the models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation
Recent advances toward a general purpose linear-scaling quantum force field.
Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M
2014-09-16
Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limits their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to
Stone, Louise
2015-04-01
To determine what diagnostic terms are utilized by general practitioners (GPs) when seeing patients with mixed emotional and physical symptoms. Prototype cases of depression, anxiety, hypochondriasis, somatization and undifferentiated somatoform disorders were sourced from the psychiatric literature and the author's clinical practice. These were presented, in paper form, to a sample of GPs and GP registrars who were asked to provide a written diagnosis. Fifty-two questionnaires were returned (30% response rate). The depression and anxiety cases were identified correctly by most participants. There was moderate identification of the hypochondriasis and somatization disorder cases, and poor identification of the undifferentiated somatoform case. Somatization and undifferentiated somatoform disorders were infrequently recognized as diagnostic categories by the GPs in this study. Future research into the language and diagnostic reasoning utilized by GPs may help develop better diagnostic classification systems for use in primary care in this important area of practice.
Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer
2013-01-01
Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as ”the pre-image problem”. Since the feature space mapping is typi...
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
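The generalization to structured (colored) noise mentioned here can be sketched with a Cochrane-Orcutt-style AR(1) prewhitening step before ordinary least squares. The data below are synthetic and the design trivially simple; this is not the authors' fNIRS pipeline, only an illustration of the prewhitening idea.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Design matrix: intercept plus a smooth "task" regressor.
X = np.column_stack([np.ones(n), np.sin(np.arange(n) / 20.0)])
beta_true = np.array([1.0, 0.5])

# AR(1) (colored) noise, a crude stand-in for slow systemic physiology.
rho = 0.8
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal(scale=0.3)
y = X @ beta_true + e

# Step 1: OLS fit, then estimate the AR(1) coefficient from the residuals.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b_ols
rho_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])

# Step 2: prewhiten y and X with v_t -> v_t - rho_hat * v_{t-1}, then refit.
yw = y[1:] - rho_hat * y[:-1]
Xw = X[1:] - rho_hat * X[:-1]
b_white, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(np.round(b_white, 2))
```

After prewhitening, the residuals are approximately serially uncorrelated, so the usual OLS standard errors become much less misleading than they are under the raw colored noise.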
Energy Technology Data Exchange (ETDEWEB)
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to environments as modeled by LMMs with different variance-covariance structures, and identified possible limitations of inference when an inadequate variance-covariance structure is used. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars' adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used with METs data after determining the optimal component number for each dataset. (Author)
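The model-comparison step described here can be sketched in miniature: fit both an unstructured and a compound-symmetry covariance to the same data and compare AIC. The layout and numbers below are invented and far simpler than real METs data (no fixed effects, Gaussian likelihood only); the point is just that an overly restrictive structure pays a visible likelihood price.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 4                         # "subjects" by "environments" (hypothetical)
# True covariance is AR(1)-like, so compound symmetry is misspecified.
rho = 0.7
true_cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
Y = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(Y, rowvar=False, ddof=0)   # MLE of the unstructured covariance

# Compound-symmetry fit: one common variance, one common covariance.
v = S.diagonal().mean()
c = (S.sum() - S.trace()) / (p * (p - 1))
cs_cov = np.full((p, p), c) + (v - c) * np.eye(p)

def gauss_loglik(S, Sigma, n):
    """Gaussian log-likelihood of n centered p-vectors with sample cov S."""
    p = S.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * n * (p * np.log(2 * np.pi) + logdet
                       + np.trace(np.linalg.solve(Sigma, S)))

aic_un = -2 * gauss_loglik(S, S, n) + 2 * (p * (p + 1) // 2)
aic_cs = -2 * gauss_loglik(S, cs_cov, n) + 2 * 2
print(aic_un < aic_cs)
```

With the AR(1)-like truth, the unstructured model's extra parameters are easily worth their AIC penalty; under a truth that really is compound-symmetric, the comparison would flip.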
Directory of Open Access Journals (Sweden)
Nahed S. Hussein
2014-01-01
Full Text Available A numerical boundary integral scheme is proposed for the solution of the system of field equations of plane elasticity. The stresses are prescribed on one half of the circle, while the displacements are given on the other half. The considered problem with mixed boundary conditions in the circle is replaced by two problems with homogeneous boundary conditions, one of each type, having a common solution. The equations are reduced to a system of boundary integral equations, which is then discretized in the usual way, and the problem at this stage is reduced to the solution of a rectangular linear system of algebraic equations. The unknowns in this system of equations are the boundary values of four harmonic functions which define the full elastic solution, and the unknown boundary values of stresses or displacements on the proper parts of the boundary. On the basis of the obtained results, it is inferred that a stress component has a singularity at each of the two separation points, thought to be of logarithmic type. The results are discussed and boundary plots are given. We have also calculated the unknown functions in the bulk directly from the given boundary conditions using the boundary collocation method. The obtained results in the bulk are discussed and three-dimensional plots are given. A tentative form for the singular solution is proposed, and the corresponding singular stresses and displacements are plotted in the bulk. The form of the singular tangential stress is seen to be compatible with the boundary values obtained earlier. The efficiency of the used numerical schemes is discussed.
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
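The accuracy, coverage, specificity and MCC figures reported above are all functions of a single confusion matrix. A minimal helper makes the relationships explicit; the counts in the example are invented, not the paper's aggregate numbers.

```python
import math

def contact_metrics(tp, fp, tn, fn):
    """Accuracy, coverage (recall), specificity and Matthews correlation
    coefficient from a contact-prediction confusion matrix."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    cov = tp / (tp + fn)            # fraction of true contacts recovered
    spec = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, cov, spec, mcc

# Hypothetical counts for one protein: few predicted contacts, huge number of
# true non-contacts, which is why specificity is near 1 while coverage is low.
print(contact_metrics(tp=40, fp=20, tn=9900, fn=290))
```

The pattern in the paper's numbers (very high specificity alongside single-digit coverage) is typical of contact prediction, where non-contacting pairs vastly outnumber contacts.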
Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha
2018-01-01
Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants, maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M
2018-04-01
Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits, it provides estimates of genetic effects on the observed 0-1 scale, a different scale to that in logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that only rely on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.
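The scale mismatch at issue can be illustrated with a toy calculation: if an effect is reported as a risk difference b on the observed 0-1 scale at baseline risk p0, a corresponding odds ratio follows directly. This is only a naive illustration of the two scales, not the summary-statistic transformation derived in the paper.

```python
def or_from_risk_difference(p0, b):
    """Convert an effect on the observed 0-1 (risk-difference) scale into an
    odds ratio: p0 is the baseline disease probability, b the additive change."""
    p1 = p0 + b
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# 10% prevalence and a +0.02 effect on the 0-1 scale:
print(round(or_from_risk_difference(0.10, 0.02), 3))
```

Note how the same risk difference maps to very different ORs depending on prevalence, which is precisely why a principled transformation is needed to compare LMM and logistic-regression results across studies.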
Linear-algebraic approach to electron-molecule collisions: General formulation
International Nuclear Information System (INIS)
Collins, L.A.; Schneider, B.I.
1981-01-01
We present a linear-algebraic approach to electron-molecule collisions based on an integral-equation form with either logarithmic or asymptotic boundary conditions. The introduction of exchange effects does not alter the basic form or order of the linear-algebraic equations for a local potential. In addition to the standard procedure of directly evaluating the exchange integrals by numerical quadrature, we also incorporate exchange effects through a separable-potential approximation. Efficient schemes are developed for reducing the number of points and channels that must be included. The method is applied at the static-exchange level to a number of molecular systems including H₂, N₂, LiH, and CO₂.
Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing
International Nuclear Information System (INIS)
Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria
2014-01-01
In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and the study of new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different estimation recursive algorithms are obtained for these models
International Nuclear Information System (INIS)
VanMeter, N. M.; Lougovski, P.; Dowling, Jonathan P.; Uskov, D. B.; Kieling, K.; Eisert, J.
2007-01-01
We introduce schemes for linear-optical quantum state generation. A quantum state generator is a device that prepares a desired quantum state using product inputs from photon sources, linear-optical networks, and postselection using photon counters. We show that this device can be concisely described in terms of polynomial equations and unitary constraints. We illustrate the power of this language by applying the Groebner-basis technique along with the notion of vacuum extensions to solve the problem of how to construct a quantum state generator analytically for any desired state, and use methods of convex optimization to identify bounds to success probabilities. In particular, we disprove a conjecture concerning the preparation of the maximally path-entangled |n,0>+|0,n> (NOON) state by providing a counterexample using these methods, and we derive a new upper bound on the resources required for NOON-state generation
Non-linear partial differential equations an algebraic view of generalized solutions
Rosinger, Elemer E
1990-01-01
A massive transition of interest from solving linear partial differential equations to solving nonlinear ones has taken place during the last two or three decades. The availability of better computers has often made numerical experimentations progress faster than the theoretical understanding of nonlinear partial differential equations. The three most important nonlinear phenomena observed so far both experimentally and numerically, and studied theoretically in connection with such equations have been the solitons, shock waves and turbulence or chaotical processes. In many ways, these phenomen
Continuity and general perturbation of the Drazin inverse for closed linear operators
Directory of Open Access Journals (Sweden)
N. Castro González
2002-01-01
Full Text Available We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
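For matrices, the finite-dimensional special case of the closed operators studied here, the Drazin inverse can be computed from a standard identity that expresses it through the Moore-Penrose pseudoinverse: A^D = A^k (A^(2k+1))^+ A^k for any k at least the index of A. The sketch below takes k = n, which is always sufficient; it is an illustration, not a numerically hardened implementation.

```python
import numpy as np

def drazin(A, tol=1e-10):
    """Drazin inverse of a square matrix via A^D = A^k (A^(2k+1))^+ A^k,
    valid whenever k >= index(A); k = n (the matrix size) always works."""
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, n)
    mid = np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1), rcond=tol)
    return Ak @ mid @ Ak

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])     # idempotent (A @ A == A), index 1, so A^D = A
print(np.round(drazin(A), 6))
```

For an invertible matrix the formula reduces to the ordinary inverse, and for a nilpotent matrix it returns zero, matching the defining properties of the Drazin inverse in those two extreme cases.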
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-03-13
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partly yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
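The classical-error attenuation at the heart of this bias analysis can be reproduced in a few lines: a slope estimated against an error-contaminated exposure shrinks by the reliability ratio, and a method-of-moments correction divides it back out. The simulation is synthetic and assumes the reliability is known rather than estimated, and it omits the autocorrelation and Berkson components discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
beta = 2.0
x = rng.normal(size=n)                 # true exposure, variance 1
u = rng.normal(scale=0.7, size=n)      # classical measurement error
w = x + u                              # observed, error-contaminated exposure
y = beta * x + rng.normal(size=n)

# Naive slope against w is attenuated by the reliability ratio
# lambda = var(x) / (var(x) + var(u)).
b_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
lam = 1.0 / (1.0 + 0.7 ** 2)

# Method-of-moments correction: divide by the (here, known) reliability.
b_corrected = b_naive / lam
print(round(b_naive, 2), round(b_corrected, 2))
```

In practice the reliability must itself be estimated (e.g. from repeated measurements), and its uncertainty is what delta-method or bootstrap confidence intervals then have to account for.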
Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models
DEFF Research Database (Denmark)
Lanne, Markku; Nyberg, Henri
We propose a new generalized forecast error variance decomposition with the property that the proportions of the impact accounted for by innovations in each variable sum to unity. Our decomposition is based on the well-established concept of the generalized impulse response function. The use of t...
Generalized linear differential equations in a Banach space : continuous dependence on a parameter
Czech Academy of Sciences Publication Activity Database
Monteiro, G.A.; Tvrdý, Milan
2013-01-01
Vol. 33, No. 1 (2013), pp. 283-303. ISSN 1078-0947. Institutional research plan: CEZ:AV0Z10190503. Keywords: generalized differential equations; continuous dependence; Kurzweil-Stieltjes integral. Subject RIV: BA - General Mathematics. Impact factor: 0.923, year: 2013. http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=7615
The pricing of credit default swaps under a generalized mixed fractional Brownian motion
He, Xinjiang; Chen, Wenting
2014-06-01
In this paper, we consider the pricing of the CDS (credit default swap) under a GMFBM (generalized mixed fractional Brownian motion) model. As the name suggests, the GMFBM model is indeed a generalization of all the FBM (fractional Brownian motion) models used in the literature, and is proved to be able to effectively capture the long-range dependence of the stock returns. To develop the pricing mechanics of the CDS, we firstly derive a sufficient condition for the market modeled under the GMFBM to be arbitrage free. Then under the risk-neutral assumption, the CDS is fairly priced by investigating the two legs of the cash flow involved. The price we obtained involves elementary functions only, and can be easily implemented for practical purpose. Finally, based on numerical experiments, we analyze quantitatively the impacts of different parameters on the prices of the CDS. Interestingly, in comparison with all the other FBM models documented in the literature, the results produced from the GMFBM model are in a better agreement with those calculated from the classical Black-Scholes model.
International Nuclear Information System (INIS)
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-01-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper
International Nuclear Information System (INIS)
Penuela, G; Ordonez R, A; Bejarano, A
1998-01-01
A generalized material balance equation for coal seam gas reservoirs was presented at the Escuela de Petroleos de la Universidad Industrial de Santander, based on the method of Walsh, who developed an analogous approach for conventional oil and gas reservoirs (Walsh, 1995). Our equation rests on twelve assumptions similar to those itemized by Walsh for his generalized expression for conventional reservoirs; it starts from the same volume-balance consideration and is finally reorganized as Walsh (1994) did. Because it is not expressed in terms of traditional (P/Z) plots, as proposed by King (1990), it allows one to perform a variety of quantitative and qualitative analyses. It was also demonstrated that the existing equations are only particular cases of the generalized expression evaluated under certain restrictions. This equation is applicable to coal seam gas reservoirs in saturated, equilibrium and undersaturated conditions, and to any type of coal bed without restriction on special values of the diffusion constant
Nemeth, Michael P.; Schultz, Marc R.
2012-01-01
A detailed exact solution is presented for laminated-composite circular cylinders with general wall construction and that undergo axisymmetric deformations. The overall solution is formulated in a general, systematic way and is based on the solution of a single fourth-order, nonhomogeneous ordinary differential equation with constant coefficients in which the radial displacement is the dependent variable. Moreover, the effects of general anisotropy are included and positive-definiteness of the strain energy is used to define uniquely the form of the basis functions spanning the solution space of the ordinary differential equation. Loading conditions are considered that include axisymmetric edge loads, surface tractions, and temperature fields. Likewise, all possible axisymmetric boundary conditions are considered. Results are presented for five examples that demonstrate a wide range of behavior for specially orthotropic and fully anisotropic cylinders.
Examining secular trend and seasonality in count data using dynamic generalized linear modelling
DEFF Research Database (Denmark)
Lundbye-Christensen, Søren; Dethlefsen, Claus; Gorst-Rasmussen, Anders
Aims: Time series of incidence counts often show secular trends and seasonal patterns. We present a model for incidence counts capable of handling a possible gradual change in growth rates and seasonal patterns, serial correlation and overdispersion. Methods: The model resembles an ordinary time series regression model for Poisson counts. It differs in allowing the regression coefficients to vary gradually over time in a random fashion. Data: In the period January 1980 to 1999, 17,989 incidents of acute myocardial infarction were recorded in the county of Northern Jutland, Denmark. Records were updated daily. Results: The model with a seasonal pattern and an approximately linear trend was fitted to the data, and diagnostic plots indicate a good model fit. The analysis with the dynamic model revealed peaks coinciding with influenza epidemics. On average the peak-to-trough ratio is estimated…
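The trend-plus-seasonality idea behind this abstract can be sketched with a static log-linear model fitted to noise-free monthly intensities (the paper's dynamic model additionally lets the coefficients drift over time, which is omitted here; all coefficient values below are made up for illustration):

```python
import math

# Hypothetical illustration: log intensity = a + b*sin(wt) + c*cos(wt) + d*t,
# i.e. a linear trend plus one annual harmonic, fitted by ordinary least
# squares on the (noise-free) log intensities.
a_true, b_true, c_true, d_true = 2.0, 0.30, 0.10, 0.005

X, y = [], []
for t in range(120):                           # 10 years of monthly data
    w = 2 * math.pi * t / 12                   # annual harmonic
    X.append([1.0, math.sin(w), math.cos(w), t])
    log_lam = a_true + b_true * math.sin(w) + c_true * math.cos(w) + d_true * t
    y.append(log_lam)

def solve_normal_equations(X, y):
    """OLS via Gaussian elimination with partial pivoting on X'X beta = X'y."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    rhs = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c2 in range(col, p):
                A[r][c2] -= f * A[col][c2]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):             # back substitution
        beta[r] = (rhs[r] - sum(A[r][c2] * beta[c2] for c2 in range(r + 1, p))) / A[r][r]
    return beta

a, b, c, d = solve_normal_equations(X, y)
# Peak-to-trough ratio of the seasonal cycle on the count scale:
peak_to_trough = math.exp(2 * math.hypot(b, c))
```

With noise-free data the fit recovers the generating coefficients exactly, and the peak-to-trough ratio is exp(2·sqrt(b² + c²)).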
General rigid motion correction for computed tomography imaging based on locally linear embedding
Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge
2018-02-01
Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct in cone-beam geometry than in fan-beam geometry. We extend our previous rigid patient motion correction method, based on the principle of locally linear embedding (LLE), from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that it needs neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
Iterative solution of general sparse linear systems on clusters of workstations
Energy Technology Data Exchange (ETDEWEB)
Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)]
1996-12-31
Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels, such as inner products, could erode any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an ongoing project for building a library of parallel iterative sparse matrix solvers.
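The preconditioned iterative solvers discussed here can be sketched in a few lines (this is a generic textbook sketch, not PSPARSLIB's implementation): a Jacobi-preconditioned conjugate gradient on a sparse SPD system stored as a dict of rows, with the Jacobi preconditioner standing in for the trivially parallel preconditioners mentioned above.

```python
# 1-D Laplacian test matrix in a sparse dict-of-rows format.
n = 50
A = {i: {i: 2.0} for i in range(n)}
for i in range(n - 1):
    A[i][i + 1] = -1.0
    A[i + 1][i] = -1.0
b = [1.0] * n

def matvec(A, x):
    """Sparse matrix-vector product."""
    return [sum(v * x[j] for j, v in A[i].items()) for i in range(len(x))]

def pcg(A, b, tol=1e-10, maxit=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual of the zero guess
    Minv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    z = [mi * ri for mi, ri in zip(Minv, r)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [mi * ri for mi, ri in zip(Minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

x = pcg(A, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
```

In a parallel setting, the matvec rows, the diagonal scaling, and the vector updates partition naturally across processors; the inner products are the global reductions whose start-up cost the abstract warns about.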
International Nuclear Information System (INIS)
Kagramanov, E.D.; Nagiyev, Sh.M.; Mir-Kasimov, R.M.
1989-03-01
An exactly soluble problem for the finite-difference Schroedinger equation in the relativistic configurational space is considered. The appropriate finite-difference generalization of the factorization method is developed. The theory of new special functions ''the relativistic Hermite polynomials'', in which the solutions are expressed, is constructed. (author). 14 refs
Equilibrium arrival times to queues with general service times and non-linear utility functions
DEFF Research Database (Denmark)
Breinbjerg, Jesper
2017-01-01
by a general utility function which is decreasing in the waiting time and service completion time of each customer. Applications of such queueing games range from people choosing when to arrive at a grand opening sale to travellers choosing when to line up at the gate when boarding an airplane. We develop...
The energy and the linear momentum of space-times in general relativity
International Nuclear Information System (INIS)
Schoen, R.; Yau, S.T.
1981-01-01
We extend our previous proof of the positive mass conjecture to allow a more general asymptotic condition proposed by York. Hence we are able to prove that for an isolated physical system, the energy momentum four vector is a future timelike vector unless the system is trivial. Furthermore, we allow singularities of the type of black holes. (orig.)
Leahy, Dorothy; Lyons, Aoife; Dahm, Matthias; Quinlan, Diarmuid; Bradley, Colin
2017-11-01
Text messaging has become more prevalent in general practice as a tool with which to communicate with patients. The main objectives were to assess the extent, growth, and perceived risks and benefits of text messaging by GPs to communicate with patients, and to assess patients' attitudes towards receiving text messages from their GP. A mixed methods study, using surveys, a review, and a focus group, was conducted in both urban and rural practices in the south-west of Ireland. A telephone survey of 389 GPs was conducted to ascertain the prevalence of text messaging. Subsequently, the following were also carried out: additional telephone surveys with 25 GPs who use text messaging and 26 GPs who do not, a written satisfaction survey given to 78 patients, a review of the electronic information systems of five practices, and a focus group with six GPs to ascertain attitudes towards text messaging. In total, 38% (n = 148) of the surveyed GPs used text messaging to communicate with patients and 62% (n = 241) did not. Time management was identified as the key advantage of text messaging among GPs who used it (80%; n = 20) and those who did not (50%; n = 13). Confidentiality was reported as the principal concern among both groups, at 32% (n = 8) and 69% (n = 18) respectively. Most patients (99%; n = 77) were happy to receive text messages from their GP. The GP focus group identified similar issues and benefits in terms of confidentiality and time management. Data were extracted from the IT systems of five consenting practices and the number of text messages sent during the period from January 2013 to March 2016 was generated; this increased by 40% per annum. Collaborative efforts are required from relevant policymakers to address data protection and text messaging issues so that GPs can be provided with clear guidelines to protect patient confidentiality. © British Journal of General Practice 2017.
Generalized W^{1,1}-Young Measures and Relaxation of Problems with Linear Growth
Czech Academy of Sciences Publication Activity Database
Baia, M.; Krömer, Stefan; Kružík, Martin
2018-01-01
Roč. 50, č. 1 (2018), s. 1076-1119 ISSN 0036-1410 R&D Projects: GA ČR GA14-15264S; GA ČR(CZ) GF16-34894L Institutional support: RVO:67985556 Keywords : lower semicontinuity * quasiconvexity * Young measures Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.648, year: 2016 http://library.utia.cas.cz/2018/MTR/kruzik-0487019.pdf
Directory of Open Access Journals (Sweden)
Weiping Ma
Full Text Available BACKGROUND: An association between bacillary dysentery (BD) and temperature has been reported in studies applying Poisson regression models; however, the effect estimates may be biased due to autocorrelation in the data. Furthermore, the distribution of the temperature effect over different lags has not been studied. The purpose of this work was to obtain the association between BD counts and climatic factors such as temperature, in the form of weighted averages, accounting for the autocorrelation pattern of the model residuals, and to make short-term predictions using the model. The data were collected in the city of Shanghai from 2004 to 2008. METHODS: We used a mixed generalized additive model (MGAM) to analyze data on bacillary dysentery, temperature, and other covariates with an autoregressive random effect. Short-term predictions were made using the MGAM with the moving average of the BD counts. MAIN RESULTS: Our results showed that temperature was significantly and linearly associated with the logarithm of the BD count for temperatures in the range from 12°C to 22°C. Optimal weights for the temperature effect were obtained, in which the weight at a lag of 1 day was close to 0 and the weight at a lag of 2 days was the maximum (p-value of the difference less than 0.05). The predictive model showed a good fit on the internal data, with an R² value of 0.875, and good short-term prediction on the external data, with a correlation coefficient of 0.859. CONCLUSION: According to the model estimates, the risk ratio for BD was close to 1.1 per 1°C increase in temperature in the range from 12°C to 22°C, and a 1-day incubation period could be inferred from the model estimates. Good predictions were made using the predictive MGAM.
General, database-driven fast-feedback system for the Stanford Linear Collider
International Nuclear Information System (INIS)
Rouse, F.; Allison, S.; Castillo, S.; Gromme, T.; Hall, B.; Hendrickson, L.; Himel, T.; Krauter, K.; Sass, B.; Shoaee, H.
1991-05-01
A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs
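The state-space feedback formalism mentioned here can be illustrated with a deliberately tiny, hypothetical example (the numbers below are made up and are not the actual SLC loop): a scalar beam offset x drifts as x[k+1] = a·x[k], and each pulse a correction u[k] = -K·x[k] is applied through gain b.

```python
# Discrete-time state-space feedback sketch. With a = 1 (a pure drift/
# integrator plant) and proportional gain K, the closed-loop pole is
# a - b_gain*K; choosing 0.5 makes a disturbance halve on every pulse.
a, b_gain, K = 1.0, 1.0, 0.5

x = 1.0                              # initial orbit perturbation (made up units)
history = [x]
for _ in range(20):                  # 20 pulses at the 60 Hz repetition rate
    u = -K * x                       # feedback correction from measured state
    x = a * x + b_gain * u           # plant update x[k+1] = a*x[k] + b*u[k]
    history.append(x)
```

Each pass through the loop corresponds to one 60 Hz sample-and-correct cycle; a real multi-state loop replaces the scalars with matrices and the gain with a designed feedback matrix.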
International Nuclear Information System (INIS)
Khattab, K.M.
1998-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed. This method converges (in clock time) faster than the MDSA method. It is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented. (author). 9 refs., 2 tabs., 5 figs
International Nuclear Information System (INIS)
Khattab, K.M.
1997-01-01
The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented
A general mixed boundary model reduction method for component mode synthesis
Voormeeren, S.N.; Van der Valk, P.L.C.; Rixen, D.J.
2010-01-01
A classic issue in component mode synthesis (CMS) methods is the choice for fixed or free boundary conditions at the interface degrees of freedom (DoF) and the associated vibration modes in the components reduction base. In this paper, a novel mixed boundary CMS method called the “Mixed
International Nuclear Information System (INIS)
Reshak, A. H.; Brik, M. G.; Auluck, S.
2014-01-01
Based on the electronic band structure, we have calculated the dispersion of the linear and nonlinear optical susceptibilities for the mixed CuAl(S1−xSex)2 chalcopyrite compounds with x = 0.0, 0.25, 0.5, 0.75, and 1.0. Calculations are performed within the Perdew-Burke-Ernzerhof generalized gradient approximation. The investigated compounds possess direct band gaps of about 2.2 eV (CuAlS2), 1.9 eV (CuAl(S0.75Se0.25)2), 1.7 eV (CuAl(S0.5Se0.5)2), 1.5 eV (CuAl(S0.25Se0.75)2), and 1.4 eV (CuAlSe2), which are tuned to make them optically active for optoelectronic and photovoltaic applications. These results confirm that substituting S by Se causes a significant band-gap reduction. The dispersion of the optical functions ε2xx(ω), ε2yy(ω), and ε2zz(ω) was calculated and discussed in detail. To demonstrate the effect of substituting S by Se on the complex second-order nonlinear optical susceptibility tensors, we performed detailed calculations which show that the neat parent compounds CuAlS2 and CuAlSe2 exhibit |χ123(2)(−2ω;ω;ω)| as the dominant component, while the mixed alloys exhibit |χ111(2)(−2ω;ω;ω)| as the dominant component. The features of the |χ123(2)(−2ω;ω;ω)| and |χ111(2)(−2ω;ω;ω)| spectra were analyzed on the basis of the absorptive part of the corresponding dielectric function ε2(ω) as a function of both ω/2 and ω.
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
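The formulation idea, a cost-minimizing LP with a nutritional lower bound and a consistency (starch) upper bound, can be sketched with a toy two-ingredient problem (all ingredient numbers below are hypothetical, not the paper's Mozambican data). With mix fractions x1 + x2 = 1, substituting x1 = 1 − x2 reduces the LP to one variable, which the constraint bounds solve directly:

```python
# Toy LP-based formulation with a consistency constraint (made-up numbers).
cost    = {"maize": 0.5, "bean": 1.0}    # currency units per 100 g
protein = {"maize": 8.0, "bean": 22.0}   # g protein per 100 g
starch  = {"maize": 70.0, "bean": 40.0}  # g starch per 100 g
protein_min = 15.0                        # nutritional requirement
starch_max  = 60.0                        # consistency (swelling) constraint

# Each constraint becomes a lower bound on x2 (the bean fraction):
# protein: 8(1-x2) + 22*x2 >= 15  ->  x2 >= (15-8)/(22-8)
x2_protein = (protein_min - protein["maize"]) / (protein["bean"] - protein["maize"])
# starch:  70(1-x2) + 40*x2 <= 60 ->  x2 >= (70-60)/(70-40)
x2_starch = (starch["maize"] - starch_max) / (starch["maize"] - starch["bean"])

# Bean flour is the dearer ingredient, so cost increases with x2 and the
# optimum sits exactly at the tightest lower bound.
x2 = max(0.0, x2_protein, x2_starch)
x1 = 1.0 - x2
total_cost = cost["maize"] * x1 + cost["bean"] * x2
```

Here the protein bound (x2 ≥ 0.5) is tighter than the starch bound (x2 ≥ 1/3), so the optimal mix is half and half; a realistic formulation has many ingredients and nutrients and needs a general LP solver, but the structure is the same.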
Li, Xi-Zeng; Su, Bao-Xia
1996-01-01
It is found that the field of the combined mode of the probe wave and the phase-conjugate wave in the process of non-degenerate four-wave mixing exhibits higher-order squeezing to all even orders. The generalized uncertainty relations in this process are also presented.
van Velzen, Joke H.
2018-01-01
There were two purposes for this mixed methods study: to investigate (a) the realistic meaning of awareness and understanding as the underlying constructs of general knowledge of the learning process and (b) a procedure for data consolidation. The participants were 11th-grade high school and first-year university students. Integrated data…
The general linear thermoelastic end problem for solid and hollow cylinders
International Nuclear Information System (INIS)
Thompson, J.J.; Chen, P.Y.P.
1977-01-01
This paper reports on three topics arising from work in progress on theoretical and computational aspects of the utilization of self equilibrating and load stress systems, to solve thermoelastic problems of finite, or semi-infinite, solid or hollow circular cylinders, with particular reference to the pellets, rods, tubes and shells with arbitrary internal heat generation encountered in Nuclear Reactor Technology. Specifically the work is aimed at the evaluation of stress intensification factors in the end elastic boundary layer region, due to various thermal and mechanical end load conditions, in relation to the external, exact stress solutions, which satisfy conditions on the curved surfaces only and are valid over the remainder of the cylindrical body. More generally, it is possible, at least for symmetric thermoelastic problems, to derive exact external solutions, using self equilibrating end load systems, which describe the stress/displacement state completely as a combination of a simple local plane strain solution and a correction dependent on the magnitude of axial thermal gradients. Thus plane strain, and self equilibrating end load systems are sufficient for the complete external and boundary layer solution of a finite cylindrical body. This formulation is capable of further extension, e.g., to concentric multi-region problems, and provides a useful approach to the study of local stress intensification factors due to thermal perturbations
International Nuclear Information System (INIS)
Rath, J.; Freeman, A.J.
1975-01-01
A detailed study of the generalized susceptibility χ(q) of Sc metal determined from an accurate augmented-plane-wave method calculation of its energy-band structure is presented. The calculations were done by means of a computational scheme for χ(q) derived as an extension of the work of Jepsen and Andersen and of Lehmann and Taut on the density-of-states problem. The procedure yields simple analytic expressions for the χ(q) integral inside a tetrahedral microzone of the Brillouin zone which depend only on the volume of the tetrahedron and the differences of the energies at its corners. Constant-matrix-element results have been obtained for Sc which show very good agreement with the results of Liu, Gupta, and Sinha (but with one less peak) and exhibit a first maximum in χ(q) at (0, 0, 0.31)2π/c [vs (0, 0, 0.35)2π/c obtained by Liu et al.], which relates very well to dilute rare-earth alloy magnetic ordering at q_m = (0, 0, 0.28)2π/c and to the kink in the LA-phonon dispersion curve at (0, 0, 0.27)2π/c. (U.S.)
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Energy Technology Data Exchange (ETDEWEB)
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
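The hierarchical-Bayes-plus-MCMC recipe described here can be shrunk to a one-parameter sketch (the counts and tuning values below are made up; the paper's models with structured priors are far richer): a Poisson likelihood for photon-style counts, a flat prior on the rate, and a random-walk Metropolis sampler for the posterior.

```python
import math, random

# Minimal Metropolis sampler for the rate lam of Poisson-distributed counts,
# with a flat prior on lam > 0 (illustrative stand-in for the paper's
# hierarchical priors).
counts = [4, 6, 5, 7, 3, 5, 4, 6, 5, 5]

def log_post(lam):
    if lam <= 0:
        return -math.inf                       # prior support is lam > 0
    # Poisson log likelihood up to a constant: sum(k)*log(lam) - n*lam
    return sum(counts) * math.log(lam) - len(counts) * lam

random.seed(42)
lam, samples = 1.0, []
for _ in range(20000):
    prop = lam + random.gauss(0.0, 0.5)        # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(lam):
        lam = prop                             # Metropolis accept
    samples.append(lam)
post_mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

With a flat prior the posterior is Gamma(sum(counts)+1, n), whose mean here is 5.1, so the chain's post-burn-in average should land nearby; the radiography applications replace this scalar with an image-sized unknown and the flat prior with edge-localizing, smoothing, or Wishart-based priors.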
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
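The normalizing transformation discussed here can be illustrated with simulated data (assuming, for the sketch only, that the response times are exactly log-normal, which is the case where the log transform works perfectly): the raw times are right-skewed and the log-transformed times are approximately normal.

```python
import math, random

def skewness(xs):
    """Sample skewness: standardized third central moment."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / s) ** 3 for x in xs) / n

random.seed(0)
log_rt = [0.4 * random.gauss(0, 1) for _ in range(5000)]   # latent normal scale
rt = [math.exp(z) for z in log_rt]                         # observed RTs

skew_raw = skewness(rt)        # clearly positive: raw RTs are right-skewed
skew_log = skewness(log_rt)    # near zero after the log transform
```

The paper's point is that real transformed response times often retain skew or heteroscedasticity, which is exactly what a diagnostic like this would reveal and what their model accommodates.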
Zhang, Xin; Liu, Pan; Chen, Yuguang; Bai, Lu; Wang, Wei
2014-01-01
The primary objective of this study was to identify whether the frequency of traffic conflicts at signalized intersections can be modeled. The opposing left-turn conflicts were selected for the development of conflict predictive models. Using data collected at 30 approaches at 20 signalized intersections, the underlying distributions of the conflicts under different traffic conditions were examined. Different conflict-predictive models were developed to relate the frequency of opposing left-turn conflicts to various explanatory variables. The models considered include a linear regression model, a negative binomial model, and separate models developed for four traffic scenarios. The prediction performance of the different models was compared. The frequency of traffic conflicts follows a negative binomial distribution. The linear regression model is not appropriate for the conflict frequency data. In addition, drivers behaved differently under different traffic conditions. Accordingly, the effects of conflicting traffic volumes on conflict frequency vary across different traffic conditions. The occurrences of traffic conflicts at signalized intersections can be modeled using generalized linear regression models. The use of conflict predictive models has potential to expand the uses of surrogate safety measures in safety estimation and evaluation.
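The overdispersion check behind preferring a negative binomial over a Poisson model can be sketched as follows (the conflict counts are made up for illustration): under a Poisson model the variance equals the mean, so a variance well above the mean motivates the negative binomial, whose variance is μ + μ²/θ.

```python
# Hypothetical per-hour opposing left-turn conflict counts.
conflicts = [0, 2, 1, 8, 3, 0, 12, 4, 1, 6, 2, 0, 9, 3, 5, 1, 0, 7, 2, 4]

n = len(conflicts)
mu = sum(conflicts) / n
var = sum((c - mu) ** 2 for c in conflicts) / n   # population variance

overdispersed = var > mu                          # Poisson would have var == mu
# Method-of-moments estimate of the negative binomial dispersion theta,
# from var = mu + mu**2 / theta:
theta = mu ** 2 / (var - mu) if overdispersed else float("inf")
```

A full analysis would fit the negative binomial regression by maximum likelihood with the traffic volumes as covariates; the moment check above is the quick diagnostic that rules the Poisson (and plain linear regression) out.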
Directory of Open Access Journals (Sweden)
Anas Altaleb
2017-03-01
Full Text Available The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we have applied the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In step 2 we have used the permutations of the symmetric group S_256 to construct a new kind of S-box. To explain the proposed extension scheme, we have given an example and constructed one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box has been analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
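The PGL(2, GF(2^8)) action in step 1 amounts to a Möbius map x → (ax+b)/(cx+d) on the projective line over GF(2^8). A minimal sketch, assuming the AES field polynomial and arbitrary illustrative coefficients (neither of which is claimed to match the paper's choices, and the paper's follow-up S_256 permutation step is omitted):

```python
POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1 (the AES polynomial; an assumption here)

def gf_mul(x, y):
    """Carry-less multiplication in GF(2^8) modulo POLY."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0x100:
            x ^= POLY
    return r

def gf_inv(x):
    """Multiplicative inverse by brute force (the field has 256 elements)."""
    return next(y for y in range(1, 256) if gf_mul(x, y) == 1)

def mobius_sbox(a, b, c, d):
    """S-box x -> (ax+b)/(cx+d). The pole -d/c is sent to the otherwise
    missing value a/c, folding the projective point at infinity back into
    GF(2^8) so the map is a bijection on 256 elements."""
    assert gf_mul(a, d) ^ gf_mul(b, c) != 0     # nonzero determinant in PGL(2)
    out = []
    for x in range(256):
        den = gf_mul(c, x) ^ d                  # addition in GF(2^8) is XOR
        if den == 0:
            out.append(gf_mul(a, gf_inv(c)))    # image of the point at infinity
        else:
            out.append(gf_mul(gf_mul(a, x) ^ b, gf_inv(den)))
    return out

sbox = mobius_sbox(2, 3, 1, 5)                  # illustrative coefficients
```

Security analyses such as nonlinearity or the strict avalanche criterion would then be computed over this 256-entry table; the bijectivity check below is only the minimal sanity test.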
A Family of Multipoint Flux Mixed Finite Element Methods for Elliptic Problems on General Grids
Wheeler, Mary F.; Xue, Guangri; Yotov, Ivan
2011-01-01
In this paper, we discuss a family of multipoint flux mixed finite element (MFMFE) methods on simplicial, quadrilateral, hexahedral, and triangular-prismatic grids. The MFMFE methods are locally conservative with continuous normal fluxes, since
DEFF Research Database (Denmark)
Sahana, Goutam; Mailund, Thomas; Lund, Mogens Sandø
2011-01-01
Introduction: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX (genealogy based mixed model)', which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. Subjects and Methods: We validated…
Off-diagonal generalization of the mixed-state geometric phase
International Nuclear Information System (INIS)
Filipp, Stefan; Sjoeqvist, Erik
2003-01-01
The concept of off-diagonal geometric phases for mixed quantal states in unitary evolution is developed. We show that these phases arise from three basic ideas: (1) fulfillment of quantum parallel transport of a complete basis, (2) a concept of mixed-state orthogonality adapted to unitary evolution, and (3) a normalization condition. We provide a method for computing the off-diagonal mixed-state phases to any order for unitarities that divide the parallel transported basis of Hilbert space into two parts: one part where each basis vector undergoes cyclic evolution and one part where all basis vectors are permuted among each other. We also demonstrate a purification based experimental procedure for the two lowest-order mixed-state phases and consider a physical scenario for a full characterization of the qubit mixed-state geometric phases in terms of polarization-entangled photon pairs. An alternative second order off-diagonal mixed-state geometric phase, which can be tested in single-particle experiments, is proposed
Zhang, Chenglong; Guo, Ping
2017-10-01
The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions can be obtained. Moreover, factorial analysis of the two parameters (i.e., λ and γ) indicates that the weight coefficient is the main factor, compared with the credibility level, for system efficiency. These results can be effective in supporting reasonable irrigation water resources management and agricultural production.
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2010-01-01
Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...
Blackstone, Christopher C.; Sanov, Andrei
2016-06-01
Using the generalized model for photodetachment of electrons from mixed-character molecular orbitals, we gain insight into the nature of the HOMO of HO2- by treating it as a coherent superposition of one p- and one d-type atomic orbital. Fitting the pd model function to the ab initio calculated HOMO of HO2- yields a fractional d-character, γp, of 0.979. The modeled curve of the anisotropy parameter, β, as a function of electron kinetic energy for a pd-type mixed-character orbital is matched to the experimental data.
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
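The broken-line linear (BLL) fit described here can be sketched by profiling the breakpoint: for each candidate breakpoint the model is linear in the remaining two parameters, so ordinary least squares applies at each grid point. The data below are noise-free and made up (the paper fits heteroskedastic nonlinear mixed models in SAS, which this deliberately ignores):

```python
# BLL ascending model: y = plateau - slope * max(0, bp - x).
bp_true, ymax, slope = 16.5, 0.70, 0.04
xs = [14.0 + 0.5 * i for i in range(9)]                 # Trp:Lys ratios, 14-18
ys = [ymax - slope * max(0.0, bp_true - x) for x in xs] # noise-free G:F

def sse_at(bp):
    """Profile SSE: OLS of y on [1, max(0, bp - x)] for a fixed breakpoint."""
    z = [max(0.0, bp - x) for x in xs]
    n, sz, szz = len(xs), sum(z), sum(v * v for v in z)
    sy, szy = sum(ys), sum(v * y for v, y in zip(z, ys))
    det = n * szz - sz * sz
    if det == 0:                                        # bp below all the data
        return float("inf")
    b1 = (n * szy - sz * sy) / det                      # pre-plateau slope
    b0 = (sy - b1 * sz) / n                             # plateau level
    return sum((y - (b0 + b1 * v)) ** 2 for v, y in zip(z, ys))

grid = [14.5 + 0.25 * i for i in range(15)]             # candidate breakpoints
bp_hat = min(grid, key=sse_at)
```

This grid-then-refine idea is also why the paper recommends a grid search of initial parameter values for the nonlinear mixed-model fit: the SSE profile over the breakpoint is what the optimizer has to navigate.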
Directory of Open Access Journals (Sweden)
Guo Chun Wen
2009-05-01
This article concerns the oblique derivative problems for second-order quasilinear degenerate equations of mixed type with several characteristic boundaries, which include the Tricomi problem as a special case. First we formulate the problem and obtain estimates of its solutions, then we show the existence of solutions by successive iterations and the Leray-Schauder theorem. We use a complex analytic method: elliptic complex functions are used in the elliptic domain, and hyperbolic complex functions in the hyperbolic domain, such that second-order equations of mixed type with a degenerate curve are reduced to first-order mixed complex equations with singular coefficients. As an application of the complex analytic method, we solve (1.1) below with $m=n=1$, $a=b=0$, which was posed as an open problem by Rassias.
Energy Technology Data Exchange (ETDEWEB)
NONE
1999-12-31
This conference day was jointly organized by the 'university group of thermal engineering (GUT)' and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: 'energy spectra of a passive scalar undergoing advection by a chaotic flow'; 'analysis of chaotic behaviours: from topological characterization to modeling'; 'temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach'; 'thermal instabilities in a mixed convection phenomenon: nonlinear dynamics'; 'experimental characterization study of the 3-D Lagrangian chaos by thermal analogy'; 'influence of coherent structures on the mixing of a passive scalar'; 'evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers'. (J.S.)
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, these two nonlinear recurrent neural networks are proved to be convergent in finite time. Besides, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
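The networks in this abstract use specially designed nonlinear activations to reach the solution in finite time. As a baseline sketch of the underlying idea, a plain gradient-flow recurrent network (linear activation, constant coefficients for simplicity) for the matrix equation A X = B can be simulated with Euler steps; the matrices and step sizes below are illustrative, not from the paper:

```python
import numpy as np

# Gradient-flow recurrent network for A X = B:
#   dX/dt = -gamma * A^T (A X - B)
# The residual ||A X - B|| decays exponentially; the paper's nonlinear
# activations sharpen this to finite-time convergence.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)
X = np.zeros((2, 2))
gamma, dt = 1.0, 0.01

for _ in range(5000):                       # forward-Euler integration
    X -= dt * gamma * A.T @ (A @ X - B)

print(np.linalg.norm(A @ X - B))            # residual, driven toward 0
```

Stability requires dt * gamma below 2 divided by the largest eigenvalue of A^T A, which holds for the values chosen here.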
Who is MADD? Mixed anxiety depressive disorder in the general population
Spijker, J.; Batelaan, N.M.; de Graaf, R.; Cuijpers, P.
2010-01-01
Background: Diagnostic criteria for (subthreshold) mixed anxiety depression (MADD) were proposed in DSM-IV. Yet the usefulness of this classification is questioned. We therefore assessed the prevalence of MADD, and investigated whether MADD adds to separate classifications of pure subthreshold
Abramov, Rafail V.
2011-01-01
Chaotic multiscale dynamical systems are common in many areas of science, one of the examples being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation prop...
International Nuclear Information System (INIS)
Fiorenza, Alberto; Vincenzi, Giovanni
2011-01-01
Research highlights: → We prove a result true for all linear homogeneous recurrences with constant coefficients. → As a corollary of our results we immediately get the celebrated Poincaré theorem. → The limit of the ratio of adjacent terms is characterized as the unique leading root of the characteristic polynomial. → The Golden Ratio, the Kepler limit of the classical Fibonacci sequence, is the unique leading root. → The Kepler limit may differ from the unique root of maximum modulus and multiplicity. - Abstract: For complex linear homogeneous recursive sequences with constant coefficients we find a necessary and sufficient condition for the existence of the limit of the ratio of consecutive terms. The result can be applied even if the characteristic polynomial does not necessarily have roots with pairwise distinct moduli, as in the celebrated Poincaré theorem. When the limit exists, we characterize it as a particular root of the characteristic polynomial, which depends on the initial conditions and is not necessarily the unique root of maximum modulus and multiplicity. The result extends to a quite general context the way used to find the Golden mean as the limit of the ratio of consecutive terms of the classical Fibonacci sequence.
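The Fibonacci case mentioned at the end is easy to check numerically: the ratio of consecutive terms approaches the Golden Ratio, the leading root of x^2 = x + 1.

```python
from fractions import Fraction

# Ratio of consecutive Fibonacci terms converges to the Golden Ratio.
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b          # next Fibonacci pair

ratio = Fraction(b, a)       # exact rational ratio of consecutive terms
print(float(ratio))          # ≈ 1.6180339887..., the Golden Ratio (1 + sqrt(5)) / 2
```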
International Nuclear Information System (INIS)
Maldonado, G.I.; Turinsky, P.J.; Kropaczek, D.J.
1993-01-01
A computational capability to efficiently and accurately evaluate reactor core attributes (i.e., k-eff and power distributions as a function of cycle burnup) utilizing a second-order accurate advanced nodal Generalized Perturbation Theory (GPT) model has been developed. The GPT model is derived from the forward nonlinear iterative Nodal Expansion Method (NEM) strategy, thereby extending its inherent savings in memory storage and high computational efficiency to also encompass GPT via the preservation of the finite-difference matrix structure. This development was easily implemented into the existing coarse-mesh finite-difference GPT-based in-core fuel management optimization code FORMOSA-P, thus combining the proven robustness of its adaptive Simulated Annealing (SA) multiple-objective optimization algorithm with a high-fidelity NEM GPT neutronics model to produce a powerful computational tool for generating families of near-optimum loading patterns for PWRs. (orig.)
International Nuclear Information System (INIS)
Shimizu, Yoshiaki
1991-01-01
In recent complicated nuclear systems, there are increasing demands for highly advanced procedures for solving various problems. Among these, keen interest has been paid to man-machine communications as a means to improve both safety and economy. Many optimization methods are well suited to this end. In this preliminary note, we consider the application of linear programming (LP) for this purpose. First we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of the processes that appear in the system. (author)
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO after only 3 hours of incubation. One linear model applies when the number of plaques detected after 3 hours was between 4 and 26 PFU, with the fit (1.48 × count at 3 h + 1.97); the other applies to values >26 PFU, with the fit (1.18 × count at 3 h + 2.95). If the number of plaques detected after 3 hours fell below this range, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
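The two reported linear fits can be wrapped directly in a small helper. This is a sketch of the published formulas only; the function name and the handling of out-of-range counts are our own:

```python
def predict_pfu(counts_3h):
    """Predict the final coliphage PFU count from the 3-hour count,
    using the two linear fits reported in the abstract."""
    if 4 <= counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    if counts_3h > 26:
        return 1.18 * counts_3h + 2.95
    return None  # below the model range: incubate for (18 +/- 3) h instead

print(predict_pfu(10))   # 16.77
print(predict_pfu(30))   # 38.35
```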
DEFF Research Database (Denmark)
… and straightforward idea is to interpret effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen … on a multifactorial sensory profile data set and compared to actual d-prime calculations based on ordinal regression modelling through the ordinal package. A generic "plug-in" implementation of the method is given in the SensMixed package, which again depends on the lmerTest package. We discuss and clarify the bias …
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
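The central claim of this abstract, that a within-subject task contrast is properly tested against the intrasubject variance only, can be seen in a toy simulation (illustrative numbers, not EEG data): the subject-level random effect cancels exactly in the per-subject difference.

```python
import numpy as np

# Two conditions measured on the same subjects over many trials.
rng = np.random.default_rng(1)
n_subj, n_trial = 200, 50
subj = rng.normal(0, 2.0, n_subj)[:, None]        # intersubject sd = 2.0
eps_a = rng.normal(0, 1.0, (n_subj, n_trial))     # intrasubject sd = 1.0
eps_b = rng.normal(0, 1.0, (n_subj, n_trial))
cond_a = subj + 0.5 + eps_a                       # task effect = 0.5
cond_b = subj + eps_b

# Per-subject contrast: the subject random effect cancels, so its
# variance never enters the standard error of the task difference.
diff = cond_a.mean(axis=1) - cond_b.mean(axis=1)
print(diff.mean())          # ≈ 0.5, the task effect
print(diff.var(ddof=1))     # ≈ 2 * 1.0**2 / n_trial = 0.04, no intersubject term
```

Testing the contrast against the (much larger) total variance would sacrifice exactly the power gain the article quantifies.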
General features of conceptual design for the pilot plant to manufacture fuel rods from mixed oxides
International Nuclear Information System (INIS)
Quesada, C.A.; Adelfang, P.; Esteban, A.; Aparicio, G.; Friedenthal, M.; Orlando, O.S.
1987-01-01
This paper conceptually describes: 1) the processes in the manufacturing lines; 2) the distribution of quality controls and glove boxes in the manufacturing lines; 3) the Control and Radiological Safety Room; 4) the Dressing Room; 5) the requirements of the ventilation system. The plant will be located on the first floor of the Radiochemical Processes Laboratory building, occupying a surface of 600 m2. The necessary equipment for the following manufacturing lines will be provided: a) conversion from Pu(NO3)4 to PuO2 (through Pu(III) oxalate); b) manufacture of homogeneous mixed oxides of U and Pu; c) manufacture of (U,Pu)O2 pellets; d) manufacture of fuel rods of mixed uranium and plutonium oxides. (Author)
A Family of Multipoint Flux Mixed Finite Element Methods for Elliptic Problems on General Grids
Wheeler, Mary F.
2011-01-01
In this paper, we discuss a family of multipoint flux mixed finite element (MFMFE) methods on simplicial, quadrilateral, hexahedral, and triangular-prismatic grids. The MFMFE methods are locally conservative with continuous normal fluxes, since they are developed within a variational framework as mixed finite element methods with special approximating spaces and quadrature rules. The latter allows for local flux elimination giving a cell-centered system for the scalar variable. We study two versions of the method: with a symmetric quadrature rule on smooth grids and a non-symmetric quadrature rule on rough grids. Theoretical and numerical results demonstrate first order convergence for problems with full-tensor coefficients. Second order superconvergence is observed on smooth grids. © 2011 Published by Elsevier Ltd.
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Directory of Open Access Journals (Sweden)
Jie Wang
2017-03-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transferring process, the generalization power of the features in the pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general features from pre-trained deep CNNs for remote scene classification. These two architectures suggest two directions for improvement. First, before the pre-trained deep CNNs, we design a linear PCA network (LPCANet) to synthesize spatial information of remote sensing images in each spectral channel. This design shortens the spatial "distance" between the target and source datasets for pre-trained deep CNNs. Second, we introduce quaternion algebra to the LPCANet, which further shortens the spectral "distance" between remote sensing images and the images used to pre-train deep CNNs. With five well-known pre-trained deep CNNs, experimental results on three independent remote sensing datasets demonstrate that our proposed framework obtains state-of-the-art results without fine-tuning and feature fusing. This paper also provides a baseline for transferring fresh pre-trained deep CNNs to other remote sensing tasks.
Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong
2009-02-01
A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO are less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% in reference to the 2 V supply voltage. The average conversion ratio K_CCO is equal to 123.5 GHz/A. The linearity of the conversion ratio is high and its tolerance is within +/-1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage, and is less than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with the current-starved oscillator. It is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.
Directory of Open Access Journals (Sweden)
Goutam Sahana
INTRODUCTION: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX' (genealogy-based mixed model), which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. SUBJECTS AND METHODS: We validated GENMIX using genotyping data of Danish Jersey cattle and simulated phenotypes, and compared it to the MMA. We simulated scenarios for three levels of heritability (0.21, 0.34, and 0.64), seven levels of MAF (0.05, 0.10, 0.15, 0.20, 0.25, 0.35, and 0.45) and five levels of QTL effect (0.1, 0.2, 0.5, 0.7 and 1.0) in phenotypic standard deviation units. Each of these 105 possible combinations (3 h2 × 7 MAF × 5 effects) of scenarios was replicated 25 times. RESULTS: GENMIX provides a better ranking of markers close to the causative locus. GENMIX outperformed MMA when the QTL effect was small and the MAF at the QTL was low. In scenarios where MAF was high or the QTL affecting the trait had a large effect, both GENMIX and MMA performed similarly. CONCLUSION: In discovery studies, where high-ranking markers are identified and later examined in validation studies, we therefore expect GENMIX to enrich candidates brought to follow-up studies with true positives over false positives more than the MMA would.
Optimization of light quality from color mixing light-emitting diode systems for general lighting
DEFF Research Database (Denmark)
Thorseth, Anders
2012-01-01
To address the problem of spectral light quality from color mixing light-emitting diode systems, a method for optimizing the spectral output of a multicolor LED system with regard to standardized quality parameters has been developed. The composite spectral power distributions from the LEDs are simulated using radiometrically measured single-LED spectra. The method uses electrical input powers as input parameters and optimizes the resulting spectral power distribution with regard to color rendering index, correlated color temperature and chromaticity distance. The results indicate Pareto optimal …
Hughes, Vanessa K; Langlois, Neil E I
2010-12-01
Bruises can have medicolegal significance such that the age of a bruise may be an important issue. This study sought to determine if colorimetry or reflectance spectrophotometry could be employed to objectively estimate the age of bruises. Based on a previously described method, reflectance spectrophotometric scans were obtained from bruises using a Cary 100 Bio spectrophotometer fitted with a fibre-optic reflectance probe. Measurements were taken from the bruise and a control area. Software was used to calculate the first derivative at 490 and 480 nm; the proportion of oxygenated hemoglobin was calculated using an isobestic point method, and a software application converted the scan data into colorimetry data. In addition, data on factors that might be associated with the determination of the age of a bruise were recorded: subject age, subject sex, degree of trauma, bruise size, skin color, body build, and depth of bruise. From 147 subjects, 233 reflectance spectrophotometry scans were obtained for analysis. The age of the bruises ranged from 0.5 to 231.5 h. A General Linear Model analysis method was used. This revealed that colorimetric measurement of the yellowness of a bruise accounted for 13% of the bruise age. By incorporation of the other recorded data (as above), yellowness could predict up to 32% of the age of a bruise, implying that 68% of the variation was dependent on other factors. However, critical appraisal of the model revealed that the colorimetry method of determining the age of a bruise was affected by skin tone and required a measure of the proportion of oxygenated hemoglobin, which is obtained by spectrophotometric methods. Using spectrophotometry, the first derivative at 490 nm alone accounted for 18% of the bruise age estimate. When additional factors (subject sex, bruise depth and oxygenation of hemoglobin) were included in the General Linear Model this increased to 31%, implying that 69% of the variation was dependent on other factors. This
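The percentages quoted in this abstract are R^2-type statements: a (general) linear model regressing bruise age on a predictor explains that share of the variance. A minimal sketch of how such a figure arises, on synthetic data with made-up numbers rather than the study's measurements:

```python
import numpy as np

# Weak predictor scenario: most of the variance in "age" is noise,
# so the single-predictor model explains only a modest share.
rng = np.random.default_rng(4)
n = 233
yellowness = rng.normal(size=n)
age = 3.0 * yellowness + rng.normal(0, 7.5, n)

coef = np.polyfit(yellowness, age, 1)          # ordinary least squares, degree 1
resid = age - np.polyval(coef, yellowness)
r2 = 1 - resid.var() / age.var()               # share of variance explained
print(r2)                                      # modest; the rest depends on other factors
```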
LENUS (Irish Health Repository)
Kiely, A
2017-03-01
The Special Type Consultation (STC) scheme is a fee-for-service reimbursement scheme for General Practitioners (GPs) in Ireland. Introduced in 1989, the scheme includes specified patient services involving the application of a learned skill, e.g. suturing. This study aims to establish the extent to which GPs believe this scheme is appropriate for current General Practice. This is an embedded mixed-methods study combining quantitative data on GPs' working experience of, and qualitative data on GPs' attitudes towards, the scheme. Data were collected by means of an anonymous postal questionnaire. The response rate was 60.4% (n=159). Twenty-nine percent (n=46) disagreed and 65% (n=104) strongly disagreed that the current list of special items is satisfactory. Two overriding themes were identified: economics and advancement of the STC process. This study demonstrates an overwhelming consensus among GPs that the current STC scheme is outdated and in urgent need of revision to reflect modern General Practice.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Introduction to the General Theory of Defects of the Mixed Economy
Directory of Open Access Journals (Sweden)
Alexander Yakovlevich Rubinstein
2016-12-01
The author gives a new perspective on defects of the mixed economy in the context of the nature and evolution of paternalism as one of the elements of the theory of patronized goods. The researcher supplements standard market failures with 'behavioral failure', which is a direct consequence of the irrational behavior of individuals, and 'paternalistic failure', which occurs as a result of erroneous actions of the state. The article also discusses the formation of paternalistic attitudes and reviews theoretical and applied aspects of the liberalization of political and economic decision-making based on the development of civil society institutions and civic engagement, aimed at reducing the risks of 'government failures' and welfare loss.
Directory of Open Access Journals (Sweden)
Phayap Katchang
2010-01-01
The purpose of this paper is to investigate the problem of finding a common element of the set of solutions of mixed equilibrium problems, the set of solutions of variational inclusions with set-valued maximal monotone mappings and inverse-strongly monotone mappings, and the set of fixed points of a finite family of nonexpansive mappings in the setting of Hilbert spaces. We propose a new iterative scheme for finding the common element of the above three sets. Our results improve and extend the corresponding results of Zhang et al. (2008), Peng et al. (2008), Peng and Yao (2009), as well as Plubtieng and Sriprad (2009), and some well-known results in the literature.
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.
and supported by quasi-steady upwelling. Remotely sensed chlorophyll pigment concentrations from the Coastal Zone Color Scanner (CZCS) are used to investigate the chlorophyll modulation of ocean mixed layer thermodynamics in a bulk mixed-layer model, embedded...
Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models
Directory of Open Access Journals (Sweden)
Xiao Guo
2018-03-01
An important issue for robust inference is to examine the stability of the asymptotic level and power of a test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called "robust-BD", for the class of "general linear models". Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process involves multiple testing problems and requires correction of the significance level. Methods: For each coding, a test of the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
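The resampling idea can be sketched as a permutation max-test over candidate codings: the observed maximum test statistic across codings is compared with its permutation distribution, which controls the level over the whole selection process. This is a hypothetical two-sample illustration, not the CPMCGLM implementation (which works with score tests in GLMs):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.8 * (x > 0) + rng.normal(size=n)       # true effect via one dichotomization
cutpoints = np.quantile(x, [0.25, 0.5, 0.75])  # candidate codings of x

def max_stat(y, x):
    # Largest |t|-like statistic over all candidate dichotomizations.
    stats = []
    for c in cutpoints:
        g = x > c
        se = np.sqrt(y[g].var(ddof=1) / g.sum() + y[~g].var(ddof=1) / (~g).sum())
        stats.append(abs(y[g].mean() - y[~g].mean()) / se)
    return max(stats)

obs = max_stat(y, x)
# Permuting y breaks the x-y link while preserving the coding-selection step,
# so the null distribution accounts for the multiple codings tried.
null = [max_stat(rng.permutation(y), x) for _ in range(500)]
p_corrected = (1 + sum(s >= obs for s in null)) / (1 + len(null))
print(p_corrected)
```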
Felleki, M; Lee, D; Lee, Y; Gilmour, A R; Rönnegård, L
2012-12-01
The possibility of breeding for uniform individuals by selecting animals expressing a small response to environment has been studied extensively in animal breeding. Bayesian methods for fitting models with genetic components in the residual variance have been developed for this purpose, but have limitations due to the computational demands. We use the hierarchical (h)-likelihood from the theory of double hierarchical generalized linear models (DHGLM) to derive an estimation algorithm that is computationally feasible for large datasets. Random effects for both the mean and residual variance parts of the model are estimated together with their variance/covariance components. An important feature of the algorithm is that it can fit a correlation between the random effects for mean and variance. An h-likelihood estimator is implemented in the R software and an iterative reweighted least square (IRWLS) approximation of the h-likelihood is implemented using ASReml. The difference in variance component estimates between the two implementations is investigated, as well as the potential bias of the methods, using simulations. IRWLS gives the same results as h-likelihood in simple cases with no severe indication of bias. For more complex cases, only IRWLS could be used, and bias did appear. The IRWLS is applied on the pig litter size data previously analysed by Sorensen & Waagepetersen (2003) using Bayesian methodology. The estimates we obtained by using IRWLS are similar to theirs, with the estimated correlation between the random genetic effects being -0·52 for IRWLS and -0·62 in Sorensen & Waagepetersen (2003).
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
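The core of geographically weighted modeling, kernel-weighted local fits, can be sketched as follows for a Gaussian kernel and a linear model. This is a simplified illustration of the idea only; the macros described in the abstract handle GLMs, multiple kernels, and bandwidth selection:

```python
import numpy as np

def gaussian_kernel(d, bandwidth):
    # Weight decays with distance from the focal location.
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def gwr_coef(focal, coords, X, y, bandwidth):
    # Weighted least squares at one focal location.
    d = np.linalg.norm(coords - focal, axis=1)
    W = np.diag(gaussian_kernel(d, bandwidth))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Synthetic spatially varying coefficient: slope grows from west to east.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, (300, 2))
slope = 1.0 + 0.2 * coords[:, 0]
x = rng.normal(size=300)
y = slope * x + rng.normal(0, 0.1, 300)
X = np.column_stack([np.ones(300), x])

beta = gwr_coef(np.array([5.0, 5.0]), coords, X, y, bandwidth=2.0)
print(beta)   # local [intercept, slope]; slope near 1 + 0.2*5 = 2.0
```

Repeating the fit over a grid of focal points maps how the coefficients vary in space, which is what the abstract's spatial non-stationarity analysis delivers.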
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on each coefficient. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios, varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
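The shrinkage behaviour induced by a double-exponential (Laplace) prior can be illustrated with the classical soft-thresholding operator: under a Gaussian likelihood, the MAP estimate shrinks small effects exactly to zero, and a group-specific scale ("slab" for groups judged active, "spike" otherwise) modulates how aggressive that shrinkage is. A minimal Python sketch with illustrative penalty values of our own choosing, not the paper's fitting algorithm:

```python
def soft_threshold(z, lam):
    """MAP estimate of beta for z ~ N(beta, 1) under a
    double-exponential (Laplace) prior with rate lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# group-specific scales: weak shrinkage (slab) for groups judged
# active, strong shrinkage (spike) otherwise -- illustrative values
penalty = {"active_group": 0.1, "inactive_group": 2.0}
estimates = {g: soft_threshold(1.0, lam) for g, lam in penalty.items()}
```

The same raw estimate of 1.0 survives almost intact under the slab scale but is zeroed out under the spike scale, which is the mechanism that lets the mixture prior perform variable selection within groups.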
Agalarov, Agalar; Zhulego, Vladimir; Gadzhimuradov, Telman
2015-04-01
The reduction procedure for the general coupled nonlinear Schrödinger (GCNLS) equations with four-wave mixing terms is proposed. It is shown that the GCNLS system is equivalent to the well known integrable families of the Manakov and Makhankov U(n,m)-vector models. This equivalence allows us to construct bright-bright and dark-dark solitons and a quasibreather-dark solution with unconventional dynamics: the density of the first component oscillates in space and time, whereas the density of the second component does not. The collision properties of solitons are also studied.
2013-01-01
that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art, as well as often substantially faster processing of moderately hard problem instances. Conclusion: We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437
Energy Technology Data Exchange (ETDEWEB)
Waddell, Lucas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Muldoon, Frank [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Henry, Stephen Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Matthew John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zwerneman, April Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Backlund, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melander, Darryl J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lawton, Craig R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice, Roy Eugene [Teledyne Brown Engineering, Huntsville, AL (United States)
2017-09-01
In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet, respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
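At its core, a portfolio MILP of this kind trades off project value against a budget. The toy Python sketch below solves a four-option version by exhaustive enumeration rather than by a MILP solver; the option names, costs, and performance gains are entirely hypothetical and bear no relation to the actual CPAT formulation.

```python
from itertools import combinations

# hypothetical modernization options: (name, cost, performance gain)
options = [("upgrade_A", 4, 7), ("upgrade_B", 3, 5),
           ("replace_C", 5, 9), ("retrofit_D", 2, 3)]
budget = 9

# enumerate every subset of options and keep the best feasible one
best_value, best_set = 0, ()
for r in range(len(options) + 1):
    for subset in combinations(options, r):
        cost = sum(o[1] for o in subset)
        value = sum(o[2] for o in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset
```

Enumeration is exponential in the number of options, which is exactly why a production tool like CPAT formulates the problem as a MILP and hands it to a branch-and-bound solver instead.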
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Shirazi, Mohammadali; Lord, Dominique; Dhavala, Soma Sekhar; Geedipally, Srinivas Reddy
2016-06-01
Crash data can often be characterized by over-dispersion, a heavy (long) tail, and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel and innovative multi-parameter models to analyze such data. These multi-parameter models have been proposed for overcoming the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work related to multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective of the study was accomplished using two datasets. The new model was compared to the NB and the recently introduced model based on the mixture of the NB and Lindley (NB-L) distributions. Overall, the research study shows that the NB-DP model offers better performance than the NB model when data are over-dispersed and have a heavy tail. The NB-DP performed better than the NB-L when the dataset has a heavy tail but a smaller percentage of zeros. However, both models performed similarly when the dataset contained a large number of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion. Copyright © 2016 Elsevier Ltd. All rights reserved.
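The gamma-Poisson mixture construction behind the NB model (and, with a Dirichlet-process mixing distribution, behind the NB-DP) can be demonstrated in a few lines of Python. This sketch only shows why mixing produces over-dispersion, i.e. a variance well above the mean; the parameter values are illustrative.

```python
import math, random

random.seed(7)

def poisson(lam):
    """Knuth's multiplicative method; adequate for modest lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mu, r):
    """Gamma-Poisson mixture: mean mu, variance mu + mu**2 / r."""
    lam = random.gammavariate(r, mu / r)  # gamma rate with mean mu
    return poisson(lam)

# small dispersion parameter r -> heavy tail and many zeros
draws = [neg_binomial(2.0, 0.5) for _ in range(5000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
```

A pure Poisson model would force the variance to equal the mean; the mixture's variance exceeds it by mu squared over r, which is the over-dispersion these crash models are built to capture.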
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, because the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporating both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
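The pitfall the abstract describes, treating repeated measures across conditions as independent observations, is easy to demonstrate: the pooled Pearson correlation can even have the opposite sign from the within-condition association. A small hand-constructed Python example (the data are invented purely for illustration):

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# two listening conditions with different baselines; within each
# condition the neural measure decreases as the behavioral score rises
cond1 = [(1, 10), (2, 9), (3, 8)]
cond2 = [(7, 16), (8, 15), (9, 14)]

both = cond1 + cond2
pooled = pearson([x for x, _ in both], [y for _, y in both])
within1 = pearson([x for x, _ in cond1], [y for _, y in cond1])
```

The pooled correlation is strongly positive while the within-condition correlation is exactly -1; a mixed-effects model with a random intercept per condition recovers the within-condition relationship instead of the baseline artifact.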
Pentzek, Michael; Leve, Verena; Leucht, Verena
2017-05-01
Public awareness for dementia is rising and patients with concerns about forgetfulness are not uncommon in general practice. For the general practitioner (GP) subjectively perceived memory impairment (SMI) also offers a chance to broach the issue of cognitive function with the patient. This may support GPs' patient-centered care in terms of a broader frailty concept. What is SMI (definition, operationalization, prevalence and burden)? Which conceptions and approaches do GPs have regarding SMI? Narrative overview of recent SMI criteria and results, selective utilization of results from a systematic literature search on GP dementia care, non-systematic search regarding SMI in general practice, deduction of a study design from the overview and development according to international standards. Studies revealed that approximately 60% of GP patients aged >74 reported a declining memory, every sixth person had concerns about this aspect and only relatively few seek medical advice. Concerns about SMI are considered a risk factor for future dementia. Specific general practice conceptions about SMI could not be identified in the literature. Using guidelines for mixed methods research, the design of an exploratory sequential mixed methods study is presented, which should reveal different attitudes of GPs towards SMI. Subjective memory impairment (SMI) is a common feature and troubles a considerable proportion of patients. Neuropsychiatric research is progressing, but for the transfer of the SMI concept into routine practice, involvement of GP research is necessary. A new study aims to make a contribution to this.
Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaë l
2017-01-01
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of elevation and the 5% quantile of plan curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
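The predictor-construction step described above, summarizing a continuous variable within each slope unit by its quantiles from 0.05 to 0.95, can be sketched in a few lines of Python. The elevation values and the interpolation rule are illustrative assumptions, not the LUDARA implementation.

```python
def quantile(values, q):
    """Sample quantile with linear interpolation, 0 <= q <= 1."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

# elevations (m) of the grid cells falling inside one hypothetical SU
elevations = [120, 135, 150, 180, 210, 260, 320, 400, 455, 510]

# one predictor per quantile level 0.05, 0.10, ..., 0.95
predictors = {round(i * 0.05, 2): quantile(elevations, i * 0.05)
              for i in range(1, 20)}
```

Repeating this for every continuous covariate in every slope unit is what inflates the predictor hyperspace, and it is that inflated set that the LASSO penalty then shrinks back to an interpretable number of covariates.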
Salihu, Hamisu M; Salemi, Jason L; Nash, Michelle C; Chandler, Kristen; Mbah, Alfred K; Alio, Amina P
2014-08-01
Lack of paternal involvement has been shown to be associated with adverse pregnancy outcomes, including infant morbidity and mortality, but the impact on health care costs is unknown. Various methodological approaches have been used in cost minimization and cost effectiveness analyses and it remains unclear how cost estimates vary according to the analytic strategy adopted. We illustrate a methodological comparison of decision analysis modeling and generalized linear modeling (GLM) techniques using a case study that assesses the cost-effectiveness of potential father involvement interventions. We conducted a 12-year retrospective cohort study using a statewide enhanced maternal-infant database that contains both clinical and nonclinical information. A missing name for the father on the infant's birth certificate was used as a proxy for lack of paternal involvement, the main exposure of this study. Using decision analysis modeling and GLM, we compared all infant inpatient hospitalization costs over the first year of life. Costs were calculated from hospital charges using department-level cost-to-charge ratios and were adjusted for inflation. In our cohort of 2,243,891 infants, 9.2% had a father uninvolved during pregnancy. Lack of paternal involvement was associated with higher rates of preterm birth, small-for-gestational age, and infant morbidity and mortality. Both analytic approaches estimate significantly higher per-infant costs for father uninvolved pregnancies (decision analysis model: $1,827, GLM: $1,139). This paper provides sufficient evidence that healthcare costs could be significantly reduced through enhanced father involvement during pregnancy, and buttresses the call for a national program to involve fathers in antenatal care.
Dorren, H.J.S.
1998-01-01
It is shown that the Korteweg–de Vries (KdV) equation can be transformed into a linear differential equation in the wave number domain. Explicit solutions of the KdV equation can be obtained by subsequently solving this linear differential equation and by applying a cascade of
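For reference, a standard normalization of the KdV equation (the abstract does not state which form the paper uses, so this is an assumption) is

```latex
\partial_t u + 6\,u\,\partial_x u + \partial_x^3 u = 0 ,
```

whose linear dispersive part, after Fourier transforming in $x$, becomes an ordinary differential equation in time for each wave number $k$: $\hat{u}_t = i k^3 \hat{u}$. This is the sense in which the problem linearizes in the wave number domain.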
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
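For intuition about what an LP solver does: the fundamental theorem of linear programming says that an optimum of a bounded feasible LP lies at a vertex of the feasible region. The toy Python sketch below exploits this for two variables by enumerating intersections of constraint boundaries. Real solvers such as ALPS use the revised simplex method instead, and the example problem is invented.

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c.x over x in R^2 subject to a.x <= b for each (a, b)
    in constraints, by enumerating intersections of boundary pairs
    (the optimum of a bounded feasible LP lies at such a vertex)."""
    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no unique intersection
        x = (b1 * a2[1] - b2 * a1[1]) / det   # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + 1e-9 for a, b in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# maximize 3x + 5y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
best = solve_lp_2d((3, 5), [((1, 1), 4), ((1, 3), 6),
                            ((-1, 0), 0), ((0, -1), 0)])
```

Vertex enumeration grows combinatorially with the number of constraints, which is why the revised simplex method, which walks from vertex to vertex improving the objective, is used in practice.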
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we
Directory of Open Access Journals (Sweden)
Kyle A McQuisten
2009-10-01
Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods for comparing among predictive models. Also, few comparative studies address what effect the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: artificial neural networks (ANNs), general linear models (GLMs), and support vector machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3x5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework to compare among models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
International Nuclear Information System (INIS)
Zamuner, Stefano; Gomeni, Roberto; Bye, Alan
2002-01-01
Positron-emission tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment using tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking the brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated from PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed-effect models, allowing one to account for intra- and inter-subject variability in the time-activity data and for covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed-effect approach, with a primary model partitioning the variance in terms of inter-individual variability (IIV) and inter-occasion variability (IOV) and a second-stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach.
Directory of Open Access Journals (Sweden)
Reza Ezzati
2014-08-01
In this paper, we propose the least squares method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaufmann's arithmetic operations on fuzzy numbers [17]. We first consider the existence of an exact solution using the pseudoinverse; if it does not satisfy the positivity condition, we compute the core of the fuzzy vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
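The pseudoinverse step can be illustrated for an ordinary (crisp) over-determined system: the least squares solution solves the normal equations A^T A x = A^T b. A minimal Python sketch for a system with two unknowns; the numbers are our own example, not taken from the paper.

```python
def lstsq_2col(rows, rhs):
    """Least squares solution of an over-determined system with two
    unknowns, via the normal equations A^T A x = A^T b (this equals
    the pseudoinverse solution when A has full column rank)."""
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)]
           for i in range(2)]
    atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    x1 = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return x0, x1

# 3 equations, 2 unknowns: fit y = x0 + x1 * t through (1,2), (2,3), (3,4)
A = [(1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]
b = [2.0, 3.0, 4.0]
x = lstsq_2col(A, b)
```

In the fuzzy setting of the paper, this crisp computation would be applied to the core of the system, with the spreads then recovered from an additional constrained least squares problem.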
Digital Repository Service at National Institute of Oceanography (India)
Nakamoto, S.; PrasannaKumar, S.; Oberhuber, J.M.; Saito, H.; Muneyama, K.; Frouin, R.
is influenced not only by local vertical mixing but also by horizontal convergence of mass and heat, a mixed layer model must consider both full dynamics due to the use of primitive equations and a parameterization for the vertical mass transfer and related... is dynamically determined without such a constraint. Instantaneous atmospheric fields are interpolated from the monthly means. Monthly mean climatology of chlorophyll pigment concentrations was obtained from the Coastal Zone Color Scanner (CZCS) from...
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). By contrast, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, linear complexity is generally given as an estimate. Because a linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
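For comparison with the linearization method, the Berlekamp-Massey algorithm mentioned above computes the linear complexity L of a given binary output sequence in O(N^2) operations. A compact Python implementation over GF(2):

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR
    that generates the given 0/1 sequence."""
    n = len(bits)
    c, b = [0] * n, [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]   # c(x) += x^(i-m) * b(x) over GF(2)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

As the abstract observes, the result depends on the particular output sequence, and hence on the PRNG's initial value, whereas the linearization method works from the algorithm itself and so can bound the linear complexity from below.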
Wang, Haohan; Aragam, Bryon; Xing, Eric P
2018-04-26
A fundamental and important challenge in modern datasets of ever increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and human, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.
Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin
2011-01-01
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
International Nuclear Information System (INIS)
Megow, Joerg; Kulesza, Alexander; Qu Zhengwang; Ronneberg, Thomas; Bonacic-Koutecky, Vlasta; May, Volkhard
2010-01-01
Graphical abstract: Structure of a single Pheo (green: C-atoms, blue: N-atoms, red: O-atoms, light grey: H-atoms). - Abstract: Linear absorption spectra of a single Pheophorbide-a molecule (Pheo) dissolved in ethanol are calculated in a mixed quantum-classical approach. In this computational scheme the absorbance is mainly determined by the time-dependent fluctuations of the energy gap between the Pheo ground and excited electronic states. The actual magnitude of the energy gap is set by the electrostatic solvent-solute coupling as well as by contributions from intra-Pheo vibrations. For the latter a new approach is proposed, based on precalculated potential energy surfaces (PES) described in a harmonic approximation. To obtain the respective nuclear equilibrium configurations and Hessian matrices of the two involved electronic states, we carried out the necessary electronic structure calculations in a DFT framework. Since the Pheo changes its spatial orientation in the course of an MD run, the nuclear equilibrium configurations change their spatial position, too. Introducing a particular averaging procedure, these configurations are determined from the actual MD trajectories. The usability of the approach is underlined by a perfect reproduction of experimental data. This also demonstrates that our proposed method is suitable for the description of more complex systems in future investigations.
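The central mechanism, a lineshape determined by energy-gap fluctuations, can be illustrated with a toy calculation: Ornstein-Uhlenbeck noise stands in for the MD-derived gap trajectory, the dephasing function is averaged over trajectories, and its Fourier transform gives an absorption band centred at the mean gap frequency. All parameters are invented; this is not the paper's DFT/MD pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, n_traj = 0.5, 4000, 200   # time step (fs), steps, trajectories
tau, sigma = 50.0, 0.01                # gap correlation time (fs), gap std (rad/fs)
w0 = 1.0                               # mean transition frequency (rad/fs)

# Ornstein-Uhlenbeck gap fluctuations as a stand-in for MD-derived trajectories
a = np.exp(-dt / tau)
kicks = rng.normal(0.0, sigma * np.sqrt(1 - a * a), (n_traj, n_steps))
dw = np.zeros((n_traj, n_steps))
for i in range(1, n_steps):
    dw[:, i] = a * dw[:, i - 1] + kicks[:, i]

t = np.arange(n_steps) * dt
phase = np.cumsum(dw, axis=1) * dt
corr = np.exp(-1j * (w0 * t + phase)).mean(axis=0)   # dipole correlation <e^{-i∫ω}>
spec = np.real(np.fft.ifft(corr)) * n_steps          # A(ω) ~ Re Σ corr(t) e^{iωt}
w = np.fft.fftfreq(n_steps, dt) * 2 * np.pi
w_peak = w[np.argmax(spec)]
print(w_peak)                                        # band peaks near w0
```

Slower or stronger gap fluctuations broaden the band; this is the sense in which "the absorbance is mainly determined by the time-dependent fluctuations of the energy gap".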
Directory of Open Access Journals (Sweden)
Yu-Pin Liao
2017-11-01
Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting result from the LME model is compared to commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance of the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence the demands of all types can be predicted simultaneously using a single LME model. In contrast, some approaches require splitting the data into different type categories and then predicting each type's demand by establishing a separate model per type. This feature further demonstrates the practicability of the LME model in real business operations.
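The key feature claimed above, one model predicting all product types at once via a type-level random effect, can be sketched with moment estimates of the variance components followed by BLUP-style shrinkage of the per-type means toward the grand mean. The data, type count, and variances below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
types, n_obs = 12, 24                      # product types, periods observed per type
b = rng.normal(0, 10, types)               # true type-level random effects
demand = 100 + b[:, None] + rng.normal(0, 5, (types, n_obs))

ybar, grand = demand.mean(axis=1), demand.mean()
# Moment estimates of the variance components, then BLUP shrinkage:
sigma2 = demand.var(axis=1, ddof=1).mean()           # within-type variance
tau2 = max(ybar.var(ddof=1) - sigma2 / n_obs, 0.0)   # between-type variance
w = tau2 / (tau2 + sigma2 / n_obs)                   # shrinkage weight
blup = grand + w * (ybar - grand)                    # all types predicted at once

print(w, np.abs(blup - (100 + b)).mean())
```

A single fit produces predictions for every type simultaneously, whereas the per-type alternative mentioned in the abstract would require estimating a separate model for each category.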
International Nuclear Information System (INIS)
Wouters, Carmen; Fraga, Eric S.; James, Adrian M.
2015-01-01
The integration of distributed generation units and microgrids into the current grid infrastructure requires an efficient and cost-effective local energy system design. A mixed-integer linear programming (MILP) model is presented to identify such an optimal design. The electricity as well as the space heating and cooling demands of a small residential neighbourhood are satisfied through the consideration and combined use of distributed generation technologies, thermal units and energy storage, with an optional interconnection with the central grid. Moreover, energy integration is allowed in the form of both optimised pipeline networks and microgrid operation. The objective is to minimise the total annualised cost of the system to meet its yearly energy demand. The model integrates the operational characteristics and constraints of the different technologies for several scenarios in a South Australian setting and is implemented in GAMS. The impact of energy integration is analysed, leading to the identification of key components for residential energy systems. Additionally, a multi-microgrid concept is introduced to allow for local clustering of households within neighbourhoods. The robustness of the model is shown through sensitivity analysis, up-scaling and an effort to address the variability of solar irradiation. - Highlights: • Distributed energy system planning is employed on a small residential scale. • Full energy integration is employed based on microgrid operation and tri-generation. • An MILP for local clustering of households in multi-microgrids is developed. • Micro combined heat and power units are key components for residential microgrids
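The structure of such a design MILP, binary build decisions plus continuous dispatch under a demand constraint, can be illustrated with a deliberately tiny stand-in: three hypothetical units, one demand period, and exhaustive enumeration of the binaries in place of a branch-and-bound solver. All technologies, capacities, and costs are invented; the paper's model is far richer (heating, cooling, storage, networks) and is solved in GAMS.

```python
from itertools import product

# name: (capacity kWh, per-period fixed cost if installed, cost per kWh dispatched)
units = {
    "CHP":  (40, 9.0, 0.06),
    "PV":   (20, 4.0, 0.00),
    "grid": (100, 0.0, 0.25),
}
demand = 55.0
cap = [c for (c, _, _) in units.values()]

best = None
for build in product([0, 1], repeat=len(units)):     # enumerate binary decisions
    fixed = sum(b * f for b, (_, f, _) in zip(build, units.values()))
    # LP dispatch for one period: fill demand with cheapest installed units first
    order = sorted((v[2], i) for i, v in enumerate(units.values()) if build[i])
    left, var = demand, 0.0
    for mc, i in order:
        take = min(left, cap[i])
        var += mc * take
        left -= take
    if left < 1e-9 and (best is None or fixed + var < best[0]):
        best = (fixed + var, build)

print(best)   # minimum-cost feasible design and its total cost
```

Enumeration is exponential in the number of binaries, which is exactly why realistic versions of this problem are handed to MILP solvers.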
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy-to-compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
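The special case stated above can be checked numerically: fit a main-terms Poisson working model by iteratively reweighted least squares on a randomized trial whose true outcome model is deliberately not main-terms, and observe that the treatment coefficient still recovers the marginal log rate ratio. The simulated data-generating model below is invented purely to create misspecification.

```python
import numpy as np

def poisson_irls(X, y, n_iter=100):
    """Poisson regression MLE (log link) via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu            # working response
        XtW = X.T * mu                     # IRLS weights equal mu for the log link
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(4)
n = 20000
A = rng.integers(0, 2, n)                              # randomized treatment
W = rng.uniform(0.0, 1.0, n)                           # baseline covariate
y = rng.poisson(np.exp(0.5 * A + np.sin(3.0 * W)))     # truth is NOT main-terms
X = np.column_stack([np.ones(n), A, W])                # misspecified working model
beta = poisson_irls(X, y)
print(beta[1])   # close to 0.5, the marginal log rate ratio log(E[Y|A=1]/E[Y|A=0])
```

Because W is independent of A by randomization, the true marginal log rate ratio factorizes to 0.5 here, and the misspecified fit still targets it, which is the abstract's claim.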
Directory of Open Access Journals (Sweden)
Eusebio Eduardo Hernández Martinez
2013-01-01
Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time-consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables, namely the pose of the mobile platform, should be computed. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system may be non-convex, Newton and Quasi-Newton (e.g. Dogleg) solvers can get trapped in local minima. The capacity of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of this starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations and, of course, to non-linear optimization problems.
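Why the starting point matters can be shown on a toy 2-RPR-style DKP: the platform point must lie at given distances from two fixed anchors, so the non-linear system has two solution branches, and Newton's method converges to whichever branch the initial guess favours. The mechanism, dimensions, and starting points below are invented; the article's probabilistic-learning selection of the start is not shown.

```python
import numpy as np

def dkp_residual(p, L1, L2, d):
    # Toy 2-RPR DKP: point at distance L1 from (0,0) and L2 from (d,0)
    x, y = p
    return np.array([x * x + y * y - L1 * L1, (x - d) ** 2 + y * y - L2 * L2])

def newton(p, L1, L2, d, tol=1e-12, n_iter=50):
    for _ in range(n_iter):
        x, y = p
        J = np.array([[2 * x, 2 * y], [2 * (x - d), 2 * y]])  # analytic Jacobian
        step = np.linalg.solve(J, dkp_residual(p, L1, L2, d))
        p = p - step
        if np.linalg.norm(step) < tol:
            break
    return p

L1, L2, d = 5.0, 5.0, 6.0
sol_up = newton(np.array([3.0, 1.0]), L1, L2, d)    # informed start -> (3, 4)
sol_dn = newton(np.array([3.0, -1.0]), L1, L2, d)   # poor start -> mirror (3, -4)
print(sol_up, sol_dn)
```

Both runs converge, but to different assembly modes; a start on the singular line y = 0 would make the Jacobian singular. Selecting the start from a learned distribution over plausible poses is precisely what removes the need for human intervention.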
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine-tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
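For orientation, the generalized Born solvation energy itself (in the standard Still et al. pairwise form, with effective Born radii taken as given; the OBC scheme's contribution is precisely the geometry-dependent computation of those radii) can be written down directly. Units are Coulomb-law units with the constant absorbed, purely for illustration.

```python
import numpy as np

def gb_energy(q, R, pos, eps_in=1.0, eps_out=78.5):
    """Generalized Born solvation energy, Still et al. pairwise form.
    q: charges, R: effective Born radii (assumed precomputed), pos: (n,3) coords."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    E = 0.0
    n = len(q)
    for i in range(n):
        for j in range(n):                 # i == j terms give the Born self-energies
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            fgb = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4 * R[i] * R[j])))
            E += pref * q[i] * q[j] / fgb
    return E

# Single ion: reduces to the Born formula -0.5*(1/eps_in - 1/eps_out)*q^2/R
q, R = np.array([1.0]), np.array([2.0])
pos = np.zeros((1, 3))
print(gb_energy(q, R, pos))
```

Solvation forces, the quantity the paper focuses on, are the negative gradients of this energy with respect to `pos`, including the dependence of the radii on geometry.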
Langenbucher, Frieder
2005-01-01
A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and by the initial condition, i.e. in which compartment(s) the drug is present at the beginning. The generalized solution is the set of time profiles of the drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
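The polyexponential solution follows from diagonalizing the rate-constant matrix: for da/dt = K a, the amounts are a(t) = V e^{Λt} V⁻¹ a(0). A sketch for a hypothetical two-compartment model (rate constants invented for illustration), using NumPy in place of the article's Excel worksheet:

```python
import numpy as np

# Two-compartment model: k12 (1->2), k21 (2->1), k10 (elimination from 1)
k12, k21, k10 = 0.5, 0.3, 0.2
K = np.array([[-(k12 + k10), k21],
              [k12,          -k21]])      # d a/dt = K a
a0 = np.array([100.0, 0.0])               # entire dose starts in compartment 1

lam, V = np.linalg.eig(K)                 # eigenvalues give the exponents
c = np.linalg.solve(V, a0)                # coefficients of the polyexponentials

def amounts(t):
    """Polyexponential solution a(t) = V diag(e^{lam t}) V^{-1} a0."""
    return (V * np.exp(lam * t)) @ c

for t in [0.0, 1.0, 5.0]:
    print(t, amounts(t))
```

The eigenvalues are the (negative) hybrid rate constants and the columns of `V` scale the exponential terms, exactly the coefficients a worksheet-based implementation tabulates.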
Klein, Jens; Lüdecke, Daniel; Hofreuter-Gätgens, Kerstin; Fisch, Margit; Graefen, Markus; von dem Knesebeck, Olaf
2017-09-01
To examine income-related disparities in health-related quality of life (HRQOL) over a one-year period after surgery (radical prostatectomy) and their contributory factors from a longitudinal perspective. Evidence of associations between income and HRQOL among patients with prostate cancer (PCa) is sparse, and their explanations still remain unclear. 246 males from two German hospitals filled out a questionnaire at the time of acute treatment and 6 and 12 months later. Age, partnership status, baseline disease and treatment factors, physical and psychological comorbidities, as well as treatment factors and adverse effects at follow-up, were additionally included in the analyses to explain potential disparities. HRQOL was assessed with the EORTC (European Organisation for Research and Treatment of Cancer) QLQ-C30 core questionnaire and the prostate-specific QLQ-PR25. A linear mixed model for repeated measures was calculated. The fixed effects showed highly significant income-related inequalities regarding the majority of HRQOL scales. Less affluent PCa patients reported lower HRQOL in terms of global quality of life, all functional scales and urinary symptoms. After introducing relevant covariates, some associations became insignificant (physical, cognitive and sexual function), while others only showed reduced estimates (global quality of life, urinary symptoms, role, emotional and social function). In particular, mental disorders/psychological comorbidity played a relevant role in the explanation of income-related disparities. One year after surgery, income-related disparities in various dimensions of HRQOL persist. With respect to economically disadvantaged PCa patients, the findings emphasize the importance of continuous psychosocial screening and tailored interventions, of patients' empowerment, and of improved access to supportive care.
Webb, Marianne Julie; Wadley, Greg; Sanci, Lena Amanda
2017-08-11
Despite experiencing a high prevalence and co-occurrence of mental health disorders and health-compromising behaviors, young people tend not to seek professional help for these concerns. However, they do regularly attend primary care, making primary care providers ideally situated to identify and discuss mental health and lifestyle issues as part of young people's routine health care. The aim was to investigate whether using a codesigned health and lifestyle-screening app, Check Up GP, in general practice influenced young people's assessment of the quality of their care (measures of patient-centered care and youth friendliness), and their disclosure of sensitive issues. In addition, this study aimed to explore young people's acceptance and experience of using a screening app during regular health care. This was a mixed methods implementation study of Check Up GP with young people aged 14 to 25 years attending a general practice clinic in urban Melbourne, Australia. A 1-month treatment-as-usual group was compared to a 2-month intervention group in which young people and their general practitioners (GPs) used Check Up GP. Young people in both groups completed an exit survey immediately after their consultation about disclosure, patient-centered and youth-friendly care, and judgment. In addition, participants in the intervention group were surveyed about app acceptability and usability and their willingness to use it again. Semistructured interviews with participants in the intervention group expanded on themes covered in the survey. The exit survey was completed by 30 young people in the treatment-as-usual group and 85 young people in the intervention group. Young people using Check Up GP reported greater disclosure of health issues (Ptime to ask questions. In all, 86% (73/85) of young people felt the app was a "good idea" and only 1% (1/85) thought it a "bad idea." Thematic analysis of qualitative interviews with 14 participants found that Check Up GP created scope