WorldWideScience

Sample records for regression models describing

  1. Ordinal regression models to describe tourist satisfaction with Sintra's world heritage

    Science.gov (United States)

    Mouriño, Helena

    2013-10-01

    In Tourism Research, ordinal regression models are becoming a very powerful tool for modelling the relationship between an ordinal response variable and a set of explanatory variables. In August and September 2010, we conducted a pioneering tourist survey in Sintra, Portugal. The data were obtained by face-to-face interviews at the entrances of the Palaces and Parks of Sintra. The work developed in this paper focuses on two main points: tourists' perception of the entrance fees, and their overall level of satisfaction with this heritage site. To attain these goals, ordinal regression models were developed. We concluded that tourists' nationality was the only significant variable for describing the perception of the admission fees. Also, Sintra's image among tourists depends not only on their nationality, but also on previous knowledge of Sintra's World Heritage status.

  2. Describing Growth Pattern of Bali Cows Using Non-linear Regression Models

    Directory of Open Access Journals (Sweden)

    Mohd. Hafiz A.W

    2016-12-01

    The objective of this study was to evaluate the best-fitting non-linear regression model for describing the growth pattern of Bali cows. Estimates of asymptotic mature weight, rate of maturing and constant of integration were derived from the Brody, von Bertalanffy, Gompertz and Logistic models, which were fitted to cross-sectional body weight data from 74 Bali cows raised at the MARDI Research Station, Muadzam Shah, Pahang. The coefficient of determination (R2) and residual mean square (MSE) were used to determine the best-fitting model for describing the growth pattern of Bali cows. The von Bertalanffy model was the best of the four growth functions evaluated for determining the mature weight of Bali cattle, as shown by the highest R2 and lowest MSE values (0.973 and 601.9, respectively), followed by the Gompertz (0.972 and 621.2), Logistic (0.971 and 648.4) and Brody (0.932 and 660.5) models. The correlation between rate of maturing and mature weight was negative, in the range of -0.170 to -0.929 for all models, indicating that animals of heavier mature weight had a lower rate of maturing. The use of a non-linear model can summarize the weight-age relationship in a few biologically interpretable parameters, compared with the entire lifespan of weight-age data points, which is difficult and time-consuming to interpret.
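
    As an illustration of the model-fitting approach described above, the sketch below fits a von Bertalanffy curve W(t) = A(1 - b·e^(-kt))^3 to weight-age data and computes R2 and residual MSE. The data values and starting parameters are hypothetical, invented for the example; this is not the study's code.

```python
# Minimal sketch: fitting a von Bertalanffy growth curve with scipy.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, A, b, k):
    # A: asymptotic mature weight, b: constant of integration, k: rate of maturing
    return A * (1.0 - b * np.exp(-k * t)) ** 3

# Hypothetical age (months) and body weight (kg) observations.
age = np.array([3, 6, 12, 18, 24, 36, 48, 60], dtype=float)
weight = np.array([45, 70, 110, 150, 180, 220, 245, 255], dtype=float)

params, _ = curve_fit(von_bertalanffy, age, weight, p0=[260.0, 0.6, 0.05])
pred = von_bertalanffy(age, *params)
resid = weight - pred
r2 = 1.0 - resid @ resid / np.sum((weight - weight.mean()) ** 2)
mse = resid @ resid / (len(weight) - len(params))   # residual mean square
print(f"A={params[0]:.1f} kg, b={params[1]:.3f}, k={params[2]:.4f}, R2={r2:.3f}, MSE={mse:.1f}")
```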

  3. Regression models describing Rosa hybrida response to day/night temperature and photosynthetic photon flux

    International Nuclear Information System (INIS)

    Hopper, D.A.; Hammer, P.A.

    1991-01-01

    A central composite rotatable design was used to estimate quadratic equations describing the relationship of irradiance, as measured by photosynthetic photon flux (PPF), and day (DT) and night (NT) temperatures to the growth and development of Rosa hybrida L. in controlled environments. Plants were subjected to 15 treatment combinations of PPF, DT, and NT according to the coding of the design matrix. Day and night length were each 12 hours. Environmental factor ranges were chosen to include conditions representative of winter and spring commercial greenhouse production environments in the midwestern United States. After an initial hard pinch, 11 plant growth characteristics were measured every 10 days and at flowering. Four plant characteristics were recorded to describe flower bud development. Response surface equations were displayed as three-dimensional plots, with DT and NT as the base axes and the plant character on the z-axis while PPF was held constant. Response surfaces illustrated the plant response to interactions of DT and NT, while comparisons between plots at different PPF showed the overall effect of PPF. Canonical analysis of all regression models revealed the stationary point and general shape of the response surface. All stationary points of the significant models were located outside the original design space, and all but one surface was saddle-shaped. Both the plots and the analysis showed greater stem diameter, as well as higher fresh and dry weights of stems, leaves, and flower buds at flowering, under combinations of low DT (less than or equal to 17C) and low NT (less than or equal to 14C). However, low DT and NT delayed both visible bud formation and development to flowering. Increased PPF improved overall flower stem quality by increasing stem diameter and the fresh and dry weights of all plant parts at flowering, and decreased the time to visible bud formation and flowering. These results summarize measured development at …
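
    For reference, the quadratic equations estimated from a central composite rotatable design in three factors typically take the standard second-order response-surface form (a generic form, not the paper's fitted coefficients):

\[
\hat{y} = b_0 + b_1\,\mathrm{DT} + b_2\,\mathrm{NT} + b_3\,\mathrm{PPF}
        + b_{11}\,\mathrm{DT}^2 + b_{22}\,\mathrm{NT}^2 + b_{33}\,\mathrm{PPF}^2
        + b_{12}\,\mathrm{DT}\,\mathrm{NT} + b_{13}\,\mathrm{DT}\,\mathrm{PPF} + b_{23}\,\mathrm{NT}\,\mathrm{PPF}
\]

    Canonical analysis then locates the stationary point by solving the gradient system and classifies the surface (maximum, minimum, or saddle) from the eigenvalues of the matrix of second-order coefficients.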

  4. Multilevel regression models describing regional patterns of invertebrate and algal responses to urbanization across the USA

    Science.gov (United States)

    Cuffney, T.F.; Kashuba, R.; Qian, S.S.; Alameddine, I.; Cha, Y.K.; Lee, B.; Coles, J.F.; McMahon, G.

    2011-01-01

    Multilevel hierarchical regression was used to examine regional patterns in the responses of benthic macroinvertebrates and algae to urbanization across nine metropolitan areas of the conterminous USA. Linear regressions established that the responses (intercepts and slopes) of invertebrates and algae to urbanization varied among metropolitan areas. Multilevel hierarchical regression models were able to explain these differences on the basis of region-scale predictors. Regional differences in the type of land cover (agriculture or forest) being converted to urban land and in climatic factors (precipitation and air temperature) accounted for the differences in the response of macroinvertebrates to urbanization based on ordination scores, total richness, Ephemeroptera, Plecoptera, Trichoptera richness, and average tolerance. Regional differences in climate and antecedent agriculture also accounted for differences in the responses of salt-tolerant diatoms, but differences in the responses of other diatom metrics (% eutraphentic, % sensitive, and % silt tolerant) were best explained by regional differences in soils (mean % clay soils). The effects of urbanization were most readily detected in regions where forest lands were being converted to urban land, because agricultural development significantly degraded assemblages before urbanization and made detection of urban effects difficult. The effects of climatic factors (temperature, precipitation) on background conditions (biogeographic differences) and rates of response to urbanization were most apparent after accounting for the effects of agricultural development. The effects of climate and land cover on responses to urbanization provide strong evidence that monitoring, mitigation, and restoration efforts must be tailored to specific regions and that attainment goals (background conditions) may not be achievable in regions with high levels of prior disturbance (e.g., agricultural development). © 2011 by The North American Benthological Society.
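
    A generic two-level form of the multilevel hierarchical regression described above (an illustrative sketch of the model class, not the authors' exact specification) is:

\[
y_{ij} = \beta_{0j} + \beta_{1j}\,u_{ij} + \varepsilon_{ij}, \qquad
\beta_{0j} = \gamma_{00} + \gamma_{01} Z_j + a_{0j}, \qquad
\beta_{1j} = \gamma_{10} + \gamma_{11} Z_j + a_{1j},
\]

    where \(y_{ij}\) is the assemblage response (e.g., richness or an ordination score) at site \(i\) in metropolitan area \(j\), \(u_{ij}\) is urbanization intensity, and \(Z_j\) is a region-scale predictor such as antecedent agriculture, precipitation, or air temperature; the region-specific intercepts and slopes are thus themselves modeled by the region-scale predictors.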

  5. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    Science.gov (United States)

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but it is an area lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli can be added or exchanged with other stimuli, either intrinsic or extrinsic to the participant, in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning skills, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by any level of skilled clinician. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Random regression models with different residual variance structures for describing litter size in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante-Neto

    2011-12-01

    The objective of this work was to compare random regression models with different residual variance structures, in order to find the best model for the trait litter size at birth (LSB) in swine. A total of 1,701 LSB records were analyzed by means of a single-trait random regression animal model. The fixed and random regressions were represented by continuous functions of farrowing order, fitted with third-order orthogonal Legendre polynomials. To determine the best structure for the residual variance, heterogeneity of variance was considered through 1 to 7 residual variance classes. The general model of analysis included contemporary group as a fixed effect; fixed regression coefficients to model the mean trajectory of the population; random regression coefficients for the direct additive genetic, common litter, and animal permanent environment effects; and the random residual effect. The likelihood ratio test, the Akaike information criterion and the Schwarz Bayesian information criterion indicated that the model assuming homogeneity of variance provided the best fit to the data. Heritability estimates were close to zero (0.002 to 0.006). The permanent environment effect increased from the 1st (0.06) to the 5th (0.28) parity, but decreased from that point to the 7th (0.18). The common litter effect showed low values (0.01 to 0.02). Assuming homogeneous residual variance was the most adequate way to model the variances associated with litter size at birth in this data set.
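
    A small sketch of the covariate construction implied above: third-order Legendre polynomials evaluated on farrowing order rescaled to [-1, 1]. The parity values are hypothetical; this only illustrates the regression basis, not the study's code.

```python
# Building Legendre polynomial regressors for a random regression model.
import numpy as np
from numpy.polynomial import legendre

parity = np.arange(1.0, 8.0)                                             # farrowing orders 1..7
x = 2.0 * (parity - parity.min()) / (parity.max() - parity.min()) - 1.0  # rescale to [-1, 1]
Phi = legendre.legvander(x, 3)                                           # columns P0(x)..P3(x)
print(Phi.round(3))   # one row of basis covariates per parity
```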

  7. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring and diagnostics of normal and anomalous conditions. A diagnostic method is described that uses a regression-type diagnostic data base and regression spectral diagnostics. The diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor is described. (author)

  8. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted, so the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … - Annette J. Dobson, Biometrics…

  9. Random regression models to describe the genetic variation of milk yield in the Holstein breed

    Directory of Open Access Journals (Sweden)

    Cláudio Vieira de Araújo

    2006-06-01

    Milk yield records from 68,523 test-day records of 8,536 Holstein cows, calving from 1996 to 2001, were used to compare random regression models for estimating variance components. The test-day records were first analyzed as multiple traits, considering each test day a distinct trait. The same records were then analyzed as longitudinal data by means of random regression models that differed in the function used to describe the trajectory of the lactation curve. The functions used were Wilmink's exponential function, the Ali and Schaeffer function, and Legendre polynomials of second and fourth order. The models were compared on the following criteria: the estimates of variance components obtained from the multiple-trait model and by random regression; the residual variance values; and the values of the log-likelihood function. Heritability estimates obtained with the multiple-trait models ranged from 0.110 to 0.244. For the random regression models, these values ranged from 0.127 to 0.301, with the largest estimates observed in the models with more parameters. The random regression models that used Legendre polynomials best described the genetic variation of milk yield.

  10. (Non) linear regression modelling

    NARCIS (Netherlands)

    Cizek, P.; Gentle, J.E.; Hardle, W.K.; Mori, Y.

    2012-01-01

    We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1,…,Yl), l ∈ N, which are explained by a model, and independent (exogenous, explanatory) variables X = (X1,…,Xp), p ∈ N, which explain or …

  11. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  12. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  13. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable follows a Poisson distribution. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
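
    One standard way to write a regression correlation coefficient of this kind (a generic definition consistent with the abstract's description; the paper's modification may differ) is:

\[
\rho\big(Y, E(Y\mid X)\big) \;=\; \frac{\operatorname{Cov}\big(Y,\, E(Y\mid X)\big)}{\sqrt{\operatorname{Var}(Y)\,\operatorname{Var}\big(E(Y\mid X)\big)}},
\qquad E(Y\mid X) = \exp\!\big(\mathbf{x}^{\top}\boldsymbol{\beta}\big)
\]

    for a Poisson regression with the canonical log link.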

  14. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

  15. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  16. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R2) indicates the importance of the independent variables in the outcome.
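
    A short worked sketch of the abstract's Y = a + bx: estimating the intercept, slope and R2 by ordinary least squares on hypothetical clinical data (the variables and numbers below are invented for illustration).

```python
# Fitting Y = a + b*x by least squares and computing R^2.
import numpy as np

x = np.array([20, 25, 30, 35, 40, 45], dtype=float)         # e.g., age in years
y = np.array([110, 114, 119, 123, 128, 133], dtype=float)   # e.g., systolic blood pressure

b, a = np.polyfit(x, y, 1)          # slope b, intercept a
y_hat = a + b * x
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"a={a:.2f}, b={b:.3f}, R2={r2:.3f}")
```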

  17. Frameworks for understanding and describing business models

    DEFF Research Database (Denmark)

    Nielsen, Christian; Roslender, Robin

    2014-01-01

    This chapter provides, in a chronological fashion, an introduction to six frameworks that one can apply to describing, understanding and also potentially innovating business models. These six frameworks have been chosen carefully as they represent six very different perspectives on business models and in this manner "complement" each other. There are a multitude of varying frameworks that could be chosen from, and we urge the reader to search and trial these for themselves. The six chosen models (year of release in parentheses) are: • Service-Profit Chain (1994) • Strategic Systems Auditing (1997) • Strategy Maps (2001) • Intellectual Capital Statements (2003) • Chesbrough's framework for Open Business Models (2006) • Business Model Canvas (2008).

  18. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  19. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Emil Banning; Møller, Jan K.; Morales, Juan Miguel

    2017-01-01

    … Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified by the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters; the number of parameters in the proposed model is reduced using B-splines.

  20. Inhomogeneous Markov Models for Describing Driving Patterns

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Møller, Jan Kloppenborg; Morales González, Juan Miguel

    … Specifically, an inhomogeneous Markov model that captures the diurnal variation in the use of a vehicle is presented. The model is defined by the time-varying probabilities of starting and ending a trip, and is justified by the uncertainty associated with the use of the vehicle. The model is fitted to data collected from the actual utilization of a vehicle. Inhomogeneous Markov models imply a large number of parameters; the number of parameters in the proposed model is reduced using B-splines.

  1. Using Metaphorical Models for Describing Glaciers

    Science.gov (United States)

    Felzmann, Dirk

    2014-01-01

    To date, there has only been little conceptual change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists…

  2. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    … environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as the model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast …

  3. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  4. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, it has some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function, and we investigate the performance of the model. We discuss inference, namely how to estimate the parameters and hyper-parameters and how to predict. Furthermore, simulation studies are presented in the last section.

  5. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  6. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.

  7. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable selection.

  8. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  9. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
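
    As a companion to the abstract above, the sketch below computes two classical influence diagnostics for simple linear regression, leverage (hat values) and Cook's distance, on invented data containing one obvious influential point. It illustrates the general technique, not the three specific methods of the article.

```python
# Leverage and Cook's distance for a simple linear regression.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 20], dtype=float)   # 20 is a high-leverage point
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 8.2, 25.0])

X = np.column_stack([np.ones_like(x), x])               # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)           # leverages (hat-matrix diagonal)
e = y - X @ beta                                        # residuals
p = X.shape[1]
s2 = e @ e / (len(y) - p)                               # residual variance estimate
cooks_d = (e ** 2 / (p * s2)) * h / (1 - h) ** 2        # Cook's distance per observation
print(np.round(h, 3))
print(np.round(cooks_d, 3))
```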

  10. Regression Models for Market-Shares

    DEFF Research Database (Denmark)

    Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

    2005-01-01

    On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.

  11. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined that 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.

  12. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA's Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  13. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a…

  14. Regression Models For Multivariate Count Data.

    Science.gov (United States)

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  15. Testing homogeneity in Weibull-regression models.

    Science.gov (United States)

    Bolfarine, Heleno; Valença, Dione M

    2005-10-01

    In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.

  16. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  17. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression, …

  18. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analysis based on the case-weights perturbation scheme, the response perturbation scheme, the covariate perturbation scheme, and the within-variance perturbation scheme is explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variance to obtain the local influence measure, which has the advantage of being able to compare the influence magnitude of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  19. AIRLINE ACTIVITY FORECASTING BY REGRESSION MODELS

    Directory of Open Access Journals (Sweden)

    Н. Білак

    2012-04-01

    Linear and nonlinear regression models are proposed that take into account the trend equation and seasonality indices for analyzing and reconstructing the volume of passenger traffic over past periods and predicting it for future years, together with an algorithm for forming these models based on statistical analysis over the years. The resulting model is the first step toward the synthesis of more complex models, which will enable forecasting of airline passenger volume (and income) with the highest accuracy and timeliness.

  20. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak

  1. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider the Gaussian kernel as the weighting function. GWR uses the diagonal matrix resulting from evaluating the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data, so that a model can be obtained for each location. The purpose of this paper is to model poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function, and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The determination coefficient R2 is 68.64%. There are two categories of districts, influenced by different significant factors.
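
    For reference, the Gaussian kernel weighting and the local estimator used in GWR can be written as (standard GWR formulas, with the bandwidth b assumed fixed here):

\[
w_{ij} = \exp\!\left[-\frac{1}{2}\left(\frac{d_{ij}}{b}\right)^{2}\right], \qquad
\hat{\boldsymbol{\beta}}(u_i) = \left(X^{\top} W(u_i)\, X\right)^{-1} X^{\top} W(u_i)\, \mathbf{y},
\]

    where \(d_{ij}\) is the distance between locations \(i\) and \(j\) and \(W(u_i)\) is the diagonal matrix of kernel weights mentioned in the abstract, so that a separate coefficient vector is estimated for each location.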

  2. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    Science.gov (United States)

    Brown, Angus M

    2006-04-01

    The objective of this study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the difference between the sum of the squares of the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
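
    A minimal Python analogue of the SOLVER workflow described above: least-squares fitting of a sum of two Gaussian functions to a noisy synthetic waveform (the signal and parameters are invented; the paper itself performs this in Excel).

```python
# Fitting a sum of Gaussians by iterative least squares, as SOLVER does.
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-((t - mu) ** 2) / (2.0 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
y = sum_of_gaussians(t, 1.0, 3.0, 0.5, 0.6, 5.5, 0.8) + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(sum_of_gaussians, t, y, p0=[1, 3, 1, 0.5, 6, 1])
print(np.round(popt, 3))   # recovered amplitudes, centers, widths
```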

  3. Random regression models for detection of gene by environment interaction

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-02-01

    Abstract: Two random regression models, in which the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between the intercept and slope of the random regression, while the other model restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison was made with regard to the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL effects were assumed either to be independent of the environment or to be a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between the intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

  4. Adaptive regression for modeling nonlinear relationships

    CERN Document Server

    Knafl, George J

    2016-01-01

    This book presents methods for investigating whether relationships are linear or nonlinear and for adaptively fitting appropriate models when they are nonlinear. Data analysts will learn how to incorporate nonlinearity in one or more predictor variables into regression models for different types of outcome variables. Such nonlinear dependence is often not considered in applied research, yet nonlinear relationships are common and so need to be addressed. A standard linear analysis can produce misleading conclusions, while a nonlinear analysis can provide novel insights into data, not otherwise possible. A variety of examples of the benefits of modeling nonlinear relationships are presented throughout the book. Methods are covered using what are called fractional polynomials based on real-valued power transformations of primary predictor variables combined with model selection based on likelihood cross-validation. The book covers how to formulate and conduct such adaptive fractional polynomial modeling in the s...

  5. Model checking biological systems described using ambient calculus

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Priami, Corrado; Qualia, Paola

    2005-01-01

    Model checking biological systems described using ambient calculus. In Proc. of the second International Workshop on Computational Methods in Systems Biology (CMSB04), Lecture Notes in Bioinformatics 3082:85-103, Springer, 2005.

  6. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    We explore Bayesian inference of a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior: a multivariate normal distribution for the regression coefficients and an inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such a structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  7. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  8. Forecasting Ebola with a regression transmission model

    OpenAIRE

    Asher, Jason

    2017-01-01

    We describe a relatively simple stochastic model of Ebola transmission that was used to produce forecasts with the lowest mean absolute error among Ebola Forecasting Challenge participants. The model enabled prediction of peak incidence, the timing of this peak, and final size of the outbreak. The underlying discrete-time compartmental model used a time-varying reproductive rate modeled as a multiplicative random walk driven by the number of infectious individuals. This structure generalizes …

  9. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

  10. Forecasting Ebola with a regression transmission model

    Directory of Open Access Journals (Sweden)

    Jason Asher

    2018-03-01

    We describe a relatively simple stochastic model of Ebola transmission that was used to produce forecasts with the lowest mean absolute error among Ebola Forecasting Challenge participants. The model enabled prediction of peak incidence, the timing of this peak, and final size of the outbreak. The underlying discrete-time compartmental model used a time-varying reproductive rate modeled as a multiplicative random walk driven by the number of infectious individuals. This structure generalizes traditional Susceptible-Infected-Recovered (SIR) disease modeling approaches and allows for the flexible consideration of outbreaks with complex trajectories of disease dynamics. Keywords: Ebola, Forecasting, Mathematical modeling, Bayesian inference
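
    A hedged sketch of the model class described in the abstract: a discrete-time SIR-type process whose reproductive rate follows a multiplicative random walk. All numbers (population, volatility, infectious period) are invented for illustration, and the real model is fitted by Bayesian inference rather than simulated forward like this.

```python
# Forward simulation of an SIR-like model with a random-walk reproductive rate.
import numpy as np

rng = np.random.default_rng(1)
T, N = 60, 1_000_000            # weeks, population size (hypothetical)
S, I = N - 10, 10               # susceptibles, infectious
R = 1.8                         # initial reproductive rate (assumed)
sigma = 0.1                     # random-walk volatility (assumed)

incidence = []
for t in range(T):
    R *= np.exp(rng.normal(0.0, sigma))        # multiplicative random walk
    new_inf = min(rng.poisson(R * I * S / N), S)
    S -= new_inf
    I = new_inf                                # ~one-week infectious period (simplification)
    incidence.append(new_inf)
print(max(incidence), int(np.argmax(incidence)))  # peak size and timing
```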

  11. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with focus on high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose the rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and the rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of alternating direction method of multipliers, nearest correlation matrix projection is introduced that inherits sampling properties of the unprojected one. Our work combines strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality for high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides the provable prediction interval under the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze the protein mass spectroscopy data.

  12. Crime Modeling using Spatial Regression Approach

    Science.gov (United States)

    Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.

    2018-01-01

    Acts of criminality in Indonesia increase in both variety and quantity every year. Murder, rape, assault, vandalism, theft, fraud, fencing, and other cases make people feel unsafe. The risk of society being exposed to crime is measured here by the number of cases reported to the police; the more cases reported, the higher the level of crime in the region. In this research, criminality in South Sulawesi, Indonesia is modeled with the dependent variable being the number of people exposed to the risk of crime. Modeling is done with an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables used are population density, the number of poor people, GDP per capita, unemployment and the human development index (HDI). The spatial regression analysis shows that there are no spatial dependencies, in either lag or error, in South Sulawesi.

  13. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is one of the prime fields in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present various important features to use when predicting housing prices with good accuracy. We describe regression models using various features so as to lower the residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to achieve a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus points toward the best application of regression models, in addition to other techniques, to optimize the result.
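
    An illustrative sketch of the pipeline the abstract describes: polynomial feature expansion plus ridge regularization to curb overfitting. The feature set and prices are hypothetical; the paper's actual features and data differ.

```python
# Polynomial features + ridge regression for house price prediction.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge

X = np.array([[1200, 2], [1500, 3], [1700, 3], [2100, 4], [2500, 4]], dtype=float)  # sqft, bedrooms
y = np.array([200_000, 260_000, 280_000, 340_000, 400_000], dtype=float)

model = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X, y)
print(model.predict([[1800.0, 3.0]]))   # predicted price for an unseen house
```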

  14. Statistical models describing the energy signature of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Thavlov, Anders

    2010-01-01

    Approximately one third of the primary energy production in Denmark is used for heating in buildings. Therefore efforts to accurately describe and improve the energy performance of the building mass are very important. For this purpose, statistical models describing the energy signature of a building … or varying energy prices. The paper gives an overview of statistical methods and applied models based on experiments carried out in FlexHouse, which is an experimental building in SYSLAB, Risø DTU. The models are of different complexity and can provide estimates of physical quantities such as UA-values, time constants of the building, and other parameters related to the heat dynamics. A method for selecting the most appropriate model for a given building is outlined, and finally a perspective on the applications is given. Acknowledgements to the Danish Energy Saving Trust and the Interreg IV "Vind i…"

  15. Models for describing the thermal characteristics of building components

    DEFF Research Database (Denmark)

    Jimenez, M.J.; Madsen, Henrik

    2008-01-01

    For the analysis of these tests, dynamic analysis models and methods are required. However, a wide variety of models and methods exists, and the problem of choosing the most appropriate approach for each particular case is a non-trivial and interdisciplinary task. Knowledge of a large family of these approaches may therefore be very useful for selecting a suitable approach for each particular case. This paper presents an overview of models that can be applied for modelling the thermal characteristics of buildings and building components using data from outdoor testing. The choice of approach depends ... The characteristics of each type of model are highlighted, and some available software tools for each of the methods described are mentioned. A case study demonstrating the difference between linear and nonlinear models is also considered.

  16. Icosahedral symmetry described by an incommensurately modulated crystal structure model

    DEFF Research Database (Denmark)

    Wolny, Janusz; Lebech, Bente

    1986-01-01

    A crystal structure model of an incommensurately modulated structure is presented. Although six different reciprocal vectors are used to describe the model, all calculations are done in three dimensions, making calculation of the real-space structure trivial. Using this model, it is shown that both the positions of the Bragg reflections and information about the relative intensities of these reflections are in full accordance with the diffraction patterns reported for microcrystals of the rapidly quenched Al86Mn14 alloy. It is also shown that at least the local structure possesses full icosahedral...

  17. A Model Describing Stable Coherent Synchrotron Radiation in Storage Rings

    International Nuclear Information System (INIS)

    Sannibale, F.

    2004-01-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSY II storage ring. We also use this model to optimize the performance of a source for stable CSR emission.

  18. A model describing stable coherent synchrotron radiation in storage rings

    International Nuclear Information System (INIS)

    Sannibale, F.; Byrd, J.M.; Loftsdottir, A.; Venturini, M.; Abo-Bakr, M.; Feikes, J.; Holldack, K.; Kuske, P.; Wuestefeld, G.; Huebers, H.-W.; Warnock, R.

    2004-01-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSY II storage ring. We also use this model to optimize the performance of a source for stable CSR emission.

  19. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    OpenAIRE

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scalar-response functional multivariate regression model is considered. By using the basis function representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification of multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...

  20. A Kinetic Model Describing Injury-Burden in Team Sports.

    Science.gov (United States)

    Fuller, Colin W

    2017-12-01

    Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained; these measures, however, do not provide an insight into the variable injury burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The aims of this study were to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury, and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury burden arising from match play during a season of rugby union, based on the incidence of match injuries, the rate of recovery from injury and the playing load. The model is equally applicable to other team sports and other scenarios.
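
    The structure described, injuries accruing in proportion to playing load and resolving by first-order kinetics, can be written as dB/dt = i·E(t) − k·B(t), where B is the current injury burden (players unavailable), i the injury incidence per unit exposure, E the exposure rate, and k the recovery rate constant. A toy discrete-time sketch (parameter values are illustrative, not taken from the rugby database):

        import numpy as np

        weeks = np.arange(52)
        exposure = np.where((weeks >= 4) & (weeks <= 40), 40.0, 0.0)  # player-hours/week
        incidence = 0.09      # injuries per player-hour (illustrative)
        k = 0.25              # first-order recovery rate [1/week], mean ~4 weeks out

        burden = np.zeros_like(weeks, dtype=float)   # players unavailable each week
        for t in weeks[:-1]:
            new_injuries = incidence * exposure[t]
            recoveries = k * burden[t]
            burden[t + 1] = burden[t] + new_injuries - recoveries

        print("peak burden: %.1f players, steady state ~ %.1f"
              % (burden.max(), incidence * 40.0 / k))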

  1. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model, over and above what can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other; the relative amount of variance in the criterion variable explained by each of the predictors therefore depends on several factors, such as the order of variable entry and sample specifics. The results show a modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in the social sciences, as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.

  2. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Keywords: Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects.

  3. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
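
    The tanks-in-series view can be sketched as N continuous stirred tank reactors in sequence, each feeding the next; under a simplifying first-order degradation assumption the steady-state balance per tank gives S_out = S_in / (1 + k·τ). A minimal sketch under those assumptions (parameters are illustrative, not the calibrated UASB model):

        import numpy as np

        def uasb_tanks_in_series(S_in, Q, V_total, N, k):
            """Steady-state substrate profile for N equal CSTRs in series
            with first-order degradation: S_out = S_in / (1 + k*tau)."""
            tau = (V_total / N) / Q          # hydraulic residence time per tank
            S = np.empty(N)
            S_prev = S_in
            for i in range(N):
                S[i] = S_prev / (1.0 + k * tau)
                S_prev = S[i]
            return S

        # 5 tanks, 10 m3 total, 0.5 m3/h inflow, k = 0.3 1/h, influent COD 2000 mg/L
        profile = uasb_tanks_in_series(2000.0, 0.5, 10.0, 5, 0.3)
        print(profile)    # substrate concentration leaving each tank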

  4. Using the MWC model to describe heterotropic interactions in hemoglobin

    Science.gov (United States)

    Rapp, Olga

    2017-01-01

    Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both relative oxygen affinity (c) and conformational changes (L), elementary MWC parameters. The recent realization that data fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates for MWC mechanistic parameters of hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variations. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data fitting analysis using the MWC model. PMID:28793329
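
    For reference, the classic two-state MWC saturation function for an n-site protein is Ȳ = [α(1+α)^(n−1) + Lcα(1+cα)^(n−1)] / [(1+α)^n + L(1+cα)^n], with α the ligand concentration scaled by the R-state dissociation constant, c the R/T affinity ratio, and L the T/R equilibrium constant. A small sketch (parameter values are illustrative, not fitted hemoglobin constants):

        import numpy as np

        def mwc_saturation(alpha, L=1e6, c=0.01, n=4):
            """Fractional saturation of an n-site MWC protein.
            alpha: [ligand]/K_R; L: T/R equilibrium constant; c: K_R/K_T ratio."""
            num = alpha * (1 + alpha) ** (n - 1) \
                + L * c * alpha * (1 + c * alpha) ** (n - 1)
            den = (1 + alpha) ** n + L * (1 + c * alpha) ** n
            return num / den

        alpha = np.logspace(-2, 3, 6)
        print(mwc_saturation(alpha))   # sigmoidal, cooperative binding curve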

  5. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as β-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for reliable

  6. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  7. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model.

  8. Can the "standard" unitarized Regge models describe the TOTEM data?

    CERN Document Server

    Alkin, A; Martynov, E

    2013-01-01

    The standard Regge poles are considered as inputs for two unitarization methods: eikonal and U-matrix. It is shown that only models with three input pomerons and two input odderons can describe the high-energy data on $pp$ and $\bar pp$ elastic scattering, including the new data from the Tevatron and the LHC. However, it seems that both considered models require a further modification (e.g. nonlinear reggeon trajectories and/or nonexponential vertex functions) for a more satisfactory description of the data at 19.0 GeV $\leq \sqrt{s} \leq$ 7 TeV and 0.01 $\leq |t| \leq$ 14.2 GeV$^{2}$.

  9. Logistic regression for risk factor modelling in stuttering research.

    Science.gov (United States)

    Reed, Phil; Wu, Yaqionq

    2013-06-01

    To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.
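
    As a generic illustration of the kind of risk-factor modelling described (the predictors and data below are hypothetical, not from any stuttering study), a logistic regression of persistence versus recovery might look like:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 400
        age_at_onset = rng.normal(3.5, 1.0, n)        # years (hypothetical predictor)
        family_history = rng.binomial(1, 0.3, n)      # 1 = positive history (hypothetical)
        logit = -2.0 + 0.4 * age_at_onset + 1.1 * family_history
        persist = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = persists, 0 = recovers

        X = sm.add_constant(np.column_stack([age_at_onset, family_history]))
        fit = sm.Logit(persist, X).fit(disp=False)
        print(fit.params)                 # log-odds coefficients
        print(np.exp(fit.params[1:]))     # odds ratios for the two risk factors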

  10. Experimental investigation of statistical models describing distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyser (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring time (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)
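
    The binomial-versus-Poisson contrast is easy to reproduce numerically: with N0 initial nuclei, detection efficiency ε and measuring time T, the number of recorded counts is binomial with p = ε(1 − exp(−λT)), whose variance N0·p·(1 − p) falls below the Poisson value N0·p as p grows. A quick check (N0 and ε are illustrative, not the experiment's values):

        import numpy as np

        T_half = 16.06                 # s, half-life of Y-89m
        lam = np.log(2) / T_half
        N0, eps = 1e5, 0.35            # initial nuclei, detection efficiency (illustrative)

        for ratio in [0.5, 1.0, 2.0, 4.0]:       # measuring time in units of T_half
            T = ratio * T_half
            p = eps * (1.0 - np.exp(-lam * T))   # prob. a nucleus yields a count
            mean = N0 * p
            sd_binom = np.sqrt(N0 * p * (1.0 - p))
            sd_poisson = np.sqrt(mean)
            print(f"T/T_half={ratio:3.1f}  p={p:.3f}  "
                  f"sd(binomial)={sd_binom:7.1f}  sd(Poisson)={sd_poisson:7.1f}")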

  11. Computer modeling describes gravity-related adaptation in cell cultures.

    Science.gov (United States)

    Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny

    2009-12-16

    Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.

  12. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  13. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

  14. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  15. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  16. Communication skills training: describing a new conceptual model.

    Science.gov (United States)

    Brown, Richard F; Bylund, Carma L

    2008-01-01

    Current research in communication in physician-patient consultations is multidisciplinary and multimethodological. As this research has progressed, a considerable body of evidence on the best practices in physician-patient communication has been amassed. This evidence provides a foundation for communication skills training (CST) at all levels of medical education. Although the CST literature has demonstrated that communication skills can be taught, one critique of this literature is that it is not always clear which skills are being taught and whether those skills are matched with those being assessed. The Memorial Sloan-Kettering Cancer Center Comskil Model for CST seeks to answer those critiques by explicitly defining the important components of a consultation, based on Goals, Plans, and Actions theories and sociolinguistic theory. Sequenced guidelines as a mechanism for teaching about particular communication challenges are adapted from these other methods. The authors propose that consultation communication can be guided by an overarching goal, which is achieved through the use of a set of predetermined strategies. Strategies are common in CST; however, strategies often contain embedded communication skills. These skills can exist across strategies, and the Comskil Model seeks to make them explicit in these contexts. Separate from the skills are process tasks and cognitive appraisals that need to be addressed in teaching. The authors also describe how assessment practices foster concordance between skills taught and those assessed through careful coding of trainees' communication encounters and direct feedback.

  17. INCAS: an analytical model to describe displacement cascades

    Energy Technology Data Exchange (ETDEWEB)

    Jumel, Stephanie E-mail: stephanie.jumel@edf.fr; Claude Van-Duysen, Jean E-mail: jean-claude.van-duysen@edf.fr

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  18. INCAS: an analytical model to describe displacement cascades

    Science.gov (United States)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  19. INCAS: an analytical model to describe displacement cascades

    International Nuclear Information System (INIS)

    Jumel, Stephanie; Claude Van-Duysen, Jean

    2004-01-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricite de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied on pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it enables to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory

  20. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    When data are affected by multicollinearity in the linear regression framework, concurvity will be present when fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm will always converge to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I errors. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.

  1. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
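
    In other words, for logistic regression with a single covariate the power calculation reduces to a two-proportion problem in which the two groups' log-odds differ by Δ = β · 2 · SD(x), centred so the overall event probability is unchanged. A hedged sketch of that reduction (numeric values are illustrative; statsmodels' standard two-proportion power routine stands in for the paper's formulas):

        import numpy as np
        from statsmodels.stats.proportion import proportion_effectsize
        from statsmodels.stats.power import NormalIndPower

        beta = np.log(1.5)    # slope: odds ratio 1.5 per unit of x (illustrative)
        sd_x = 1.2            # standard deviation of the covariate
        p_bar = 0.3           # overall response probability

        # Equivalent two-sample problem: log-odds differ by beta * 2 * sd_x
        delta = beta * 2 * sd_x
        logit = np.log(p_bar / (1 - p_bar))
        p1 = 1 / (1 + np.exp(-(logit - delta / 2)))
        p2 = 1 / (1 + np.exp(-(logit + delta / 2)))

        es = proportion_effectsize(p1, p2)
        n_per_group = NormalIndPower().solve_power(es, alpha=0.05, power=0.8, ratio=1)
        print(f"approx. total sample size: {2 * np.ceil(n_per_group):.0f}")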

  2. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.
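
    A MIDAS regression of a low-frequency y on m high-frequency lags of x typically parameterizes the lag weights parsimoniously, for example with exponential Almon weights w_j(θ) ∝ exp(θ1·j + θ2·j²). A minimal sketch of constructing the weighted regressor (variable names and frequencies are illustrative):

        import numpy as np

        def exp_almon_weights(m, theta1, theta2):
            """Exponential Almon lag weights, normalized to sum to one."""
            j = np.arange(m, dtype=float)
            w = np.exp(theta1 * j + theta2 * j ** 2)
            return w / w.sum()

        rng = np.random.default_rng(4)
        T, m = 120, 22                   # 120 months of y, 22 daily lags of x per month
        x_hf = rng.normal(size=(T, m))   # high-frequency regressor, blocked by month
        w = exp_almon_weights(m, 0.05, -0.02)
        x_midas = x_hf @ w               # distributed-lag aggregate, one value per month
        y = 0.8 * x_midas + rng.normal(scale=0.3, size=T)

        # Given theta, the slope is estimated by OLS; in practice theta is found by NLS.
        slope = np.linalg.lstsq(np.column_stack([np.ones(T), x_midas]), y,
                                rcond=None)[0][1]
        print(f"estimated slope: {slope:.2f}")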

  3. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
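
    A brief example of the kind of tuning described, selecting kernel ridge regression hyperparameters from a small grid by cross-validation (scikit-learn's KernelRidge with a Gaussian/RBF kernel; grid values are illustrative):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(5)
        X = rng.uniform(-3, 3, size=(200, 1))
        y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=200)  # smooth signal + noise

        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],   # ridge penalty vs. signal-to-noise
                "gamma": [0.1, 0.5, 1.0, 2.0]}      # kernel width vs. smoothness
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
        search.fit(X, y)
        print(search.best_params_)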

  4. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  5. Bayesian semiparametric regression models to characterize molecular evolution

    Directory of Open Access Journals (Sweden)

    Datta Saheli

    2012-10-01

    Background: Statistical models and methods that associate changes in the physicochemical properties of amino acids with natural selection at the molecular level typically do not take into account the correlations between such properties. We propose a Bayesian hierarchical regression model with a generalization of the Dirichlet process prior on the distribution of the regression coefficients that describes the relationship between the changes in amino acid distances and natural selection in protein-coding DNA sequence alignments. Results: The Bayesian semiparametric approach is illustrated with simulated data and the abalone lysin sperm data. Our method identifies groups of properties which, for this particular dataset, have a similar effect on evolution. The model also provides nonparametric site-specific estimates for the strength of conservation of these properties. Conclusions: The model described here is distinguished by its ability to handle a large number of amino acid properties simultaneously, while taking into account that such data can be correlated. The multi-level clustering ability of the model allows for appealing interpretations of the results in terms of properties that are roughly equivalent from the standpoint of molecular evolution.

  6. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the newly proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  7. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions, namely error structure normality ...

  8. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia; Rue, Haavard

    2018-01-01

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite

  9. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...

  10. Using multistage models to describe radiation-induced leukaemia

    International Nuclear Information System (INIS)

    Little, M.P.; Muirhead, C.R.; Boice, J.D. Jr.; Kleinerman, R.A.

    1995-01-01

    The Armitage-Doll model of carcinogenesis is fitted to data on leukaemia mortality among the Japanese atomic bomb survivors with the DS86 dosimetry and on leukaemia incidence in the International Radiation Study of Cervical Cancer patients. Two different forms of model are fitted: the first postulates up to two radiation-affected stages and the second additionally allows for the presence at birth of a non-trivial population of cells which have already accumulated the first of the mutations leading to malignancy. Among models of the first form, a model with two adjacent radiation-affected stages appears to fit the data better than other models of the first form, including both models with two affected stages in any order and models with only one affected stage. The best fitting model predicts a linear-quadratic dose-response and reductions of relative risk with increasing time after exposure and age at exposure, in agreement with what has previously been observed in the Japanese and cervical cancer data. However, on the whole it does not provide an adequate fit to either dataset. The second form of model appears to provide a rather better fit, but the optimal models have biologically implausible parameters (the number of initiated cells at birth is negative) so that this model must also be regarded as providing an unsatisfactory description of the data. (author)

  11. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were used on five years of data (1998-2002) on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on five years of rainfall and humidity data, which showed that the variances in the rainfall data were not homogeneous while those in the humidity data were. Our results on regression and regression analysis time series show the best fit for prediction modeling of the climatic data of Quetta, Pakistan. (author)

  12. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  13. STREAMFLOW AND WATER QUALITY REGRESSION MODELING ...

    African Journals Online (AJOL)

    ... downstream Obigbo station show: consistent time-trends in degree of contamination; linear and non-linear relationships for water quality models against total dissolved solids (TDS), total suspended sediment (TSS), chloride, pH and sulphate; and non-linear relationship for streamflow and water quality transport models.

  14. A new settling velocity model to describe secondary sedimentation.

    Science.gov (United States)

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
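
    Settling velocity functions of this family are often built from exponential terms in the local solids concentration X; a well-known example (not the authors' new model, which additionally covers transient and compression regimes) is the double-exponential hindered-settling function in the style of Takács, v(X) = v0·(exp(−rh·(X − Xmin)) − exp(−rp·(X − Xmin))), clipped to [0, v0']. A sketch with illustrative parameter values:

        import numpy as np

        def settling_velocity(X, v0=474.0, v0_max=250.0,
                              rh=5.76e-4, rp=2.86e-3, Xmin=10.0):
            """Double-exponential hindered-settling velocity [m/d]
            as a function of solids concentration X [g/m3]."""
            Xs = np.maximum(X - Xmin, 0.0)
            v = v0 * (np.exp(-rh * Xs) - np.exp(-rp * Xs))
            return np.clip(v, 0.0, v0_max)

        X = np.array([50.0, 500.0, 2000.0, 5000.0, 10000.0])
        print(settling_velocity(X))   # velocity drops as the sludge thickens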

  15. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. When the dependent variable is categorical, a logistic regression model is used to calculate the odds; when its categories are ordered, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values from a sample. The purpose of this research is parameter estimation of the GWOLR model using the R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The result is a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  16. Multiattribute shopping models and ridge regression analysis

    NARCIS (Netherlands)

    Timmermans, H.J.P.

    1981-01-01

    Policy decisions regarding retailing facilities essentially involve multiple attributes of shopping centres. If mathematical shopping models are to contribute to these decision processes, their structure should reflect the multiattribute character of retailing planning. Examination of existing

  17. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    The objective is to minimize the processing time and computer memory required to carry out the inversion ... to the mainland by two long bridges ... In this approach, the model converges when the squared sum of the differences ...

  18. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
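
    For contrast with the two-level approach, the standard MMR model that the paper discusses estimating by LS can be written ŷ = b0 + b1·x + b2·z + b3·(x·z), with the product term carrying the moderation effect. A minimal LS sketch (synthetic data; the two-level NML estimator itself is not shown here):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 500
        x = rng.normal(size=n)     # predictor
        z = rng.normal(size=n)     # moderator
        y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

        X = sm.add_constant(np.column_stack([x, z, x * z]))
        fit = sm.OLS(y, X).fit()
        print(fit.params)          # b3 ~ 0.4 is the moderation (interaction) effect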

  19. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Background: Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods: Different regression approaches to predict childhood BMI by goodness-of-fit measures and means of interpretation were compared, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...

  20. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Electric load forecasting is an important issue for power utilities, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto-regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed to compare the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.

  1. A new settling velocity model to describe secondary sedimentation

    DEFF Research Database (Denmark)

    Ramin, Elham; Wágner, Dorottya Sarolta; Yde, Lars

    2014-01-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM...

  2. LCM 3.0: A Language for describing Conceptual Models

    NARCIS (Netherlands)

    Feenstra, Remco; Wieringa, Roelf J.

    1993-01-01

    The syntax of the conceptual model specification language LCM is defined. LCM uses equational logic to specify data types and order-sorted dynamic logic to specify objects with identity and mutable state. LCM specifies database transactions as finite sets of atomic object transitions.

  3. A dynamic data based model describing nephropathia epidemica in Belgium

    NARCIS (Netherlands)

    Amirpour Haredasht, S.; Barrios, J.M.; Maes, P.; Verstraeten, W.W.; Clement, J.; Ducoffre, G.; Lagrou, K.; Van Ranst, M.; Coppin, P.; Berckmans, D.; Aerts, J.M.F.G.

    2011-01-01

    Nephropathia epidemica (NE) is a human infection caused by Puumala virus (PUUV), which is naturally carried and shed by bank voles (Myodes glareolus). Population dynamics and infectious diseases in general, such as NE, have often been modelled with mechanistic SIR (Susceptible, Infective and Removed) ...

  4. A general modeling framework for describing spatially structured population dynamics

    Science.gov (United States)

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance

  5. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

    The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched even-spaced Mott insulator to a chain of H 2 molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the general pessimistic view on such a feasibility.

  6. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611

  7. Modeling of the Monthly Rainfall-Runoff Process Through Regressions

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2014-10-01

    Full Text Available To solve the problems associated with the assessment of the water resources of a river, modeling of the rainfall-runoff process (RRP) allows the deduction of missing runoff data and the extension of its record, since the available information on precipitation is generally larger. It also enables the estimation of inputs to reservoirs when their construction led to the suppression of the gauging station. The simplest mathematical model that can be set up for the RRP is a linear regression or curve fitted on a monthly basis. Such a model is described in detail and is calibrated with the simultaneous record of monthly rainfall and runoff at the Ballesmi hydrometric station, which covers 35 years. Since the runoff of this station has an important contribution from spring discharge, the record is first corrected by removing that contribution. To do this, a procedure was developed based either on the monthly average regional runoff coefficients or on a nearby and similar watershed; in this case the Tancuilín gauging station was used. Both stations belong to Partial Hydrologic Region No. 26 (Lower Rio Panuco) and are located within the state of San Luis Potosi, México. The study performed indicates that the monthly regression model, due to its conceptual approach, faithfully reproduces monthly average runoff volumes and achieves an excellent approximation of the dispersion, as shown by the calculated means and standard deviations.

  8. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    Full Text Available The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances in quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variance and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered in breeding evaluation processes by random regression models.

  9. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes contain enormous spatial and temporal information that is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in a visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations when attention is not being paid. To reproduce the known neuronal response variability, we found that the neurons should receive about equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in vivo. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations

  10. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test for the parameters of multiple linear regression models is developed for conducting tests simultaneously on all the parameters of multiple linear regression models. The test is robust relative to the assumptions of homogeneity of variances and absence of serial correlation of the classical F-test. Under certain null and ...

  11. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    Full Text Available When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr which enables estimating regression models with variables sampled at different frequencies within a MIDAS regression framework put forward in work by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of the MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of the application of MIDAS regression.

  12. Impact of multicollinearity on small sample hydrologic regression models

    Science.gov (United States)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
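
    A compact sketch of that comparison, assuming scikit-learn, with a small simulated sample of highly correlated predictors standing in for the hydrologic data:

        # Compare OLS, principal component regression (PCR) and partial least
        # squares (PLS) under severe multicollinearity.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 30                                    # small sample, as in the study
        z = rng.normal(size=n)
        X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(3)])
        y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=2.0, size=n)

        models = {
            "OLS": LinearRegression(),
            "PCR": make_pipeline(PCA(n_components=1), LinearRegression()),
            "PLS": PLSRegression(n_components=1),
        }
        for name, model in models.items():
            mse = -cross_val_score(model, X, y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
            print(f"{name}: cross-validated MSE = {mse:.2f}")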

  13. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption.

    Science.gov (United States)

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan

    2016-03-01

    Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r² = 0.79); the ratio of BBB disruption volume to IRE volume increased with the number of pulses, and BBB disruption radii were on average 67% ± 11% larger than IRE radii. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independently of the experimental setup.

  14. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Peirce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.

  15. A Gompertz regression model for fern spores germination

    Directory of Open Access Journals (Sweden)

    Gabriel y Galán, Jose María

    2015-06-01

    Full Text Available Germination is one of the most important biological processes for both seed and spore plants, as well as for fungi. At present, mathematical models of germination have been developed for fungi, bryophytes and several plant species. However, ferns are the only group whose germination has never been modelled. In this work we develop a regression model of the germination of fern spores. We have found that for the species Blechnum serrulatum, Blechnum yungense, Cheilanthes pilosa, Niphidium macbridei and Polypodium feuillei, the Gompertz growth model describes cumulative germination satisfactorily. An important result is that the regression parameters are independent of fern species and the model is not affected by intraspecific variation. Our results show that the Gompertz curve represents a general germination model for all the non-green-spored leptosporangiate ferns; the paper also includes a discussion of the physiological and ecological meaning of the model.

  16. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design-based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, so we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
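
    A toy Monte Carlo randomization test in this spirit, assuming complete randomization and a difference in mean residuals as the test statistic (all data simulated):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        age = rng.normal(50, 10, n)                    # baseline covariate
        treat = rng.binomial(1, 0.5, n)                # complete randomization
        outcome = 0.1 * age + 0.5 * treat + rng.normal(size=n)

        # Residuals from the covariate-only regression (treatment excluded).
        slope, intercept = np.polyfit(age, outcome, 1)
        resid = outcome - (intercept + slope * age)

        def stat(assign):
            return resid[assign == 1].mean() - resid[assign == 0].mean()

        # Re-draw the randomization sequence many times to build the
        # design-based null distribution of the statistic.
        observed = stat(treat)
        draws = np.array([stat(rng.binomial(1, 0.5, n)) for _ in range(10_000)])
        pval = np.mean(np.abs(draws) >= np.abs(observed))
        print(f"observed = {observed:.3f}, randomization p = {pval:.4f}")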

  17. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase in power to detect such interactions compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.

  18. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  19. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as the number of times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
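
    The models in the tutorial can be sketched with statsmodels as well; here the response is simulated with extra zeros, so the zero-inflated fit should win on AIC:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        mu = np.exp(0.3 + 0.5 * x)
        y = rng.poisson(mu) * (rng.uniform(size=n) > 0.3)  # 30% structural zeros

        fits = {
            "Poisson": sm.Poisson(y, X).fit(disp=0),
            "NegativeBinomial": sm.NegativeBinomial(y, X).fit(disp=0),
            "ZeroInflatedPoisson": sm.ZeroInflatedPoisson(y, X).fit(disp=0),
        }
        for name, res in fits.items():
            print(f"{name}: AIC = {res.aic:.1f}")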

  20. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu; Pourahmadi, Mohsen; Maadooliat, Mehdi

    2014-01-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both

  1. Correlation-regression model for physico-chemical quality of ...

    African Journals Online (AJOL)

    areas, suggesting that groundwater quality in urban areas is closely related to land use ... the ground water, with a correlation and regression model, is also presented. ...

  2. Wavelet regression model in forecasting crude oil price

    Science.gov (United States)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of a wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
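
    A rough sketch of the WMLR idea, assuming the PyWavelets (pywt) and scikit-learn packages and a simulated price series in place of the WTI data:

        import numpy as np
        import pywt
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        price = 50.0 + np.cumsum(rng.normal(size=256))   # stand-in for WTI prices

        # Multilevel DWT, then reconstruct each component back to the original
        # length so the sub-series align in time.
        coeffs = pywt.wavedec(price, "db4", level=3)
        components = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, "db4")[: len(price)])

        X = np.column_stack([c[:-1] for c in components])  # lagged sub-series
        y = price[1:]                                      # next-day price
        model = LinearRegression().fit(X, y)
        rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
        print(f"in-sample RMSE: {rmse:.3f}")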

  3. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  4. The APT model as reduced-rank regression

    NARCIS (Netherlands)

    Bekker, P.A.; Dobbelstein, P.; Wansbeek, T.J.

    Integrating the two steps of an arbitrage pricing theory (APT) model leads to a reduced-rank regression (RRR) model. So the results on RRR can be used to estimate APT models, making estimation very simple. We give a succinct derivation of estimation of RRR, derive the asymptotic variance of RRR

  5. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...

  6. Alternative regression models to assess increase in childhood BMI

    Directory of Open Access Journals (Sweden)

    Mansmann Ulrich

    2008-09-01

    Full Text Available Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI were compared by goodness-of-fit measures and means of interpretation, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. Results GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in every model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in GLM models, but were in GAMLSS and partly in quantile regression models. Risk-factor-specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. Conclusion GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.

  7. Alternative regression models to assess increase in childhood BMI.

    Science.gov (United States)

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-09-08

    Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Different regression approaches to predict childhood BMI were compared by goodness-of-fit measures and means of interpretation, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 children participating in the school entry health examination in Bavaria, Germany, from 2001 to 2002. TV watching, meal frequency, breastfeeding, smoking in pregnancy, maternal obesity, parental social class and weight gain in the first 2 years of life were considered as risk factors for obesity. GAMLSS showed a much better fit regarding the estimation of risk factor effects on transformed and untransformed BMI data than common GLMs with respect to the generalized Akaike information criterion. In comparison with GAMLSS, quantile regression allowed for additional interpretation of prespecified distribution quantiles, such as quantiles referring to overweight or obesity. The variables TV watching, maternal BMI and weight gain in the first 2 years were directly, and meal frequency was inversely, significantly associated with body composition in every model type examined. In contrast, smoking in pregnancy was not directly, and breastfeeding and parental social class were not inversely, significantly associated with body composition in GLM models, but were in GAMLSS and partly in quantile regression models. Risk-factor-specific BMI percentile curves could be estimated from GAMLSS and quantile regression models. GAMLSS and quantile regression seem to be more appropriate than common GLMs for risk factor modeling of BMI data.
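
    Quantile regression of the kind compared above is a one-liner in statsmodels; the covariates here are hypothetical stand-ins for the risk factors in the study:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 1000
        df = pd.DataFrame({
            "tv_hours": rng.integers(0, 5, n),
            "maternal_bmi": rng.normal(25, 4, n),
        })
        df["bmi"] = (14 + 0.3 * df.tv_hours + 0.15 * df.maternal_bmi
                     + rng.gamma(2.0, 1.0, n))        # right-skewed outcome

        # Effects of the covariates on the 90th BMI percentile, i.e. on the
        # upper tail where overweight and obesity live.
        q90 = smf.quantreg("bmi ~ tv_hours + maternal_bmi", df).fit(q=0.9)
        print(q90.params)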

  8. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, accounting for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. The models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercept and the slope across children. Residual autocorrelation was adequately modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19
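
    A reduced sketch of such a model with statsmodels and patsy's bs() spline basis; unlike the full model above, this version omits the continuous autoregressive error term, which statsmodels' MixedLM does not fit directly (data simulated):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        rows = []
        for child in range(60):
            a, b = rng.normal(50, 2), rng.normal(18, 2)   # child-specific curve
            for age in np.linspace(0.1, 4.0, 10):         # 10 visits, birth to 4 y
                height = a + b * age - 1.5 * age**2 + rng.normal(scale=0.8)
                rows.append((child, age, height))
        df = pd.DataFrame(rows, columns=["child", "age", "height"])

        # Cubic regression spline for age, random intercept and slope per child.
        model = smf.mixedlm("height ~ bs(age, df=4, degree=3)", df,
                            groups=df["child"], re_formula="~age")
        print(model.fit().summary())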

  9. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  10. Linear regression models for quantitative assessment of left ...

    African Journals Online (AJOL)

    Changes in left ventricular structures and function have been reported in cardiomyopathies. No prediction models have been established in this environment. This study established regression models for prediction of left ventricular structures in normal subjects. A sample of normal subjects was drawn from a large urban ...

  11. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    Full Text Available Abstract This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC), granted to clients residing in the Distrito Federal (DF), to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower's geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc informational criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), leading to the conclusion that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results in terms of predictive power and financial losses for the institution, and the study demonstrated the viability of using the GWLR technique to develop credit scoring models for the target population in the study.

  12. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data-driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models is introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy-conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models, as well as filtering algorithms for their implementation, are developed here. Data-driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation, through Markov chain Monte Carlo algorithms, of low-frequency behaviour in complex physical data. (paper)

  13. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework, quantile regression has typically been carried out by exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend it to the case of discrete responses, where there is no one-to-one relationship between quantiles and the distribution's parameters, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  14. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.

  15. Forecasting daily meteorological time series using ARIMA and regression models

    Science.gov (United States)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving average (ARIMA) methods, ARIMA with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
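
    The ARIMA-with-Fourier-regressors variant can be sketched with statsmodels' SARIMAX; the series below is simulated daily temperature with an annual cycle:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 3 * 365
        t = np.arange(n)
        temp = 10 + 12 * np.sin(2 * np.pi * t / 365.25) + rng.normal(scale=2.5, size=n)

        def fourier(t, K=2, period=365.25):
            # K sine/cosine pairs: a parsimonious stand-in for seasonal dummies.
            return np.column_stack([f(2 * np.pi * k * t / period)
                                    for k in range(1, K + 1)
                                    for f in (np.sin, np.cos)])

        res = sm.tsa.SARIMAX(temp, exog=fourier(t), order=(1, 0, 1)).fit(disp=False)
        future = fourier(np.arange(n, n + 30))
        print(res.forecast(steps=30, exog=future)[:5])   # 30-day-ahead forecast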

  16. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  17. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an approach to thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely recover an associated intrinsic state from a superficial state observed at a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations for degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix from the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant

  18. Regression modeling strategies with applications to linear models, logistic and ordinal regression, and survival analysis

    CERN Document Server

    Harrell , Jr , Frank E

    2015-01-01

    This highly anticipated second edition features new chapters and sections, 225 new references, and comprehensive R software. In keeping with the previous edition, this book is about the art and science of data analysis and predictive modeling, which entails choosing and using multiple tools. Instead of presenting isolated techniques, this text emphasizes problem solving strategies that address the many issues arising when developing multivariable models using real data and not standard textbook examples. It includes imputation methods for dealing with missing data effectively, methods for fitting nonlinear relationships and for making the estimation of transformations a formal part of the modeling process, methods for dealing with "too many variables to analyze and not enough observations," and powerful model validation techniques based on the bootstrap.  The reader will gain a keen understanding of predictive accuracy, and the harm of categorizing continuous predictors or outcomes.  This text realistically...

  19. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  20. The art of regression modeling in road safety

    CERN Document Server

    Hauer, Ezra

    2015-01-01

    This unique book explains how to fashion useful regression models from commonly available data to erect models essential for evidence-based road safety management and research. Composed from techniques and best practices presented over many years of lectures and workshops, The Art of Regression Modeling in Road Safety illustrates that fruitful modeling cannot be done without substantive knowledge about the modeled phenomenon. Class-tested in courses and workshops across North America, the book is ideal for professionals, researchers, university professors, and graduate students with an interest in, or responsibilities related to, road safety. This book also: · Presents for the first time a powerful analytical tool for road safety researchers and practitioners · Includes problems and solutions in each chapter as well as data and spreadsheets for running models and PowerPoint presentation slides · Features pedagogy well-suited for graduate courses and workshops including problems, solutions, and PowerPoint p...

  1. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
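
    The likelihood ratio step of purposeful selection, sketched with statsmodels on simulated data (the R workflow in the article translates directly):

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(7)
        n = 400
        X = rng.normal(size=(n, 3))
        p = 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0] + 0.8 * X[:, 1])))  # x3 is noise
        y = rng.binomial(1, p)

        full = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        reduced = sm.Logit(y, sm.add_constant(X[:, :2])).fit(disp=0)  # drop x3

        lr = 2 * (full.llf - reduced.llf)
        pval = stats.chi2.sf(lr, df=1)
        print(f"LR = {lr:.2f}, p = {pval:.3f}")   # large p: x3 can be deleted

        # Check whether dropping x3 shifts the remaining coefficients, i.e.
        # whether it is an important adjustment of the other covariates.
        print((full.params[:3] - reduced.params) / reduced.params)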

  2. Regression analysis of a chemical reaction fouling model

    International Nuclear Information System (INIS)

    Vasak, F.; Epstein, N.

    1996-01-01

    A previously reported mathematical model for the initial chemical reaction fouling of a heated tube is critically examined in the light of the experimental data for which it was developed. A regression analysis of the model with respect to that data shows that the reference point upon which the two adjustable parameters of the model were originally based was well chosen, albeit fortuitously. (author). 3 refs., 2 tabs., 2 figs

  3. Spatial stochastic regression modelling of urban land use

    International Nuclear Information System (INIS)

    Arshad, S H M; Jaafar, J; Abiden, M Z Z; Latif, Z A; Rasam, A R A

    2014-01-01

    Urbanization is very closely linked to industrialization, commercialization and overall economic growth and development. It brings innumerable benefits to the quantity and quality of the urban environment and lifestyle, but on the other hand contributes to unbounded development, urban sprawl, overcrowding and a decreasing standard of living. Regulation and observation of urban development activities is crucial. Understanding the urban systems that promote urban growth is also essential for policy making, formulating development strategies and preparing development plans. This study aims to compare two different stochastic regression modeling techniques for spatial structure models of urban growth in the same study area. Both techniques utilize the same datasets and their results are analyzed. The work starts by producing an urban growth model using two stochastic regression modeling techniques, namely Ordinary Least Squares (OLS) and Geographically Weighted Regression (GWR). When the two techniques are compared, GWR appears to be the more suitable stochastic regression model: it gives a smaller AICc (corrected Akaike Information Criterion) value and its output is more spatially explainable

  4. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
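
    The core idea can be illustrated in a few lines: when the true predictor is skewed, residuals of the correctly specified direction stay roughly symmetric while those of the reversed regression inherit skewness (simulated data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        n = 2000
        x = rng.exponential(size=n)               # skewed "true" cause
        y = 1.0 + 0.8 * x + rng.normal(size=n)    # true direction: x -> y

        def residual_skew(a, b):
            # Third-moment summary of the residuals of the regression b ~ a.
            slope, intercept = np.polyfit(a, b, 1)
            return stats.skew(b - (intercept + slope * a))

        print("skew of residuals, y ~ x:", residual_skew(x, y))  # near zero
        print("skew of residuals, x ~ y:", residual_skew(y, x))  # clearly non-zero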

  5. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
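
    A minimal SVR pipeline in scikit-learn illustrating the approach; the feature names and response are hypothetical stand-ins for the laboratory variables:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(9)
        n = 80                                    # small laboratory dataset
        ph = rng.uniform(7, 11, n)
        residence = rng.uniform(2, 12, n)         # flotation time, minutes
        gain = (5 + 0.8 * residence - 0.4 * (ph - 9) ** 2
                + rng.normal(scale=0.5, size=n))  # deinking performance measure

        X = np.column_stack([ph, residence])
        model = make_pipeline(StandardScaler(),
                              SVR(kernel="rbf", C=10.0, epsilon=0.1))
        scores = cross_val_score(model, X, gain, cv=5, scoring="r2")
        print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))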

  6. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an errors-in-variables (EIV) model can be used. Measurement error, when not corrected for, causes misleading statistical inference and analysis. Our goal, therefore, is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  7. An appraisal of convergence failures in the application of logistic regression model in published manuscripts.

    Science.gov (United States)

    Yusuf, O B; Bamgboye, E A; Afolabi, R F; Shodimu, M A

    2014-09-01

    The logistic regression model is widely used in health research for description and predictive purposes. Unfortunately, most researchers are sometimes not aware that the underlying principles of the technique have failed when the algorithm for maximum likelihood does not converge. Young researchers, particularly postgraduate students, may not know why the separation problem, whether quasi-complete or complete, occurs, how to identify it and how to fix it. This study was designed to critically evaluate convergence issues in articles that employed logistic regression analysis published in the African Journal of Medicine and Medical Sciences between 2004 and 2013. Problems of quasi-complete or complete separation were described and illustrated with the National Demographic and Health Survey dataset. A critical evaluation of articles that employed logistic regression was conducted. A total of 581 articles were reviewed, of which 40 (6.9%) used binary logistic regression. Twenty-four (60.0%) stated the use of a logistic regression model in the methodology, while none of the articles assessed model fit. Only 3 (12.5%) properly described the procedures. Of the 40 that used the logistic regression model, the problem of convergence occurred in 6 (15.0%) of the articles. Logistic regression tended to be poorly reported in studies published between 2004 and 2013. Our findings showed that the procedure may not be well understood by researchers, since very few described the process in their reports and many may be totally unaware of the problem of convergence or how to deal with it.

  8. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
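
    A toy version of the metamodeling step: run a probabilistic sensitivity analysis of a made-up decision model, then regress its output on the standardized inputs so the intercept approximates the base case and the coefficients rank parameter influence:

        import numpy as np

        rng = np.random.default_rng(10)
        n = 10_000                                  # PSA iterations
        p_cure = rng.beta(20, 80, n)                # uncertain model inputs
        cost_tx = rng.gamma(100, 50, n)
        qaly_gain = rng.normal(2.0, 0.3, n)

        wtp = 50_000                                # willingness to pay per QALY
        net_benefit = wtp * p_cure * qaly_gain - cost_tx   # toy decision model

        Z = np.column_stack([p_cure, cost_tx, qaly_gain])
        Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)    # standardize the inputs
        Z = np.column_stack([np.ones(n), Z])
        coef, *_ = np.linalg.lstsq(Z, net_benefit, rcond=None)

        print("base-case net benefit (intercept): %.0f" % coef[0])
        print("sensitivity coefficients:", np.round(coef[1:]))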

  9. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
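
    Two of the proposed modifications, log lagged cases and quasi-Poisson errors, sketched with statsmodels on a simulated weekly series:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        weeks = 300
        rain = rng.gamma(2.0, 10.0, weeks)
        cases = np.zeros(weeks)
        cases[0] = 10
        for t in range(1, weeks):   # simple contagion plus a rainfall effect
            mu = np.exp(1.0 + 0.6 * np.log(cases[t - 1] + 1) + 0.01 * rain[t - 1])
            cases[t] = rng.poisson(mu)

        df = pd.DataFrame({"cases": cases, "rain": rain})
        df["log_lag_cases"] = np.log(df.cases.shift(1) + 1)   # contagion control
        df["lag_rain"] = df.rain.shift(1)                     # lagged exposure
        df = df.dropna()

        # scale="X2" rescales the covariance, giving quasi-Poisson inference.
        fit = smf.glm("cases ~ log_lag_cases + lag_rain", df,
                      family=sm.families.Poisson()).fit(scale="X2")
        print(fit.summary().tables[1])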

  10. Linearity and Misspecification Tests for Vector Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    Teräsvirta, Timo; Yang, Yukai

    The purpose of the paper is to derive Lagrange multiplier and Lagrange multiplier type specification and misspecification tests for vector smooth transition regression models. We report results from simulation studies in which the size and power properties of the proposed asymptotic tests in small...

  11. Application of multilinear regression analysis in modeling of soil ...

    African Journals Online (AJOL)

    The application of Multi-Linear Regression Analysis (MLRA) model for predicting soil properties in Calabar South offers a technical guide and solution in foundation designs problems in the area. Forty-five soil samples were collected from fifteen different boreholes at a different depth and 270 tests were carried out for CBR, ...

  12. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  13. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  14. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  15. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    Sep 3, 2017 ... Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. ... Data were entered into STATA-12 and analyzed using SPSS-21. ...

  16. Transpiration of glasshouse rose crops: evaluation of regression models

    NARCIS (Netherlands)

    Baas, R.; Rijssel, van E.

    2006-01-01

    Regression models of transpiration (T) based on global radiation inside the greenhouse (G), with or without energy input from heating pipes (Eh) and/or vapor pressure deficit (VPD) were parameterized. Therefore, data on T, G, temperatures from air, canopy and heating pipes, and VPD from both a

  17. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...

  18. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources the mixing model parameters are estimated in a second step by least squares estimation. We demonstrate the method on synthetic data and finally separate speech and music in a real room recording.

  19. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Directory of Open Access Journals (Sweden)

    Minh Vu Trieu

    2017-03-01

    Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  20. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.
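    For illustration, a sketch of the linear versus log-transformed (multiplicative) multiple regressions compared above, fit to hypothetical rock-property data; the predictor names follow the abstract (UCS, BTS, BI, DPW, Alpha) but all values and coefficients are made up.

    ```python
    # Linear and power-law (log-log) multiple regressions on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    n = 80
    X = np.column_stack([
        rng.uniform(40, 200, n),   # UCS (MPa)
        rng.uniform(4, 16, n),     # BTS (MPa)
        rng.uniform(10, 50, n),    # BI
        rng.uniform(0.05, 2, n),   # DPW (m)
        rng.uniform(10, 80, n),    # Alpha (deg)
    ])
    rop = 8 - 0.02 * X[:, 0] + 0.05 * X[:, 3] + rng.normal(0, 0.3, n)

    lin = LinearRegression().fit(X, rop)
    print("linear R2:", r2_score(rop, lin.predict(X)))

    # A multiplicative (power-law) model becomes linear after taking logs.
    loglin = LinearRegression().fit(np.log(X), np.log(rop))
    print("log-log R2:", r2_score(np.log(rop), loglin.predict(np.log(X))))
    ```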

  1. Detection of Outliers in Regression Model for Medical Data

    Directory of Open Access Journals (Sweden)

    Stephen Raj S

    2017-07-01

    Full Text Available In regression analysis, an outlier is an observation for which the residual is large in magnitude compared to other observations in the data set. The detection of outliers and influential points is an important step of the regression analysis. Outlier detection methods have been used to detect and remove anomalous values from data. In this paper, we detect the presence of outliers in simple linear regression models for a medical data set. Chatterjee and Hadi mentioned that the ordinary residuals are not appropriate for diagnostic purposes; a transformed version of them is preferable. First, we investigate the presence of outliers based on existing procedures of residuals and standardized residuals. Next, we use a new approach of standardized scores for detecting outliers without the use of predicted values. The performance of the new approach was verified with real-life data.
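    A minimal sketch of screening by standardized (internally studentized) residuals, the classical diagnostic mentioned above; the data are synthetic with one planted outlier, and this is not the authors' new standardized-scores approach.

    ```python
    # Outlier screening in simple linear regression via studentized residuals.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, 50)
    y = 2 + 0.8 * x + rng.normal(0, 1, 50)
    y[10] += 8  # plant one outlier

    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Hat matrix diagonal and internally studentized residuals.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    s2 = resid @ resid / (len(y) - X.shape[1])
    r_std = resid / np.sqrt(s2 * (1 - h))

    print("flagged points:", np.where(np.abs(r_std) > 3)[0])
    ```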

  2. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.

  3. Beta Regression Finite Mixture Models of Polarization and Priming

    Science.gov (United States)

    Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay

    2011-01-01

    This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…

  4. Electricity consumption forecasting in Italy using linear regression models

    Energy Technology Data Exchange (ETDEWEB)

    Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio [DIAM, Seconda Universita degli Studi di Napoli, Via Roma 29, 81031 Aversa (CE) (Italy)

    2009-09-15

    The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author)

  5. Electricity consumption forecasting in Italy using linear regression models

    International Nuclear Information System (INIS)

    Bianco, Vincenzo; Manca, Oronzio; Nardini, Sergio

    2009-01-01

    The influence of economic and demographic variables on the annual electricity consumption in Italy has been investigated with the intention to develop a long-term consumption forecasting model. The time period considered for the historical data is from 1970 to 2007. Different regression models were developed, using historical electricity consumption, gross domestic product (GDP), gross domestic product per capita (GDP per capita) and population. A first part of the paper considers the estimation of GDP, price and GDP per capita elasticities of domestic and non-domestic electricity consumption. The domestic and non-domestic short run price elasticities are found to be both approximately equal to -0.06, while long run elasticities are equal to -0.24 and -0.09, respectively. On the contrary, the elasticities of GDP and GDP per capita present higher values. In the second part of the paper, different regression models, based on co-integrated or stationary data, are presented. Different statistical tests are employed to check the validity of the proposed models. A comparison with national forecasts, based on complex econometric models, such as Markal-Time, was performed, showing that the developed regressions are congruent with the official projections, with deviations of ±1% for the best case and ±11% for the worst. These deviations are to be considered acceptable in relation to the time span taken into account. (author)
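    As a worked illustration of the elasticity estimates, a log-log regression sketch on simulated annual series: in log space the fitted coefficients are elasticities. The series below are invented, not the Italian data.

    ```python
    # Elasticities from a log-log regression on simulated annual series.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    years = 38  # e.g., 1970-2007
    gdp = np.cumprod(1 + rng.normal(0.02, 0.01, years))
    price = np.cumprod(1 + rng.normal(0.01, 0.02, years))
    consumption = gdp**0.9 * price**-0.06 * np.exp(rng.normal(0, 0.01, years))

    X = sm.add_constant(np.column_stack([np.log(gdp), np.log(price)]))
    fit = sm.OLS(np.log(consumption), X).fit()
    print("GDP elasticity:   %.3f" % fit.params[1])
    print("price elasticity: %.3f" % fit.params[2])
    ```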

  6. Regression Model to Predict Global Solar Irradiance in Malaysia

    Directory of Open Access Journals (Sweden)

    Hairuniza Ahmed Kutty

    2015-01-01

    Full Text Available A novel regression model is developed to estimate the monthly global solar irradiance in Malaysia. The model is developed based on different available meteorological parameters, including temperature, cloud cover, rain precipitate, relative humidity, wind speed, pressure, and gust speed, by implementing regression analysis. This paper reports on the details of the analysis of the effect of each prediction parameter to identify the parameters that are relevant to estimating global solar irradiance. In addition, the proposed model is compared in terms of the root mean square error (RMSE), mean bias error (MBE), and the coefficient of determination (R2) with other models available from literature studies. Seven models based on single parameters (PM1 to PM7) and five multiple-parameter models (PM8 to PM12) are proposed. The new models perform well, with RMSE ranging from 0.429% to 1.774%, R2 ranging from 0.942 to 0.992, and MBE ranging from −0.1571% to 0.6025%. In general, cloud cover significantly affects the estimation of global solar irradiance. However, cloud cover in Malaysia lacks sufficient influence when included in multiple-parameter models, although it performs fairly well in single-parameter prediction models.

  7. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.
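    A hedged sketch of the comparison on simulated incident counts: a Poisson GLM (coefficients on the log scale, so exponentiated coefficients are rate ratios) next to ordinary linear regression. The treatment/severity covariates are illustrative, not the study's variables.

    ```python
    # Poisson vs. linear regression for a count outcome, on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 300
    treatment = rng.integers(0, 2, n)
    severity = rng.normal(0, 1, n)
    rate = np.exp(0.5 - 0.7 * treatment + 0.3 * severity)
    incidents = rng.poisson(rate)

    X = sm.add_constant(np.column_stack([treatment, severity]))
    poisson_fit = sm.GLM(incidents, X, family=sm.families.Poisson()).fit()
    linear_fit = sm.OLS(incidents, X).fit()

    # Poisson coefficients are on the log scale: exp(coef) is a rate ratio.
    print("rate ratio for treatment:  ", np.exp(poisson_fit.params[1]))
    print("linear slope for treatment:", linear_fit.params[1])
    ```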

  8. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys the model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
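    A rough sketch of the two-step strategy with scikit-learn's L1-penalized QuantileRegressor: an ℓ1 screen followed by an adaptive-LASSO-style refit obtained by rescaling the kept columns with the step-1 coefficients. This illustrates the idea only; it is not the authors' estimator, and the penalty levels are arbitrary.

    ```python
    # Two-step variable selection for median regression with p >> n.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(6)
    n, p = 100, 200  # p >> n
    X = rng.normal(size=(n, p))
    y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

    # Step 1: L1-penalized median regression screens the covariates.
    step1 = QuantileRegressor(quantile=0.5, alpha=0.1, solver="highs").fit(X, y)
    keep = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

    # Step 2: adaptive reweighting on the reduced model (scaling columns by
    # |beta| penalizes small step-1 coefficients more heavily).
    w = np.abs(step1.coef_[keep])
    step2 = QuantileRegressor(quantile=0.5, alpha=0.1,
                              solver="highs").fit(X[:, keep] * w, y)
    selected = keep[np.abs(step2.coef_) > 1e-8]
    print("selected covariates:", selected)
    ```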

  9. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  10. THE REGRESSION MODEL OF IRAN LIBRARIES ORGANIZATIONAL CLIMATE

    OpenAIRE

    Jahani, Mohammad Ali; Yaminfirooz, Mousa; Siamian, Hasan

    2015-01-01

    Background: The purpose of this study was to draw a regression model of the organizational climate of the central libraries of Iran's universities. Methods: This study is an applied research. The statistical population of this study consisted of 96 employees of the central libraries of Iran's public universities selected among the 117 universities affiliated to the Ministry of Health by Stratified Sampling method (510 people). A localized ClimateQual questionnaire was used as research tool. For pr...

  11. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. A rich variety of advanced and recent statistical modelling methods is available in open source software (one of them being R). However, these advanced statistical modelling methods are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling methods are readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical modelling methods in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare models in order to find the most appropriate model for the data.

  12. Reconstruction of missing daily streamflow data using dynamic regression models

    Science.gov (United States)

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data gaps in this information can lead to substantially different analysis outputs. Therefore, reconstructing missing data in incomplete data sets is an important step for the performance of environmental models, engineering, and research applications, and thus presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighboring and correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
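    A hedged sketch of a dynamic regression (regression with ARIMA errors) for gap filling using statsmodels' SARIMAX, which accepts exogenous regressors and tolerates missing values in the target series; both streamflow series here are simulated stand-ins for the neighbor/target stations.

    ```python
    # Dynamic regression: target station regressed on a correlated neighbor,
    # with an ARMA structure absorbing residual autocorrelation.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(7)
    T = 1000
    neighbor = np.abs(np.cumsum(rng.normal(0, 1, T))) + 10
    target = 0.8 * neighbor + 5 + rng.normal(0, 1, T)
    target[400:430] = np.nan  # a gap to reconstruct; the Kalman filter skips NaNs

    model = SARIMAX(target, exog=neighbor, order=(1, 0, 1))
    fit = model.fit(disp=False)
    filled = fit.predict(start=400, end=429)  # in-sample predictions fill the gap
    print(filled[:5])
    ```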

  13. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  14. Extended cox regression model: The choice of timefunction

    Science.gov (United States)

    Isik, Hatice; Tutkun, Nihal Ata; Karasoy, Durdu

    2017-07-01

    The Cox regression model (CRM), which takes into account the effect of censored observations, is one of the most applicable and widely used models in survival analysis for evaluating the effects of covariates. Proportional hazards (PH), which requires a constant hazard ratio over time, is the key assumption of the CRM. Using the extended CRM provides a test of the PH assumption by including a time-dependent covariate, as well as an alternative model in case of nonproportional hazards. In this study, different types of real data sets are used to choose the time function, and the differences between time functions are analyzed and discussed.

  15. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubts on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
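    A simple informal diagnostic in the spirit of checking for inflated zeros, not the score-type test derived in the paper: fit a Poisson regression and compare the observed zero count with the zeros expected under the fitted means.

    ```python
    # Observed vs. expected zeros under a fitted Poisson regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 500
    x = rng.normal(size=n)
    y = rng.poisson(np.exp(0.2 + 0.5 * x))
    y[rng.random(n) < 0.2] = 0  # inject excess zeros

    X = sm.add_constant(x)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    mu = fit.fittedvalues  # fitted means

    observed_zeros = np.sum(y == 0)
    expected_zeros = np.sum(np.exp(-mu))  # P(Y=0) = exp(-mu) under Poisson
    print(f"observed zeros: {observed_zeros}, expected: {expected_zeros:.1f}")
    ```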

  16. Regression analysis understanding and building business and economic models using Excel

    CERN Document Server

    Wilson, J Holton

    2012-01-01

    The technique of regression analysis is used so often in business and economics today that an understanding of its use is necessary for almost everyone engaged in the field. This book will teach you the essential elements of building and understanding regression models in a business/economic context in an intuitive manner. The authors take a non-theoretical treatment that is accessible even if you have a limited statistical background. It is specifically designed to teach the correct use of regression, while advising you of its limitations and teaching about common pitfalls. This book describe

  17. Multivariate Frequency-Severity Regression Models in Insurance

    Directory of Open Access Journals (Sweden)

    Edward W. Frees

    2016-02-01

    Full Text Available In insurance and related industries including healthcare, it is common to have several outcome measures that the analyst wishes to understand using explanatory variables. For example, in automobile insurance, an accident may result in payments for damage to one’s own vehicle, damage to another party’s vehicle, or personal injury. It is also common to be interested in the frequency of accidents in addition to the severity of the claim amounts. This paper synthesizes and extends the literature on multivariate frequency-severity regression modeling with a focus on insurance industry applications. Regression models for understanding the distribution of each outcome continue to be developed yet there now exists a solid body of literature for the marginal outcomes. This paper contributes to this body of literature by focusing on the use of a copula for modeling the dependence among these outcomes; a major advantage of this tool is that it preserves the body of work established for marginal models. We illustrate this approach using data from the Wisconsin Local Government Property Insurance Fund. This fund offers insurance protection for (i) property; (ii) motor vehicle; and (iii) contractors’ equipment claims. In addition to several claim types and frequency-severity components, outcomes can be further categorized by time and space, requiring complex dependency modeling. We find significant dependencies for these data; specifically, we find that dependencies among lines are stronger than the dependencies between the frequency and average severity within each line.

  18. Augmented Beta rectangular regression models: A Bayesian perspective.

    Science.gov (United States)

    Wang, Jue; Luo, Sheng

    2016-01-01

    Mixed effects Beta regression models based on Beta distributions have been widely used to analyze longitudinal percentage or proportional data ranging between zero and one. However, Beta distributions are not flexible enough to accommodate extreme outliers or excessive events around the tail areas, and they do not account for the presence of the boundary values zero and one because these values are not in the support of the Beta distribution. To address these issues, we propose a mixed effects model using the Beta rectangular distribution and augment it with the probabilities of zero and one. We conduct extensive simulation studies to assess the performance of mixed effects models based on both the Beta and Beta rectangular distributions under various scenarios. The simulation studies suggest that the regression models based on Beta rectangular distributions improve the accuracy of parameter estimates in the presence of outliers and heavy tails. The proposed models are applied to the motivating Neuroprotection Exploratory Trials in Parkinson's Disease (PD) Long-term Study-1 (LS-1 study, n = 1741), developed by The National Institute of Neurological Disorders and Stroke Exploratory Trials in Parkinson's Disease (NINDS NET-PD) network. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.

  20. Modeling the number of car theft using Poisson regression

    Science.gov (United States)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is one of the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors have been considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons and number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.

  1. Dynamic Regression Intervention Modeling for the Malaysian Daily Load

    Directory of Open Access Journals (Sweden)

    Fadhilah Abdrazak

    2014-05-01

    Full Text Available Malaysia is a unique country in having both fixed and moving holidays. These moving holidays may overlap with other fixed holidays and therefore increase the complexity of load forecasting activities. The errors due to holidays' effects in load forecasting are known to be larger than those due to other factors. If these effects can be estimated and removed, the behavior of the series can be better viewed. Thus, the aim of this paper is to improve the forecasting errors by using a dynamic regression model with intervention analysis. Based on the linear transfer function method, a daily load model for either the peak or the average load is developed. The developed model outperformed the seasonal ARIMA model in estimating the fixed and moving holidays' effects and achieved a smaller Mean Absolute Percentage Error (MAPE) in load forecasting.

  2. Learning Supervised Topic Models for Classification and Regression from Crowds.

    Science.gov (United States)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C

    2017-12-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deem learning under a single-annotator assumption unrealistic or unpractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  3. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the confirmation of the modelling capabilities through calculations of appropriate experiments, performed by external users other than the code developers. During the development process of ASTEC CPA, bugs and unintended side effects may occur, which lead to changes in the results of the initially conducted validation. Due to the involvement of a considerable number of developers in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented in order to ensure that the initially obtained validation results remain valid in succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but applying two different code versions. For every test-case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed to be characteristic for the test-case under consideration. In the case of post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented within this paper. The very good post-calculation of the HDR E11.1 experiment shows the high-quality modelling of thermal-hydraulics in ASTEC CPA. Aerosol behaviour is validated on the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test-case of the THAI IOD-11 experiment. Within this test-case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing
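    A minimal sketch of the version-to-version comparison logic described above; run_case is a hypothetical placeholder for an actual code run, and the characteristic parameters and tolerance are illustrative, not the ASTEC workflow itself.

    ```python
    # Regression testing: run one input deck through two code versions and
    # compare characteristic physical parameters within a tolerance.
    import numpy as np

    def run_case(code_version: str, input_deck: str) -> dict:
        # Placeholder standing in for an actual code run; returns
        # characteristic parameters (e.g., pressure, aerosol mass) over time.
        t = np.linspace(0, 100, 101)
        drift = 0.0 if code_version == "V2.0r1" else 1e-4  # pretend small change
        return {"pressure": 2.0 + 0.01 * t + drift,
                "aerosol_mass": np.exp(-t / 30)}

    def regression_check(input_deck: str, rtol: float = 1e-3) -> bool:
        old = run_case("V2.0r1", input_deck)
        new = run_case("V2.0r2", input_deck)
        ok = True
        for name in old:
            if not np.allclose(old[name], new[name], rtol=rtol):
                print(f"{input_deck}: '{name}' deviates between code versions")
                ok = False
        return ok

    print(regression_check("HDR_E11.1"))
    ```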

  4. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects

  5. Learning Supervised Topic Models for Classification and Regression from Crowds

    DEFF Research Database (Denmark)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete

    2017-01-01

    The nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deems learning under a single-annotator assumption unrealistic or unpractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  6. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Splines (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing. MARS is used for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...

  7. Predicting Performance on MOOC Assessments using Multi-Regression Models

    OpenAIRE

    Ren, Zhiyun; Rangwala, Huzefa; Johri, Aditya

    2016-01-01

    The past few years have seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempt...

  8. Analytical and regression models of glass rod drawing process

    Science.gov (United States)

    Alekseeva, L. B.

    2018-03-01

    The process of drawing glass rods (light guides) is being studied. The parameters of the process affecting the quality of the light guide have been determined. To solve the problem, mathematical models based on general equations of continuum mechanics are used. The conditions for the stable flow of the drawing process have been found, which are determined by the stability of the motion of the glass mass in the formation zone to small uncontrolled perturbations. The sensitivity of the formation zone to perturbations of the drawing speed and viscosity is estimated. Experimental models of the drawing process, based on the regression analysis methods, have been obtained. These models make it possible to customize a specific production process to obtain light guides of the required quality. They allow one to find the optimum combination of process parameters in the chosen area and to determine the required accuracy of maintaining them at a specified level.

  9. Reduction of the number of parameters needed for a polynomial random regression test-day model

    NARCIS (Netherlands)

    Pool, M.H.; Meuwissen, T.H.E.

    2000-01-01

    Legendre polynomials were used to describe the (co)variance matrix within a random regression test-day model. The goodness of fit depends on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation
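    For illustration, a sketch of the Legendre basis such models use: days in milk standardized to [-1, 1] and expanded with numpy's Legendre Vandermonde matrix; the order of fit sets the number of per-animal regression coefficients. The day range and order below are illustrative choices.

    ```python
    # Legendre polynomial basis for a random regression test-day model.
    import numpy as np
    from numpy.polynomial.legendre import legvander

    dim = np.arange(5, 306)                                   # days in milk, 5..305
    x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1   # scale to [-1, 1]

    order = 3                      # order of fit (order + 1 parameters per animal)
    Phi = legvander(x, order)      # shape (301, order + 1)
    print(Phi.shape, Phi[:2].round(3))
    ```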

  10. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  11. Regression Models for Predicting Force Coefficients of Aerofoils

    Directory of Open Access Journals (Sweden)

    Mohammed ABDUL AKBAR

    2015-09-01

    Full Text Available Renewable sources of energy are attractive and advantageous in a lot of different ways. Among the renewable energy sources, wind energy is the fastest growing type. Among wind energy converters, vertical axis wind turbines (VAWTs) have received renewed interest in the past decade due to some of the advantages they possess over their horizontal axis counterparts. VAWTs have evolved into complex 3-D shapes. A key component in predicting the output of VAWTs through analytical studies is obtaining the values of lift and drag coefficients, which are a function of the shape of the aerofoil, the angle of attack of the wind and the Reynolds number of the flow. Sandia National Laboratories have carried out extensive experiments on aerofoils for Reynolds numbers in the range of those experienced by VAWTs. The volume of experimental data thus obtained is huge. The current paper discusses three regression analysis models developed wherein lift and drag coefficients can be found using simple formulas without having to deal with the bulk of the data. Drag and lift coefficients were successfully estimated by regression models with R2 values as high as 0.98.

  12. Review of reactive kinetic models describing reductive dechlorination of chlorinated ethenes in soil and groundwater

    DEFF Research Database (Denmark)

    Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup; Scheutz, Charlotte

    2013-01-01

    Reductive dechlorination is a major degradation pathway of chlorinated ethenes in anaerobic subsurface environments, and reactive kinetic models describing the degradation process are needed in fate and transport models of these contaminants. However, reductive dechlorination is a complex biologi...

  13. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    Science.gov (United States)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying the leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
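    A hedged from-scratch sketch of the underlying Nadaraya-Watson (GRNN) predictor with an anisotropic Gaussian kernel: one bandwidth per input dimension, so a very large bandwidth effectively switches a feature off, which is the feature-selection idea mentioned above. Data and bandwidths are invented.

    ```python
    # GRNN / Nadaraya-Watson regression with anisotropic Gaussian kernels.
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, bandwidths):
        # Scaled squared distances, one sigma per feature (anisotropic kernel).
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) / bandwidths) ** 2
        w = np.exp(-0.5 * d2.sum(axis=2))      # (n_query, n_train) weights
        return (w @ y_train) / w.sum(axis=1)   # kernel-weighted average

    rng = np.random.default_rng(9)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 200)  # feature 2 is irrelevant

    sigmas = np.array([0.2, 10.0])  # huge sigma: feature 2 effectively ignored
    Xq = rng.uniform(-1, 1, size=(5, 2))
    print(grnn_predict(X, y, Xq, sigmas))
    ```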

  14. A model describing water and salt migration in concrete during wetting/drying cycles

    NARCIS (Netherlands)

    Arends, T.; Taher, A.; van der Zanden, A.J.J.; Brouwers, H.J.H.; Bilek, V.; Kersner, Z.

    2014-01-01

    In order to predict the life span of concrete structures, models describing the migration of chloride are needed. In this paper, a start is made with a simple, theoretical model describing water and chloride transport in a concrete sample. First, transport of water in concrete is considered with

  15. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Abstract Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of a predictor-specific degree of smoothing increased the accuracy.

  16. Global Land Use Regression Model for Nitrogen Dioxide Air Pollution.

    Science.gov (United States)

    Larkin, Andrew; Geddes, Jeffrey A; Martin, Randall V; Xiao, Qingyang; Liu, Yang; Marshall, Julian D; Brauer, Michael; Hystad, Perry

    2017-06-20

    Nitrogen dioxide is a common air pollutant with growing evidence of health impacts independent of other common pollutants such as ozone and particulate matter. However, the worldwide distribution of NO2 exposure and associated impacts on health is still largely uncertain. To advance global exposure estimates we created a global nitrogen dioxide (NO2) land use regression model for 2011 using annual measurements from 5,220 air monitors in 58 countries. The model captured 54% of global NO2 variation, with a mean absolute error of 3.7 ppb. Regional performance varied from R2 = 0.42 (Africa) to 0.67 (South America). Repeated 10% cross-validation using bootstrap sampling (n = 10,000) demonstrated a robust performance with respect to air monitor sampling in North America, Europe, and Asia (adjusted R2 within 2%) but not for Africa and Oceania (adjusted R2 within 11%) where NO2 monitoring data are sparse. The final model included 10 variables that captured both between and within-city spatial gradients in NO2 concentrations. Variable contributions differed between continental regions, but major roads within 100 m and satellite-derived NO2 were consistently the strongest predictors. The resulting model can be used for global risk assessments and health studies, particularly in countries without existing NO2 monitoring data or models.

  17. Drought Patterns Forecasting using an Auto-Regressive Logistic Model

    Science.gov (United States)

    del Jesus, M.; Sheffield, J.; Méndez Incera, F. J.; Losada, I. J.; Espejo, A.

    2014-12-01

    Drought is characterized by a water deficit that may manifest across a large range of spatial and temporal scales. Drought may create important socio-economic consequences, often of catastrophic dimensions. A quantifiable definition of drought is elusive because, depending on its impacts, consequences and generation mechanism, different water deficit periods may be identified as a drought by virtue of some definitions but not by others. Droughts are linked to the water cycle and, although a climate change signal may not have emerged yet, they are also intimately linked to climate. In this work we develop an auto-regressive logistic model for drought prediction at different temporal scales that makes use of a spatially explicit framework. Our model allows the inclusion of covariates, continuous or categorical, to improve the performance of the auto-regressive component. Our approach makes use of dimensionality reduction (principal component analysis) and classification techniques (K-means and maximum dissimilarity) to simplify the representation of complex climatic patterns, such as sea surface temperature (SST) and sea level pressure (SLP), while including information on their spatial structure, i.e. considering their spatial patterns. This procedure allows us to include in the analysis multivariate representations of complex climatic phenomena, such as the El Niño-Southern Oscillation. We also explore the impact of other climate-related variables such as sunspots. The model allows quantifying the uncertainty of the forecasts and can be easily adapted to make predictions under future climatic scenarios. The framework herein presented may be extended to other applications such as flash flood analysis or risk assessment of natural hazards.

  18. Collision prediction models using multivariate Poisson-lognormal regression.

    Science.gov (United States)

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

    This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models.

  19. THE REGRESSION MODEL OF IRAN LIBRARIES ORGANIZATIONAL CLIMATE.

    Science.gov (United States)

    Jahani, Mohammad Ali; Yaminfirooz, Mousa; Siamian, Hasan

    2015-10-01

    The purpose of this study was to draw a regression model of the organizational climate of the central libraries of Iran's universities. This study is an applied research. The statistical population consisted of 96 employees of the central libraries of Iran's public universities, selected among the 117 universities affiliated to the Ministry of Health by the stratified sampling method (510 people). A localized ClimateQual questionnaire was used as the research tool. Multivariate linear regression and a path diagram were used to predict the organizational climate pattern of the libraries. Of the 9 variables affecting organizational climate, 5 variables, namely innovation, teamwork, customer service, psychological safety and deep diversity, play a major role in predicting the organizational climate of Iran's libraries. The results also indicate that each of these variables, with a different coefficient, has the power to predict organizational climate, but the climate score of psychological safety (0.94) plays a particularly crucial role. The path diagram showed that the five variables of teamwork, customer service, psychological safety, deep diversity and innovation directly affect the organizational climate variable, with teamwork contributing more to this influence than any other variable. Among the ClimateQual indicators of organizational climate, teamwork contributes the most, so reinforcing teamwork in academic libraries can be especially effective in improving the organizational climate of this type of library.

  20. Regression analysis of informative current status data with the additive hazards model.

    Science.gov (United States)

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented and in the method, the copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  1. Endorsement of Models Describing Sexual Response of Men and Women with a Sexual Partner

    DEFF Research Database (Denmark)

    Giraldi, Annamaria; Kristensen, Ellids; Sand, Michael

    2015-01-01

    INTRODUCTION: Several models have been used to describe men's and women's sexual responses. These models have been conceptualized as linear or circular models. The circular models were proposed to describe women's sexual function best. AIM: This study aims to determine whether men and women thought that current theoretical models of sexual responses accurately reflected their own sexual experience and to what extent this was influenced by sexual dysfunction. METHODS: A cross-sectional study of a large, broadly sampled, nonclinical cohort of Danish men and women. The Female Sexual Function... Erectile dysfunction and dissatisfaction with sexual life were significantly related to endorsement of the Basson model or none of the models (P = 0.01). CONCLUSIONS: No single model of sexual response could describe men's and women's sexual responses. The majority of men and women with no sexual...

  2. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    Science.gov (United States)

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, the distributional properties of information entropy as a random variable have seldom been the object of study, leading researchers to mainly use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.

  3. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case for present-day clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
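    The paper provides R code; below is a hedged Python sketch of the same GA idea: individuals are binary feature masks, fitness is cross-validated accuracy minus a small size penalty, with one-point crossover and bit-flip mutation. All GA settings here are illustrative choices, not the paper's.

    ```python
    # GA-based variable selection for logistic regression on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(10)
    X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                               random_state=0)

    def fitness(mask):
        if not mask.any():
            return -np.inf
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, mask], y, cv=5).mean()
        return score - 0.002 * mask.sum()  # penalize larger variable sets

    pop = rng.random((20, X.shape[1])) < 0.3   # initial population of masks
    for gen in range(25):
        fits = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(fits)[::-1][:10]]  # keep the best half
        children = []
        for _ in range(10):
            a, b = parents[rng.integers(10, size=2)]
            cut = rng.integers(1, X.shape[1])       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < 0.02    # bit-flip mutation
            children.append(child ^ flip)
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected variables:", np.flatnonzero(best))
    ```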

  4. Electricity prices forecasting by automatic dynamic harmonic regression models

    International Nuclear Information System (INIS)

    Pedregal, Diego J.; Trapero, Juan R.

    2007-01-01

    The changes experienced by electricity markets in recent years have created the necessity for more accurate forecast tools of electricity prices, both for producers and consumers. Many methodologies have been applied to this aim, but in the view of the authors, state space models are not yet fully exploited. The present paper proposes a univariate dynamic harmonic regression model set up in a state space framework for forecasting prices in these markets. The advantages of the approach are threefold. Firstly, a fast automatic identification and estimation procedure is proposed based on the frequency domain. Secondly, the recursive algorithms applied offer adaptive predictions that compare favourably with respect to other techniques. Finally, since the method is based on unobserved components models, explicit information about trend, seasonal and irregular behaviours of the series can be extracted. This information is of great value to the electricity companies' managers in order to improve their strategies, i.e. it provides management innovations. The good forecast performance and the rapid adaptability of the model to changes in the data are illustrated with actual prices taken from the PJM interconnection in the US and for the Spanish market for the year 2002. (author)
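
    As a loose stand-in for the dynamic harmonic regression described above, the sketch below fits a statsmodels unobserved-components model with trigonometric seasonality to synthetic hourly prices. The paper's automatic frequency-domain identification step is not reproduced, and the period, number of harmonics and data are all illustrative assumptions.

    ```python
    # Hedged sketch: state-space model with trend and trigonometric seasonality.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    t = np.arange(24 * 60)                                # hourly prices, 60 days
    prices = 40 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

    model = sm.tsa.UnobservedComponents(
        prices,
        level="local linear trend",                       # explicit trend component
        freq_seasonal=[{"period": 24, "harmonics": 3}],   # daily cycle, 3 harmonics
    )
    res = model.fit(disp=False)
    print(res.forecast(steps=24))                         # next-day price forecast
    ```

    Because the model is built from unobserved components, the fitted trend and seasonal states can be inspected separately, which mirrors the interpretability the abstract emphasizes.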

  5. Characteristics and Properties of a Simple Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side and from the application side. One of the fundamental questions in the model concerns determining its derivative characteristics and studying the properties existing in their scope; this paper addresses the first of these aspects. The literature of the subject provides several classic solutions in that regard. In the paper, a completely new approach is proposed, based on the direct application of variance and its properties, which follow from the non-correlation of certain estimators with the mean; within this framework, some fundamental dependencies among the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple, uniform and intuitive demonstration of multiple dependencies and fundamental properties in the model. The results were obtained in a classic, traditional area, where everything, as it might seem, has already been thoroughly studied and discovered.
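
    For orientation, these are the standard characteristics of the simple linear regression model that such derivations concern (textbook results, not the paper's new variance-based construction):

    ```latex
    % Classic simple-linear-regression characteristics (textbook results).
    \begin{align}
      \hat{\beta}_1 &= \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i}(x_i-\bar{x})^2},
      \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x},\\
      \operatorname{Var}(\hat{\beta}_1) &= \frac{\sigma^2}{\sum_{i}(x_i-\bar{x})^2},
      \qquad \operatorname{Var}(\hat{\beta}_0)
        = \sigma^2\left(\frac{1}{n}+\frac{\bar{x}^2}{\sum_{i}(x_i-\bar{x})^2}\right).
    \end{align}
    ```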

  6. Bayesian Regression of Thermodynamic Models of Redox Active Materials

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, Katherine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing models using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and the other a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, except for some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary, and comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was also calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).
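
    The report's workflow rests on MCMC parameter estimation. The sketch below shows a minimal random-walk Metropolis fit of a toy linear model in plain numpy; UQTk itself is not used, and the proposal scale, noise level and flat prior are assumptions made for the example.

    ```python
    # Hedged sketch: random-walk Metropolis sampling of model parameters.
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 50)
    y = 2.0 + 3.0 * x + rng.normal(0, 0.2, x.size)        # synthetic data

    def log_post(theta):
        """Log-posterior: Gaussian likelihood, flat prior, known noise sigma."""
        a, b = theta
        resid = y - (a + b * x)
        return -0.5 * np.sum(resid**2) / 0.2**2

    theta, lp, chain = np.zeros(2), -np.inf, []
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.05, 2)             # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:           # Metropolis accept step
            theta, lp = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5000:])                        # discard burn-in
    print(chain.mean(axis=0), chain.std(axis=0))          # posterior mean and spread
    ```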

  7. Convergence diagnostics for Eigenvalue problems with linear regression model

    International Nuclear Information System (INIS)

    Shi, Bo; Petrovic, Bojan

    2011-01-01

    Although the Monte Carlo method has been extensively used for criticality/Eigenvalue problems, a reliable, robust, and efficient convergence diagnostics method is still desired. Most methods are based on integral parameters (multiplication factor, entropy) and either condense the local distribution information into a single value (e.g., entropy) or even disregard it. We propose to employ the detailed cycle-by-cycle local flux evolution obtained by using mesh tally mechanism to assess the source and flux convergence. By applying a linear regression model to each individual mesh in a mesh tally for convergence diagnostics, a global convergence criterion can be obtained. We exemplify this method on two problems and obtain promising diagnostics results. (author)
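
    A minimal sketch of the idea, under assumed details: regress each mesh's cycle-by-cycle tally on the cycle index over a trailing window, and declare convergence once all slopes are negligible. The window length, tolerance and fake tally data below are inventions for illustration.

    ```python
    # Hedged sketch: per-mesh slope test on cycle-by-cycle tallies.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    cycles = np.arange(200)
    # fake tallies for 100 meshes: decaying transient plus noise
    tallies = 1.0 + 0.5 * np.exp(-cycles / 30)[None, :] + \
              rng.normal(0, 0.02, (100, cycles.size))

    window = 50                                           # regress over last cycles
    slopes = np.array([stats.linregress(cycles[-window:], m[-window:]).slope
                       for m in tallies])
    converged = np.all(np.abs(slopes) < 1e-3)             # global criterion (assumed)
    print("converged:", converged)
    ```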

  8. The R Package threg to Implement Threshold Regression Models

    Directory of Open Access Journals (Sweden)

    Tao Xiao

    2015-08-01

    This new package includes four functions: threg, and the methods hr, predict and plot for threg objects returned by threg. The threg function is the model-fitting function, used to calculate regression coefficient estimates, asymptotic standard errors and p values. The hr method for threg objects is the hazard-ratio calculation function, which provides estimates of hazard ratios at selected time points for specified scenarios (based on given categories or value settings of covariates). The predict method for threg objects is used for prediction. And the plot method for threg objects provides plots of the curves of estimated hazard functions, survival functions and probability density functions of the first hitting time; function curves corresponding to different scenarios can be overlaid in the same plot for comparison to give additional research insights.

  9. Multiple linear regression and regression with time series error models in forecasting PM10 concentrations in Peninsular Malaysia.

    Science.gov (United States)

    Ng, Kar Yong; Awang, Norhashidah

    2018-01-06

    Frequent haze occurrences in Malaysia have made the management of PM10 (particulate matter with aerodynamic diameter less than 10 μm) pollution a critical task. This requires knowledge of the factors associated with PM10 variation and good forecasts of PM10 concentrations. Hence, this paper demonstrates the prediction of 1-day-ahead daily average PM10 concentrations based on predictor variables including meteorological parameters and gaseous pollutants. Three different models were built: a multiple linear regression (MLR) model with lagged predictor variables (MLR1), an MLR model with lagged predictor variables and lagged PM10 concentrations (MLR2), and a regression with time series error (RTSE) model. The findings revealed that humidity, temperature, wind speed, wind direction, carbon monoxide and ozone were the main factors explaining the PM10 variation in Peninsular Malaysia. Comparison among the three models showed that the MLR2 model was on a par with the RTSE model in terms of forecasting accuracy, while the MLR1 model was the worst.
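
    A hedged sketch of the MLR2 idea on synthetic data: lag the response and two meteorological predictors by one day, then fit ordinary least squares. The variable names and data-generating process below are invented, not the study's data.

    ```python
    # Hedged sketch: 1-day-ahead MLR with lagged predictors and lagged response.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 365
    df = pd.DataFrame({
        "pm10": 50 + rng.normal(0, 8, n).cumsum() * 0.1,  # synthetic daily PM10
        "temp": 28 + rng.normal(0, 2, n),
        "wind": 2 + rng.gamma(2, 1, n),
    })
    df["pm10_lag1"] = df["pm10"].shift(1)                 # yesterday's PM10
    df["temp_lag1"] = df["temp"].shift(1)
    df["wind_lag1"] = df["wind"].shift(1)
    df = df.dropna()

    X = sm.add_constant(df[["pm10_lag1", "temp_lag1", "wind_lag1"]])
    fit = sm.OLS(df["pm10"], X).fit()
    print(fit.summary())                                  # 1-day-ahead MLR
    ```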

  10. Ultracentrifuge separative power modeling with multivariate regression using covariance matrix

    International Nuclear Information System (INIS)

    Migliavacca, Elder

    2004-01-01

    In this work, the least-squares methodology with a covariance matrix is applied to fit a performance function for the separative power δU of an ultracentrifuge as a function of experimentally controlled variables. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an input covariance matrix for the experimental data. The process variables that significantly influence the δU values are chosen in order to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow rate F, cut θ and product line pressure Pp. After validating the goodness of fit of the model, a residual analysis is carried out to verify the assumptions of randomness and independence and, in particular, to check for residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves are drawn relating the separative power to the control variables F, θ and Pp in order to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
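
    The core of the method is least squares weighted by the inverse of a data covariance matrix. A minimal numpy sketch of that generalized-least-squares estimator follows; the design matrix, covariance structure and data are synthetic stand-ins, not the ultracentrifuge data.

    ```python
    # Hedged sketch: generalized least squares with a full data covariance matrix.
    import numpy as np

    rng = np.random.default_rng(11)
    X = np.column_stack([np.ones(40), rng.uniform(0, 1, 40)])   # design matrix
    beta_true = np.array([1.0, 2.5])
    cov = 0.04 * np.eye(40) + 0.01                        # correlated errors (assumed)
    y = X @ beta_true + rng.multivariate_normal(np.zeros(40), cov)

    W = np.linalg.inv(cov)                                # weight = inverse covariance
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # GLS estimator
    beta_cov = np.linalg.inv(X.T @ W @ X)                 # parameter covariance
    print(beta, np.sqrt(np.diag(beta_cov)))               # estimates and std. errors
    ```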

  11. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Science.gov (United States)

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used for the modeling are daily measurements with substantially continuous coverage, within a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable selection procedure to fit the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984

  12. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
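
    The matrix-algebra method is easy to reproduce outside SPSS: the difference between two fitted values is a linear contrast c'β̂ with variance c'Cov(β̂)c. A hedged Python sketch for a quadratic model follows; the data, comparison points and contrast vector are invented for illustration.

    ```python
    # Hedged sketch: SE and CI for the difference between two fitted values.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, 100)
    y = 1 + 0.5 * x - 0.03 * x**2 + rng.normal(0, 0.5, 100)
    X = sm.add_constant(np.column_stack([x, x**2]))       # quadratic model
    fit = sm.OLS(y, X).fit()

    x1, x2 = 2.0, 7.0                                     # compare fits at x=2 vs x=7
    c = np.array([0.0, x1 - x2, x1**2 - x2**2])           # contrast vector
    diff = c @ fit.params                                 # fitted(x1) - fitted(x2)
    se = np.sqrt(c @ fit.cov_params() @ c)                # SE of the difference
    print(diff, diff - 1.96 * se, diff + 1.96 * se)       # estimate and 95% CI
    ```

    In statsmodels the same comparison can also be obtained with fit.t_test(c), which reports the test statistic and p-value directly.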

  13. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in deregulated power markets. A successful electricity price prediction tool can help both power producers and consumers plan their bidding strategies. Motivated by the fact that the support vector regression (SVR) model, with its ε-insensitive loss function, tolerates residuals within the boundaries of the ε-tube, we propose a hybrid model, called SVRARIMA, that combines SVR and auto-regressive integrated moving average (ARIMA) models to take advantage of their unique strengths in nonlinear and linear modeling, respectively. A nonlinear analysis of the time series indicates the suitability of nonlinear modeling, and the SVR is applied to capture the nonlinear patterns. ARIMA models have been successfully applied to estimating the remaining residual series. The experimental results demonstrate that the proposed model outperforms existing neural-network approaches, traditional ARIMA models and other hybrid models in terms of root mean square error and mean absolute percentage error.
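
    A hedged sketch of the hybrid scheme: fit SVR to the price series, model the SVR residuals with ARIMA, and add the two forecasts. The kernel, hyperparameters, ARIMA order and synthetic series below are arbitrary choices, not the paper's configuration.

    ```python
    # Hedged sketch: SVR for the nonlinear pattern, ARIMA for the residuals.
    import numpy as np
    from sklearn.svm import SVR
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    t = np.arange(300, dtype=float)
    price = 30 + 10 * np.tanh((t - 150) / 40) + rng.normal(0, 1, t.size)

    svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(t[:, None], price)
    resid = price - svr.predict(t[:, None])               # nonlinear part removed

    arima = ARIMA(resid, order=(1, 0, 1)).fit()           # linear residual model
    forecast = svr.predict(np.array([[300.0]])) + arima.forecast(1)
    print(forecast)                                       # combined one-step forecast
    ```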

  14. Longitudinal beta regression models for analyzing health-related quality of life scores over time

    Directory of Open Access Journals (Sweden)

    Hunger Matthias

    2012-09-01

    Full Text Available Abstract Background Health-related quality of life (HRQL) has become an increasingly important outcome parameter in clinical trials and epidemiological research. HRQL scores are typically bounded at both ends of the scale and often highly skewed. Several regression techniques have been proposed to model such data in cross-sectional studies; however, methods applicable in longitudinal research are less well researched. This study examined the use of beta regression models for analyzing longitudinal HRQL data using two empirical examples with distributional features typically encountered in practice. Methods We used SF-6D utility data from a German older age cohort study and stroke-specific HRQL data from a randomized controlled trial. We described the conceptual differences between mixed and marginal beta regression models and compared both models to the commonly used linear mixed model in terms of overall fit and predictive accuracy. Results At any measurement time, the beta distribution fitted the SF-6D utility data and stroke-specific HRQL data better than the normal distribution. The mixed beta model showed better likelihood-based fit statistics than the linear mixed model and respected the boundedness of the outcome variable. However, it tended to underestimate the true mean at the upper part of the distribution. Adjusted group means from the marginal beta model and the linear mixed model were nearly identical, but differences could be observed with respect to standard errors. Conclusions Understanding the conceptual differences between mixed and marginal beta regression models is important for their proper use in the analysis of longitudinal HRQL data. Beta regression fits the typical distribution of HRQL data better than linear mixed models; however, if the focus is on estimating group mean scores rather than making individual predictions, the two methods might not differ substantially.

  15. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals that show if a customer continues to utilise a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been just considered together with the concepts of service quality and customer satisfaction. Due to the characteristics of the ways for evaluating passengers’ behavioural intentions, service quality and customer satisfaction, we retain that this kind of issue could be analysed also by applying ordered regression models. This work aims to propose just an ordered probit model for analysing service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted in order to collect the opinions of the passengers about the existing transit service, and to have a measure of the aspects that can influence the intentions of the users to continue using the transit service in the future. (Author)
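
    An ordered probit of this kind can be sketched with statsmodels' OrderedModel. The Likert-style outcome, latent-variable data generation and single quality predictor below are synthetic assumptions that loosely mirror the study design.

    ```python
    # Hedged sketch: ordered probit for a behavioural-intention outcome.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(8)
    n = 500
    quality = rng.normal(0, 1, n)                         # perceived service quality
    latent = 0.8 * quality + rng.normal(0, 1, n)          # latent intention
    intention = pd.Series(pd.cut(latent, [-np.inf, -1, 0, 1, np.inf],
                                 labels=["never", "unlikely", "likely", "surely"]))

    model = OrderedModel(intention, quality[:, None], distr="probit")
    res = model.fit(method="bfgs", disp=False)
    print(res.summary())                                  # slope and cut points
    ```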

  16. Heterogeneous Breast Phantom Development for Microwave Imaging Using Regression Models

    Directory of Open Access Journals (Sweden)

    Camerin Hahn

    2012-01-01

    Full Text Available As new algorithms for microwave imaging emerge, it is important to have standard, accurate benchmarking tests. Currently, most researchers use homogeneous phantoms for testing new algorithms. These simple structures lack the heterogeneity of the dielectric properties of human tissue and are inadequate for testing these algorithms for medical imaging. To adequately test breast microwave imaging algorithms, the phantom has to resemble different breast tissues physically and in terms of dielectric properties. We propose a systematic approach to designing phantoms that not only have dielectric properties close to breast tissues but also can be easily shaped into realistic physical models. The approach is based on regression models that match the phantom's dielectric properties to the breast tissue dielectric properties reported by Lazebnik et al. (2007). However, the methodology proposed here can be used to create phantoms for any tissue type as long as ex vivo, in vitro, or in vivo tissue dielectric properties are measured and available. Therefore, using this method, accurate benchmarking phantoms for testing emerging microwave imaging algorithms can be developed.

  17. A revised multi-Fickian moisture transport model to describe non-Fickian effects in wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund; Damkilde, Lars; Svensson, Staffan

    2007-01-01

    This paper presents a study and a refinement of the sorption rate model in a so-called multi-Fickian or multi-phase model. This type of model describes the complex moisture transport system in wood, which consists of separate water vapor and bound-water diffusion interacting through sorption...... sorption allow a simplification of the system to be modeled by a single Fickian diffusion equation. To determine the response of the system, the sorption rate model is essential. Here the function modeling the moisture-dependent adsorption rate is investigated based on existing experiments on thin wood...

  18. application of multilinear regression analysis in modeling of soil

    African Journals Online (AJOL)

    Accordingly [1, 3] in their work, they applied linear regression ... (MLRA) is a statistical technique that uses several explanatory ... order to check this, they adopted bivariate correlation analysis .... groups, namely A-1 through A-7, based on their relative expected ..... Multivariate Regression in Gorgan Province North of Iran” ...

  19. A model based on soil structural aspects describing the fate of genetically modified bacteria in soil

    NARCIS (Netherlands)

    Hoeven, van der N.; Elsas, van J.D.; Heijnen, C.E.

    1996-01-01

    A computer simulation model was developed which describes growth and competition of bacteria in the soil environment. In the model, soil was assumed to contain millions of pores of a few different size classes. An introduced bacterial strain, e.g. a genetically modified micro-organism (GEMMO), was

  20. CLIC Detector Concepts as described in the CDR: Differences between the GEANT4 and Engineering Models

    CERN Document Server

    Elsener, K; Schlatter, D; Siegrist, N

    2011-01-01

    The CLIC_ILD and CLIC_SiD detector concepts as used for the CDR Vol. 2 in 2011 exist both in GEANT4 simulation models and in engineering layout drawings. At this early stage of a conceptual design, there are inevitably differences between these models, which are described in this note.

  1. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
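
    The recommended structure (an inner cross-validation loop for tuning, an outer one for assessment, repeated over different splits) can be sketched with scikit-learn. The estimator, grid and repeat counts below are illustrative choices, not the paper's exact protocol.

    ```python
    # Hedged sketch: repeated nested cross-validation for model assessment.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

    X, y = make_regression(n_samples=200, n_features=30, noise=10, random_state=0)
    param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0]}

    scores = []
    for repeat in range(5):                               # repeat with new splits
        inner = KFold(5, shuffle=True, random_state=repeat)
        outer = KFold(5, shuffle=True, random_state=100 + repeat)
        search = GridSearchCV(Ridge(), param_grid, cv=inner)    # tuning loop
        scores.append(cross_val_score(search, X, y, cv=outer))  # assessment loop
    print(np.mean(scores), np.std(scores))                # spread across repeats
    ```

    The spread across repeats is exactly the split-to-split variation the paper argues must be reported alongside the point estimate of prediction performance.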

  2. Wheat flour dough Alveograph characteristics predicted by Mixolab regression models.

    Science.gov (United States)

    Codină, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel; Popa, Ciprian N; Tamba-Berehoiu, Radiana

    2012-02-01

    In Romania, the Alveograph is the most widely used device to evaluate the rheological properties of wheat flour dough, but lately the Mixolab device has begun to play an important role in the breadmaking industry. These two instruments are based on different principles, but correlations can be found between the parameters determined by the Mixolab and the rheological properties of wheat dough measured with the Alveograph. Statistical analysis of 80 wheat flour samples using the backward stepwise multiple regression method showed that Mixolab values using the ‘Chopin S’ protocol (40 samples) and ‘Chopin+’ protocol (40 samples) can be used to build predictive models for estimating the rheological properties of wheat dough: baking strength (W), dough tenacity (P) and extensibility (L). The correlation analysis confirmed significant findings (P < 0.05): R²(adjusted) > 0.70 for P, R²(adjusted) > 0.70 for W and R²(adjusted) > 0.38 for L, at a 95% confidence interval. Copyright © 2011 Society of Chemical Industry.

  3. Application of regression model on stream water quality parameters

    International Nuclear Information System (INIS)

    Suleman, M.; Maqbool, F.; Malik, A.H.; Bhatti, Z.A.

    2012-01-01

    Statistical analysis was conducted to evaluate the effect of solid waste leachate from the open solid waste dumping site of Salhad on stream water quality. Five sites were selected along the stream: two prior to the mixing of leachate with the surface water, one of the leachate itself, and two affected by leachate. Samples were analyzed for pH, water temperature, electrical conductivity (EC), total dissolved solids (TDS), biological oxygen demand (BOD), chemical oxygen demand (COD), dissolved oxygen (DO) and total bacterial load (TBL). In this study, the correlation coefficient r between pairs of water quality parameters at the various sites was calculated using the Pearson model, and the average of each pairwise correlation was also computed. TDS and EC, as well as pH and BOD, showed significantly positive r values, while temperature and TDS, temperature and EC, DO and TBL, and DO and COD showed negative r values. Single-factor ANOVA at the 5% level of significance showed that EC, TDS, TBL and COD differed significantly among the sites. These two statistical approaches show that TDS and EC are strongly positively correlated, because the ions from the dissolved solids in water influence the ability of that water to conduct an electrical current. These two parameters vary significantly among the five sites, which was further confirmed by linear regression. (author)

  4. Numerical model describing the heat transfer between combustion products and ventilation-system duct walls

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    A package of physical models simulating the heat transfer processes occurring between combustion gases and ducts in ventilation systems is described. The purpose of the numerical model is to predict how the combustion gas in a system heats up or cools down as it flows through the ducts of a ventilation system under fire conditions. The model treats a duct with combustion gases in forced convection on the inside and stagnant ambient air on the outside. The model is composed of five submodels of heat transfer processes along with a numerical solution procedure to evaluate them. Each submodel is evaluated independently using standard correlations based on experimental data. The details of the physical assumptions, simplifications, and ranges of applicability of the correlations are described. A typical application of this model to a full-scale fire test is discussed, and model predictions are compared with selected experimental data.

  5. A standard protocol for describing individual-based and agent-based models

    Science.gov (United States)

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.

  6. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  7. Forecasting Model for IPTV Service in Korea Using Bootstrap Ridge Regression Analysis

    Science.gov (United States)

    Lee, Byoung Chul; Kee, Seho; Kim, Jae Bum; Kim, Yun Bae

    Telecom firms in Korea are taking a new step to prepare for the next generation of convergence services, IPTV. In this paper we describe our analysis of an effective method for demand forecasting for IPTV broadcasting. We tested three types of scenarios based on aspects of the potential IPTV market and compared the results. The forecasting method used in this paper is a multi-generation substitution model with bootstrap ridge regression analysis.

  8. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox for macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor in guaranteeing the quality of the models, analyses, results and interpretations that can be drawn at this level.

  9. An algorithm to detect and communicate the differences in computational models describing biological systems.

    Science.gov (United States)

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.

  10. A consilience model to describe N2O production during biological N removal

    DEFF Research Database (Denmark)

    Domingo Felez, Carlos; Smets, Barth F.

    2016-01-01

    Nitrous oxide (N2O), a potent greenhouse gas, is produced during biological nitrogen conversion in wastewater treatment operations. Complex mechanisms underlie N2O production by autotrophic and heterotrophic organisms, which continue to be unravelled. Mathematical models that describe nitric oxide...... (NO) and N2O dynamics have been proposed. Here, a first comprehensive model that considers all relevant NO and N2O production and consumption mechanisms is proposed. The model describes autotrophic NO production by ammonia oxidizing bacteria associated with ammonia oxidation and with nitrite reduction......, followed by NO reduction to N2O. It also considers NO and N2O as intermediates in heterotrophic denitrification in a 4-step model. Three biological NO and N2O production pathways are accounted for, improving the capabilities of existing models while not increasing their complexity. Abiotic contributions...

  11. MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION

    Science.gov (United States)

    Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...

  12. Methods of Detecting Outliers in A Regression Analysis Model ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    Jun 1, 2013 ... especially true in observational studies .... Simple linear regression and multiple ... The simple linear ..... Grubbs,F.E (1950): Sample Criteria for Testing Outlying observations: Annals of ... In experimental design, the Relative.

  13. 231 Using Multiple Regression Analysis in Modelling the Role of ...

    African Journals Online (AJOL)

    of Internal Revenue, Tourism Bureau and hotel records. The multiple regression .... additional guest facilities such as restaurant, a swimming pool or child care and social function ... and provide good quality service to the public. Conclusion.

  14. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects and global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.

  15. EMPIRICAL MODELS FOR DESCRIBING FIRE BEHAVIOR IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Full Text Available Modeling forest fire behavior is an important task that can be used to assist in fire prevention and suppression operations. However, according to previous studies, the common fire behavior models used worldwide do not correctly estimate fire behavior in Brazilian commercial hybrid eucalypt plantations. Therefore, this study aims to build new empirical models to predict the fire rate of spread, flame length and fuel consumption for such vegetation. To meet these objectives, 105 laboratory experimental burns were performed, in which the main fuel characteristics and weather variables that influence fire behavior were controlled and/or measured in each experiment. Dependent and independent variables were fitted through multiple regression analysis. The proposed fire rate of spread model is based on the wind speed, fuel bed bulk density and 1-h dead fuel moisture content (r2 = 0.86); the flame length model is based on the fuel bed depth, 1-h dead fuel moisture content and wind speed (r2 = 0.72); the proposed fuel consumption model has the 1-h dead fuel moisture, fuel bed bulk density and 1-h dead dry fuel load as independent variables (r2 = 0.80). These models were used to develop a new fire behavior software, the “Eucalyptus Fire Safety System”.

  16. Direct modeling of regression effects for transition probabilities in the progressive illness-death model

    DEFF Research Database (Denmark)

    Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo

    2017-01-01

    In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score...

  17. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcome. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance and structural zero measurements on the resistance outcome as well as on the exposure side are challenges for the epidemiological model building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model building process by comparing several competing models for count data (negative binomial model, quasi-Poisson model, zero-inflated model, and hurdle model) and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential for evaluating which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence of, and factors associated with, the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate.
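
    A hedged sketch of the kind of comparison the paper performs, on synthetic counts with injected excess zeros: fit several count-data regressions and compare their AIC. The hurdle and quasi-Poisson models are omitted here for brevity, and the data-generating process is an assumption.

    ```python
    # Hedged sketch: comparing count-data regression models by AIC.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(9)
    n = 200
    x = rng.normal(0, 1, n)
    X = sm.add_constant(x)
    y = rng.poisson(np.exp(0.3 + 0.5 * x))
    y[rng.random(n) < 0.2] = 0                            # inject excess zeros

    fits = {
        "poisson": sm.Poisson(y, X).fit(disp=False),
        "negbin": sm.NegativeBinomial(y, X).fit(disp=False),
        "zip": ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False),
    }
    for name, fit in fits.items():
        print(name, fit.aic)                              # lower AIC preferred
    ```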

  18. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  19. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs, accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT. We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT. TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related

  20. A logistic regression model for Ghana National Health Insurance claims

    Directory of Open Access Journals (Sweden)

    Samuel Antwi

    2013-07-01

    Full Text Available In August 2003, the Ghanaian Government made history by implementing the first National Health Insurance System (NHIS) in Sub-Saharan Africa. Within three years, over half of the country’s population had voluntarily enrolled in the National Health Insurance Scheme. This study had three objectives: 1) to estimate the risk factors that influence Ghana national health insurance claims; 2) to estimate the magnitude of each of the risk factors in relation to the Ghana national health insurance claims. In this work, data were collected from the policyholders of the Ghana National Health Insurance Scheme with the help of the National Health Insurance database and the patients’ attendance register of the Koforidua Regional Hospital, from 1st January to 31st December 2011. Quantitative analysis was done using generalized linear regression (GLR) models. The results indicate that risk factors such as sex, age, marital status, distance and length of stay at the hospital were important predictors of health insurance claims. However, it was found that the risk factors health status, billed charges and income level are not good predictors of national health insurance claims. The outcome of the study shows that sex, age, marital status, distance and length of stay at the hospital are statistically significant in the determination of Ghana national health insurance premiums, since they considerably influence claims. We recommend, among other things, that the National Health Insurance Authority facilitate the institutionalization of the collection of appropriate data on a continuous basis to help in the determination of future premiums.

  1. Describing the processes of propagation and eliminating wildfires with the use of agent models

    Directory of Open Access Journals (Sweden)

    G. A. Dorrer

    2017-10-01

    Full Text Available A new method for describing the processes of propagation and elimination of wildfires on the basis of agent-based modeling is proposed. The main structural units of such models are classes of active objects (agents). The agent approach, combined with geographic information systems (GIS), can effectively describe the interaction of a large number of participants in the process of combating wildfires: the spreading fire, fire crews, machinery, aerial means and others. In this paper we propose a multi-agent model to predict the spread of the wildfire edge and to simulate the direct method of extinguishing a ground fire with non-mechanized crews. The model consists of two classes of agents, designated A and B. The burning fire edge is represented as a chain of A-agents, each of which simulates the burning of an elementary portion of vegetation fuel. Fire front movement (the motion of the A-agents) is described by the Hamilton-Jacobi equation using the indicatrices of the normal rate of front spread (figurotrises). The configuration of the front is calculated using a moving-grid algorithm. Agents of the other type, B-agents, describe the extinguishing process; they move toward the A-agents and act on them, reducing the combustion intensity to zero. The modeling system is presented as a two-level coloured nested Petri net, which describes the semantics of the agents’ interaction. This model is implemented as a GIS-oriented software system that can be useful both in firefighting management and in training staff in tactics for fighting wildfires. Some examples of modeling decision-making for ground fire extinguishing are presented.

  2. A model to describe potential effects of chemotherapy on critical radiobiological treatments

    International Nuclear Information System (INIS)

    Rodríguez-Pérez, D.; Desco, M.M.; Antoranz, J.C.

    2016-01-01

    Although chemo- and radiotherapy can annihilate tumors on their own, they are also used in coadjuvancy: improving the local effects of radiotherapy by using chemotherapy as a radiosensitizer. The effects of radiotherapy are well described by current radiobiological models. The goal of this work is to extend a discrete radiotherapy model, previously used to describe high-dose radiation response as well as the unusual radio-responses of some types of tumors (e.g. prostate cancer), to obtain a model of chemo+radiotherapy that can describe how the outcome of their combination is a more efficient removal of the tumor. Our hypothesis is that, although the two treatments have different mechanisms, both affect similar key points of cell metabolism and regulation that lead to cellular death. Hence, we consider a discrete model in which chemotherapy may affect a fraction of the same targets destroyed by radiotherapy. Although radiotherapy reaches all cells equally, chemotherapy diffuses through a tumor, attaining lower concentrations at its center and higher ones at its surface. With our simulations we study the enhanced effect of the combined treatment and how it depends on the critical tissue parameters (the parameters of the non-extensive radiobiological model), the number of “targets” aimed at by chemotherapy, and the concentration and diffusion rate of the drug inside the tumor. The results show that an equivalent chemo-radio-dose can be computed, allowing the prediction of the lower radiation dose that causes the same effect as a radiotherapy-only treatment. (paper)

  3. A model to describe potential effects of chemotherapy on critical radiobiological treatments

    Science.gov (United States)

    Rodríguez-Pérez, D.; Desco, M. M.; Antoranz, J. C.

    2016-08-01

    Although chemo- and radiotherapy can annihilate tumors on their own, they are also used in coadjuvancy: improving the local effects of radiotherapy by using chemotherapy as a radiosensitizer. The effects of radiotherapy are well described by current radiobiological models. The goal of this work is to extend a discrete radiotherapy model, previously used to describe high-dose radiation response as well as the unusual radio-responses of some types of tumors (e.g. prostate cancer), to obtain a model of chemo+radiotherapy that can describe how the outcome of their combination is a more efficient removal of the tumor. Our hypothesis is that, although the two treatments have different mechanisms, both affect similar key points of cell metabolism and regulation that lead to cellular death. Hence, we consider a discrete model in which chemotherapy may affect a fraction of the same targets destroyed by radiotherapy. Although radiotherapy reaches all cells equally, chemotherapy diffuses through a tumor, attaining lower concentrations at its center and higher ones at its surface. With our simulations we study the enhanced effect of the combined treatment and how it depends on the critical tissue parameters (the parameters of the non-extensive radiobiological model), the number of “targets” aimed at by chemotherapy, and the concentration and diffusion rate of the drug inside the tumor. The results show that an equivalent chemo-radio-dose can be computed, allowing the prediction of the lower radiation dose that causes the same effect as a radiotherapy-only treatment.

  4. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  5. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) methods evaluate instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing correlation-coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression
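
    A hedged sketch of the basic mechanism: reconstruct a query observation as a kernel-weighted average of historical observations, with channel weights derived from correlation coefficients in the spirit of the paper (the exact weighting scheme, bandwidth and data below are assumptions).

    ```python
    # Hedged sketch: auto-associative kernel regression with weighted distances.
    import numpy as np

    def aakr_predict(X_train, query, h=1.0, weights=None):
        """Kernel-weighted reconstruction of `query` from historical data."""
        if weights is None:
            weights = np.ones(X_train.shape[1])
        d2 = (((X_train - query) ** 2) * weights).sum(axis=1)  # weighted distance
        k = np.exp(-d2 / (2 * h ** 2))                         # Gaussian kernel
        return k @ X_train / k.sum()                           # reconstruction

    rng = np.random.default_rng(6)
    base = rng.normal(0, 1, (500, 1))
    X = np.hstack([base + rng.normal(0, 0.05, (500, 1)) for _ in range(4)])

    corr = np.abs(np.corrcoef(X, rowvar=False)).mean(axis=0)   # channel weights
    query = X[0] + np.array([0.5, 0, 0, 0])                    # drifted sensor 1
    print(aakr_predict(X, query, weights=corr))                # estimate of truth
    ```

    The gap between a channel's measurement and its reconstruction is the drift indicator an OLM system would track over time.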

  6. A two component model describing nucleon structure functions in the low-x region

    Energy Technology Data Exchange (ETDEWEB)

    Bugaev, E.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 7a, 60th October Anniversary prospect, Moscow 117312 (Russian Federation); Mangazeev, B.V. [Irkutsk State University, 1, Karl Marx Street, Irkutsk 664003 (Russian Federation)

    2009-12-15

    A two component model describing the electromagnetic nucleon structure functions in the low-x region, based on generalized vector dominance and color dipole approaches, is briefly described. The model operates with the mesons of the rho family, having a mass spectrum of the form m_n^2 = m_rho^2 (1 + 2n), and takes into account the nondiagonal transitions in meson-nucleon scattering. Special cut-off factors are introduced in the model to exclude the gamma-qq̄-V transitions in the case of narrow qq̄ pairs. For the color dipole part of the model the well known FKS parameterization is used.

  7. A STRUCTURAL MODEL DESCRIBE CHINESE TRADESMEN ATTITUDES TOWARDS GREEK STUDENTS CONSUMPTION BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Sofia D. ANASTASIADOU

    2012-12-01

    Full Text Available This study evaluates the opinions of 43 Chinese tradesmen describing the main factors that influence Greek consumers’ behavior. A structural model was constructed to represent the relationship between the consumer components. The model was tested for its convergent and discriminant validity, as well as for its reliability and construct reliability. The findings from this study may be used by Chinese tradesmen to develop their marketing campaigns and customer relations.

  8. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  9. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    Science.gov (United States)

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
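
    The paper's central idea of tracking the relative size of an active and an inactive phase with linear ODEs can be caricatured in a few lines. The two-state system and rate constants below are inventions for illustration, not the paper's coupled ODE-PDE model.

    ```python
    # Hedged sketch: two-state (active/inactive) phase dynamics as linear ODEs.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_on, k_off = 2.0, 1.0                                # activation/deactivation rates

    def rhs(t, state):
        active, inactive = state
        return [k_on * inactive - k_off * active,         # growth of active phase
                k_off * active - k_on * inactive]         # growth of inactive phase

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], dense_output=True)
    print(sol.y[0, -1])                                   # ~ k_on / (k_on + k_off)
    ```

    In the full model the active phase would additionally carry PDE dynamics for the coupled motors, which is what distinguishes it from a plain two-state kinetic scheme.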

  11. Robustness of a cross contamination model describing transfer of pathogens during grinding of meat

    DEFF Research Database (Denmark)

    Møller, Cleide Oliveira de Almeida; Sant’Ana, A. S.; Hansen, Solvej Katrine Holm

    2016-01-01

    This study aimed to evaluate a cross contamination model for its capability of describing transfer of Salmonella spp. and L. monocytogenes during grinding of varying sizes and numbers of pieces of meats in two grinder systems. Data from 19 trials were collected. Three evaluation approaches were applied ... grinding was influenced by the sharpness of the grinder knife, the specific grinder and the grinding temperature.

  13. Development of a Pharmacokinetic Model to Describe the Complex Pharmacokinetics of Pazopanib in Cancer Patients

    NARCIS (Netherlands)

    Yu, H.; Erp, N. van; Bins, S.; Mathijssen, R.H.; Schellens, J.H.; Beijnen, J.H.; Steeghs, N.; Huitema, A.D.

    2017-01-01

    BACKGROUND AND OBJECTIVE: Pazopanib is a multi-targeted anticancer tyrosine kinase inhibitor. This study was conducted to develop a population pharmacokinetic (popPK) model describing the complex pharmacokinetics of pazopanib in cancer patients. METHODS: Pharmacokinetic data were available from 96

  14. Structural Models Describing Placebo Treatment Effects in Schizophrenia and Other Neuropsychiatric Disorders

    NARCIS (Netherlands)

    Reddy, Venkatesh Pilla; Kozielska, Magdalena; Johnson, Martin; Vermeulen, An; de Greef, Rik; Liu, Jing; Groothuis, Geny M. M.; Danhof, Meindert; Proost, Johannes H.

    2011-01-01

    Large variation in placebo response within and among clinical trials can substantially affect conclusions about the efficacy of new medications in psychiatry. Developing a robust placebo model to describe the placebo response is important to facilitate quantification of drug effects, and eventually

  15. Early Childhood Teacher Preparation: A Tale of Authors and Multimedia, A Model of Technology Integration Described.

    Science.gov (United States)

    Wetzel, Keith; McLean, S. V.

    1997-01-01

    Describes collaboration of two teacher educators, one in early childhood language arts and one in computers in education. Discusses advantages and disadvantages and extensions of this model, including how a college-wide survey revealed that students in teamed courses are better prepared to teach and learn with technology. (DR)

  16. Comparison of six different models describing survival of mammalian cells after irradiation

    International Nuclear Information System (INIS)

    Sontag, W.

    1990-01-01

    Six different cell-survival models have been compared. All models are based on the similar assumption that irradiated cells are able to exist in one of three states: S_A is the state of a totally repaired cell, in state S_C the cell contains lethal lesions, and in state S_B the cell contains potentially lethal lesions, i.e. lesions which can either be repaired or converted into lethal lesions. The differences between the six models lie in the different mathematical relationships between the three states. To test the six models, six different sets of experimental data were used which describe cell survival at different repair times after irradiation with sparsely ionizing radiation. In order to compare the models, a goodness-of-fit function was used. The differences between the six models were tested by use of the nonparametric Mann-Whitney two-sample test. Based on the 95% confidence limit, this led to a separation into three groups. (orig.)

  17. A Bayesian Nonparametric Causal Model for Regression Discontinuity Designs

    Science.gov (United States)

    Karabatsos, George; Walker, Stephen G.

    2013-01-01

    The regression discontinuity (RD) design (Thistlewaite & Campbell, 1960; Cook, 2008) provides a framework to identify and estimate causal effects from a non-randomized design. Each subject of a RD design is assigned to the treatment (versus assignment to a non-treatment) whenever her/his observed value of the assignment variable equals or…

  18. Parametric vs. Nonparametric Regression Modelling within Clinical Decision Support

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Zvárová, Jana

    2017-01-01

    Roč. 5, č. 1 (2017), s. 21-27 ISSN 1805-8698 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : decision support systems * decision rules * statistical analysis * nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  19. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
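
    The workflow the abstract walks through (fitting a logistic regression, reporting odds ratios as effect measures, and checking discrimination with a ROC curve) can be sketched as follows; the synthetic "financial ratio" predictors are an illustrative assumption, not the study's data.

```python
# Minimal sketch of a logistic-regression growth-prediction workflow
# on synthetic stand-ins for financial ratios.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1471                                    # sample size reported in the abstract
X = rng.normal(size=(n, 3))                 # stand-ins for liquidity, leverage, profitability
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = high-growth company

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

odds_ratios = np.exp(model.coef_[0])        # effect measures discussed in the paper
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("odds ratios:", odds_ratios.round(2), "test AUC:", round(auc, 3))
```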

  20. New Model to describe the interaction of slow neutrons with solid deuterium

    International Nuclear Information System (INIS)

    Granada, J.R

    2009-01-01

    A new scattering kernel to describe the interaction of slow neutrons with solid deuterium was developed. The main characteristics of that system are contained in the formalism, including the lattice's density of states, the Young-Koppel quantum treatment of the rotations, and the internal molecular vibrations. The elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects. The results from the new model are compared with the best available experimental data, showing very good agreement.

  1. Regularization of a massless dirac model to describe anomalous electromagnetic response of Weyl semimetals

    International Nuclear Information System (INIS)

    Takane, Yoshitake

    2016-01-01

    An unbounded massless Dirac model with two nondegenerate Dirac cones is the simplest model for Weyl semimetals, which show the anomalous electromagnetic response of the chiral magnetic effect (CME) and the anomalous Hall effect (AHE). However, if this model is naively used to analyze the electromagnetic response within a linear response theory, it gives a result apparently inconsistent with the persuasive prediction based on a lattice model. We show that this serious difficulty is related to the breaking of current conservation in the Dirac model due to a quantum anomaly and can be removed if the current and charge operators are redefined to include the contribution from the anomaly. We demonstrate that the CME as well as the AHE can be properly described using the newly defined operators, and clarify that the CME is determined by the competition between the contribution from the anomaly and that from low-energy electrons. (author)

  2. Using concept maps to describe undergraduate students’ mental model in microbiology course

    Science.gov (United States)

    Hamdiyati, Y.; Sudargo, F.; Redjeki, S.; Fitriani, A.

    2018-05-01

    The purpose of this research was to describe students' mental models in a mental-model-based microbiology course, using concept maps as the assessment tool. Respondents were 5th-semester undergraduate students of Biology Education at Universitas Pendidikan Indonesia. The mental modelling instrument used was concept maps. Data were taken on the Bacteria sub-subject. A concept map rubric was subsequently developed with a maximum score of 4. Quantitative data were converted into qualitative data to determine the mental model level, namely: emergent = score 1, transitional = score 2, close to extended = score 3, and extended = score 4. The results showed that the mental model level on the Bacteria sub-subject before the implementation of the mental-model-based microbiology course was at the transitional level. After implementation of the course, mental models were at the transitional, close to extended, and extended levels. This indicates an increase in the level of students' mental models after the implementation of the mental-model-based microbiology course using concept maps as the assessment tool.

  3. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

    Full Text Available This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m3/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using the LR and MARS models and tested using a receiver operating characteristic curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of GPMs created using the MARS and LR models was 0.867 and 0.838, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.

  4. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  5. Carbon 13 nuclear magnetic resonance chemical shifts empiric calculations of polymers by multi linear regression and molecular modeling

    International Nuclear Information System (INIS)

    Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.

    1996-01-01

    This work deals with empirical calculation of carbon-13 nuclear magnetic resonance chemical shifts of polymers by multiple linear regression and molecular modeling. Multiple linear regression is one way to obtain an equation able to describe the behaviour of the chemical shift for the molecules in the data base (rigid molecules with carbons). The methodology consists of defining structural descriptor parameters that can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation, which can be extrapolated to molecules that present some resemblance to those of the data base. (O.L.). 20 refs., 4 figs., 1 tab
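
    A minimal sketch of the regression step: hypothetical structural descriptor counts are related to known carbon-13 shifts by multiple linear regression, and the fitted equation is then extrapolated to a similar carbon. All values are made up for illustration.

```python
# Sketch: multiple linear regression relating structural descriptors to
# known carbon-13 chemical shifts (all values illustrative).
import numpy as np

# Rows = carbons in the data base; columns = hypothetical descriptors
# (e.g. counts of alpha/beta/gamma substituents).
D = np.array([[1, 0, 2],
              [2, 1, 0],
              [0, 2, 1],
              [1, 1, 1],
              [3, 0, 0]], dtype=float)
shifts = np.array([23.4, 31.2, 18.7, 27.5, 35.1])    # ppm, made up

A = np.column_stack([np.ones(len(D)), D])            # add an intercept column
coef, *_ = np.linalg.lstsq(A, shifts, rcond=None)    # least-squares fit

new_carbon = np.array([1.0, 2.0, 0.0, 1.0])          # [1, descriptors...]
print("predicted shift (ppm):", round(new_carbon @ coef, 1))
```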

  6. Describing the clinical reasoning process: application of a model of enablement to a pediatric case.

    Science.gov (United States)

    Furze, Jennifer; Nelson, Kelly; O'Hare, Megan; Ortner, Amanda; Threlkeld, A Joseph; Jensen, Gail M

    2013-04-01

    Clinical reasoning is a core tenet of physical therapy practice, leading to optimal patient care. The purpose of this case was to describe the outcomes, subjective experience, and reflective clinical reasoning process for a child with cerebral palsy using the International Classification of Functioning, Disability, and Health (ICF) model. The ICF framework was applied to a 9-year-old boy with spastic triplegic cerebral palsy to capture the interwoven factors present in this case. Interventions in the pool occurred twice weekly for 1 h over a 10-week period. Immediately post-intervention and 4 months post-intervention, the child had made functional and meaningful gains. The family unit also developed an enjoyment of exercising together. Each individual family member described psychological, emotional, or physical health improvements. Reflection using the ICF model as a framework to discuss clinical reasoning can highlight important factors contributing to effective patient management.

  7. Semiparametric nonlinear quantile regression model for financial returns

    Czech Academy of Sciences Publication Activity Database

    Avdulaj, Krenar; Baruník, Jozef

    2017-01-01

    Roč. 21, č. 1 (2017), s. 81-97 ISSN 1081-1826 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : copula quantile regression * realized volatility * value-at-risk Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 0.649, year: 2016 http://library.utia.cas.cz/separaty/2017/E/avdulaj-0472346.pdf

  8. Development of zircaloy deformation model to describe the zircaloy-4 cladding tube during accidents

    International Nuclear Information System (INIS)

    Raff, S.

    1978-01-01

    The development of a high-temperature deformation model for Zircaloy-4 cans is primarily based on numerous well-parametrized tensile tests to capture the material behaviour, including its statistical variance. It is shown that plastic deformation may be described by a power creep law, the coefficients of which show a strong dependence on temperature in the relevant temperature region. These coefficients have been determined. A model based on these coefficients has been established which, apart from best-estimate deformation, gives upper and lower bounds of the possible deformation. The model, derived from isothermal uniaxial tests, is being verified against isothermal and transient tube burst tests. The influence of preoxidation and increased oxygen concentration during deformation is modeled on the basis of the pseudobinary Zircaloy-oxygen phase diagram. (author)
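
    A power creep law of the kind referred to here is commonly written as a Norton-type rate equation with Arrhenius temperature dependence; the sketch below uses that generic form with illustrative coefficients, not the report's fitted values.

```python
# Generic Norton-type power creep law (illustrative coefficients only).
import math

def creep_rate(stress_mpa, temp_k, A=2.0e-5, n=5.0, Q=250e3, R=8.314):
    """Steady-state creep strain rate (1/s): A * sigma^n * exp(-Q/RT)."""
    return A * stress_mpa ** n * math.exp(-Q / (R * temp_k))

# The strong temperature dependence noted in the abstract:
for T in (900.0, 1000.0, 1100.0):
    print(T, "K:", creep_rate(40.0, T))
```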

  9. Calibration and validation of a model describing complete autotrophic nitrogen removal in a granular SBR system

    DEFF Research Database (Denmark)

    Vangsgaard, Anna Katrine; Mutlu, Ayten Gizem; Gernaey, Krist

    2013-01-01

    BACKGROUND: A validated model describing the nitritation-anammox process in a granular sequencing batch reactor (SBR) system is an important tool for: a) design of future experiments and b) prediction of process performance during optimization, while applying process control, or during system scale-up. RESULTS: A model was calibrated using a step-wise procedure customized for the specific needs of the system. The important steps in the procedure were initialization, steady-state and dynamic calibration, and validation. A fast and effective initialization approach was developed to approximate pseudo ... screening of the parameter space proposed by Sin et al. (2008) - to find the best fit of the model to dynamic data. Finally, the calibrated model was validated with an independent data set. CONCLUSION: The presented calibration procedure is the first customized procedure for this type of system

  10. An empirical model describing the postnatal growth of organs in ICRP reference humans: Pt. 1

    International Nuclear Information System (INIS)

    Walker, J.T.

    1991-01-01

    An empirical model is presented for describing the postnatal mass growth of lungs in ICRP reference humans. A combined exponential and logistic function containing six parameters is fitted to ICRP 23 lung data using a weighted non-linear least squares technique. The results indicate that the model delineates the data well. Further analysis shows that reference male lungs attain a higher pubertal peak velocity (PPV) and adult mass than female lungs, although the latter reach their PPV and adult mass first. Furthermore, the model shows that lung growth rates in infants are two to three orders of magnitude higher than those in mature adults. This finding is important because of the possible association between higher radiation risks and the faster cell turnover rates of infants' organs compared with mature adult organs. The significance of the model for ICRP dosimetric purposes will be discussed. (author)
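
    A weighted non-linear least squares fit of this kind can be sketched with scipy; since the abstract does not give the exact six-parameter function, the "exponential plus logistic" form and all data values below are assumptions for illustration.

```python
# Sketch: weighted non-linear least squares fit of an assumed
# six-parameter exponential-plus-logistic growth curve.
import numpy as np
from scipy.optimize import curve_fit

def lung_mass(t, m0, a, b, c, t0, s):
    # exponential infant-growth term plus a logistic pubertal component
    return m0 + a * (1.0 - np.exp(-b * t)) + c / (1.0 + np.exp(-(t - t0) / s))

age = np.array([0.5, 1, 5, 10, 13, 16, 20, 30])               # years (illustrative)
mass = np.array([60, 120, 300, 500, 700, 900, 1000, 1000.0])  # grams (illustrative)
sigma = 0.05 * mass                                           # weights: ~5% uncertainty

p0 = [50.0, 300.0, 1.0, 600.0, 13.0, 1.5]                     # starting guess
params, _ = curve_fit(lung_mass, age, mass, p0=p0, sigma=sigma, maxfev=20000)
print("fitted parameters:", params.round(2))
```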

  11. A relativistic gauge model describing N particles bound by harmonic forces

    International Nuclear Information System (INIS)

    Filippov, A.T.

    1987-01-01

    Application of the principle of gauging to linear canonical symmetries of the simplest (rudimentary) bilinear Lagrangians is shown to produce a relativistic version of the Lagrangian describing N particles bound by harmonic forces. For pairwise-coupled identical particles the gauge group is T_1 × U_1 × SU_{N-1}. A model for the relativistic discrete string (a chain of N particles) is also discussed. All these gauge theories of particles can be quantized by standard methods.

  12. Composite model describing the excitation and de-excitation of nitrogen by an electron beam

    International Nuclear Information System (INIS)

    Kassem, A.E.; Hickman, R.S.

    1975-01-01

    Based on recent studies, the effect of re-excited ions in the emission of electron beam induced fluorescence in nitrogen has been estimated. These effects are included in the formulation of a composite model describing the excitation and de-excitation of nitrogen by an electron beam. The shortcomings of previous models, namely the dependence of the measured temperature on true gas temperature as well as the gas density, are almost completely eliminated in the range of temperatures and densities covered by the available data. (auth)

  13. Validation of mathematical models to describe fluid dynamics of a cold riser by gamma ray attenuation

    International Nuclear Information System (INIS)

    Melo, Ana Cristina Bezerra Azedo de

    2004-12-01

    The fluid dynamic behavior of a riser in a cold FCC model was investigated by means of catalyst concentration distributions measured by gamma attenuation and simulated with a mathematical model. In the riser of the cold model (MEF), 0.032 m in diameter and 2.30 m in length, a fluidized bed of air and FCC catalyst circulates. The MEF is operated by automatic control and instruments for measuring fluid dynamic variables. The axial catalyst concentration distribution was measured using an Am-241 gamma source and a NaI detector coupled to a multichannel analyzer with software for data acquisition and evaluation. The MEF was adapted for validating a fluid dynamic model which describes the flow in the riser, for example by introducing an injector for controlling the solid flow in circulation. Mathematical models were selected from the literature, analyzed and tested to simulate the fluid dynamics of the riser. A methodology for validating fluid dynamic models was studied and implemented. The stages of the work followed the validation methodology: planning of experiments, study of the equations which describe the fluid dynamics, application of computational solvers, and comparison with experimental data. Operational sequences were carried out keeping the MEF conditions constant while measuring the catalyst concentration and, simultaneously, the fluid dynamic variables, the velocity of the components and the pressure drop in the riser. Following this, simulated and experimental values were compared and a statistical data treatment was done, aiming at the precision required to validate the fluid dynamic model. The comparison tests between experimental and simulated data were carried out under the validation criteria. The fluid dynamic behavior of the riser was analyzed and the results and their agreement with the literature are discussed. The adopted model was validated under the MEF operational conditions, for a gas velocity in the riser of 3 to 6 m/s and a slip

  14. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    Science.gov (United States)

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ∼ 0, rather than model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
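
    For reference, the van Genuchten (1980) retention curve with the Mualem restriction m = 1 - 1/n, and the resulting Mualem-based K(θ), can be coded directly; the parameter values below are generic for a coarse sand, not the fitted values from these cores.

```python
# van Genuchten-Mualem retention and conductivity functions
# (illustrative parameters for a coarse sand).
import numpy as np

theta_r, theta_s = 0.05, 0.38      # residual / saturated water content
alpha, n = 3.5, 2.7                # 1/m and dimensionless shape parameters
m = 1.0 - 1.0 / n                  # Mualem criterion
K_s = 1e-4                         # saturated conductivity, m/s

def theta(h):
    """Water content at pressure head h (h < 0 when unsaturated)."""
    Se = (1.0 + np.abs(alpha * h) ** n) ** (-m)
    return theta_r + (theta_s - theta_r) * Se

def K(theta_val):
    """Mualem-van Genuchten unsaturated hydraulic conductivity."""
    Se = (theta_val - theta_r) / (theta_s - theta_r)
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

h = -0.5                           # m of water
print("theta:", theta(h), " K:", K(theta(h)), "m/s")
```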

  15. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term singleton births from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are of low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.

  16. A metallic solution model with adjustable parameter for describing ternary thermodynamic properties from its binary constituents

    International Nuclear Information System (INIS)

    Fang Zheng; Qiu Guanzhou

    2007-01-01

    A metallic solution model with an adjustable parameter k has been developed to predict the thermodynamic properties of ternary systems from those of their three constituent binaries. In the present model, the excess Gibbs free energy for a ternary mixture is expressed as a weighted probability sum of those of the binaries, and the value of k is determined on the assumption that the ternary interaction generally strengthens the mixing effects in metallic solutions with weak interaction, making the Gibbs free energy of mixing of the ternary system more negative than before the interaction is considered. This point is not considered in the models currently reported, where only differences in the geometrical definition of the molar values of the components are considered; these do not involve thermodynamic principles but are completely empirical. The current model describes experimental results very well and, by adjusting the value of k, also agrees with those from models used widely in the literature. Three ternary systems, Mg-Cu-Ni, Zn-In-Cd, and Cd-Bi-Pb, are recalculated to demonstrate the method of determining k and the precision of the model. The results of the calculations, especially those for the Mg-Cu-Ni system, are better than those predicted by the current models in the literature

  17. A model describing intra-granular fission gas behaviour in oxide fuel for advanced engineering tools

    Science.gov (United States)

    Pizzocri, D.; Pastore, G.; Barani, T.; Magni, A.; Luzzi, L.; Van Uffelen, P.; Pitts, S. A.; Alfonsi, A.; Hales, J. D.

    2018-04-01

    The description of intra-granular fission gas behaviour is a fundamental part of any model for the prediction of fission gas release and swelling in nuclear fuel. In this work we present a model describing the evolution of intra-granular fission gas bubbles in terms of bubble number density and average size, coupled to gas release to grain boundaries. The model considers the fundamental processes of single gas atom diffusion, gas bubble nucleation, re-solution and gas atom trapping at bubbles. The model is derived from a detailed cluster dynamics formulation, yet it consists of only three differential equations in its final form; hence, it can be efficiently applied in engineering fuel performance codes while retaining a physical basis. We discuss improvements relative to previous single-size models for intra-granular bubble evolution. We validate the model against experimental data, both in terms of bubble number density and average bubble radius. Lastly, we perform an uncertainty and sensitivity analysis by propagating the uncertainties in the parameters to model results.
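
    A three-equation system of the kind described (single gas atoms in solution, bubble number density, and gas held in bubbles) is cheap to integrate, which is what makes it attractive for engineering fuel performance codes. The sketch below is purely illustrative, with placeholder rate terms and coefficients rather than the paper's equations.

```python
# Illustrative three-ODE sketch: c = single gas atoms, N = bubble number
# density, b = gas in bubbles (placeholder rate terms and units).
from scipy.integrate import solve_ivp

beta = 1.0e-3    # gas production rate
k_nuc = 5.0e-4   # nucleation rate coefficient (two atoms form a bubble)
k_trap = 2.0e-2  # trapping of single atoms at existing bubbles
k_res = 1.0e-3   # irradiation-induced re-solution from bubbles
k_loss = 4.0e-4  # diffusional loss of single atoms to grain boundaries

def rhs(t, y):
    c, N, b = y
    nucleation = k_nuc * c * c
    trapping = k_trap * c * N
    resolution = k_res * b
    dc = beta - 2.0 * nucleation - trapping + resolution - k_loss * c
    dN = nucleation
    db = 2.0 * nucleation + trapping - resolution
    return [dc, dN, db]

sol = solve_ivp(rhs, (0.0, 1e4), [0.0, 0.0, 0.0], method="LSODA")
print("final c, N, b:", sol.y[:, -1])
```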

  18. A bottom-up model to describe consumers’ preferences towards late season peaches

    Energy Technology Data Exchange (ETDEWEB)

    Groot, E.; Albisu, L.M.

    2015-07-01

    Peaches have been consumed in Mediterranean countries since ancient times. Nowadays there are few areas in Europe that produce peaches with a Protected Designation of Origin (PDO), and the Calanda area is one of them. The aim of this work is to describe consumers' preferences towards late season PDO Calanda peaches in the city of Zaragoza, Spain, using a bottom-up model. The bottom-up model provides a greater amount of information than top-down models. In this approach, one utility function is estimated per consumer. Thus, it is not necessary to make assumptions about preference distributions and correlations across respondents. It was observed that preferences were neither normally nor independently distributed. If those preferences were estimated by top-down models, conclusions would be biased. This paper also explores a new way to describe preferences through individual utility functions. Results show that the largest behavioural group consisted of origin-sensitive consumers. Their utility increased if the peaches were produced in the Calanda area and, especially, when the peaches had the PDO Calanda brand. The second most valued attribute for consumers was the price. Peach size and packaging were less important in the purchase decision. Nevertheless, it is advisable to avoid trading the smallest peaches (weighing around 160 g/fruit). Traders also have to be careful when using active packaging: it was found that a group of consumers disliked this kind of product, probably because they perceived it as less natural. (Author)

  19. An extended car-following model to describe connected traffic dynamics under cyberattacks

    Science.gov (United States)

    Wang, Pengcheng; Yu, Guizhen; Wu, Xinkai; Qin, Hongmao; Wang, Yunpeng

    2018-04-01

    In this paper, the impacts of potential cyberattacks on vehicles are modeled through an extended car-following model. To better understand the mechanism of traffic disturbance under cyberattacks, linear and nonlinear stability analyses are conducted. In particular, linear stability analysis is performed to obtain the neutral stability conditions for various parameters, and nonlinear stability analysis is carried out using the reductive perturbation method to derive the soliton solution of the modified Korteweg-de Vries (mKdV) equation near the critical point, which is used to draw coexisting stability lines. Furthermore, by applying linear and nonlinear stability analysis, the traffic flow state can be divided into three states, i.e., stable, metastable and unstable, which are useful to describe shockwave dynamics and driving behaviors under cyberattacks. The theoretical results show that the proposed car-following model is capable of describing the car-following behavior of connected vehicles under cyberattacks. Finally, numerical simulation using real values has confirmed the validity of the theoretical analysis. The results further demonstrate that our model can be used to help avoid collisions and relieve traffic congestion under cybersecurity threats.
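
    Extended car-following models of this kind are usually built on an optimal-velocity backbone; the sketch below simulates that classical backbone on a ring road and mimics an attack by corrupting one vehicle's sensed headway. The optimal-velocity parameters are standard illustrative values, and the spoofing term is a toy stand-in for the paper's cyberattack formulation.

```python
# Optimal-velocity car-following on a ring road with a toy "cyberattack"
# (one vehicle's sensed headway is spoofed partway through the run).
import numpy as np

def V(dx):
    # optimal velocity function with commonly used illustrative parameters
    return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

n, a, dt, steps = 20, 1.0, 0.05, 4000
L = 1000.0                                   # ring road length, m
x = np.linspace(0.0, L, n, endpoint=False)   # initial positions
v = np.full(n, V(L / n))                     # start at equilibrium speed

for step in range(steps):
    dx = (np.roll(x, -1) - x) % L            # headway to the leader
    dx_sensed = dx.copy()
    if step > 2000:
        dx_sensed[0] *= 1.3                  # spoofed headway signal
    v += a * (V(dx_sensed) - v) * dt         # OVM acceleration law
    x = (x + v * dt) % L

print("speed spread after attack:", round(v.max() - v.min(), 2), "m/s")
```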

  20. Development of a model describing virus removal process in an activated sludge basin

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.; Shiragami, N.; Unno, H. [Tokyo Institute of Technology, Tokyo (Japan)]

    1995-06-20

    The virus removal process from the liquid phase in an activated sludge basin possibly consists of physicochemical processes, such as adsorption onto sludge flocs, and biological processes, such as microbial predation and inactivation by virucidal components excreted by microbes. To properly describe the virus behavior in an activated sludge basin, a simple model is proposed based on experimental data obtained using poliovirus type 1. A three-compartment model is employed, which includes the virus in the liquid phase and in the peripheral and inner regions of sludge flocs. Using the model, the virus removal process was successfully simulated, highlighting the implications of its distribution in the activated sludge basin. 17 refs., 8 figs.

  1. Digital Materials - Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    Science.gov (United States)

    Mańkowski, J.; Lipnicki, J.

    2017-08-01

    The authors tried to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. With an arrangement of several printer heads, a new material can result from the mixing of materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the base materials' properties and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by models of hyperelastic materials. The identification was conducted based on the results of tensile tests, fitting models with polynomial coefficients of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  2. Digital Materials – Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    Directory of Open Access Journals (Sweden)

    Mańkowski J.

    2017-08-01

    Full Text Available The authors tried to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. With an arrangement of several printer heads, a new material can result from the mixing of materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the base materials' properties and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by models of hyperelastic materials. The identification was conducted based on the results of tensile tests, fitting models with polynomial coefficients of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  3. Computational Models Describing Possible Mechanisms for Generation of Excessive Beta Oscillations in Parkinson's Disease.

    Directory of Open Access Journals (Sweden)

    Alex Pavlides

    2015-12-01

    Full Text Available In Parkinson's disease, an increase in beta oscillations within the basal ganglia nuclei has been shown to be associated with difficulty in movement initiation. An important role in the generation of these oscillations is thought to be played by the motor cortex and by a network composed of the subthalamic nucleus (STN) and the external segment of the globus pallidus (GPe). Several alternative models have been proposed to describe the mechanisms for generation of the Parkinsonian beta oscillations. However, a recent experimental study of Tachibana and colleagues yielded results which are challenging for all published computational models of beta generation. That study investigated how the presence of beta oscillations in a primate model of Parkinson's disease is affected by blocking different connections of the STN-GPe circuit. Due to a large number of experimental conditions, the study provides strong constraints that any mechanistic model of beta generation should satisfy. In this paper we present two models consistent with the data of Tachibana et al. The first model assumes that Parkinsonian beta oscillations are generated in the cortex and the STN-GPe circuit resonates at this frequency. The second model additionally assumes that the feedback from the STN-GPe circuit to the cortex is important for maintaining the oscillations in the network. Predictions are made about the experimental evidence that is required to differentiate between the two models, both of which are able to reproduce the firing rates, oscillation frequency and effects of lesions carried out by Tachibana and colleagues. Furthermore, an analysis of the models reveals how the amplitude and frequency of the generated oscillations depend on parameters.

  4. Development of a geometric uncertainty model describing the accuracy of position-sensitive, coincidence neutron detection

    Energy Technology Data Exchange (ETDEWEB)

    Trivelpiece, Cory L., E-mail: cory@psu.ed [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States); Brenizer, J.S. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)]

    2011-01-01

    A diameter of uncertainty (D_u) was derived from a geometric uncertainty model describing the error that would be introduced into position-sensitive, coincidence neutron detection measurements by charged-particle transport phenomena and the experimental setup. The transport of α and Li ions, produced by the 10B(n,α)7Li reaction, through free-standing boro-phosphosilicate glass (BPSG) films was modeled using the Monte Carlo code SRIM, and the results of these simulations were used as input to determine D_u for position-sensitive, coincidence techniques. The results of these calculations showed that D_u is dependent on encoder separation, the angle of charged-particle emission, and film thickness. For certain emission scenarios, the magnitude of D_u is larger than the physical size of the neutron-converting media that were being modeled. Spheres of uncertainty were developed that describe the difference in flight-path times among the bounding-case emission scenarios considered in this work. It was shown that the overlapping spheres represent emission angles and particle flight-path lengths that would be difficult to resolve in terms of particle time-of-flight measurements. However, based on the timing resolution of current nuclear instrumentation, emission events that yield large D_u can be discriminated by logical arguments during spectral deconvolution.

  5. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of the long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
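
    The chain component of such a joint model is analytically tractable: for a two-state continuous-time Markov chain with generator Q, the transition probability matrix is P(t) = expm(Qt). The rates below are illustrative constants, whereas in the joint model they are covariate-adjusted.

```python
# Two-state CTMC transition probabilities from the generator matrix.
import numpy as np
from scipy.linalg import expm

lam01, lam10 = 0.4, 0.7                 # illustrative transition intensities
Q = np.array([[-lam01, lam01],
              [lam10, -lam10]])

t = 2.0
P = expm(Q * t)                         # P[i, j] = Pr(state j at t | state i at 0)
print(P)

pi = np.array([lam10, lam01]) / (lam01 + lam10)   # stationary distribution
print("stationary:", pi)
```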

  6. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
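
    For contrast with the single-model framework, the classical product-of-coefficients mediation estimate obtained from separate regressions looks like this on synthetic data; the EMC estimator itself follows the paper's formulas, which are not reproduced here.

```python
# Classical two-regression mediation sketch on synthetic data:
# indirect effect = a*b, direct effect = exposure coefficient given the mediator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                           # exposure
med = 0.5 * x + rng.normal(size=n)               # mediator
y = 0.3 * med + 0.2 * x + rng.normal(size=n)     # outcome

a = sm.OLS(med, sm.add_constant(x)).fit().params[1]              # exposure -> mediator
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, med]))).fit()
direct, b = fit_y.params[1], fit_y.params[2]                     # x and mediator effects

print("indirect (a*b):", round(a * b, 3), " direct:", round(direct, 3))
```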

  7. Development of the model describing highly excited states of odd deformed nuclei

    International Nuclear Information System (INIS)

    Malov, L.A.; Solov'ev, V.G.

    1975-01-01

    An approximate method is given for solving the system of equations obtained earlier for describing the structure of states with intermediate and high energies in the framework of the model taking into account the interaction of quasiparticles with phonons. The new method possesses a number of advantages over the approximate methods of solving the system of equations mentioned. The study is performed for the example of an odd deformed nucleus when several one-quasiparticle components are taken into account at the same time

  8. Critical properties of a ferroelectric superlattice described by a transverse spin-1/2 Ising model

    International Nuclear Information System (INIS)

    Tabyaoui, A; Saber, M; Baerner, K; Ainane, A

    2007-01-01

    The phase transition properties of a ferroelectric superlattice with two alternating layers A and B described by a transverse spin-1/2 Ising model have been investigated using the effective field theory within a probability distribution technique that accounts for the self-spin correlation functions. The Curie temperature T_c, polarization and susceptibility have been obtained. The effects of the transverse field and of the ferroelectric and antiferroelectric interfacial coupling strength between the two ferroelectric materials are discussed. They relate to the physical properties of antiferroelectric/ferroelectric superlattices

  9. Model describing the effect of employment of the United States military in a complex emergency.

    Science.gov (United States)

    MacMillan, Donald S

    2005-01-01

    The end of the Cold War vastly altered the worldwide political landscape. With the loss of a main competitor, the United States (US) military has had to adapt its strategic, operational, and tactical doctrines to an ever-increasing variety of non-traditional missions, including humanitarian operations. Complex emergencies (CEs) are defined in this paper from a political and military perspective, various factors that contribute to their development are described, and issues resulting from the employment of US military forces are discussed. A model was developed to illustrate the course of a humanitarian emergency and the potential impact of a military response. The US interventions in Haiti, Northern Iraq, Kosovo, Somalia, Bosnia, and Rwanda serve as examples. A CE develops when there is civil conflict, loss of national governmental authority, a mass population movement, and massive economic failure, each leading to a general decline in food security. The military can alleviate a CE in four ways: (1) provide security for relief efforts; (2) enforce negotiated settlements; (3) provide security for non-combatants; and/or (4) employ logistical capabilities. The model incorporates Norton and Miskel's taxonomy for identifying failing states and helps illustrate the factors that lead to a CE. The model can be used to determine if and when military intervention will have the greatest impact. The model demonstrates that early military intervention and mission assignment within the core competencies of the forces can reverse the course of a CE. Further study will be needed to verify the model.

  10. A flowing plasma model to describe drift waves in a cylindrical helicon discharge

    International Nuclear Information System (INIS)

    Chang, L.; Hole, M. J.; Corr, C. S.

    2011-01-01

    A two-fluid model developed originally to describe wave oscillations in the vacuum arc centrifuge, a cylindrical, rapidly rotating, low temperature, and confined plasma column, is applied to interpret plasma oscillations in a RF generated linear magnetized plasma [WOMBAT (waves on magnetized beams and turbulence)], with similar density and field strength. Compared to typical centrifuge plasmas, WOMBAT plasmas have slower normalized rotation frequency, lower temperature, and lower axial velocity. Despite these differences, the two-fluid model provides a consistent description of the WOMBAT plasma configuration and yields qualitative agreement between measured and predicted wave oscillation frequencies with axial field strength. In addition, the radial profile of the density perturbation predicted by this model is consistent with the data. Parameter scans show that the dispersion curve is sensitive to the axial field strength and the electron temperature, and the dependence of oscillation frequency with electron temperature matches the experiment. These results consolidate earlier claims that the density and floating potential oscillations are a resistive drift mode, driven by the density gradient. To our knowledge, this is the first detailed physics model of flowing plasmas in the diffusion region away from the RF source. Possible extensions to the model, including temperature nonuniformity and magnetic field oscillations, are also discussed.

  11. A generalized exponential time series regression model for electricity prices

    DEFF Research Database (Denmark)

    Haldrup, Niels; Knapik, Oskar; Proietti, Tomasso

    Based on the estimated model, the best linear predictor is constructed. Our modeling approach provides a good fit within sample and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better...

  12. Application of a mathematical model to describe the effects of chlorpyrifos on Caenorhabditis elegans development.

    Directory of Open Access Journals (Sweden)

    Windy A Boyd

    2009-09-01

    Full Text Available The nematode Caenorhabditis elegans is being assessed as an alternative model organism as part of an interagency effort to develop better means to test potentially toxic substances. As part of this effort, assays that use the COPAS Biosort flow sorting technology to record optical measurements (time of flight (TOF) and extinction (EXT)) of individual nematodes under various chemical exposure conditions are being developed. A mathematical model has been created that uses Biosort data to quantitatively and qualitatively describe C. elegans growth, and link changes in growth rates to biological events. Chlorpyrifos, an organophosphate pesticide known to cause developmental delays and malformations in mammals, was used as a model toxicant to test the applicability of the growth model for in vivo toxicological testing. L1 larval nematodes were exposed to a range of sub-lethal chlorpyrifos concentrations (0-75 microM) and measured every 12 h. In the absence of toxicant, C. elegans matured from L1s to gravid adults by 60 h. A mathematical model was used to estimate nematode size distributions at various times. Mathematical modeling of the distributions allowed the number of measured nematodes and the log(EXT) and log(TOF) growth rates to be estimated. The model revealed three distinct growth phases. The points at which estimated growth rates changed (change points) were constant across the ten chlorpyrifos concentrations. Concentration response curves with respect to several model-estimated quantities (numbers of measured nematodes, mean log(TOF) and log(EXT), growth rates, and time to reach change points) showed a significant decrease in C. elegans growth with increasing chlorpyrifos concentration. Effects of chlorpyrifos on C. elegans growth and development were mathematically modeled. Statistical tests confirmed a significant concentration effect on several model endpoints. This confirmed that chlorpyrifos affects C. elegans development in a concentration dependent

  13. Forecast Model of Urban Stagnant Water Based on Logistic Regression

    Directory of Open Access Journals (Sweden)

    Liu Pan

    2017-01-01

    Full Text Available With the development of information technology, the construction of water resource systems has gradually been carried out. Against the background of big data, water information work needs to move from quantitative description to qualitative insight. Analyzing the correlations in the data and exploring its deeper value are the keys to water information research. On the basis of research on water big data and the traditional data warehouse architecture, we try to find connections between different data sources. Given the temporal and spatial correlation of stagnant water and rainfall, we use spatial interpolation to integrate stagnant water and rainfall data from different sources and sensors, and then use logistic regression to model the relationship between them.

  14. Parental Vaccine Acceptance: A Logistic Regression Model Using Previsit Decisions.

    Science.gov (United States)

    Lee, Sara; Riley-Behringer, Maureen; Rose, Jeanmarie C; Meropol, Sharon B; Lazebnik, Rina

    2017-07-01

    This study explores how parents' intentions regarding vaccination prior to their children's visit were associated with actual vaccine acceptance. A convenience sample of parents accompanying 6-week-old to 17-year-old children completed a written survey at 2 pediatric practices. Using hierarchical logistic regression, for hospital-based participants (n = 216), vaccine refusal history (P < .01) and a vaccine decision made before the visit (P < .05) explained 87% of vaccine refusals. In community-based participants (n = 100), vaccine refusal history (P < .01) explained 81% of refusals. Over 1 in 5 parents changed their minds about vaccination during the visit. Thirty parents who were previous vaccine refusers accepted current vaccines, and 37 who had intended not to vaccinate chose vaccination. Twenty-nine parents without a refusal history declined vaccines, and 32 who did not intend to refuse before the visit declined vaccination. Future research should identify key factors to nudge parental decision making in favor of vaccination.

  15. Cosmological models described by a mixture of van der Waals fluid and dark energy

    International Nuclear Information System (INIS)

    Kremer, G.M.

    2003-01-01

    The Universe is modeled as a binary mixture whose constituents are described by a van der Waals fluid and by a dark energy density. The dark energy density is considered either as quintessence or as the Chaplygin gas. The irreversible processes concerning the energy transfer between the van der Waals fluid and the gravitational field are taken into account. This model can simulate (a) an inflationary period where the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton, (b) an accelerated period where the acceleration is positive but it decreases and tends to zero whereas the energy density of the van der Waals fluid decays, (c) a decelerated period which corresponds to a matter dominated period with a non-negative pressure, and (d) a present accelerated period where the dark energy density outweighs the energy density of the van der Waals fluid

  16. The solar modulation of galactic comic rays as described by a time-dependent drift model

    International Nuclear Information System (INIS)

    Le Roux, J.A.

    1990-09-01

    The modulation process is understood to be an interaction between cosmic rays and the solar wind. The heliosphere and the observed modulation of cosmic rays in the heliosphere are reviewed and the time-dependent nature of the long-term modulation of cosmic rays highlighted. A two-dimensional time-dependent drift model that describes the long-term modulation of cosmic rays is presented. Application of the time-dependent drift model during times of increased solar activity showed that drift should be reduced during such periods. Isolated Forbush decreases were also studied in an effort to explain some observed trends in the properties of the Forbush decrease as a function of radial distance. The magnitude of the Forbush decrease and its recovery time were therefore studied as a function of radial distance in the equatorial plane. 154 refs., 95 figs., 1 tab

  17. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
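
    A heterogeneous ensemble of the kind compared here can be as simple as averaging the predictions of differently-biased regressors; the members and synthetic data below are illustrative stand-ins for the nine models and the Landsat-derived predictors.

```python
# Averaging ensemble of heterogeneous regressors on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [RandomForestRegressor(random_state=0),
           GradientBoostingRegressor(random_state=0),
           KNeighborsRegressor(n_neighbors=7)]

preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in members])
ensemble = preds.mean(axis=1)                  # simple averaging ensemble

for name, p in zip(["RF", "GBT", "kNN", "ensemble"], list(preds.T) + [ensemble]):
    print(name, round(mean_squared_error(y_te, p) ** 0.5, 2))    # RMSE
```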

  18. Probability Distribution and Deviation Information Fusion Driven Support Vector Regression Model and Its Application

    Directory of Open Access Journals (Sweden)

    Changhao Fan

    2017-01-01

    Full Text Available In modeling, only the information from the deviation between the output of the support vector regression (SVR) model and the training sample is usually considered, whereas other prior information about the training sample, such as probability distribution information, is ignored. Probability distribution information describes the overall distribution of sample data in a training sample that contains different degrees of noise and potential outliers, and it helps in developing a high-accuracy model. To mine and use the probability distribution information of a training sample, a new support vector regression model that incorporates probability distribution information as weights, PDISVR, is proposed. In the PDISVR model, the probability distribution of each sample is considered as its weight and is then introduced into the error coefficient and slack variables of SVR. Thus, both the deviation and the probability distribution information of the training sample are used in the PDISVR model to eliminate the influence of noise and outliers in the training sample and to improve predictive performance. Furthermore, examples with different degrees of noise were employed to demonstrate the performance of PDISVR, which was then compared with that of three SVR-based methods. The results showed that PDISVR performs better than the three other methods.
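
    The weighting idea can be approximated with off-the-shelf tools: estimate each training point's probability density and pass it as a sample weight to a standard SVR, so that low-density points (noise, outliers) influence the fit less. This is only a rough stand-in for PDISVR, whose weights enter the error coefficient and slack variables directly.

```python
# Density-weighted SVR sketch: down-weight improbable training points.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0.0, 6.0, 80))[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)
y[::15] += 2.0                                   # inject a few outliers

xy = np.column_stack([X.ravel(), y])
w = np.exp(KernelDensity(bandwidth=0.5).fit(xy).score_samples(xy))
w /= w.max()                                     # normalized density weights

model = SVR(C=10.0).fit(X, y, sample_weight=w)
print("prediction at x=3:", round(model.predict([[3.0]])[0], 3))
```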

  19. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925

  20. Double porosity model to describe both permeability change and dissolution processes

    International Nuclear Information System (INIS)

    Niibori, Yuichi; Usui, Hideo; Chida, Taiji

    2015-01-01

    Cement is a practical material for constructing a geological disposal system for radioactive wastes. The dynamic behavior of both the permeability change and the dissolution process caused by high-pH groundwater was explained using a double porosity model, which assumes that each packed particle consists of a sphere-shaped aggregation of smaller particles. The model assumes two kinds of porosity, between the particle clusters and between the particles, where the former porosity change mainly controls the permeability change of the bed, and the latter controls the diffusion of OH− ions inducing the dissolution of silica. The fundamental equations consist of a diffusion equation in spherical coordinates for OH− ions, including a first-order reaction term, and equations describing the size changes of both the particles and the particle clusters with time. The change in the overall permeability of the packed bed is evaluated from the Kozeny-Carman equation and the calculated radii of the particle clusters. The calculated results describe well the experimental results for both the permeability change and the dissolution process. (author)
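
    The permeability step mentioned above is a standard Kozeny-Carman evaluation. A minimal illustration with purely hypothetical porosity and cluster-diameter values:

    ```python
    # Kozeny-Carman permeability of a packed bed; 180 is the usual constant for
    # beds of spheres. Porosities and diameters below are illustrative only.
    def kozeny_carman(porosity: float, d_cluster: float) -> float:
        """Permeability (m^2) from bed porosity and effective cluster diameter."""
        return (porosity**3 * d_cluster**2) / (180.0 * (1.0 - porosity) ** 2)

    k_initial = kozeny_carman(0.40, 1.0e-4)   # fresh bed
    k_altered = kozeny_carman(0.30, 0.9e-4)   # after the reaction alters the bed
    ```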

  1. A mathematical model for describing the mechanical behaviour of root canal instruments.

    Science.gov (United States)

    Zhang, E W; Cheung, G S P; Zheng, Y F

    2011-01-01

    The purpose of this study was to establish a general mathematical model for describing the mechanical behaviour of root canal instruments by combining a theoretical analytical approach with a numerical finite-element method. Mathematical formulas representing the longitudinal (taper, helical angle and pitch) and cross-sectional configurations and area, the bending and torsional inertia, the curvature of the boundary point and the (geometry of) loading condition were derived. Torsional and bending stresses and the resultant deformation were expressed mathematically as a function of these geometric parameters, the modulus of elasticity of the material and the applied load. As illustrations, three brands of NiTi endodontic files of different cross-sectional configurations (ProTaper, Hero 642, and Mani NRT) were analysed under pure torsion and pure bending by entering the model into a finite-element analysis package (ANSYS). Numerical results confirmed that the mathematical model is a feasible method to analyse the mechanical properties and predict the stress and deformation of root canal instruments during root canal preparation. Mathematical and numerical models can be a suitable way to examine mechanical behaviour as a criterion for instrument design and to predict the stress and strain experienced by endodontic instruments during root canal preparation. © 2010 International Endodontic Journal.

  2. Cosmological model with viscosity media (dark fluid) described by an effective equation of state

    International Nuclear Information System (INIS)

    Ren Jie; Meng Xinhe

    2006-01-01

    A generally parameterized equation of state (EOS) is investigated in the cosmological evolution with bulk viscosity media modelled as a dark fluid, which can be regarded as a unification of dark energy and dark matter. Compared with the case of the perfect fluid, this EOS possesses four additional parameters, which can be interpreted as the case of a non-perfect fluid with time-dependent viscosity or a model with a variable cosmological constant. From this general EOS, a completely integrable dynamical equation for the scale factor is obtained, with its solution explicitly given. (i) In this parameterized model of cosmology, for a special choice of the parameters we can explain the late-time accelerating expansion of the universe in a new way. The early inflation, the intermediate (relatively late time) deceleration, and the recent cosmic acceleration may be unified in a single equation. (ii) A generalized relation for the Hubble parameter scaling with the redshift is obtained, of interest for cosmology. (iii) By using the SNe Ia data to fit the effective viscosity model, we show that the case of matter described by p=0 plus effective viscosity contributions can fit the observational gold data at an acceptable level

  3. Species-free species distribution models describe macroecological properties of protected area networks.

    Science.gov (United States)

    Robinson, Jason L; Fordyce, James A

    2017-01-01

    Among the greatest challenges facing the conservation of plants and animal species in protected areas are threats from a rapidly changing climate. An altered climate creates both challenges and opportunities for improving the management of protected areas in networks. Increasingly, quantitative tools like species distribution modeling are used to assess the performance of protected areas and predict potential responses to changing climates for groups of species, within a predictive framework. At larger geographic domains and scales, protected area network units have spatial geoclimatic properties that can be described in the gap analysis typically used to measure or aggregate the geographic distributions of species (stacked species distribution models, or S-SDM). We extend the use of species distribution modeling techniques in order to model the climate envelope (or "footprint") of individual protected areas within a network of protected areas distributed across the 48 conterminous United States and managed by the US National Park System. In our approach we treat each protected area as the geographic range of a hypothetical endemic species, then use MaxEnt and 5 uncorrelated BioClim variables to model the geographic distribution of the climatic envelope associated with each protected area unit (modeling the geographic area of park units as the range of a species). We describe the individual and aggregated climate envelopes predicted by a large network of 163 protected areas and briefly illustrate how macroecological measures of geodiversity can be derived from our analysis of the landscape ecological context of protected areas. To estimate trajectories of change in the temporal distribution of climatic features within a protected area network, we projected the climate envelopes of protected areas in current conditions onto a dataset of predicted future climatic conditions. Our results suggest that the climate envelopes of some parks may be locally unique or have

  4. Additive Intensity Regression Models in Corporate Default Analysis

    DEFF Research Database (Denmark)

    Lando, David; Medhat, Mamdouh; Nielsen, Mads Stenbo

    2013-01-01

    We consider additive intensity (Aalen) models as an alternative to the multiplicative intensity (Cox) models for analyzing the default risk of a sample of rated, nonfinancial U.S. firms. The setting allows for estimating and testing the significance of time-varying effects. We use a variety of mo...

  5. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...

  6. Logistic regression model for detecting radon prone areas in Ireland.

    Science.gov (United States)

    Elío, J; Crowley, Q; Scanlon, R; Hodgson, J; Long, S

    2017-12-01

    A new high spatial resolution radon risk map of Ireland has been developed, based on a combination of indoor radon measurements (n=31,910) and relevant geological information (i.e. Bedrock Geology, Quaternary Geology, soil permeability and aquifer type). Logistic regression was used to predict the probability of having an indoor radon concentration above the national reference level of 200 Bq m−3 in Ireland. The four geological datasets evaluated were found to be statistically significant, and, based on combinations of these four variables, the predicted probabilities ranged from 0.57% to 75.5%. Results show that the Republic of Ireland may be divided into three main radon risk categories: High (HR), Medium (MR) and Low (LR). The probability of having an indoor radon concentration above 200 Bq m−3 in each area was found to be 19%, 8% and 3%, respectively. In the Republic of Ireland, the population affected by radon concentrations above 200 Bq m−3 is estimated at ca. 460k (about 10% of the total population). Of these, 57% (265k), 35% (160k) and 8% (35k) are in High, Medium and Low Risk Areas, respectively. Our results provide a high spatial resolution utility which permits customised radon-awareness information to be targeted at specific geographic areas. Copyright © 2017 Elsevier B.V. All rights reserved.
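
    As a hedged sketch of how such a classifier can be set up (column names and categories below are hypothetical, not the Irish datasets), the four categorical geological layers can be one-hot encoded and fed to a logistic regression that outputs the probability of exceeding 200 Bq m−3:

    ```python
    # Toy logistic-regression setup for radon risk mapping; data are invented.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.DataFrame({
        "bedrock":      ["granite", "limestone", "shale", "granite"],
        "quaternary":   ["till", "alluvium", "till", "peat"],
        "permeability": ["high", "low", "moderate", "high"],
        "aquifer":      ["regional_karst", "local", "regional_karst", "poor"],
        "above_200bq":  [1, 0, 0, 1],
    })
    X = pd.get_dummies(df.drop(columns="above_200bq"))  # one-hot encode categories
    clf = LogisticRegression(max_iter=1000).fit(X, df["above_200bq"])
    p_exceed = clf.predict_proba(X)[:, 1]               # P(radon > 200 Bq/m3)
    ```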

  7. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
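
    The validation protocol itself is straightforward to reproduce. Below is a minimal sketch of the split-half comparison with a linear-regression baseline on synthetic data; the fuzzy-logic counterpart would need a dedicated library (e.g. scikit-fuzzy) and is omitted.

    ```python
    # Split-half validation: fit on one random half, assess fit and hold-out error.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(664, 3))                     # stand-ins for predictors
    y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, 664)

    # N = 332 for estimation and 332 held out, mirroring the study design.
    X_fit, X_new, y_fit, y_new = train_test_split(X, y, test_size=0.5, random_state=0)
    lin = LinearRegression().fit(X_fit, y_fit)
    mse_fit = mean_squared_error(y_fit, lin.predict(X_fit))
    mse_new = mean_squared_error(y_new, lin.predict(X_new))  # predictive accuracy
    ```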

  8. Methodological Bases for Describing Risks of the Enterprise Business Model in Integrated Reporting

    Directory of Open Access Journals (Sweden)

    Nesterenko Oksana O.

    2017-12-01

    Full Text Available The aim of the article is to substantiate the methodological bases for describing the business and accounting risks of an enterprise business model in integrated reporting, for their timely detection and assessment, and to develop methods for leveling, minimizing or preventing them. It is proposed to consider risks in the process of forming integrated reporting from two sides: first, risks that arise in the business model of an organization and should be disclosed in its integrated report; second, accounting risks of integrated reporting, which should be taken into account by members of the cross-sectoral working group and management personnel in the process of forming and promulgating integrated reporting. To develop an adequate accounting and analytical tool for disclosing information about the risks of the business model and integrated reporting, and for leveling or minimizing them, the article carries out a terminological analysis of the essence of entrepreneurial and accounting risks. Entrepreneurial risk is defined as an objective-subjective economic category that characterizes the probability of negative or positive consequences of economic-social-ecological activity within the framework of the business model of an enterprise under uncertainty. Accounting risk is understood as the probability of unfavorable consequences resulting from organizational and methodological errors in the integrated accounting system, which pose a threat to the quality, accuracy and reliability of the reporting information on economic, social and environmental activities in integrated reporting, as well as a threat of inappropriate decision-making by stakeholders based on the integrated report. For the timely identification of business risks and maximum leveling of the influence of accounting risks on the process of forming and publishing integrated reporting, the study considers the place of entrepreneurial and accounting risks in

  9. QSAR models for describing the toxicological effects of ILs against Staphylococcus aureus based on norm indexes.

    Science.gov (United States)

    He, Wensi; Yan, Fangyou; Jia, Qingzhu; Xia, Shuqian; Wang, Qiang

    2018-03-01

    The hazardous potential of ionic liquids (ILs) is becoming an issue of great concern due to their important role in many industrial fields as green agents. Mathematical models for the toxicological effects of ILs are useful for risk assessment and for the design of environmentally benign ILs. The objective of this work is to develop QSAR models to describe the minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of ILs against Staphylococcus aureus (S. aureus). A total of 169 and 101 ILs with MICs and MBCs, respectively, are used to obtain multiple linear regression models based on matrix norm indexes. The norm indexes used in this work were proposed by our research group and are here first applied to estimate the antibacterial toxicity of these ILs against S. aureus. These two models precisely and reliably calculated the IL toxicities, with a squared correlation coefficient (R²) of 0.919 and a standard error of estimate (SE) of 0.341 (in log units of mM) for pMIC, and an R² of 0.913 and SE of 0.282 for pMBC. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    This thesis draws on available data from the electronics integrated circuit industry to attempt to assess whether statistical modeling offers a viable method for predicting the presence of DMSMS...

  11. Model tool to describe chemical structures in XML format utilizing structural fragments and chemical ontology.

    Science.gov (United States)

    Sankar, Punnaivanam; Alain, Krief; Aghila, Gnanasekaran

    2010-05-24

    We have developed a model structure-editing tool, ChemEd, programmed in JAVA, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in fragment ontology to provide a conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.

  12. Computer-aided Nonlinear Control System Design Using Describing Function Models

    CERN Document Server

    Nassirharand, Amir

    2012-01-01

    A systematic computer-aided approach provides a versatile setting for the control engineer to overcome the complications of controller design for highly nonlinear systems. Computer-aided Nonlinear Control System Design provides such an approach based on the use of describing functions. The text deals with a large class of nonlinear systems without restrictions on the system order, the number of inputs and/or outputs or the number, type or arrangement of nonlinear terms. The strongly software-oriented methods detailed facilitate fulfillment of tight performance requirements and help the designer to think in purely nonlinear terms, avoiding the expedient of linearization which can impose substantial and unrealistic model limitations and drive up the cost of the final product. Design procedures are presented in a step-by-step algorithmic format each step being a functional unit with outputs that drive the other steps. This procedure may be easily implemented on a digital computer with example problems from mecha...

  13. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come to the fore. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code; moreover, a number of reliable and well-established codes already perform this task. Nevertheless, keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are offered. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  14. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  15. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates the implications for trophic dynamics of including three contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton), and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N (mol P)−1 (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different

  16. Data to support "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations & Biological Condition"

    Data.gov (United States)

    U.S. Environmental Protection Agency — Spreadsheets are included here to support the manuscript "Boosted Regression Tree Models to Explain Watershed Nutrient Concentrations and Biological Condition". This...

  17. A unified model to describe the anisotropic viscoplastic behavior of Zircaloy-4 cladding tubes

    International Nuclear Information System (INIS)

    Delobelle, P.; Robinet, P.; Bouffioux, P.; Geyer, P.; Pichon, I. Le

    1996-01-01

    This paper presents the constitutive equations of a unified viscoplastic model and its validation with experimental data. The mechanical tests were carried out in a temperature range of 20 to 400 C on both cold-worked stress-relieved and fully annealed Zircaloy-4 tubes. Although their geometry (14.3 by 1.2 mm) is different, the crystallographic texture was close to that expected in the cladding tubes. To characterize the anisotropy, mechanical tests were performed under both monotonic and cyclic uni- and bi-directional loadings, i.e., tension-compression, tension-torsion, and tension-internal pressure tests. The results obtained at ambient temperature, and the independence of the ratio R_p = ε_θθ^p / ε_zz^p with respect to temperature, would seem to indicate that the set of anisotropy coefficients does not depend on temperature. Zircaloy-4 also shows a slight supplementary hardening during out-of-phase cyclic loading. The authors propose to extend the formulation of a unified viscoplastic model, developed and identified elsewhere for other initially isotropic materials, to the case of Zircaloy-4. Generally speaking, anisotropy is introduced through fourth-order tensors affecting the flow directions, the linear kinematic hardening components, and the dynamic and static recoveries of the aforementioned hardening variables. The ability of the model to describe all the mechanical properties of the material is shown. The application of the model to simulate mechanical tests (tension, creep, and relaxation) performed on true CWSR Zircaloy-4 cladding tubes with low tin content is also presented

  18. Comparison of three nonlinear models to describe long-term tag shedding by lake trout

    Science.gov (United States)

    Fabrizio, Mary C.; Swanson, Bruce L.; Schram, Stephen T.; Hoff, Michael H.

    1996-01-01

    We estimated long-term tag-shedding rates for lake trout Salvelinus namaycush using two existing models and a model we developed to account for the observed permanence of some tags. Because tag design changed over the course of the study, we examined tag-shedding rates for three types of numbered anchor tags (Floy tags FD-67, FD-67C, and FD-68BC) and an unprinted anchor tag (FD-67F). Lake trout from the Gull Island Shoal region, Lake Superior, were double-tagged, and subsequent recaptures were monitored in annual surveys conducted from 1974 to 1992. We modeled tag-shedding rates, using time at liberty and probabilities of tag shedding estimated from fish released in 1974 and 1978–1983 and later recaptured. Long-term shedding of numbered anchor tags in lake trout was best described by a nonlinear model with two parameters: an instantaneous tag-shedding rate and a constant representing the proportion of tags that were never shed. Although our estimates of annual shedding rates varied with tag type (0.300 for FD-67, 0.441 for FD-67C, and 0.656 for FD-68BC), differences were not significant. About 36% of tags remained permanently affixed to the fish. Of the numbered tags that were shed (about 64%), two mechanisms contributed to tag loss: disintegration and dislodgment. Tags from about 11% of recaptured fish had disintegrated, but most tags were dislodged. Unprinted tags were shed at a significant but low rate immediately after release, but the long-term, annual shedding rate of these tags was only 0.013. Compared with unprinted tags, numbered tags dislodged at higher annual rates; we hypothesized that this was due to the greater frictional drag associated with the larger cross-sectional area of numbered tags.
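
    The two-parameter model described above can be written as a retention curve Q(t) = ρ + (1 − ρ)e^(−Lt), where ρ is the proportion of tags never shed and L the instantaneous shedding rate. A sketch of fitting it by nonlinear least squares follows; the retention values are illustrative, not the Lake Superior data.

    ```python
    # Fit the two-parameter tag-retention model Q(t) = rho + (1 - rho)*exp(-L*t).
    import numpy as np
    from scipy.optimize import curve_fit

    def retention(t, L, rho):
        return rho + (1.0 - rho) * np.exp(-L * t)

    years = np.array([1.0, 2, 3, 5, 8, 12, 15])
    q_obs = np.array([0.80, 0.68, 0.60, 0.50, 0.42, 0.38, 0.37])  # illustrative

    (L_hat, rho_hat), _ = curve_fit(retention, years, q_obs,
                                    p0=[0.3, 0.35], bounds=([0, 0], [5, 1]))
    annual_rate = 1.0 - np.exp(-L_hat)   # instantaneous -> annual shedding rate
    ```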

  19. Martingale Regressions for a Continuous Time Model of Exchange Rates

    OpenAIRE

    Guo, Zi-Yi

    2017-01-01

    One of the daunting problems in international finance is the weak explanatory power of existing theories of the nominal exchange rates, the so-called “foreign exchange rate determination puzzle”. We propose a continuous-time model to study the impact of order flow on foreign exchange rates. The model is estimated by a newly developed econometric tool based on a time-change sampling from calendar to volatility time. The estimation results indicate that the effect of order flow on exchange rate...

  20. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non

  1. Cox's regression model for dynamics of grouped unemployment data

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2003-01-01

    Roč. 10, č. 19 (2003), s. 151-162 ISSN 1212-074X R&D Projects: GA ČR GA402/01/0539 Institutional research plan: CEZ:AV0Z1075907 Keywords : mathematical statistics * survival analysis * Cox's model Subject RIV: BB - Applied Statistics, Operational Research

  2. Multiple Linear Regression Model for Estimating the Price of a ...

    African Journals Online (AJOL)

    Ghana Mining Journal ... In the modeling, the Ordinary Least Squares (OLS) normality assumption, which could introduce errors in the statistical analyses, was dealt with by log transformation of the data, ensuring the data are normally distributed. The resultant MLRM is Ŷᵢ = xᵢ′(X′X)⁻¹X′Y, where X is the sample data matrix.
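
    For concreteness, a tiny worked version of that estimator with a log-transformed response, as described (placeholder numbers, not the study's data):

    ```python
    # OLS via the normal equations, beta = (X'X)^{-1} X'y, on a log response.
    import numpy as np

    X = np.column_stack([np.ones(5), [1.0, 2, 3, 4, 5], [2.0, 1, 4, 3, 5]])
    y = np.log(np.array([10.0, 14, 25, 30, 48]))   # log transform for normality

    beta = np.linalg.solve(X.T @ X, X.T @ y)       # (X'X)^{-1} X'y
    y_hat = X @ beta                               # fitted log values
    price_hat = np.exp(y_hat)                      # back-transform to price scale
    ```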

  3. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    C.S. Bos (Charles); Ph.H.B.F. Franses (Philip Hans); M. Ooms (Marius)

    2001-01-01

    We examine recursive out-of-sample forecasting of monthly postwar U.S. core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading

  4. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    Ooms, M.; Bos, C.S.; Franses, P.H.

    2003-01-01

    We examine recursive out-of-sample forecasting of monthly postwar US core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading indicators

  5. Data-driven modelling of LTI systems using symbolic regression

    NARCIS (Netherlands)

    Khandelwal, D.; Toth, R.; Van den Hof, P.M.J.

    2017-01-01

    The aim of this project is to automate the task of data-driven identification of dynamical systems. The underlying goal is to develop an identification tool that models a physical system without distinguishing between classes of systems such as linear, nonlinear or possibly even hybrid systems. Such

  6. Yeast for Mathematicians: A Ferment of Discovery and Model Competition to Describe Data.

    Science.gov (United States)

    Lewis, Matthew; Powell, James

    2017-02-01

    In addition to the memorization, algorithmic skills and vocabulary which are the default focus in many mathematics classrooms, professional mathematicians are expected to creatively apply known techniques, construct new mathematical approaches and communicate with and about mathematics. We propose that students can learn these professional, higher-level skills through Laboratory Experiences in Mathematical Biology which put students in the role of mathematics researcher creating mathematics to describe and understand biological data. Here we introduce a laboratory experience centered on yeast (Saccharomyces cerevisiae) growing in a small capped flask with a jar to collect carbon dioxide created during yeast growth and respiration. The lab requires no specialized equipment and can easily be run in the context of a college math class. Students collect data and develop mathematical models to explain the data. To help place instructors in the role of mentor/collaborator (as opposed to jury/judge), we facilitate the lab using model competition judged via Bayesian Information Criterion. This article includes details about the class activity conducted, student examples and pedagogical strategies for success.
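
    A sketch of the model-competition step follows: two candidate growth curves are fitted to synthetic yeast data and judged by BIC (computed here from the residual sum of squares under a Gaussian error assumption; lower is better).

    ```python
    # Model competition via BIC: logistic vs. exponential growth on synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0.0, 30.0, 3.0)
    rng = np.random.default_rng(7)
    pop = 10 / (1 + 9 * np.exp(-0.4 * t)) + rng.normal(0, 0.2, t.size)

    def logistic(t, K, r, n0):
        return K / (1 + (K / n0 - 1) * np.exp(-r * t))

    def exponential(t, n0, r):
        return n0 * np.exp(r * t)

    def bic(model, p0):
        params, _ = curve_fit(model, t, pop, p0=p0, maxfev=10000)
        rss = np.sum((pop - model(t, *params)) ** 2)
        n, k = t.size, len(params)
        return n * np.log(rss / n) + k * np.log(n)

    print("logistic BIC:   ", bic(logistic, [10, 0.5, 1]))
    print("exponential BIC:", bic(exponential, [1, 0.1]))
    ```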

  7. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    Science.gov (United States)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternative EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has χ²/d.o.f. = 0.5, compared to χ²/d.o.f. = 29.6 from fitting next-to-leading-order chiral perturbation theory. When the 0⁺⁺ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  8. A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach

    Directory of Open Access Journals (Sweden)

    Dai Owaki

    2017-06-01

    Full Text Available Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in the thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principles of animals' agile locomotion and to realize legged robots that are truly adaptive and could not be developed solely with conventional control theories. This study aims at providing a "minimal" model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named "Tegotae," a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of the local sensory feedback mechanisms essential for interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns.

  9. Fast fission phenomenon, deep inelastic reactions and compound nucleus formation described within a dynamical macroscopic model

    International Nuclear Information System (INIS)

    Gregoire, C.; Ngo, C.; Remaud, B.

    1982-01-01

    We present a dynamical model to describe dissipative heavy-ion reactions. It treats explicitly the relative motion of the two ions, the mass asymmetry of the system and the projection of the isospin of each ion. The deformations, which are induced during the collision, are simulated with a time-dependent interaction potential. This is done by a time-dependent transition between a sudden interaction potential in the entrance channel and an adiabatic potential in the exit channel. The model allows us to compute the compound-nucleus cross-section and multidifferential cross-sections for deep inelastic reactions. In addition, for some systems, and under certain conditions which are discussed in detail, a new kind of dissipative heavy-ion collision appears: the fast-fission phenomenon, which has properties intermediate between deep inelastic and compound-nucleus reactions. The calculated properties concerning fast fission are compared with experimental results and reproduce some of those which could not be understood as belonging to deep inelastic or compound-nucleus reactions. (orig.)

  10. Nonparametric Estimation of Regression Parameters in Measurement Error Models

    Czech Academy of Sciences Publication Activity Database

    Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan

    2009-01-01

    Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others:GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords : asymptotic relative efficiency (ARE) * asymptotic theory * emaculate mode * ME model * R-estimation * Reliability ratio (RR) Subject RIV: BB - Applied Statistics, Operational Research

  11. Pharmacodynamic Model To Describe the Concentration-Dependent Selection of Cefotaxime-Resistant Escherichia coli

    Science.gov (United States)

    Olofsson, Sara K.; Geli, Patricia; Andersson, Dan I.; Cars, Otto

    2005-01-01

    Antibiotic dosing regimens may vary in their capacity to select mutants. Our hypothesis was that selection of a more resistant bacterial subpopulation would increase with the time within a selective window (SW), i.e., when drug concentrations fall between the MICs of two strains. An in vitro kinetic model was used to study the selection of two Escherichia coli strains with different susceptibilities to cefotaxime. The bacterial mixtures were exposed to cefotaxime for 24 h and SWs of 1, 2, 4, 8, and 12 h. A mathematical model was developed that described the selection of preexisting and newborn mutants and the post-MIC effect (PME) as functions of pharmacokinetic parameters. Our main conclusions were as follows: (i) the selection between preexisting mutants increased with the time within the SW; (ii) the emergence and selection of newborn mutants increased with the time within the SW (with a short time, only 4% of the preexisting mutants were replaced by newborn mutants, compared to the longest times, where 100% were replaced); and (iii) the PME increased with the area under the concentration-time curve (AUC) and was slightly more pronounced with a long elimination half-life (T1/2) than with a short T1/2 when the AUC is fixed. We showed that, in a dynamic competition between strains with different levels of resistance, the appearance of newborn high-level resistant mutants from the parental strains and the PME can strongly affect the outcome of the selection, and that pharmacodynamic models can be used to predict the outcome of resistance development. PMID:16304176
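
    A toy version of the selective-window mechanism (not the authors' fitted model; every rate and MIC below is an assumption) treats two competing strains that are killed only while the exponentially eliminated drug concentration exceeds their respective MICs:

    ```python
    # Two-strain competition inside/outside the selective window.
    import numpy as np
    from scipy.integrate import solve_ivp

    MIC_S, MIC_R = 0.06, 1.0          # susceptible vs. resistant MIC (mg/L)
    C0, t_half = 2.0, 2.0             # peak concentration, elimination T1/2 (h)
    k_e = np.log(2) / t_half

    def conc(t):
        return C0 * np.exp(-k_e * t)  # one-compartment elimination

    def rhs(t, n):
        s, r = n
        growth = 1.0 * (1 - (s + r) / 1e9)         # shared logistic growth (/h)
        kill_s = 3.0 if conc(t) > MIC_S else 0.0   # kill only above own MIC
        kill_r = 3.0 if conc(t) > MIC_R else 0.0
        return [s * (growth - kill_s), r * (growth - kill_r)]

    sol = solve_ivp(rhs, (0, 24), [1e6, 1e2], max_step=0.1)
    s24, r24 = sol.y[:, -1]
    resistant_fraction = r24 / (s24 + r24)         # selection after 24 h
    ```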

  12. Ecohydrology in Mediterranean areas: a numerical model to describe growing seasons out of phase with precipitations

    Directory of Open Access Journals (Sweden)

    D. Pumo

    2008-02-01

    Full Text Available The probabilistic description of soil moisture dynamics is a relatively new topic in hydrology. The most common ecohydrological models start from a stochastic differential equation describing the soil water balance, where the unknown quantity, the soil moisture, depends on both space and time. Most of the solutions existing in the literature are obtained in a probabilistic framework and under steady-state conditions; although this last condition makes the problem analytically tractable, it considerably simplifies it and sacrifices generality.

    The steady-state hypothesis appears perfectly applicable in arid and semiarid climatic areas like the African or Central American savannas, but it seems no longer valid in areas with a Mediterranean climate, where, notoriously, the wet season precedes the growing season, recharging water into the soil. The moisture stored at the beginning of the growing season (known as the soil moisture initial condition) is of great importance, especially for deep-rooted vegetation: it enables survival in the absence of rainfall during the growing season and keeps water stress low during the first part of that season.

    The aim of this paper is to analyze soil moisture dynamics using a simple non-steady numerical ecohydrological model. The numerical model proposed here is able to reproduce the soil moisture probability density function obtained analytically in previous studies for different climates and soils under steady-state conditions; consequently it can be used to compute both the soil moisture time profile and the vegetation static water stress time profile in non-steady conditions.

    Here the differences between the steady-analytical and the non-steady numerical probability density functions are analyzed, showing how the proposed numerical model is able to capture the effects of winter recharge on the soil moisture. The dynamic

  13. The generalized model of polypeptide chain describing the helix-coil transition in biopolymers

    International Nuclear Information System (INIS)

    Mamasakhlisov, E.S.; Badasyan, A.V.; Tsarukyan, A.V.; Grigoryan, A.V.; Morozov, V.F.

    2005-07-01

    In this paper we summarize some results of our theoretical investigations of the helix-coil transition in both single-strand (polypeptide) and two-strand (polynucleotide) macromolecules. The Hamiltonian of the Generalized Model of Polypeptide Chain (GMPC) is introduced to describe a system in which conformations are correlated over some dimensional range Δ (it equals 3 for polypeptides, because one H-bond fixes three pairs of rotations; for double-strand DNA it equals the chain rigidity, because loop formation is impossible on scales smaller than Δ). The Hamiltonian does not contain any parameter designed especially for the helix-coil transition and uses purely molecular microscopic parameters (the energy of hydrogen bond formation, the reduced partition function of a repeated unit, the number of repeated units fixed by one hydrogen bond, and the energies of interaction between the repeated units and the solvent molecules). To calculate averages we evaluate the partition function using the transfer-matrix approach. The GMPC allows the influence of a number of factors affecting the transition to be described within a unified microscopic approach. We thus obtained that solvents change the transition temperature and interval in different ways, depending on the type of solvent and on the energy of solvent-macromolecule interaction, and that stacking on top of H-bonding increases stability and decreases the cooperativity of melting. For heterogeneous DNA we could analytically derive the well-known formulae for the transition temperature and interval. In the framework of the GMPC we calculate and show the difference between two order parameters of the helix-coil transition: the helicity degree and the average fraction of repeated units in the helical conformation. This article reviews the results obtained over twenty years in the context of the GMPC. (author)

  14. Shaofu Zhuyu Decoction Regresses Endometriotic Lesions in a Rat Model

    Directory of Open Access Journals (Sweden)

    Guanghui Zhu

    2018-01-01

    Full Text Available The current therapies for endometriosis are restricted by various side effects, and treatment outcomes have been less than satisfactory. Shaofu Zhuyu Decoction (SZD), a classic traditional Chinese medicine (TCM) prescription for dysmenorrhea, has been widely used in clinical practice by TCM doctors to relieve symptoms of endometriosis. The present study aimed to investigate the effects of SZD on a rat model of endometriosis. Forty-eight female Sprague-Dawley rats with regular estrous cycles underwent an autotransplantation operation to establish the endometriosis model. Then 38 rats with successful ectopic implants were randomized into two groups: vehicle- and SZD-treated groups. The latter were administered SZD through oral gavage for 4 weeks. By the end of the treatment period, the volume of the endometriotic lesions was measured, the histopathological properties of the ectopic endometrium were evaluated, and levels of proliferating cell nuclear antigen (PCNA), CD34, and hypoxia inducible factor- (HIF-) 1α in the ectopic endometrium were detected with immunohistochemistry. Furthermore, apoptosis was assessed using the terminal deoxynucleotidyl transferase (TdT) deoxyuridine 5′-triphosphate (dUTP) nick-end labeling (TUNEL) assay. In this study, SZD significantly reduced the size of ectopic lesions in rats with endometriosis, inhibited cell proliferation, increased cell apoptosis, and reduced microvessel density and HIF-1α expression. These results suggest that SZD could be an effective therapy for the treatment and prevention of endometriosis recurrence.

  15. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact, with an illustration based on data from a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods (overdispersed Poisson, a robust estimator, and negative binomial regression) were used to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (χ²/df). We then fitted the three models and compared the parameter estimates to those given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93), and the χ²/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. The interpretation of estimates for two variables (using the Internet to seek health information, and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator), and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimated variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems particularly appropriate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
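
    A sketch of the diagnostic and the three remedies in statsmodels follows, on simulated overdispersed counts rather than the consultation data:

    ```python
    # Detect overdispersion (Pearson chi2/df >> 1), then refit three ways.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.normal(size=500)
    mu = np.exp(0.5 + 0.3 * x)
    y = rng.negative_binomial(2, 2 / (2 + mu))     # overdispersed counts
    X = sm.add_constant(x)

    pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    ratio = pois.pearson_chi2 / pois.df_resid      # >> 1 signals overdispersion

    quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")        # overdispersed Poisson
    robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")   # robust SEs
    negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()        # negative binomial
    ```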

  16. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its structure can be mapped onto the CVAR. Furthermore, it is demonstrated how other controversial hypotheses such as Rational Expectations can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated, and the partial-general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.

  17. Using the classical linear regression model in analysis of the dependences of conveyor belt life

    Directory of Open Access Journals (Sweden)

    Miriam Andrejiová

    2013-12-01

    Full Text Available The paper deals with a classical linear regression model of the dependence of conveyor belt life on selected parameters: thickness of the paint layer, width and length of the belt, conveyor speed, and quantity of transported material. The first part of the article covers the design of the regression model, point and interval estimation of its parameters, and verification of the statistical significance of the model. The second part deals with the identification of influential and extreme values that can affect the estimation of the regression model parameters. The third part focuses on the assumptions of the classical regression model, i.e. on verification of the independence, normality and homoscedasticity of the residuals.
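
    The workflow in this record (estimation followed by residual diagnostics) maps directly onto standard tooling. A minimal sketch with synthetic stand-ins for the belt parameters:

    ```python
    # OLS fit plus checks of residual normality and homoscedasticity.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan
    from scipy.stats import shapiro

    rng = np.random.default_rng(3)
    X = rng.uniform(size=(80, 3))           # e.g. paint thickness, width, speed
    y = 5 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, 80)

    Xc = sm.add_constant(X)
    ols = sm.OLS(y, Xc).fit()
    print(ols.summary())                    # point/interval estimates, F-test

    bp_lm, bp_p, _, _ = het_breuschpagan(ols.resid, Xc)  # homoscedasticity test
    sw_stat, sw_p = shapiro(ols.resid)                   # normality of residuals
    ```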

  18. Shigella mediated depletion of macrophages in a murine breast cancer model is associated with tumor regression.

    Directory of Open Access Journals (Sweden)

    Katharina Galmbacher

    Full Text Available A tumor-promoting role of macrophages has been described for a transgenic murine breast cancer model. In this model tumor-associated macrophages (TAMs) represent a major component of the leukocytic infiltrate and are associated with tumor progression. Shigella flexneri is a bacterial pathogen known to specifically induce apoptosis in macrophages. To evaluate whether Shigella-induced removal of macrophages may be sufficient for achieving tumor regression we have developed an attenuated strain of S. flexneri (M90TΔaroA) and infected tumor-bearing mice. Two mouse models were employed, xenotransplantation of a murine breast cancer cell line and spontaneous breast cancer development in MMTV-HER2 transgenic mice. Quantitative analysis of bacterial tumor targeting demonstrated that attenuated, invasive Shigella flexneri primarily infected TAMs after systemic administration. A single i.v. injection of invasive M90TΔaroA resulted in caspase-1-dependent apoptosis of TAMs, followed by a 74% reduction in tumors of transgenic MMTV-HER-2 mice 7 days post infection. TAM depletion was sustained and associated with complete tumor regression. These data support TAMs as useful targets for antitumor therapy and highlight attenuated bacterial pathogens as potential tools.

  19. Reliable and relevant modelling of real world data: a personal account of the development of PLS Regression

    DEFF Research Database (Denmark)

    Martens, Harald

    2001-01-01

    Why and how Partial Least Squares Regression (PLSR) was developed is described here from the author's perspective. The paper outlines my frustrating experiences in the 1970s with two conflicting and equally over-ambitious and oversimplified modelling cultures - in traditional chemistry

  20. Two levels ARIMAX and regression models for forecasting time series data with calendar variation effects

    Science.gov (United States)

    Suhartono, Lee, Muhammad Hisyam; Prastyo, Dedy Dwi

    2015-12-01

    The aim of this research is to develop a calendar variation model for forecasting retail sales data with the Eid ul-Fitr effect. The proposed model is based on two methods, namely two-level ARIMAX and regression methods. The two-level ARIMAX and regression models are built by using ARIMAX for the first level and regression for the second level. Monthly men's jeans and women's trousers sales in a retail company for the period January 2002 to September 2009 are used as a case study. In general, the two-level calendar variation model yields two models, namely a first model to reconstruct the sales pattern that has already occurred, and a second model to forecast the increase in sales due to Eid ul-Fitr, which affects sales in the same and the previous months. The results show that the proposed two-level calendar variation model based on ARIMAX and regression methods yields better forecasts than the seasonal ARIMA model and neural networks.
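
    A hedged sketch of the first-level idea, an ARIMAX fit with an Eid ul-Fitr dummy and its one-month lead as exogenous calendar regressors, follows; the series is synthetic, and pinning Eid to a fixed month is a deliberate simplification, since the holiday shifts through the calendar year.

    ```python
    # ARIMAX with calendar-variation dummies via statsmodels SARIMAX.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    idx = pd.period_range("2002-01", "2009-09", freq="M")
    rng = np.random.default_rng(8)

    eid = pd.Series(0.0, index=idx)
    eid[idx.month == 11] = 1.0                  # placeholder Eid month (assumption)
    before = eid.shift(-1).fillna(0.0)          # month preceding Eid

    sales = pd.Series(100 + 30 * eid.to_numpy() + 10 * before.to_numpy()
                      + rng.normal(0, 5, len(idx)), index=idx)

    exog = pd.DataFrame({"eid": eid, "month_before_eid": before})
    fit = SARIMAX(sales, exog=exog, order=(1, 0, 1)).fit(disp=False)
    print(fit.params)                           # calendar effects plus ARMA terms
    ```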

  1. Formulating state space models in R with focus on longitudinal regression models

    DEFF Research Database (Denmark)

    Dethlefsen, Claus; Lundbye-Christensen, Søren

    We provide a language for formulating a range of state space models. The described methodology is implemented in the R-package sspir, available from cran.r-project.org. A state space model is specified similarly to a generalized linear model in R, by marking the time-varying terms in the formula.

  2. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
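
    A toy illustration of the heteroscedasticity criterion (simulated data; the full DDA framework involves more diagnostics than this single Breusch-Pagan comparison):

    ```python
    # Compare residual heteroscedasticity in both candidate directions.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(4)
    x = rng.chisquare(df=3, size=1000)     # non-normal true predictor
    y = 0.6 * x + rng.normal(size=1000)    # true model: x -> y

    def bp_pvalue(response, predictor):
        X = sm.add_constant(predictor)
        resid = sm.OLS(response, X).fit().resid
        return het_breuschpagan(resid, X)[1]

    p_xy = bp_pvalue(y, x)   # correct direction: typically no heteroscedasticity
    p_yx = bp_pvalue(x, y)   # reversed direction: heteroscedasticity tends to appear
    ```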

  3. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences between the errors (presented through the relative residuals) were obtained. Estimation of the significance of the differences in the relative residuals was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
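
    A sketch of the procedure: fit weighted linear and quadratic calibration models and compare their relative residuals with the Wilcoxon signed rank test (illustrative concentrations and a common 1/x² weight, not the validated assay data):

    ```python
    # Weighted calibration fits compared via relative residuals.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import wilcoxon

    conc = np.array([0.1, 0.5, 1.0, 5, 10, 50, 100, 500])
    rng = np.random.default_rng(5)
    resp = 2.0 * conc + rng.normal(0, 0.05 * conc)   # heteroscedastic response

    w = 1.0 / conc**2                                # common bioanalytical weight
    X_lin = sm.add_constant(conc)
    X_quad = sm.add_constant(np.column_stack([conc, conc**2]))

    fit_lin = sm.WLS(resp, X_lin, weights=w).fit()
    fit_quad = sm.WLS(resp, X_quad, weights=w).fit()

    rr_lin = (fit_lin.fittedvalues - resp) / resp    # relative residuals
    rr_quad = (fit_quad.fittedvalues - resp) / resp
    stat, p = wilcoxon(np.abs(rr_lin), np.abs(rr_quad))  # significant difference?
    ```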

  4. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of sample-selection bias correction, robustness to non-normality, and prediction is demonstrated through simulations that attest to its good finite-sample performance.

  5. Predicting Antitumor Activity of Peptides by Consensus of Regression Models Trained on a Small Data Sample

    Directory of Open Access Journals (Sweden)

    Ivanka Jerić

    2011-11-01

    Full Text Available Predicting the antitumor activity of compounds using regression models trained on a small number of compounds with measured biological activity is an ill-posed inverse problem. Yet, it occurs very often within the academic community. To counteract, to some extent, the overfitting caused by a small training set, we propose to use a consensus of six regression models for predicting the biological activity of a virtual library of compounds. The QSAR descriptors of 22 compounds related to the opioid growth factor (OGF, Tyr-Gly-Gly-Phe-Met) with known antitumor activity were used to train the regression models: the feed-forward artificial neural network, the k-nearest neighbor, sparseness-constrained linear regression, and the linear and nonlinear (with polynomial and Gaussian kernels) support vector machines. The regression models were applied to a virtual library of 429 compounds, resulting in six lists of candidate compounds ranked by predicted antitumor activity. The highly ranked candidate compounds were synthesized, characterized and tested for antiproliferative activity. Some of the prepared peptides showed more pronounced activity than the native OGF; however, they were less active than highly ranked compounds selected previously by the radial basis function support vector machine (RBF SVM) regression model. The ill-posedness of the related inverse problem causes unstable behavior of the trained regression models on test data. These results point to the high complexity of prediction based on regression models trained on a small data sample.
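
    A minimal sketch of the consensus step, with hypothetical descriptor matrices standing in for the QSAR data: each trained regressor ranks the virtual library, and the ranks are averaged.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import Lasso
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(22, 10))       # 22 compounds, 10 QSAR descriptors
    y_train = rng.normal(size=22)             # measured activities
    X_virtual = rng.normal(size=(429, 10))    # virtual library

    models = [
        KNeighborsRegressor(n_neighbors=3),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
        Lasso(alpha=0.1),                     # sparseness-constrained linear model
        SVR(kernel="poly", degree=2),
        SVR(kernel="rbf"),
    ]
    ranks = []
    for m in models:
        pred = m.fit(X_train, y_train).predict(X_virtual)
        ranks.append(np.argsort(np.argsort(-pred)))  # rank by predicted activity

    consensus = np.mean(ranks, axis=0)        # average rank across models
    top = np.argsort(consensus)[:10]          # candidates to synthesize first
    print(top)
    ```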

  6. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Full Text Available Objective: to develop an economic-mathematical model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies. Methods: using comparative analysis, the article studies the existing approaches to the construction of company management models. Applying regression analysis and the least squares method, which is widely used for the financial management of enterprises in Russia and abroad, the author builds a model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies, which can be used in the financial analysis and prediction of small enterprises' performance. Results: the article states the need to identify factors affecting financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing small enterprises' management, while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on company activity. It is revealed that the resulting indicators in the studies were revenue, profit, or the company's relative profitability. The main drawback of most models is the mathematical, not economic, approach to the definition of the dependent and independent variables. Based on the analysis, it was determined that the most correct is the model of dependence between revenue and total assets of the company using the decimal logarithm. The model was built using data on the activities of 507 small businesses operating in three spheres of economic activity. Using the presented model, it was proved that there is a direct dependence between sales proceeds and the main items of the asset balance, as well as differences in the degree of this effect depending on the economic activity of small…

  7. A generalized right truncated bivariate Poisson regression model with applications to health data.

    Science.gov (United States)

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and for over- or under-dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  8. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Maity, Arnab

    2011-01-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work…

  9. Conceptual modeling of postmortem evaluation findings to describe dairy cow deaths.

    Science.gov (United States)

    McConnel, C S; Garry, F B; Hill, A E; Lombard, J E; Gould, D H

    2010-01-01

    Dairy cow mortality levels in the United States are excessive and increasing over time. To better define cause and effect and combat rising mortality, clearer definitions of the reasons that cows die need to be acquired through thorough necropsy-based postmortem evaluations. The current study focused on organizing information generated from postmortem evaluations into a monitoring system that is based on the fundamentals of conceptual modeling and that will potentially be translatable into on-farm relational databases. This observational study was conducted on 3 high-producing, commercial dairies in northern Colorado. Throughout the study period a thorough postmortem evaluation was performed by veterinarians on cows that died on each dairy. Postmortem data included necropsy findings, life-history features (e.g., birth date, lactation number, lactational and reproductive status), clinical history and treatments, and pertinent aspects of operational management that were subject to change and considered integral to the poor outcome. During this study, 174 postmortem evaluations were performed. Postmortem evaluation results were conceptually modeled to view each death within the context of the web of factors influencing the dairy and the cow. Categories were formulated describing mortality in terms of functional characteristics potentially amenable to easy performance evaluation, management oversight, and research. In total, 21 death categories with 7 category themes were created. Themes included specific disease processes with variable etiologies, failure of disease recognition or treatment, traumatic events, multifactorial failures linked to transition or negative energy balance issues, problems with feed management, miscellaneous events not amenable to prevention or treatment, and undetermined causes. Although postmortem evaluations provide the relevant information necessary for framing a cow's death, a restructuring of on-farm databases is needed to integrate this

  10. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and a distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
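
    A minimal sketch, using scikit-learn rather than a PMML engine, of the two predictive features mentioned above: a GPR mean with confidence bounds derived from the predictive standard deviation. The data are synthetic.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 10, size=(30, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
    X_new = np.linspace(0, 10, 5).reshape(-1, 1)
    mean, std = gpr.predict(X_new, return_std=True)

    # 95% confidence bounds around the predictive estimate
    print(mean - 1.96 * std)
    print(mean + 1.96 * std)
    ```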

  11. Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model versus Logistic Regression. REL 2015-077

    Science.gov (United States)

    Koon, Sharon; Petscher, Yaacov

    2015-01-01

    The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…

  12. Formulating state space models in R with focus on longitudinal regression models

    DEFF Research Database (Denmark)

    Dethlefsen, Claus; Lundbye-Christensen, Søren

    2006-01-01

    We provide a language for formulating a range of state space models with response densities within the exponential family. The described methodology is implemented in the R-package sspir. A state space model is specified similarly to a generalized linear model in R, and then the time-varying terms...

  13. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLM) are a generalization of linear regression models which allow fitting regression models to response data, in all the sciences but especially the medical and dental sciences, that follow a general exponential family. They are a flexible and widely used class of models that can accommodate a variety of response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data-generating processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and overdispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
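
    A minimal sketch of such a comparison on synthetic zero-heavy counts, using the count-model classes available in statsmodels; a lower AIC favours the zero-inflated fits when excess zeros are present.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
    from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                  ZeroInflatedNegativeBinomialP)

    rng = np.random.default_rng(4)
    n = 300
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    counts = rng.poisson(np.exp(0.3 + 0.5 * x))
    counts[rng.random(n) < 0.4] = 0          # inject excess zeros

    for name, model in [("Poisson", Poisson(counts, X)),
                        ("NB", NegativeBinomial(counts, X)),
                        ("ZIP", ZeroInflatedPoisson(counts, X)),
                        ("ZINB", ZeroInflatedNegativeBinomialP(counts, X))]:
        res = model.fit(disp=False)
        print(name, "AIC:", round(res.aic, 1))   # lower AIC -> better fit
    ```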

  14. Rapid-relocation model for describing high-fluence retention of rare gases implanted in solids

    Science.gov (United States)

    Wittmaack, K.

    2009-09-01

    …to be due to bombardment-induced relocation and reemission; only the remaining 10% (or less) can be attributed to sputter erosion. The relocation efficiency is interpreted as the 'speed' of radiation enhanced diffusion towards the surface. The directionality of diffusion is attributed to the gradient of the defect density on the large-depth side of the damage distribution, where most of the implanted rare gas atoms come to rest. Based on SRIM calculations, two representative parameters are defined: the peak number of lattice displacements, N_d,m, and the spacing, Δz_r,d, between the peaks of the range and damage distributions. Support in favour of rapid rare gas relocation by radiation enhanced diffusion is provided by the finding that the relocation efficiencies for Ar and Xe, which vary by up to one order of magnitude, scale as Ψ = k·N_d,m/Δz_r,d, independent of the implantation energy (10-80 keV Ar, 10-500 keV Xe), within an error margin of only ±15%. The parameter k contains the properties of the implanted rare gas atoms. A recently described computer simulation model, which assumed that the pressure established by the implanted gas drives reemission, is shown to reproduce measured Xe profiles quite well, but only at the energy at which the fitting parameter of the model was determined (140 keV). Using the same parameter at other energies, deviations by up to a factor of four are observed.

  15. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L⁻¹. The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil.
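
    A minimal sketch of the weighting decision, on hypothetical calibration data and with an assumed 1/x² weighting scheme (a common choice for heteroscedastic calibration data; the paper's exact weights may differ).

    ```python
    import numpy as np
    import statsmodels.api as sm

    conc = np.array([10., 25., 50., 100., 250., 500., 1000.])   # ng/L
    resp = np.array([0.9, 2.2, 4.6, 9.5, 24.8, 49.0, 103.1])    # peak area ratio

    X = sm.add_constant(conc)
    ols = sm.OLS(resp, X).fit()                     # non-weighted calibration
    wls = sm.WLS(resp, X, weights=1.0 / conc**2).fit()

    # Weighting pulls the fit toward the low-concentration points, improving
    # trueness near the limit of quantification.
    print("OLS:", ols.params, "WLS:", wls.params)
    ```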

  16. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  17. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
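
    A generic IRLS loop, as a minimal sketch of the estimation machinery named above; the weight update shown is a simple hypothetical stand-in, not the paper's weight formula derived from the measurement- and additive-error variances.

    ```python
    import numpy as np

    def irls(X, y, weight_fn, n_iter=25):
        """Iteratively re-weighted least squares: alternate WLS solve and weight update."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        for _ in range(n_iter):
            w = weight_fn(X, y, beta)             # update weights at current fit
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        return beta

    # Example: down-weight points with large current residuals (robust-style).
    rng = np.random.default_rng(5)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
    beta = irls(X, y, lambda X, y, b: 1.0 / (1.0 + (y - X @ b) ** 2))
    print(beta)
    ```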

  18. Generic global regression models for growth prediction of Salmonella in ground pork and pork cuts

    DEFF Research Database (Denmark)

    Buschhardt, Tasja; Hansen, Tina Beck; Bahl, Martin Iain

    2017-01-01

    Introduction and Objectives: Models for the prediction of bacterial growth in fresh pork are primarily developed using two-step regression (i.e. primary models followed by secondary models). These models are also generally based on experiments in liquids or ground meat and neglect surface growth. It has been shown that one-step global regressions can result in more accurate models and that bacterial growth on intact surfaces can substantially differ from growth in liquid culture. Material and Methods: We used a global-regression approach to develop predictive models for the growth of Salmonella. … One part of the obtained log-transformed cell counts was used for model development and another for model validation. The Ratkowsky square root model and the relative lag time (RLT) model were integrated into the logistic model with delay. Fitted parameter estimates were compared to investigate the effect…

  19. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation presents one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and therefore gain a short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for variable selection, identifying (from the initial pool of eleven variables) those which are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.

  20. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion. Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
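
    A minimal sketch on synthetic overdispersed counts: the Pearson dispersion statistic flags overdispersion in a Poisson GLM, and a quasi-likelihood-style correction rescales the standard errors (statsmodels' scale="X2" option). The data are hypothetical stand-ins for the excess-mortality setting.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(size=n)
    mu = np.exp(0.2 + 0.4 * x)
    counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

    X = sm.add_constant(x)
    poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

    # Pearson dispersion statistic: values well above 1 indicate overdispersion.
    print("dispersion:", poisson.pearson_chi2 / poisson.df_resid)

    # Quasi-likelihood-style correction: scale standard errors by the dispersion.
    quasi = sm.GLM(counts, X, family=sm.families.Poisson()).fit(scale="X2")
    print(quasi.bse)   # inflated (more honest) standard errors
    ```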

  1. Predictions and implications of a poisson process model to describe corrosion of transuranic waste drums

    International Nuclear Information System (INIS)

    Lyon, B.F.; Holmes, J.A.; Wilbert, K.A.

    1995-01-01

    A risk assessment methodology is described in this paper to compare the risks associated with immediate or near-term retrieval of transuranic (TRU) waste drums from bermed storage versus delayed retrieval. Assuming a Poisson process adequately describes corrosion, significant breaching of drums is expected to begin at approximately 15 and 24 yr for pitting and general corrosion, respectively. Because of this breaching, more risk will be incurred by delayed than by immediate retrieval
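
    The breach-time arithmetic implied by a homogeneous Poisson corrosion model is simple: with breach rate λ, the probability of at least one breach by time t is 1 − exp(−λt). A minimal sketch with hypothetical rates, not the paper's fitted values:

    ```python
    import numpy as np

    def prob_breach(rate_per_yr, years):
        """P(at least one breach by t) under a homogeneous Poisson process."""
        return 1.0 - np.exp(-rate_per_yr * np.asarray(years, dtype=float))

    t = np.array([5, 15, 24, 40])
    print(prob_breach(0.05, t))   # e.g. an assumed pitting rate of 0.05 / yr
    ```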

  2. Development and application of an asymmetric deformation model to describe the fuel rod behaviour during LOCA

    International Nuclear Information System (INIS)

    Chakraborty, A.K.; Schubert, J.D.

    1983-01-01

    For the calculation of clad ballooning in single-rod and rod-bundle experiments, a model has been developed that considers the influence of azimuthal temperature gradients due to the eccentricity of the pellets. This model is based on the secondary creep model of Norton and on the concentric deformation model, ending in cladding burst, proposed by F. Erbacher. The new model considers the azimuthal temperature differences along the cladding and the resulting differences in deformation. With this model, calculations of cladding burst deformations for single-rod and rod-bundle experiments are performed, with good agreement

  3. Review Of Applied Mathematical Models For Describing The Behaviour Of Aqueous Humor In Eye Structures

    Science.gov (United States)

    Dzierka, M.; Jurczak, P.

    2015-12-01

    In the paper, currently used methods for modeling the flow of the aqueous humor through eye structures are presented. Then a computational model based on rheological models of Newtonian and non-Newtonian fluids is proposed. The proposed model may be used for modeling the flow of the aqueous humor through the trabecular meshwork. The trabecular meshwork is modeled as an array of rectilinear parallel capillary tubes, and the flow of both Newtonian and non-Newtonian fluids is considered. As a result of the discussion, mathematical equations for the permeability of the porous medium and the velocity of fluid flow through it are derived.

  4. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    Science.gov (United States)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.

  5. Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)

    Science.gov (United States)

    Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul

    2018-05-01

    The Geographically Weighted Regression (GWR) model has been widely applied in many practical fields for exploring the spatial heterogeneity of a regression model. However, this method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to a distorted estimate of the underlying regression model. One solution for handling outliers in the regression model is to use robust models, giving the Robust Geographically Weighted Regression (RGWR) model. This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (Air Polluter Standard Index - APSI) based on the RGWR approach. In this research, we also consider seven variables that are directly related to the air pollution level: the traffic velocity, the population density, the business center aspect, the air humidity, the wind velocity, the air temperature, and the area size of the urban forest. The best model is determined by the smallest AIC value. There are significant differences between ordinary regression and RGWR in this case, but basic GWR using the Gaussian kernel is the best model for modeling the APSI because it has the smallest AIC.
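
    A minimal sketch of the core GWR computation, assuming hypothetical coordinates and a fixed Gaussian-kernel bandwidth: each location gets its own weighted least squares fit (a robust variant would further down-weight observations with large residuals).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 60
    coords = rng.uniform(0, 10, size=(n, 2))          # station locations
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + 0.3 * coords[:, 0] + rng.normal(size=n)

    def gwr_coefs(i, bandwidth=2.0):
        """Local coefficients at location i via Gaussian-kernel-weighted WLS."""
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)             # Gaussian kernel weights
        sw = np.sqrt(w)
        return np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

    print(gwr_coefs(0))    # intercept and slope vary from location to location
    print(gwr_coefs(30))
    ```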

  6. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data, with one parameter defining both the mean and the variance. Poisson regression assumes that the mean and variance are the same (equidispersion). Nonetheless, in some cases count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion causes standard errors to be underestimated and, furthermore, leads to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. If there is over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so that Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is acquired by the Maximum Likelihood Ratio Test (MLRT) method.

  7. Integrated model of insulin and glucose kinetics describing both hepatic glucose and pancreatic insulin regulation

    DEFF Research Database (Denmark)

    Erlandsen, Mogens; Martinussen, Christoffer; Gravholt, Claus Højbjerg

    2018-01-01

    Background and objectives: Modeling of glucose kinetics has to a large extent been based on models with plasma insulin as a known forcing function. Furthermore, population-based statistical methods for parameter estimation in these models have mainly addressed random inter-individual variations and not intra-individual variations in the parameters. Here we present an integrated whole-body model of glucose and insulin kinetics which extends the well-known two-compartment glucose minimal model. The population-based estimation technique allows for quantification of both random inter- and intra-individual variation in selected parameters using simultaneous data series on glucose and insulin. Methods: We extend the two-compartment glucose model into a whole-body model for both glucose and insulin using a simple model for the pancreas compartment which includes feedback of glucose on both…

  8. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    Science.gov (United States)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying the mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10⁶ km² in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water-scarce regions, and further research should focus on improving this aspect of regression-based global hydrological models.

  9. A note on modeling of tumor regression for estimation of radiobiological parameters

    International Nuclear Information System (INIS)

    Zhong, Hualiang; Chetty, Indrin

    2014-01-01

    Purpose: Accurate calculation of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on derived parameters. In this study, the authors have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for estimation of radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time T_d, half-life of dead cells T_r, and cell survival fraction SF_D under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from two existing models: Chvetsov's model (C-model) and Lim's model (L-model). The C-model and L-model were optimized with the parameter T_d fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43 ± 0.08, and the half-life of dead cells averaged over the six patients is 17.5 ± 3.2 days. The parameters T_r and SF_D optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the T_d-fixed C-model, and by 32.1% and 112.3% from those optimized with the T_d-fixed L-model, respectively. Conclusions: The Z-model was analytically constructed from the differential equations of cell populations that describe changes in the number of different tumor cells during the course of radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The generated model and its optimization method may help develop high-quality treatment regimens for individual patients
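
    A minimal sketch of the birth-and-death bookkeeping underlying such models, with hypothetical parameter values: viable cells regrow with doubling time T_d, each daily fraction kills a proportion 1 − SF_D, and dead cells clear with half-life T_r.

    ```python
    import numpy as np

    Td, Tr, SF_D = 40.0, 17.5, 0.43       # days, days, survival per fraction
    growth = np.log(2) / Td
    clearance = np.log(2) / Tr

    viable, dead = 1.0, 0.0               # relative cell numbers
    volume = []
    for day in range(60):
        if day < 30:                      # one fraction per day for 30 days
            killed = (1.0 - SF_D) * viable
            viable -= killed
            dead += killed
        viable *= np.exp(growth)          # regrowth of surviving cells over one day
        dead *= np.exp(-clearance)        # resorption of dead cells over one day
        volume.append(viable + dead)      # tumor volume proxy

    print([round(v, 3) for v in volume[::10]])   # tumor regression over time
    ```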

  10. ANALYSIS OF THE FINANCIAL PERFORMANCES OF THE FIRM, BY USING THE MULTIPLE REGRESSION MODEL

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2011-11-01

    Full Text Available The information obtained through the use of simple linear regression is not always enough to characterize the evolution of an economic phenomenon and, furthermore, to identify its possible future evolution. To remedy these drawbacks, the specialist literature includes multiple regression models, in which the evolution of the dependent variable is defined as a function of two or more factorial variables.

  11. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    Science.gov (United States)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a dependent variable Y in the form of discrete data and independent variables X is the Poisson regression model. Regression models recently used for data with a discrete dependent variable include, among others, Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable that gives the most influence on the infant mortality rate is the average breastfeeding (X9).

  12. The Application of Classical and Neural Regression Models for the Valuation of Residential Real Estate

    Directory of Open Access Journals (Sweden)

    Mach Łukasz

    2017-06-01

    Full Text Available The research process aimed at building regression models, which help to value residential real estate, is presented in the following article. Two widely used computational tools, i.e. classical multiple regression and regression models of artificial neural networks, were used to build the models. The aim of the conducted research is to assess the practical usefulness of the above-mentioned tools and to compare them. The data used for conducting the analyses refer to the secondary transactional residential real estate market.

  13. Selection of heat transfer model for describing short-pulse laser heating silica-based sensor

    International Nuclear Information System (INIS)

    Hao Xiangnan; Nie Jinsong; Li Hua; Bian Jintian

    2012-01-01

    The fundamental equations of the Fourier and non-Fourier heat transfer models were solved numerically with the finite difference method. The relative changes between the temperature curves of the two heat transfer models were analyzed under laser irradiation with different pulse widths of 10 ns, 1 ns, 100 ps and 10 ps, and the impact of different thermal relaxation times on the non-Fourier model results was discussed. For pulses with a width less than or equal to 100 ps irradiating silicon material, the surface temperature increases slowly and a carrier effect occurs, which the non-Fourier model can reflect properly. For a general material, when the pulse width is less than or equal to the thermal relaxation time of the material, the carrier effect occurs; in this case, the non-Fourier model should be used. (authors)

  14. [Bibliometrics and visualization analysis of land use regression models in ambient air pollution research].

    Science.gov (United States)

    Zhang, Y J; Zhou, D H; Bai, Z P; Xue, F X

    2018-02-10

    Objective: To quantitatively analyze the current status and development trends regarding land use regression (LUR) models in ambient air pollution studies. Methods: Relevant literature from the PubMed database before June 30, 2017 was analyzed using the Bibliographic Items Co-occurrence Matrix Builder (BICOMB 2.0). Keyword co-occurrence networks, cluster mapping and timeline mapping were generated using the CiteSpace 5.1.R5 software. Relevant literature identified in three Chinese databases was also reviewed. Results: A total of 464 relevant papers were retrieved from the PubMed database. The number of papers published showed an annual increase, in line with a growing trend in the index. Most papers were published in the journal Environmental Health Perspectives. Results from the co-word cluster analysis identified five clusters: cluster #0 consisted of birth cohort studies related to the health effects of prenatal exposure to air pollution; cluster #1 referred to land use regression modeling and exposure assessment; cluster #2 was related to the epidemiology of traffic exposure; cluster #3 dealt with exposure to ultrafine particles and related health effects; cluster #4 described exposure to black carbon and related health effects. Timeline mapping indicated that clusters #0 and #1 were the main research areas, while clusters #3 and #4 were the up-coming hot areas of research. Ninety-four relevant papers were retrieved from the Chinese databases, with most of them related to modeling studies. Conclusion: In order to better assess the health-related risks of ambient air pollution, and to best inform preventive public health intervention policies, the application of LUR models to environmental epidemiology studies in China should be encouraged.

  15. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Full Text Available Abstract Background The modelling of the lactation curve provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-Cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R², root mean square error (RMSE), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC). Results In general, the highest milk yield was observed in the fourth lactation, whereas it was lowest in the first lactation. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit to the lactation curve in almost all lactations. Peak yield was highest in the fourth and lowest in the first lactation. Further, the first lactation showed the highest persistency but a relatively longer time to achieve peak yield than the other lactations. Conclusion Lactation curve modelling using the gamma-type function may be helpful for setting management strategies at the farm level; however, the modelling must be optimized regularly before implementation to enhance productivity in Gir crossbred cows.
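
    A minimal sketch of fitting the gamma-type (Wood) function y(t) = a·t^b·e^(−ct) to simulated test-day records with scipy; for this function the peak occurs at t = b/c. All data and starting values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Gamma-type (Wood) lactation function."""
        return a * t**b * np.exp(-c * t)

    rng = np.random.default_rng(8)
    t = np.arange(5, 305, 10.0)                        # days in milk
    y = wood(t, 10.0, 0.25, 0.004) * rng.normal(1, 0.05, size=t.size)

    (a, b, c), _ = curve_fit(wood, t, y, p0=[8, 0.2, 0.003])
    peak_time = b / c                                  # peak yield at t = b/c
    print(a, b, c, "peak at day", round(peak_time))
    ```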

  16. Fractional single-phase-lagging heat conduction model for describing anomalous diffusion

    Directory of Open Access Journals (Sweden)

    T.N. Mishra

    2016-03-01

    Full Text Available The fractional single-phase-lagging (FSPL) heat conduction model is obtained by combining a scalar time-fractional conservation equation with the single-phase-lagging (SPL) heat conduction model. Based on the FSPL heat conduction model, anomalous diffusion within a finite thin film is investigated. The effect of different parameters on the solution is examined, and the asymptotic behavior of the FSPL model is studied. The analytical solution is obtained using the Laplace transform method, and the whole analysis is presented in dimensionless form. Numerical examples of particular interest are studied and discussed in detail.

  17. Steady shear rate rheology of suspensions, as described by the giant floc model

    NARCIS (Netherlands)

    Stein, H.N.; Laven, J.

    2001-01-01

    The break-down of a particle network by shear is described as the development of shear planes: a region able to withstand low shear stresses may break down under a larger stress; thus, with increasing shear stress and shear rate, the mutual distance (A) between successive shear planes decreases…

  18. Using a Model to Describe Students' Inductive Reasoning in Problem Solving

    Science.gov (United States)

    Canadas, Maria C.; Castro, Encarnacion; Castro, Enrique

    2009-01-01

    Introduction: We present some aspects of a wider investigation (Canadas, 2007), whose main objective is to describe and characterize inductive reasoning used by Spanish students in years 9 and 10 when they work on problems that involved linear and quadratic sequences. Method: We produced a test composed of six problems with different…

  19. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point estimation and interval estimation of the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for the covariates; model 2: adjusting for the duration of caregivers' education; model 3: adjusting for the distance between village and township and child month-age, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those from the conventional log-binomial regression model, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less non-convergence and has more advantages in application compared with the conventional log-binomial regression model.

  20. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
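
    A minimal sketch of the Gauss-Newton iteration on a hypothetical exponential model y = a·e^(−kt): at each step the parameter vector is updated by β ← β + (JᵀJ)⁻¹Jᵀr, where J is the Jacobian of the model function and r the residual vector. (Practical implementations add damping or line search.)

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    t = np.linspace(0, 4, 50)
    y = 2.5 * np.exp(-1.3 * t) + rng.normal(scale=0.02, size=t.size)

    beta = np.array([1.0, 1.0])                      # starting values (a, k)
    for _ in range(20):
        a, k = beta
        resid = y - a * np.exp(-k * t)               # residual vector r
        J = np.column_stack([np.exp(-k * t),         # df/da
                             -a * t * np.exp(-k * t)])   # df/dk
        beta = beta + np.linalg.solve(J.T @ J, J.T @ resid)

    print(beta)   # should approach (2.5, 1.3)
    ```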

  1. Modeling Tetanus Neonatorum case using the regression of negative binomial and zero-inflated negative binomial

    Science.gov (United States)

    Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni

    2017-12-01

    Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia until 2015. The Tetanus Neonatorum data contain overdispersion and a large proportion of zeros. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, by using NB and ZINB regression, and (2) to obtain the best model. The result of this study indicates that ZINB is better than NB regression, with a smaller AIC.

  2. A simple geometrical model describing shapes of soap films suspended on two rings

    Science.gov (United States)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical-frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
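
    A minimal sketch of the frusta construction with hypothetical ring geometry: the film is represented as a stack of conical frusta, and minimizing the total lateral area over the intermediate radii yields a catenoid-like profile.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R1 = R2 = 1.0             # ring radii
    L = 0.6                   # ring separation
    n = 20                    # number of frusta
    dz = L / n

    def area(r_mid):
        """Total lateral area of the stack of frusta with given interior radii."""
        r = np.concatenate([[R1], r_mid, [R2]])
        slant = np.sqrt(np.diff(r) ** 2 + dz**2)
        return np.sum(np.pi * (r[:-1] + r[1:]) * slant)

    res = minimize(area, x0=np.full(n - 1, R1), method="BFGS")
    print(res.x)              # radii sag toward the axis, as for a catenoid
    ```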

  3. How well do basic models describe the turbidity currents coming down Monterey and Congo Canyon?

    Science.gov (United States)

    Cartigny, M.; Simmons, S.; Heerema, C.; Xu, J. P.; Azpiroz, M.; Clare, M. A.; Cooper, C.; Gales, J. A.; Maier, K. L.; Parsons, D. R.; Paull, C. K.; Sumner, E. J.; Talling, P.

    2017-12-01

    Turbidity currents rival rivers in their global capacity to transport sediment and organic carbon. Furthermore, turbidity currents break submarine cables that now carry >95% of our global data traffic. Accurate turbidity current models are thus needed to quantify their transport capacity and to predict the forces exerted on seafloor structures. Despite this need, existing numerical models are typically only calibrated with scaled-down laboratory measurements due to the paucity of direct measurements of field-scale turbidity currents. This lack of calibration leaves much uncertainty in the validity of existing models. Here we use the most detailed observations of turbidity currents yet acquired to validate one of the most fundamental models proposed for turbidity currents, the modified Chézy model. Direct measurements on which the validation is based come from two sites that feature distinctly different flow modes and grain sizes. The first is the multi-institution Coordinated Canyon Experiment (CCE) in Monterey Canyon, California, where an array of six moorings along the canyon axis captured at least 15 flow events lasting up to hours. The second is the deep-sea Congo Canyon, where 10 finer-grained flows, each lasting several days, were measured by a single mooring. The moorings captured depth-resolved velocity and suspended sediment concentration at high resolution; these measurements are used to test the modified Chézy model. This basic model has been very useful for river studies over the past 200 years, as it provides a rapid estimate of how flow velocity varies with changes in river level and energy slope. Chézy-type models assume that the gravitational force of the flow equals the friction at the river-bed. Modified Chézy models have been proposed for turbidity currents. However, the absence of detailed measurements of friction and sediment concentration within full-scale turbidity currents has forced modellers to make rough assumptions for these parameters. Here…

  4. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
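
    A minimal sketch of one Bayesian ingredient of such constructions, assuming a fixed partition and synthetic data: lag-τ transition counts combined with a symmetric Dirichlet prior give a posterior over each row of the transition matrix T (the full method described above also infers the partition P).

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    traj = rng.choice(3, size=1000, p=[0.5, 0.3, 0.2])   # discretized trajectory
    tau = 2                                               # mesoscopic time step

    n_states = 3
    counts = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-tau], traj[tau:]):
        counts[i, j] += 1                                 # lag-tau transition counts

    alpha = 1.0                                           # symmetric Dirichlet prior
    post_mean = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
    print(post_mean)           # posterior-mean transition matrix T
    ```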

  5. Transport in semiconductor nanowire superlattices described by coupled quantum mechanical and kinetic models.

    Science.gov (United States)

    Alvaro, M; Bonilla, L L; Carretero, M; Melnik, R V N; Prabhakar, S

    2013-08-21

    In this paper we develop a kinetic model for the analysis of semiconductor superlattices, accounting for quantum effects. The model consists of a Boltzmann-Poisson type system of equations with simplified Bhatnagar-Gross-Krook collisions, obtained from the general time-dependent Schrödinger-Poisson model using Wigner functions. This system for superlattice transport is supplemented by the quantum mechanical part of the model based on the Ben-Daniel-Duke form of the Schrödinger equation for a cylindrical superlattice of finite radius. The resulting energy spectrum is used to characterize the Fermi-Dirac distribution that appears in the Bhatnagar-Gross-Krook collision, thereby coupling the quantum mechanical and kinetic parts of the model. The kinetic model uses the dispersion relation obtained by the generalized Kronig-Penney method, and allows us to estimate radii of quantum wire superlattices that have the same miniband widths as in experiments. It also allows us to determine more accurately the time-dependent characteristics of superlattices, in particular their current density. Results, for several experimentally grown superlattices, are discussed in the context of self-sustained coherent oscillations of the current density which are important in an increasing range of current and potential applications.

  6. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond those obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes that may result from these interventions included changed physician or patient knowledge, beliefs or attitudes, and changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details the evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  7. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involved the relationship between paddy yield and 20 topsoil variables analyzed prior to planting at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
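
    The cluster-then-regress idea can be sketched as follows: a plain textbook fuzzy c-means routine partitions the observations, and a separate least-squares fit is made within each cluster. The soil covariates and yields here are random stand-ins, not the MARDI data.

    ```python
    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
        """Plain fuzzy c-means; returns centres and the n x c membership matrix U."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))        # random initial memberships
        for _ in range(iters):
            W = U ** m
            centres = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)
        return centres, U

    # hypothetical stand-ins for the topsoil covariates and paddy yield
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=200)

    _, U = fuzzy_cmeans(X, c=2)
    hard = U.argmax(axis=1)                               # hard-assign each plot to a cluster
    for k in range(2):                                    # then fit one OLS model per cluster
        mask = hard == k
        Xk = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(Xk, y[mask], rcond=None)
        mse = np.mean((y[mask] - Xk @ beta) ** 2)
        print(f"cluster {k}: n={mask.sum()}, MSE={mse:.4f}")
    ```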

  8. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness estimation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results showed that for sub-pixel evaluation the most accurate prediction of change is not necessarily based on the most accurate individual assessments. Among single methods, the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for single-date assessments. For imperviousness change assessment the ensembles always outperformed single-model approaches, which means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
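
    A minimal version of such a heterogeneous ensemble, assuming scikit-learn and synthetic data in place of the Landsat predictors: each regressor is trained separately and the ensemble prediction is the plain average of the individual predictions.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # synthetic stand-in for spectral predictors and imperviousness fraction
    X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    models = [
        RandomForestRegressor(n_estimators=200, random_state=0),
        GradientBoostingRegressor(random_state=0),
        KNeighborsRegressor(n_neighbors=7),
        SVR(kernel="rbf", C=10.0),
    ]
    preds = np.column_stack([m.fit(Xtr, ytr).predict(Xte) for m in models])

    for m, p in zip(models, preds.T):  # individual model errors
        print(type(m).__name__, mean_squared_error(yte, p) ** 0.5)
    print("ensemble", mean_squared_error(yte, preds.mean(axis=1)) ** 0.5)
    ```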

  9. SOME STATISTICAL ISSUES RELATED TO MULTIPLE LINEAR REGRESSION MODELING OF BEACH BACTERIA CONCENTRATIONS

    Science.gov (United States)

    As a fast and effective technique, the multiple linear regression (MLR) method has been widely used in modeling and prediction of beach bacteria concentrations. Among previous works on this subject, however, several issues were insufficiently or inconsistently addressed. Those is...

  10. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become common over the past decade, as significantly more information on formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application which uses data collected from a diesel bus under real-world driving conditions.
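
    A toy two-regime smooth transition regression, with a logistic transition driven by an exhaust-flow-like variable, can be fitted by nonlinear least squares; the data and coefficients below are invented, and the paper's treatment of flow-dependent transport delay is more elaborate.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # synthetic data: emission y depends on engine covariate x, regime set by flow s
    rng = np.random.default_rng(0)
    n = 400
    x = rng.uniform(0, 1, n)
    s = rng.uniform(0, 10, n)                    # stand-in for exhaust flow rate
    G = 1.0 / (1.0 + np.exp(-2.0 * (s - 5.0)))   # true smooth transition
    y = (1 - G) * (1.0 + 2.0 * x) + G * (4.0 - 1.0 * x) + rng.normal(0, 0.1, n)

    def resid(theta):
        b10, b11, b20, b21, gamma, c = theta
        g = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
        yhat = (1 - g) * (b10 + b11 * x) + g * (b20 + b21 * x)
        return y - yhat

    fit = least_squares(resid, x0=[0.0, 1.0, 1.0, 0.0, 1.0, 4.0])
    print(fit.x)  # recovers the two regimes plus the transition speed and location
    ```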

  11. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The actual load data provided are deseasonalized, and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line using the Akaike Corrected Information Criterion (AICC). The developed model fits the data well. Difficulties occur when the provided data include noise or errors, and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise is performed. The produced results indicate that the proposed method, which is based on multi-model partitioning theory, successfully tackles the studied problem. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies concerning electricity consumption and electricity price forecasts. (author)
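
    A minimal sketch of the off-line step (ARMA fitting with order selection by an information criterion), assuming statsmodels and a synthetic deseasonalized series; swapping `aicc` for `aic` or `bic` reproduces the comparison of criteria.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # synthetic deseasonalized load series standing in for the Greek data
    rng = np.random.default_rng(0)
    e = rng.normal(size=600)
    y = np.zeros(600)
    for t in range(2, 600):  # true ARMA(2,1) process
        y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.4 * e[t - 1]

    # brute-force search over small (p, q) orders, scored by corrected AIC
    best = min(
        ((p, q) for p in range(4) for q in range(4)),
        key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().aicc,
    )
    print("order chosen by AICC:", best)
    ```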

  12. An empirical model to describe performance degradation for warranty abuse detection in portable electronics

    International Nuclear Information System (INIS)

    Oh, Hyunseok; Choi, Seunghyuk; Kim, Keunsu; Youn, Byeng D.; Pecht, Michael

    2015-01-01

    Portable electronics makers have introduced liquid damage indicators (LDIs) into their products to detect warranty abuse caused by water damage. However, under certain conditions, these indicators can exhibit inconsistencies in detecting liquid damage. This study is motivated by the fact that the reliability of LDIs in portable electronics is in question. In this paper, first, a scheme of life tests is devised for LDIs in conjunction with a robust color classification rule. Second, a degradation model is proposed for LDIs by considering two physical mechanisms: (1) phase change from vapor to water and (2) water transport in the porous paper. Finally, the degradation model is validated with additional tests using actual smartphone sets subjected to thermal cycling from −15 °C to 25 °C at a relative humidity of 95%. By employing the innovative life testing scheme and the novel performance degradation model, it is expected that the performance of LDIs for a particular application can be assessed quickly and accurately. - Highlights: • Devise an efficient scheme of life testing for a warranty abuse detector in portable electronics. • Develop a performance degradation model for the warranty abuse detector used in portable electronics. • Validate the performance degradation model with life tests of actual smartphone sets. • Help make a decision on warranty service in portable electronics manufacturers

  13. A model, describing the influence of water management alternatives on dike stability

    Directory of Open Access Journals (Sweden)

    J. W. M. Lambert

    2015-11-01

    Full Text Available Awareness is growing that the economic effects of land subsidence are high. Nevertheless, quantifying these economic losses is difficult and, as far as is known, has not yet been done in a sophisticated way. Also, to be able to decide about future strategies, for example to avoid or decrease subsidence, it is necessary to know the financial consequences of measures and possible solutions. As a first step towards quantifying these economic effects, a MODFLOW-SCR (coupled MODFLOW-settlement) model is coupled with the model DAM. Based on the local stratigraphy, the shape and composition of the existing dike or levee, the level of the surface water and the surface level, the macro-stability of the dike is calculated and, if the dike does not meet the required stability, adaptations are proposed. The model makes it possible to separate the effects caused by sea-level rise from the effects of subsidence. Coupling the DAM model with an economic model to calculate the costs of these adaptations is under construction.

  14. Integrated compartmental model for describing the transport of solute in a fractured porous medium. [FRACPORT

    Energy Technology Data Exchange (ETDEWEB)

    DeAngelis, D.L.; Yeh, G.T.; Huff, D.D.

    1984-10-01

    This report documents a model, FRACPORT, that simulates the transport of a solute through a fractured porous matrix. The model should be useful in analyzing the possible transport of radionuclides from shallow-land burial sites in humid environments. The use of the model is restricted to transport through saturated zones. The report first discusses the general modeling approach used, which is based on the Integrated Compartmental Method. The basic equations of solute transport are then presented. The model, which assumes a known water velocity field, solves these equations on two different time scales; one related to rapid transport of solute along fractures and the other related to slower transport through the porous matrix. FRACPORT is validated by application to a simple example of fractured porous medium transport that has previously been analyzed by other methods. Then its utility is demonstrated in analyzing more complex cases of pulses of solute into a fractured matrix. The report serves as a user's guide to FRACPORT. A detailed description of data input, along with a listing of input for a sample problem, is provided. 16 references, 18 figures, 3 tables.

  15. Developing and testing a global-scale regression model to quantify mean annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10^6 km2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
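
    The regression side of such a study reduces to an ordinary least-squares fit on (partly log-transformed) catchment descriptors; the variables below are random stand-ins, and the exact transformations used in the paper may differ.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # hypothetical stand-ins for the catchment predictors used in the paper
    rng = np.random.default_rng(0)
    n = 1885
    area = 10 ** rng.uniform(0.3, 6, n)       # catchment area, km^2
    precip = rng.uniform(200, 3000, n)        # mean annual precipitation, mm/yr
    temp = rng.uniform(-5, 28, n)             # mean annual air temperature, deg C
    slope = rng.uniform(0.1, 30, n)
    elev = rng.uniform(1, 4000, n)
    maf = 1e-6 * area * precip * rng.lognormal(0, 0.3, n)   # toy "true" MAF

    X = np.column_stack([np.log10(area), np.log10(precip), temp, slope, np.log10(elev)])
    model = LinearRegression().fit(X, np.log10(maf))
    print("R^2 =", model.score(X, np.log10(maf)))
    ```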

  16. Modified VMD model with correct analytic properties for describing electromagnetic structure of He4 nucleus

    International Nuclear Information System (INIS)

    Dubnicka, S.; Lucan, L.

    1988-12-01

    A new phenomenological model for the electromagnetic (e.m.) form factor (ff) of the 4He nucleus is presented. It is based on a modification of the vector-meson-dominance (VMD) model, well proven in the e.m. interactions of hadrons, by means of an incorporation of the correct 4He ff analytic properties, nonzero vector-meson widths and the right power asymptotic behaviour predicted by the quark model. It reproduces the existing experimental information on the 4He e.m. ff in the space-like region quite well. Furthermore, couplings of all well-established isoscalar vector mesons with J^PC = 1^-- to the 4He nucleus are evaluated as a result of the analysis, and the time-like region behaviour of the 4He e.m. ff is predicted. As a consequence of the latter, the total cross section of the e+e- → 4He anti-4He process is calculated for the first time. (author). 17 refs, 3 figs

  17. Comparison of two mathematical models for describing heat-induced cell killing

    International Nuclear Information System (INIS)

    Roti Roti, J.L.; Henle, K.J.

    1980-01-01

    A computer-based minimization algorithm is utilized to obtain the optimum fits of two models to hyperthermic cell killing data. The models chosen are the multitarget, single-hit equation, which is in general use, and the linear-quadratic equation, which has been applied to cell killing by ionizing irradiation but not to heat-induced cell killing. The linear-quadratic equation fits hyperthermic cell killing data as well as the multitarget, single-hit equation. Both parameters of the linear-quadratic equation obey the Arrhenius law, whereas only one of the two parameters of the multitarget, single-hit equation obeys the Arrhenius law. Thus the linear-quadratic function can completely define cell killing as a function of both time and temperature. In addition, the linear-quadratic model will provide a simplified approach to the study of the synergism between heat and X irradiation
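
    Both survival models can be fitted to the same data with standard nonlinear least squares and compared by their residual sums of squares; the survival fractions below are synthetic, and in practice fits are often done on log-survival.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def linear_quadratic(D, a, b):
        """Survival S = exp(-(a*D + b*D^2))."""
        return np.exp(-(a * D + b * D**2))

    def multitarget_single_hit(D, D0, n):
        """Survival S = 1 - (1 - exp(-D/D0))**n."""
        return 1.0 - (1.0 - np.exp(-D / D0)) ** n

    # toy survival data standing in for hyperthermic cell-killing measurements
    D = np.linspace(0, 10, 20)
    S = np.exp(-(0.05 * D + 0.02 * D**2)) * np.random.default_rng(0).lognormal(0, 0.05, 20)

    for model, p0 in [(linear_quadratic, (0.1, 0.01)), (multitarget_single_hit, (3.0, 2.0))]:
        popt, _ = curve_fit(model, D, S, p0=p0, maxfev=10_000)
        sse = np.sum((S - model(D, *popt)) ** 2)
        print(model.__name__, popt, f"SSE={sse:.5f}")
    ```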

  18. Benchmarking of numerical models describing the dispersion of radionuclides in the Arctic Seas

    DEFF Research Database (Denmark)

    Scott, E.M.; Gurbutt, P.; Harms, I.

    1997-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the International Atomic Energy Agency (IAEA), a working group was created to model the dispersal and transfer of radionuclides released from radioactive waste disposed of in the Kara Sea. The objectives of this group are: (1) development of realistic and reliable assessment models for the dispersal of radioactive contaminants both within, and from, the Arctic ocean; and (2) evaluation of the contributions of different transfer mechanisms to contaminant dispersal and hence, ultimately, to the risks to human health and environment...

  19. A gauge model describing N relativistic particles bound by linear forces

    International Nuclear Information System (INIS)

    Filippov, A.T.

    1988-01-01

    A relativistic model of N particles bound by linear forces is obtained by applying the gauging procedure to the linear canonical symmetries of a simple (rudimentary) nonrelativistic N-particle Lagrangian extended to relativistic phase space. The new (gauged) Lagrangian is formally Poincaré invariant, and the Hamiltonian is a linear combination of first-class constraints which are closed with respect to Poisson brackets and generate the localized canonical symmetries. The gauge potentials appear as the Lagrange multipliers of the constraints. Gauge fixing and quantization of the model are also briefly discussed. 11 refs

  20. Feasibility study for a plasticity model to describe the transient thermomechanical behavior of Zircaloy. Final report

    International Nuclear Information System (INIS)

    Valanis, K.C.

    1979-11-01

    The conceptual framework of the endochronic theory is described and a summary of its capabilities, as well as past and potential applications to the mechanical response of metals to general histories of deformation, temperature, and radiation, is given. The purely mechanical part of the theory is developed on the basis of the concept of intrinsic time, which serves to incorporate in a unified and concise fashion the effects of strain history and strain rate on the stress response. The effects of temperature are introduced by means of the theory of deformation kinetics through its relation to the internal variable theory of irreversible thermodynamics. As a result, physically sound formulae are developed which account for the effect of temperature history on the stress response. An approach to describing irradiation effects is briefly discussed. More research would be needed to define appropriate constitutive representations for Zircaloy. The endochronic theory is also examined from a numerical analysis viewpoint for future applications to problems of practical interest. In appendix B a first-cut attempt has been made to assess the computational efficiencies of material constitutive equation approaches

  1. q-deformed Einstein's model to describe specific heat of solid

    Science.gov (United States)

    Guha, Atanu; Das, Prasanta Kumar

    2018-04-01

    Realistic phenomena can be described more appropriately using a generalized canonical ensemble, with proper parameter sets involved. We have generalized Einstein's theory for the specific heat of solids in Tsallis statistics, where temperature fluctuation is introduced into the theory via the fluctuation parameter q. At low temperature the Einstein curve of the specific heat in the nonextensive Tsallis scenario lies exactly on the experimental data points. Consequently this q-modified Einstein curve is found to overlap with the one predicted by Debye. Considering only the temperature-fluctuation effect (even without allowing more than one mode of vibration to be triggered), we found that the CV vs T curve is as good as the one obtained by considering the different modes of vibration suggested by Debye. Generalizing Einstein's theory in Tsallis statistics, we found that a unique value of the Einstein temperature θE, along with a temperature-dependent deformation parameter q(T), can well describe the specific heat of solids, i.e. the theory is equivalent to Debye's theory with a temperature-dependent θD.
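
    For reference, the undeformed (q → 1) Einstein heat capacity that the paper generalizes is straightforward to compute; the q-deformed version replaces the Boltzmann factor with a Tsallis q-exponential and is not reproduced here.

    ```python
    import numpy as np

    def einstein_cv(T, theta_E):
        """Standard (q = 1) Einstein heat capacity in units of 3Nk (Dulong-Petit)."""
        x = theta_E / T
        return (x**2) * np.exp(x) / (np.exp(x) - 1.0) ** 2

    T = np.linspace(20, 400, 8)
    print(einstein_cv(T, theta_E=150.0))  # approaches 1 at high T, vanishes as T -> 0
    ```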

  2. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, in which a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The research results are as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests needed. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
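
    The quantity the authors propose alongside R2, a confidence interval of the predicted value, is readily available for ordinary least squares; a minimal sketch with invented trip-production data, assuming statsmodels:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # toy trip-production data: trips per household vs household size
    rng = np.random.default_rng(0)
    size = rng.uniform(1, 6, 60)
    trips = 0.8 + 1.5 * size + rng.normal(0, 0.8, 60)

    X = sm.add_constant(size)
    res = sm.OLS(trips, X).fit()
    pred = res.get_prediction(sm.add_constant(np.array([2.0, 4.0])))
    print(res.rsquared)                    # the usual R2 quality measure
    print(pred.summary_frame(alpha=0.05))  # adds 95% confidence/prediction intervals
    ```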

  3. Using the Logistic Regression model in supporting decisions of establishing marketing strategies

    Directory of Open Access Journals (Sweden)

    Cristinel CONSTANTIN

    2015-12-01

    Full Text Available This paper presents instrumental research on the use of the Logistic Regression model for data analysis in marketing research. Decision makers inside organisations need relevant information to support their decisions regarding marketing strategies. The data provided by marketing research can be processed in various ways, but multivariate data analysis models can enhance the utility of the information. Among these models is the Logistic Regression model, which is used for dichotomous variables. Our research explains the utility of this model and the interpretation of the resulting information, in order to help practitioners and researchers use it in their future investigations

  4. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    Science.gov (United States)

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  5. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may also depend on some function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), where the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work as the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations, the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
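
    Plain isotonic regression via PAVA (without the paper's empirical-likelihood augmentation for auxiliary information) is available in scikit-learn; a minimal sketch on synthetic monotone data:

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    # noisy monotone relationship; PAVA recovers the nondecreasing fit
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 100))
    y = np.log1p(x) + rng.normal(0, 0.2, 100)

    iso = IsotonicRegression(increasing=True)  # scikit-learn's PAVA implementation
    y_fit = iso.fit_transform(x, y)
    print(bool(np.all(np.diff(y_fit) >= 0)))   # True: monotonicity constraint holds
    ```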

  6. Using Rouse-Fowler model to describe radiation-induced electrical conductivity of nanocomposite materials

    Science.gov (United States)

    Dyuryagina, N. S.; Yalovets, A. P.

    2017-05-01

    Using the Rouse-Fowler (RF) model, this work studies the radiation-induced electrical conductivity of a polymer nanocomposite material with spherical nanoparticles as a function of the intensity and exposure time of gamma rays and of the concentration and size of the nanoparticles. The research has found the energy distribution of localized states induced by nanoparticles. The studies were conducted on polymethylmethacrylate (PMMA) with CdS nanoparticles.

  7. Kinetic model describing the UV/H2O2 photodegradation of phenol from water

    Directory of Open Access Journals (Sweden)

    Rubio-Clemente Ainhoa

    2017-01-01

    Full Text Available A kinetic model for phenol transformation through the UV/H2O2 system was developed and validated. The model includes the pollutant decomposition by direct photolysis and HO•, HO2• and O2•- oxidation. HO• scavenging effects of CO3^2-, HCO3^-, SO4^2- and Cl^- were also considered, as well as the pH changes as the process proceeds. Additionally, the detrimental action of the organic matter and reaction intermediates in shielding UV and quenching HO• was incorporated. It was observed that the model can accurately predict phenol abatement using different H2O2/phenol mass ratios (495, 228 and 125), obtaining an optimal H2O2/phenol ratio of 125, leading to a phenol removal higher than 95% after 40 min of treatment, where the main oxidation species was HO•. The developed model could be relevant for calculating the optimal level of H2O2 for efficiently degrading the pollutant of interest, allowing savings in cost and time.

  8. CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?

    Energy Technology Data Exchange (ETDEWEB)

    Tajfirouze, E.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of)

    2012-01-10

    Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law to lower energies, can give an estimation of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission line radiance detected at 171 Å, observed by the Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager and the Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an Artificial Neural Network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative losses cooling time. The effect of background emission, data cadence, and network sensitivity on the key parameters of the model is studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.

  9. Exponential law as a more compatible model to describe orbits of planetary systems

    Directory of Open Access Journals (Sweden)

    M Saeedi

    2012-12-01

    Full Text Available According to the Titius-Bode law, orbits of planets in the solar system obey a geometric progression. Many investigations have been launched to improve this law. In this paper, we apply square and exponential models to the planets of the solar system, the moons of planets, and some extrasolar systems, and compare them with each other.

  10. Application of the International Water Association activated sludge models to describe aerobic sludge digestion.

    Science.gov (United States)

    Ghorbani, M; Eskicioglu, C

    2011-12-01

    Batch and semi-continuous flow aerobic digesters were used to stabilize thickened waste-activated sludge at different initial conditions and mean solids retention times. Under dynamic conditions, total suspended solids, volatile suspended solids (VSS) and total and particulate chemical oxygen demand (COD and PCOD) were monitored in the batch reactors and effluent from the semi-continuous flow reactors. Activated Sludge Model (ASM) no. 1 and ASM no. 3 were applied to measured data (calibration data set) to evaluate the consistency and performance of the models under different flow regimes for digester COD and VSS modelling. The results indicated that both ASM1 and ASM3 predicted digester COD, VSS and PCOD concentrations well (R2, adjusted R2 ≥ 0.93). Parameter estimation concluded that, compared to ASM1, ASM3 parameters were more consistent across different batch and semi-continuous flow runs with different operating conditions. Model validation on a data set independent of the calibration data successfully predicted digester COD (R2 = 0.88) and VSS (R2 = 0.94) concentrations by ASM3, while ASM1 overestimated both reactor COD (R2 = 0.74) and VSS concentrations (R2 = 0.79) after 15 days of aerobic batch digestion.

  11. Predictive model to describe water migration in cellular solid foods during storage

    NARCIS (Netherlands)

    Voogt, J.A.; Hirte, A.; Meinders, M.B.J.

    2011-01-01

    BACKGROUND: Water migration in cellular solid foods during storage causes loss of crispness. To improve crispness retention, physical understanding of this process is needed. Mathematical models are suitable tools to gain this physical knowledge. RESULTS: Water migration in cellular solid foods

  13. A vapour bubble collapse model to describe the fragmentation of low-melting materials

    International Nuclear Information System (INIS)

    Benz, R.; Schober, P.

    1977-11-01

    By means of a model, the fragmentation of a hot metal melt as a consequence of collapsing vapour bubbles is investigated. In particular, the paper deals with the development of the physical model concepts for calculating the contact temperature that establishes itself between the melt and the coolant, the waiting time until bubble nucleation occurs, and the maximum obtainable vapour-bubble radius as a function of coolant temperature. This is followed by a description of the computer program belonging to this model and of the results of an extensive parameter study. The study examined the influence of the temperatures of melt and coolant, the melted mass, the nucleation-site density, the average maximum bubble radius, the duration of film breakdown and the heat-transfer coefficient. The calculated fragmentation process agrees with expectations, although its duration seems somewhat too long. The dependence of the surface enlargement on the subcooling of the water bath and the initial temperature of the melt is not yet reproduced satisfactorily by the model. The reasons for this are the temperature increase of the water bath as well as the fact that the coupling of heat-flux density and nucleation-site density is not taken into consideration. Further improvement of the model is necessary and may bring the results closer to the experimental observations. (orig.)

  14. Atomic-orbital expansion model for describing ion-atom collisions at intermediate and low energies

    International Nuclear Information System (INIS)

    Lin, C.D.; Fritsch, W.

    1983-01-01

    In the description of inelastic processes in ion-atom collisions at moderate energies, the semiclassical close-coupling method is well established as the standard method. Ever since the pioneering work on H+ + H in the early 60's, the standard procedure has been to expand the electronic wavefunction in terms of molecular orbitals (MO) or atomic orbitals (AO) for describing collisions at low or intermediate velocities, respectively. It has been recognized since the early days that traveling orbitals are needed in the expansions in order to represent the asymptotic states in the collisions correctly. While the adoption of such traveling orbitals presents no conceptual difficulties for expansions using atomic orbitals, the situation for molecular orbitals is less clear. In recent years, various forms of traveling MOs have been proposed, but conflicting results for several well-studied systems have been reported

  15. A physical model describing the interaction of nuclear transport receptors with FG nucleoporin domain assemblies.

    Science.gov (United States)

    Zahn, Raphael; Osmanović, Dino; Ehret, Severin; Araya Callis, Carolina; Frey, Steffen; Stewart, Murray; You, Changjiang; Görlich, Dirk; Hoogenboom, Bart W; Richter, Ralf P

    2016-04-08

    The permeability barrier of nuclear pore complexes (NPCs) controls bulk nucleocytoplasmic exchange. It consists of nucleoporin domains rich in phenylalanine-glycine motifs (FG domains). As a bottom-up nanoscale model for the permeability barrier, we have used planar films produced with three different end-grafted FG domains, and quantitatively analyzed the binding of two different nuclear transport receptors (NTRs), NTF2 and Importin β, together with the concomitant film thickness changes. NTR binding caused only moderate changes in film thickness; the binding isotherms showed negative cooperativity and could all be mapped onto a single master curve. This universal NTR binding behavior - a key element for the transport selectivity of the NPC - was quantitatively reproduced by a physical model that treats FG domains as regular, flexible polymers, and NTRs as spherical colloids with a homogeneous surface, ignoring the detailed arrangement of interaction sites along FG domains and on the NTR surface.

  17. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.

  18. Formulation and integration of constitutive models describing large deformations in thermoplasticity and thermoviscoplasticity

    International Nuclear Information System (INIS)

    Jansohn, W.

    1997-10-01

    This report deals with the formulation and numerical integration of constitutive models in the framework of finite deformation thermomechanics. Based on the concept of dual variables, plasticity and viscoplasticity models exhibiting nonlinear kinematic hardening as well as nonlinear isotropic hardening rules are presented. Care is taken that the evolution equations governing the hardening response fulfill the intrinsic dissipation inequality in every admissible process. In view of the development of an efficient numerical integration procedure, simplified versions of these constitutive models are proposed. In these versions, the thermoelastic strains are assumed to be small and a simplified kinematic hardening rule is considered. Additionally, in view of an implementation into the ABAQUS finite element code, the elasticity law is approximated by a hypoelasticity law. For the simplified constitutive models, an implicit time-integration algorithm is developed. First, in order to obtain a numerically objective integration scheme, use is made of the HUGHES-WINGET algorithm. In the resulting system of ordinary differential equations, a distinction can be made between three differential operators representing different physical effects. The structure of this system of differential equations allows an operator split scheme to be applied, which leads to an efficient integration scheme for the constitutive equations. By linearizing the integration algorithm, the consistent tangent modulus is derived. In this way, the quadratic convergence of Newton's method used to solve the basic finite element equations (i.e. the finite element discretization of the governing thermomechanical field equations) is preserved. The resulting integration scheme is implemented as a user subroutine UMAT in ABAQUS. The properties of the applied algorithm are first examined by test calculations on a single element under tension-compression loading. For demonstrating the capabilities of the constitutive theory

  19. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter

    2012-01-01

    A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.

  20. Regression models for linking patterns of growth to a later outcome: infant growth and childhood overweight

    Directory of Open Access Journals (Sweden)

    Andrew K. Wills

    2016-04-01

    Full Text Available Abstract Background Regression models are widely used to link serial measures of anthropometric size or changes in size to a later outcome. Different parameterisations of these models enable one to target different questions about the effect of growth, however, their interpretation can be challenging. Our objective was to formulate and classify several sets of parameterisations by their underlying growth pattern contrast, and to discuss their utility using an expository example. Methods We describe and classify five sets of model parameterisations in accordance with their underlying growth pattern contrast (conditional growth; being bigger v being smaller; becoming bigger and staying bigger; growing faster v being bigger; becoming and staying bigger versus being bigger). The contrasts are estimated by including different sets of repeated measures of size and changes in size in a regression model. We illustrate these models in the setting of linking infant growth (measured on 6 occasions: birth, 6 weeks, 3, 6, 12 and 24 months) in weight-for-height-for-age z-scores to later childhood overweight at 8y using complete cases from the Norwegian Childhood Growth study (n = 900). Results In our expository example, conditional growth during all periods, becoming bigger in any interval and staying bigger through infancy, and being bigger from birth were all associated with higher odds of later overweight. The highest odds of later overweight occurred for individuals who experienced high conditional growth or became bigger in the 3 to 6 month period and stayed bigger, and those who were bigger from birth to 24 months. Comparisons between periods and between growth patterns require large sample sizes and need to consider how to scale associations to make comparisons fair; with respect to the latter, we show one approach. Conclusion Studies interested in detrimental growth patterns may gain extra insight from reporting several sets of growth pattern

  1. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination R_c^2 ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
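
    The general approach, a low-order polynomial response surface for efficiency fitted by regression, can be sketched with scikit-learn; the map variables and data below are hypothetical, and the paper's model form, derived from Taylor expansion of the polytropic relations, differs in detail.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # hypothetical turbine map data: efficiency vs pressure ratio and corrected speed
    rng = np.random.default_rng(0)
    pr = rng.uniform(1.5, 4.0, 200)   # expansion pressure ratio
    ns = rng.uniform(0.4, 1.2, 200)   # normalized corrected speed
    eta = 0.85 - 0.1 * (ns - 0.8) ** 2 - 0.05 * (pr - 2.5) ** 2 + rng.normal(0, 0.005, 200)

    X = np.column_stack([pr, ns])
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, eta)
    print("R^2 =", model.score(X, eta))   # compare with the paper's R_c^2 >= 0.96
    ```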

  2. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

    Science.gov (United States)

    Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

    2013-01-01

    Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

  3. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that the Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system

  4. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.
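
    A rough stand-in for the comparison (a flexible learner vs simple linear regression on [K+] and [Hx]), assuming scikit-learn and synthetic data; the paper itself used GAM and SVM implementations with R code available from the authors.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # synthetic stand-ins for vitreous humour [K+] and [Hx] vs known PMI (hours)
    rng = np.random.default_rng(0)
    pmi = rng.uniform(2, 80, 200)
    K = 5.0 + 0.17 * pmi + rng.normal(0, 1.0, 200)
    Hx = 20 + 4.0 * np.sqrt(pmi) + rng.normal(0, 8.0, 200)  # mildly nonlinear in PMI

    X = np.column_stack([K, Hx])
    for name, est in [("LR", LinearRegression()),
                      ("SVR", make_pipeline(StandardScaler(), SVR(C=50.0)))]:
        rmse = -cross_val_score(est, X, pmi, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        print(name, f"CV RMSE = {rmse:.1f} h")  # the flexible model tends to win
    ```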

  5. Phenomenological Model Describing the Formation of Peeling Defects on Hot-Rolled Duplex Stainless Steel 2205

    Science.gov (United States)

    Yong-jun, Zhang; Hui, Zhang; Jing-tao, Han

    2017-05-01

    The chemical composition, morphology, and microstructure of peeling defects formed on the surface of sheets from steel 2205 under hot rolling are studied. The microstructure of the surface is analyzed using scanning electron and light microscopy. The affected zones are shown to contain nonmetallic inclusions of types Al2O3 and CaO - SiO2 - Al2O3 - MgO in the form of streak precipitates and to have an unfavorable content of austenite, which causes a decrease in the ductility of the area. The results obtained are used to derive a five-stage phenomenological model of the formation of such defects.

  6. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

    International Nuclear Information System (INIS)

    Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

    2010-01-01

    In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. When the sensors' operating status is checked only at each fuel outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the calibrated sensors are actually found to be out of calibration. For the safe operation of NPPs and the reduction of unnecessary calibrations, on-line instrument calibration monitoring is needed. In this study, principal component based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for the sensor signal validation of NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with the auto-associative support vector regression (AASVR) and the auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved by around six times by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and this method is much more time efficient compared to the conventional GA method
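
    The auto-associative idea, compressing the correlated sensor channels and reconstructing them so that a drifting sensor shows up as a growing residual, can be sketched as a PCA-plus-SVR pipeline; everything below, including the synthetic "sensor" signals, is illustrative rather than the paper's PCSVR/RSM formulation.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline

    # synthetic correlated "sensor" signals standing in for plant measurements
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 2))
    X = latent @ rng.normal(size=(2, 8)) + rng.normal(0, 0.05, (500, 8))

    # auto-associative model: compress with PCA, then regress all channels back
    model = make_pipeline(
        PCA(n_components=2),
        MultiOutputRegressor(SVR(C=10.0)),
    ).fit(X, X)

    residual = X - model.predict(X)   # a drifting sensor shows a growing residual
    print(np.abs(residual).mean(axis=0))
    ```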

  7. Application of a non-equilibrium reaction model for describing horizontal well performance in foamy oil

    Energy Technology Data Exchange (ETDEWEB)

    Luigi, A.; Saputelli, B.; Carlas, M.; Canache, P.; Lopez, E. [DPVS Exploracion y Produccion (Venezuela)

    1998-12-31

    This study was designed to determine the activation energy ranges and frequency factor ranges in chemical reactions in heavy oils of the Orinoco Belt in Venezuela, in order to account for the kinetics of physical changes that occur in the morphology of gas-oil dispersion. A non-equilibrium reaction model was used to model foamy oil behaviour observed at the SDZ-182 horizontal well in the Zuata field. Results showed that the activation energy for the first reaction ranged from 0 to 0.01 BTU/lb-mol and the frequency factor from 0.001 to 1000 l/day. For the second reaction the activation energy was 50x10^3 BTU/lb-mol and the frequency factor 2.75x10^12 l/day. The second reaction was highly sensitive to modifications in activation energy and frequency factor, whereas the results for the first reaction were insensitive to variations in both parameters. The high sensitivity to activation energy reflects the impact that temperature has on the representation of foamy oil behaviour. 8 refs., 2 tabs., 6 figs.

  8. SPATIAL MODELLING FOR DESCRIBING SPATIAL VARIABILITY OF SOIL PHYSICAL PROPERTIES IN EASTERN CROATIA

    Directory of Open Access Journals (Sweden)

    Igor Bogunović

    2016-06-01

    Full Text Available The objectives of this study were to characterize the field-scale spatial variability and to test several interpolation methods to identify the best spatial predictor of penetration resistance (PR), bulk density (BD) and gravimetric water content (GWC) in a silty loam soil in Eastern Croatia. The measurements were made on a 25 x 25-m grid which created 40 individual grid cells. Soil properties were measured at the center of each grid cell at depths of 0-10 cm and 10-20 cm. Results demonstrated that PR, BD and GWC displayed strong spatial dependence at 0-10 cm, while there was moderate to weak spatial dependence of PR, BD and GWC at a depth of 10-20 cm. Semi-variogram analysis suggests that future sampling intervals for the investigated parameters can be increased to 35 m in order to reduce research costs. Additionally, the interpolation models recorded similar root mean square error values with high predictive accuracy. The results suggest that no single interpolation method was best for all investigated properties, implying the need for spatial modelling in the evaluation of these soil properties in Eastern Croatia.
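
    For readers who want to reproduce the interpolation step, ordinary kriging of a grid-sampled soil property might look as follows (the paper does not name its software; PyKrige, the variogram model and all coordinates here are assumptions for illustration):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(2)
x = rng.uniform(0, 150, 40)                    # 40 grid-cell centres, metres
y = rng.uniform(0, 150, 40)
pr = 1.5 + 0.01 * x + rng.normal(0, 0.2, 40)   # penetration resistance, MPa

ok = OrdinaryKriging(x, y, pr, variogram_model="spherical")
gridx = np.arange(0.0, 150.0, 25.0)
gridy = np.arange(0.0, 150.0, 25.0)
z_pred, z_var = ok.execute("grid", gridx, gridy)   # predictions and variances
print(z_pred.shape, float(z_var.mean()))
```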

  9. Structured Additive Regression Models: An R Interface to BayesX

    Directory of Open Access Journals (Sweden)

    Nikolaus Umlauf

    2015-02-01

    Full Text Available Structured additive regression (STAR) models provide a flexible framework for modeling possible nonlinear effects of covariates: They contain the well established frameworks of generalized linear models and generalized additive models as special cases but also allow a wider class of effects, e.g., for geographical or spatio-temporal data, allowing for specification of complex and realistic models. BayesX is a standalone software package for fitting a general class of STAR models. Based on a comprehensive open-source regression toolbox written in C++, BayesX uses Bayesian inference for estimating STAR models based on Markov chain Monte Carlo simulation techniques, a mixed model representation of STAR models, or stepwise regression techniques combining penalized least squares estimation with model selection. BayesX not only covers models for responses from univariate exponential families, but also models from less-standard regression situations such as models for multi-categorical responses with either ordered or unordered categories, continuous time survival data, or continuous time multi-state models. This paper presents a new fully interactive R interface to BayesX: the R package R2BayesX. With the new package, STAR models can be conveniently specified using R's formula language (with some extended terms), fitted using the BayesX binary, represented in R with objects of suitable classes, and finally printed/summarized/plotted. This makes BayesX much more accessible to users familiar with R and adds extensive graphics capabilities for visualizing fitted STAR models. Furthermore, R2BayesX complements the already impressive capabilities for semiparametric regression in R by a comprehensive toolbox comprising in particular more complex response types and alternative inferential procedures such as simulation-based Bayesian inference.

  10. Describing model of empowering managers by applying structural equation modeling: A case study of universities in Ardabil

    Directory of Open Access Journals (Sweden)

    Maryam Ghahremani Germi

    2015-06-01

    Full Text Available Empowerment remains on the agenda as a management concept and has become a widely used management term over the last decade or so. The purpose of this research was to describe a model of empowering managers by applying structural equation modeling (SEM) at Ardabil universities. Two hundred and twenty managers of Ardabil universities, including chancellors, managers, and vice presidents of education, research, and studies, participated in this study. Clear and challenging goals, evaluation of function, access to resources, and rewarding were investigated. The results indicated that the SEM designed for empowering managers at university shows a good level of fit, so the conceptual model was appropriate for the population under investigation. Among the variables, access to resources, with a factor loading of 88 per cent, was the most influential, while evaluation of function, with a factor loading of 51 per cent, had the least effect. Average-rating results show that evaluation of function and access to resources, with coefficients of 2.62, ranked first and thus had a great impact on managers' empowerment. The analysis provided compelling evidence that the model of empowering managers was suitable for Ardabil universities.

  11. The Norwegian Healthier Goats program--modeling lactation curves using a multilevel cubic spline regression model.

    Science.gov (United States)

    Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S

    2014-07-01

    In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield. Copyright © 2014
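
    A minimal Python analogue of such a model (not the authors' exact specification, which also nests lactation and herd levels) is a cubic spline of days in milk with a random intercept per goat, via statsmodels; all column names and the simulated curve are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
goat = np.repeat(np.arange(50), 40)
dim = np.tile(np.linspace(5, 280, 40), 50)                # days in milk
goat_effect = rng.normal(0, 0.3, 50)[goat]
milk = (2.5 + 0.011 * dim - 0.00003 * dim**2
        + goat_effect + rng.normal(0, 0.2, goat.size))
df = pd.DataFrame({"milk": milk, "dim": dim, "goat": goat})

# bs() supplies the cubic B-spline basis; groups= adds the random intercept.
fit = smf.mixedlm("milk ~ bs(dim, df=5)", df, groups=df["goat"]).fit()
print(fit.summary())
```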

  12. Profile-driven regression for modeling and runtime optimization of mobile networks

    DEFF Research Database (Denmark)

    McClary, Dan; Syrotiuk, Violet; Kulahci, Murat

    2010-01-01

    Computer networks often display nonlinear behavior when examined over a wide range of operating conditions. There are few strategies available for modeling such behavior and optimizing such systems as they run. Profile-driven regression is developed and applied to modeling and runtime optimization...... of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike...

  13. Regression models for interval censored survival data: Application to HIV infection in Danish homosexual men

    DEFF Research Database (Denmark)

    Carstensen, Bendix

    1996-01-01

    This paper shows how to fit excess and relative risk regression models to interval censored survival data, and how to implement the models in standard statistical software. The methods developed are used for the analysis of HIV infection rates in a cohort of Danish homosexual men.
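
    The excess and relative risk models of the paper are not available off the shelf in Python; as a generic stand-in, a parametric regression on interval-censored infection times (each infection known only to lie between the last negative and first positive test) can be fitted with lifelines. All data and names below are simulated:

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(4)
n = 300
age = rng.uniform(20, 50, n)
true_t = rng.weibull(1.5, n) * 60 * np.exp(-0.01 * (age - 35))

last_neg = np.floor(true_t / 12) * 12      # last negative test (months)
first_pos = last_neg + 12                  # first positive test
df = pd.DataFrame({"left": last_neg, "right": first_pos, "age": age})

aft = WeibullAFTFitter()
aft.fit_interval_censoring(df, lower_bound_col="left", upper_bound_col="right")
aft.print_summary()
```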

  14. The Relationship between Economic Growth and Money Laundering – a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Daniel Rece

    2009-09-01

    Full Text Available This study provides an overview of the relationship between economic growth and money laundering, modeled by a least squares function. The report statistically analyzes data collected from the USA, Russia, Romania and eleven other European countries, rendering a linear regression model. The study illustrates that 23.7% of the total variance in the regressand (level of money laundering) is “explained” by the linear regression model. In our opinion, this model will provide critical auxiliary judgment and decision support for anti-money laundering service systems.

  15. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
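
    The linear-versus-nonlinear contrast can be sketched with scikit-learn, using Bayesian ridge and LASSO as the linear-in-markers models and an RBF kernel ridge regression as a rough stand-in for RKHS regression; the marker matrix below is simulated, not the CIMMYT wheat data:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import BayesianRidge, Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_lines, n_markers = 300, 1000
X = rng.integers(0, 2, (n_lines, n_markers)).astype(float)   # 0/1 markers
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, 40, replace=False)] = rng.normal(0, 0.5, 40)
y = X @ beta + 0.5 * np.sin(X[:, :5].sum(axis=1)) + rng.normal(0, 1, n_lines)

models = {
    "Bayesian ridge": BayesianRidge(),
    "LASSO": Lasso(alpha=0.05),
    "RKHS-like kernel ridge": KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```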

  16. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
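
    Since ordinary least products regression is equivalent to geometric mean (reduced major axis) regression, a spreadsheet-free sketch takes only a few lines of NumPy; the percentile bootstrap for the slope CI below is an illustrative choice, not necessarily the exact procedure smatr uses:

```python
import numpy as np

def olp_fit(x, y):
    """Ordinary least products slope and intercept: b = sign(r) * s_y / s_x."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

def olp_slope_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        slopes[b], _ = olp_fit(x[idx], y[idx])
    return np.quantile(slopes, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(6)
x = rng.normal(50, 10, 80) + rng.normal(0, 3, 80)   # x measured with error
y = 1.2 * x + 5 + rng.normal(0, 3, 80)
print(olp_fit(x, y), olp_slope_ci(x, y))
```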

  17. Logistic regression models for polymorphic and antagonistic pleiotropic gene action on human aging and longevity

    DEFF Research Database (Denmark)

    Tan, Qihua; Bathum, L; Christiansen, L

    2003-01-01

    In this paper, we apply logistic regression models to measure genetic association with human survival for highly polymorphic and pleiotropic genes. By modelling genotype frequency as a function of age, we introduce a logistic regression model with polytomous responses to handle the polymorphic...... situation. Genotype and allele-based parameterization can be used to investigate the modes of gene action and to reduce the number of parameters, so that power is increased while the amount of multiple testing is minimized. A binomial logistic regression model with fractional polynomials is used to capture...... the age-dependent or antagonistic pleiotropic effects. The models are applied to HFE genotype data to assess the effects of different alleles on human longevity and to detect whether an age-dependent effect exists. Application has shown that these methods can serve as useful tools in searching for important...

  18. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    Science.gov (United States)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and can also produce classification errors. Stepwise regression is commonly used to overcome multicollinearity in regression. Another method, which retains all variables for prediction, is principal component analysis (PCA). However, classical PCA applies only to numeric data; when the data are categorical, one method to solve the problem is categorical principal component analysis (CATPCA). The data used in this research were part of the Indonesia Demographic and Health Survey (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the results using sensitivity shows the opposite: the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Because this study focuses on classifying the major class (women using a contraceptive method), CATPCA was selected, since it raises the accuracy for the major class.
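
    CATPCA itself is not available in scikit-learn; a classical-PCA stand-in (an acknowledged simplification, with made-up survey variables) is to dummy-code the categorical predictors, extract components, and fit the logistic model on the component scores:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "education": rng.choice(["primary", "secondary", "higher"], n),
    "region": rng.choice(["urban", "rural"], n),
    "wealth": rng.choice(["low", "mid", "high"], n),
})
p = 1 / (1 + np.exp(-(-0.3 + 0.8 * (df.wealth == "high"))))
y = (rng.random(n) < p).astype(int)      # 1 = uses a contraceptive method

X = pd.get_dummies(df).astype(float)     # dummy-coded categorical predictors
pipe = make_pipeline(PCA(n_components=4), LogisticRegression(max_iter=1000))
print("CV AUC:", cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```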

  19. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P<.001) ... digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive

  20. A nonlinear beam model to describe the postbuckling of wide neo-Hookean beams

    Science.gov (United States)

    Lubbers, Luuk A.; van Hecke, Martin; Coulais, Corentin

    2017-09-01

    Wide beams can exhibit subcritical buckling, i.e. the slope of the force-displacement curve can become negative in the postbuckling regime. In this paper, we capture this intriguing behaviour by constructing a 1D nonlinear beam model, where the central ingredient is the nonlinearity in the stress-strain relation of the beam's constitutive material. First, we present experimental and numerical evidence of a transition to subcritical buckling for wide neo-Hookean hyperelastic beams, when their width-to-length ratio exceeds a critical value of 12%. Second, we construct an effective 1D energy density by combining the Mindlin-Reissner kinematics with a nonlinearity in the stress-strain relation. Finally, we establish and solve the governing beam equations to analytically determine the slope of the force-displacement curve in the postbuckling regime. We find, without any adjustable parameters, excellent agreement between the 1D theory, experiments and simulations. Our work extends the understanding of the postbuckling of structures made of wide elastic beams and opens up avenues for the reverse-engineering of instabilities in soft materials and metamaterials.

  1. The modulation of galactic cosmic rays as described by a three-dimensional drift model

    International Nuclear Information System (INIS)

    Potgieter, M.S.

    1984-01-01

    An outline of the present state of knowledge about the effect of drift on the modulation of galactic cosmic rays is given. Various observations related to the reversal of the solar magnetic field polarity are discussed. Comprehensive numerical solutions of the steady-state cosmic-ray transport equation in an axially symmetric three-dimensional heliosphere, including drift, are presented. This is an extension of the continuing effort of the past six years to understand the effect and importance of drift on the transport of galactic cosmic rays in the heliosphere. A flat neutral sheet which coincides with the equatorial plane is assumed. A general method of calculating the drift velocity in the neutral sheet, including that used previously by other authors, is presented. The effect of changing various modulation parameters on the drift solutions is illustrated in detail. The real significance of drift is illustrated by using Gaussian input spectra on the modulation boundary. A carefully selected set of modulation parameters is used to illustrate to what extent a drift model can explain prominent observational features. It is concluded that drift is important in the process of cosmic-ray transport and must as such be considered in all modulation studies, but that it is not as overwhelmingly dominant as previously anticipated.

  2. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
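
    In standard (assumed) notation, the general errors-in-variables setup the abstract refers to relates noisy observations of both variables through a linear law between their latent values:

```latex
\begin{align}
  y_i &= \alpha + \beta x_i, \\
  X_i &= x_i + \delta_i, & \delta_i &\sim \mathcal{N}(0, \sigma_\delta^2), \\
  Y_i &= y_i + \varepsilon_i, & \varepsilon_i &\sim \mathcal{N}(0, \sigma_\varepsilon^2).
\end{align}
```

    Ordinary regression of Y on X is recovered in the limit where the x-noise vanishes, and regression of X on Y in the limit where the y-noise vanishes, which is the sense in which other versions of linear regression arise as limits of this model.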

  3. Estimasi Model Seemingly Unrelated Regression (SUR dengan Metode Generalized Least Square (GLS

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. OLS estimates the parameters of one or more regression equations, but it does not allow for correlation among the errors of the different response equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated by Generalized Least Squares (GLS). In this study, the authors apply the SUR model with GLS estimation to world gasoline demand data and find that SUR using GLS is better than OLS because it produces smaller errors.
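
    A compact NumPy sketch of two-equation SUR by feasible GLS (simulated data, not the gasoline demand set): OLS residuals estimate the cross-equation error covariance, which the GLS step then exploits.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x1 = np.column_stack([np.ones(n), rng.normal(size=n)])
x2 = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)  # correlated errors
y1 = x1 @ [1.0, 2.0] + e[:, 0]
y2 = x2 @ [0.5, -1.0] + e[:, 1]

# Stack the two equations into one block-diagonal system.
X = np.block([[x1, np.zeros_like(x2)], [np.zeros_like(x1), x2]])
y = np.concatenate([y1, y2])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = (y - X @ beta_ols).reshape(2, n)
sigma = resid @ resid.T / n                    # 2x2 cross-equation covariance
omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
beta_gls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print("OLS:", beta_ols.round(2), "FGLS/SUR:", beta_gls.round(2))
```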

  5. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    Science.gov (United States)

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) may have adverse health effects. We developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development: standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R(2)=0.58 vs. 0.55) or a cross-validation procedure (R(2)=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  6. A holistic conceptual framework model to describe medication adherence in and guide interventions in diabetes mellitus.

    Science.gov (United States)

    Jaam, Myriam; Awaisu, Ahmed; Mohamed Ibrahim, Mohamed Izham; Kheir, Nadir

    2018-04-01

    Nonadherence to medications in patients with diabetes, which results in poor treatment outcomes and increased healthcare costs, is commonly reported globally. Factors associated with medication adherence have also been widely studied. However, a clear and comprehensive, disease-specific conceptual framework model that captures all possible factors has not been established. This study aimed to develop a conceptual framework that addresses the complex network of barriers to medication adherence in patients with diabetes. Fourteen databases and grey literature sources were systematically searched for systematic reviews reporting barriers to medication adherence in patients with diabetes. A thematic approach was used to categorize all identified barriers from the reviews and to create a matrix representing the complex network and relations of the different barriers. Eighteen systematic reviews were identified and used for the development of the conceptual framework. Overall, six major themes emerged: patient-, medication-, disease-, provider-, system-, and societal-related factors. Each of these themes was further classified into different sub-categories. It was noted that most interactions were identified to be within the patient-related factors, which not only interact with other themes but also within the same theme. Patients' demographics as well as cultural beliefs were the most notable factors in terms of interactions with other categories and themes. The intricate network and interaction of factors identified between different themes and within individual themes indicate the complexity of the problem of adherence. This framework will potentially enhance the understanding of the complex relation between different barriers for medication adherence in diabetes and will facilitate design of more effective interventions. Future interventions for enhancing medication adherence should look at the overall factors and target multiple themes of barriers to improve patient

  7. Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah

    Science.gov (United States)

    Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables, diagnostics are used to check that assumptions are valid (independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers), and a test statistic is calculated to determine the aptness of the model. This study used a binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by a family history of obesity and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, compared to a Malay student.
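
    The kind of model described, a binary logit with a family-history term and an ethnicity-by-meal-habit interaction, can be sketched with statsmodels; all variable names and effect sizes are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 400
df = pd.DataFrame({
    "family_obesity": rng.integers(0, 2, n),
    "malay": rng.integers(0, 2, n),
    "routine_meals": rng.integers(0, 2, n),
})
logit_p = (-1.5 + 1.0 * df.family_obesity
           + 0.8 * (1 - df.malay) * df.routine_meals)
df["obese"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("obese ~ family_obesity + malay * routine_meals", df).fit()
print(np.exp(fit.params))   # odds ratios, including the interaction term
```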

  8. Validation of regression models for nitrate concentrations in the upper groundwater in sandy soils

    International Nuclear Information System (INIS)

    Sonneveld, M.P.W.; Brus, D.J.; Roelsma, J.

    2010-01-01

    For Dutch sandy regions, linear regression models have been developed that predict nitrate concentrations in the upper groundwater on the basis of residual nitrate contents in the soil in autumn. The objective of our study was to validate these regression models for one particular sandy region dominated by dairy farming. No data from this area were used for calibrating the regression models. The models were validated by additional probability sampling. This sample was used to estimate errors in 1) the predicted areal fractions where the EU standard of 50 mg l-1 is exceeded for farms with low N surpluses (ALT) and farms with higher N surpluses (REF); and 2) the predicted cumulative frequency distributions of nitrate concentration for both groups of farms. Both the errors in the predicted areal fractions and the errors in the predicted cumulative frequency distributions indicate that the regression models are invalid for the sandy soils of this study area. This study indicates that linear regression models that predict nitrate concentrations in the upper groundwater from residual soil N contents should be applied with care.

  9. A brief introduction to regression designs and mixed-effects modelling by a recent convert

    OpenAIRE

    Balling, Laura Winther

    2008-01-01

    This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable sele...

  10. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different modelling strategies, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting: optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
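
    A toy version of such an a priori comparison (not the authors' wrapper framework) ranks candidate logistic strategies by cross-validated log-likelihood before committing to one:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
strategies = {
    "weak shrinkage (C=100)": LogisticRegression(C=100.0, max_iter=5000),
    "ridge (C=1)": LogisticRegression(C=1.0, max_iter=5000),
    "lasso (C=1)": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
}
for name, model in strategies.items():
    ll = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
    print(f"{name}: mean CV log-loss = {-ll:.3f}")
```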

  11. Modeling of Soil Aggregate Stability using Support Vector Machines and Multiple Linear Regression

    Directory of Open Access Journals (Sweden)

    Ali Asghar Besalatpour

    2016-02-01

    Full Text Available Introduction: Soil aggregate stability is a key factor in soil resistance to mechanical stresses, including the impacts of rainfall and surface runoff, and thus to water erosion (Canasveras et al., 2010). Various indicators have been proposed to characterize and quantify soil aggregate stability, for example the percentage of water-stable aggregates (WSA), mean weight diameter (MWD), geometric mean diameter (GMD) of aggregates, and water-dispersible clay (WDC) content (Calero et al., 2008). Unfortunately, the experimental methods available to determine these indicators are laborious, time-consuming and difficult to standardize (Canasveras et al., 2010). Therefore, it would be advantageous if aggregate stability could be predicted indirectly from more easily available data (Besalatpour et al., 2014). The main objective of this study is to investigate the potential use of the support vector machines (SVM) method for estimating soil aggregate stability (as quantified by GMD), as compared to a multiple linear regression approach. Materials and Methods: The study area was part of the Bazoft watershed (31° 37′ to 32° 39′ N and 49° 34′ to 50° 32′ E), which is located in the northern part of the Karun river basin in central Iran. A total of 160 soil samples were collected from the top 5 cm of the soil surface. Some easily available characteristics, including topographic, vegetation, and soil properties, were used as inputs. Soil organic matter (SOM) content was determined by the Walkley-Black method (Nelson & Sommers, 1986). Particle size distribution in the soil samples (clay, silt, sand, fine sand, and very fine sand) was measured using the procedure described by Gee & Bauder (1986), and calcium carbonate equivalent (CCE) content was determined by the back-titration method (Nelson, 1982). The modified Kemper & Rosenau (1986) method was used to determine wet-aggregate stability (GMD). The topographic attributes of elevation, slope, and aspect were characterized using a 20-m

  12. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    Science.gov (United States)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression imposes strict model assumptions, whereas nonparametric regression models require no assumption about the functional form of the model. Time series data are observations of a variable recorded over time, so to model a time series with regression, the response and predictor variables must first be determined: the response variable is the value at time t (yt), while the predictors are its significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One advantage of nonparametric regression using Fourier series is its ability to handle data with a trigonometric (periodic) pattern. Fourier series modeling requires choosing the number of basis parameters K, which can be determined by the Generalized Cross Validation (GCV) method. In modeling inflation for the transportation, communication and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-square of 99%, whereas multiple linear regression yields an R-square of 90%.
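
    The mechanics are easy to reproduce: build a design matrix of sine/cosine terms of the lagged series and pick K by GCV. The sketch below uses simulated data, not the Indonesian inflation series:

```python
import numpy as np

def fourier_design(x, K):
    cols = [np.ones_like(x), x]
    for k in range(1, K + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.column_stack(cols)

def gcv(y, X):
    """Generalized cross-validation score for an OLS fit on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    rss = np.sum((y - X @ beta) ** 2)
    return (rss / n) / (1 - p / n) ** 2

rng = np.random.default_rng(10)
series = np.sin(np.linspace(0, 30, 300)) + rng.normal(0, 0.3, 300)
x_lag, y_t = series[:-1], series[1:]        # response y_t, predictor: lag 1

best_K = min(range(1, 15), key=lambda K: gcv(y_t, fourier_design(x_lag, K)))
print("optimal K by GCV:", best_K)
```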

  13. truncSP: An R Package for Estimation of Semi-Parametric Truncated Linear Regression Models

    Directory of Open Access Journals (Sweden)

    Maria Karlsson

    2014-05-01

    Full Text Available Problems with truncated data occur in many areas, complicating estimation and inference. Regarding linear regression models, the ordinary least squares estimator is inconsistent and biased for these types of data and is therefore unsuitable for use. Alternative estimators, designed for the estimation of truncated regression models, have been developed. This paper presents the R package truncSP. The package contains functions for the estimation of semi-parametric truncated linear regression models using three different estimators: the symmetrically trimmed least squares, quadratic mode, and left truncated estimators, all of which have been shown to have good asymptotic and finite sample properties. The package also provides functions for the analysis of the estimated models. Data from the environmental sciences are used to illustrate the functions in the package.

  14. Modeling and prediction of Turkey's electricity consumption using Support Vector Regression

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir

    2011-01-01

    Support Vector Regression (SVR) methodology is used to model and predict Turkey's electricity consumption. Among various SVR formalisms, ε-SVR method was used since the training pattern set was relatively small. Electricity consumption is modeled as a function of socio-economic indicators such as population, Gross National Product, imports and exports. In order to facilitate future predictions of electricity consumption, a separate SVR model was created for each of the input variables using their current and past values; and these models were combined to yield consumption prediction values. A grid search for the model parameters was performed to find the best ε-SVR model for each variable based on Root Mean Square Error. Electricity consumption of Turkey is predicted until 2026 using data from 1975 to 2006. The results show that electricity consumption can be modeled using Support Vector Regression and the models can be used to predict future electricity consumption. (author)
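
    The grid search described maps directly onto scikit-learn's GridSearchCV; the socio-economic series below are simulated placeholders, not the Turkish data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(11)
years = np.arange(1975, 2007)
population = 40 + 0.8 * (years - 1975) + rng.normal(0, 0.5, years.size)
gnp = 100 * np.exp(0.04 * (years - 1975)) * rng.lognormal(0, 0.05, years.size)
X = np.column_stack([population, gnp])
consumption = 20 + 0.9 * population + 0.05 * gnp + rng.normal(0, 3, years.size)

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {"svr__C": [1, 10, 100],
        "svr__epsilon": [0.01, 0.1, 1.0],
        "svr__gamma": ["scale", 0.1, 1.0]}
search = GridSearchCV(pipe, grid, cv=5, scoring="neg_root_mean_squared_error")
search.fit(X, consumption)
print(search.best_params_, -search.best_score_)
```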

  15. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

    Science.gov (United States)

    Lin, J. F.; Qiu, X. R.

    2017-06-01

    Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of the retardance in FFs had previously been carried out with the Taguchi method and Microsoft Excel, and the F value of that regression model was large enough; however, the Excel-based modeling was not systematic. Instead, we adopted stepwise regression to model the retardance of CA coated FFs. From the results of stepwise regression in MATLAB, the developed model had high predictive ability, owing to an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error of predicted retardances relative to measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parameter combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. Above all, the stepwise regression method was successfully used to model the retardance of CA coated FFs, and the maximum global retardance was determined by use of the GA.
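
    As a language-neutral analogue of MATLAB's stepwise routine, forward selection by AIC can be written in a dozen lines with statsmodels; the predictor names and the simulated response are placeholders, not the ferrofluid measurements:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, response, candidates):
    """Repeatedly add the predictor that most lowers the AIC."""
    selected = []
    current_aic = sm.OLS(df[response], np.ones(len(df))).fit().aic
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        scores = [(sm.OLS(df[response],
                          sm.add_constant(df[selected + [c]])).fit().aic, c)
                  for c in remaining]
        best_aic, best_var = min(scores)
        if best_aic >= current_aic:
            break
        selected.append(best_var)
        current_aic = best_aic
    return selected

rng = np.random.default_rng(12)
n = 60
df = pd.DataFrame({
    "ph": rng.uniform(3, 7, n),
    "molar_ratio": rng.uniform(0.05, 0.2, n),
    "ca_volume": rng.uniform(20, 60, n),
    "coating_temp": rng.uniform(40, 90, n),
})
df["retardance"] = 20 + 2.0 * df.ph + 0.1 * df.coating_temp + rng.normal(0, 0.5, n)
print(forward_stepwise(df, "retardance", list(df.columns[:-1])))
```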

  16. On pseudo-values for regression analysis in competing risks models

    DEFF Research Database (Denmark)

    Graw, F; Gerds, Thomas Alexander; Schumacher, M

    2009-01-01

    For regression on state and transition probabilities in multi-state models Andersen et al. (Biometrika 90:15-27, 2003) propose a technique based on jackknife pseudo-values. In this article we analyze the pseudo-values suggested for competing risks models and prove some conjectures regarding their...
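
    The jackknife pseudo-value construction itself is short; the sketch below computes pseudo-observations for the survival probability S(t) with a Kaplan-Meier estimator (the paper's competing-risks case replaces S(t) with a cumulative incidence function):

```python
import numpy as np
from lifelines import KaplanMeierFitter

def km_pseudo_values(durations, events, t):
    """One pseudo-observation per subject: n*S(t) - (n-1)*S_(-i)(t)."""
    n = len(durations)
    s_full = KaplanMeierFitter().fit(durations, events).predict(t)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        s_i = KaplanMeierFitter().fit(durations[mask], events[mask]).predict(t)
        pseudo[i] = n * s_full - (n - 1) * s_i
    return pseudo

rng = np.random.default_rng(13)
durations = rng.exponential(10, 100)
events = (rng.random(100) < 0.8).astype(int)    # ~20% right-censored
print(km_pseudo_values(durations, events, t=5.0).mean())
```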

  17. A Predictive Logistic Regression Model of World Conflict Using Open Source Data

    Science.gov (United States)

    2015-03-26

    No correlation between the error terms and the independent variables. 9. Absence of perfect multicollinearity (Menard, 2001). When assumptions are...some of the variables before initial model building. Multicollinearity, or near-linear dependence among the variables, will cause problems in the...model. High multicollinearity tends to produce unreasonably high logistic regression coefficients and can result in coefficients that are not

  18. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
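
    The AUC part of the bootstrap machinery is easy to emulate (the estimated calibration index is omitted here, and all numbers are invented): resample a candidate validation set and inspect the spread of the AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(14)
n, n_events = 500, 69                        # candidate validation-set size
y = np.r_[np.ones(n_events), np.zeros(n - n_events)].astype(int)
score = rng.normal(y, 1.0)                   # a scoring system's output

aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if y[idx].min() == y[idx].max():         # skip degenerate resamples
        continue
    aucs.append(roc_auc_score(y[idx], score[idx]))
print(np.quantile(aucs, [0.025, 0.5, 0.975]))
```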

  19. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Science.gov (United States)

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  20. Fitting multistate transition models with autoregressive logistic regression : Supervised exercise in intermittent claudication

    NARCIS (Netherlands)

    de Vries, S O; Fidler, Vaclav; Kuipers, Wietze D; Hunink, Maria G M

    1998-01-01

    The purpose of this study was to develop a model that predicts the outcome of supervised exercise for intermittent claudication. The authors present an example of the use of autoregressive logistic regression for modeling observed longitudinal data. Data were collected from 329 participants in a

  1. Endogenous glucose production from infancy to adulthood: a non-linear regression model

    NARCIS (Netherlands)

    Huidekoper, Hidde H.; Ackermans, Mariëtte T.; Ruiter, An F. C.; Sauerwein, Hans P.; Wijburg, Frits A.

    2014-01-01

    To construct a regression model for endogenous glucose production (EGP) as a function of age, and compare this with glucose supplementation using commonly used dextrose-based saline solutions at fluid maintenance rate in children. A model was constructed based on EGP data, as quantified by

  2. INTRODUCTION TO A COMBINED MULTIPLE LINEAR REGRESSION AND ARMA MODELING APPROACH FOR BEACH BACTERIA PREDICTION

    Science.gov (United States)

    Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...

  3. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  4. Genomic prediction based on data from three layer lines using non-linear regression models

    NARCIS (Netherlands)

    Huang, H.; Windig, J.J.; Vereijken, A.; Calus, M.P.L.

    2014-01-01

    Background - Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. Methods - In an attempt to alleviate

  5. Logistic regression models of factors influencing the location of bioenergy and biofuels plants

    Science.gov (United States)

    T.M. Young; R.L. Zaretzki; J.H. Perdue; F.M. Guess; X. Liu

    2011-01-01

    Logistic regression models were developed to identify significant factors that influence the location of existing wood-using bioenergy/biofuels plants and traditional wood-using facilities. Logistic models provided quantitative insight for variables influencing the location of woody biomass-using facilities. Availability of "thinnings to a basal area of 31.7m2/ha...

  6. Determining factors influencing survival of breast cancer by fuzzy logistic regression model.

    Science.gov (United States)

    Nikbakht, Roya; Bahrampour, Abbas

    2017-01-01

    Fuzzy logistic regression models can be used to determine the influential factors of a disease. This study explores important predictive factors for the survival of breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were entered into the fuzzy logistic regression model. Model performance was assessed in terms of the mean degree of membership (MDM). The results showed that almost 41% of patients were in the neoplasm and malignant group, and more than two-thirds of them were still alive after 5 years of follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, respectively. Furthermore, the MDM criterion shows that the fuzzy logistic regression has a good fit to the data (MDM = 0.86). The fuzzy logistic regression model showed that chemotherapy is more important than radiotherapy for the survival of patients with breast cancer. Another ability of this model is calculating the possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research, and since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.

  7. Photovoltaic Array Condition Monitoring Based on Online Regression of Performance Model

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Sera, Dezso; Kerekes, Tamas

    2013-01-01

    regression modeling, from PV array production, plane-of-array irradiance, and module temperature measurements, acquired during an initial learning phase of the system. After the model has been parameterized automatically, the condition monitoring system enters the normal operation phase, where...

  8. The use of logistic regression in modelling the distributions of bird ...

    African Journals Online (AJOL)

    The method of logistic regression was used to model the observed geographical distribution patterns of bird species in Swaziland in relation to a set of environmental variables. Reporting rates derived from bird atlas data are used as an index of population densities. This is justified in part by the success of the modelling ...

  9. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  10. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT; RAMASWAMY

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing

  11. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...

  12. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis have demonstrated the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among the various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great successes by outperforming state-of-the-art performance in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with a different value of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set, and thereby have different powers to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we thus call a 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
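
    A schematic analogue of the ensemble idea (with the CNN stage replaced by a plain logistic regression to stay compact) stacks out-of-fold predictions from several Lasso models fitted at different regularization strengths:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

X, y = make_classification(n_samples=300, n_features=200, n_informative=15,
                           random_state=0)
alphas = [0.001, 0.01, 0.05, 0.1]

# Each sparse model's out-of-fold predictions become one stacked feature.
Z = np.column_stack([
    cross_val_predict(Lasso(alpha=a), X, y.astype(float), cv=5)
    for a in alphas
])
stacked = LogisticRegression(max_iter=5000)
print("stacked CV accuracy:", cross_val_score(stacked, Z, y, cv=5).mean().round(3))
```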

  13. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

    ... small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear ... are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large, to test the robustness of the methodology. Numerical results conform with the theoretical analysis.

  14. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    Science.gov (United States)

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant-plus-power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
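
    A minimal sketch of the baseline one-level mixed model, assuming a hypothetical data file with columns cw (crown width), dbh, height and plot; the variance functions and correlation structures named above would be layered on top of this in a fuller implementation.

        import pandas as pd
        import statsmodels.formula.api as smf

        # One-level LME: fixed effects for tree size, random intercept per plot.
        # File name and column names are hypothetical placeholders.
        data = pd.read_csv("crown_width.csv")
        lme = smf.mixedlm("cw ~ dbh + height", data, groups=data["plot"])
        fit = lme.fit(reml=True)
        print(fit.summary())   # candidate random-effects structures would be
                               # compared by information criteria, as in the record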

  15. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been a controversial issue across studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results for total suspended solids (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R2 and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMC in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  16. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates fell below the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
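
    One of the compared methods, stabilized inverse probability weighting, can be sketched as follows on simulated data. The confounder structure, coefficients and the use of a weighted GLM (via var_weights) are illustrative assumptions, not the simulation design of the study.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        x = rng.normal(size=(n, 3))                       # confounders
        ps_true = 1 / (1 + np.exp(-(x @ [0.5, -0.5, 0.3])))
        treat = rng.binomial(1, ps_true)
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * treat
                                              + x @ [0.4, 0.4, -0.2]))))

        # Propensity score model
        ps_fit = sm.Logit(treat, sm.add_constant(x)).fit(disp=0)
        ps = ps_fit.predict(sm.add_constant(x))

        # Stabilized weights: marginal treatment probability in the numerator
        p_treat = treat.mean()
        w = np.where(treat == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

        # Weighted outcome model for the marginal treatment effect (log odds)
        out = sm.GLM(y, sm.add_constant(treat.reshape(-1, 1)),
                     family=sm.families.Binomial(),
                     var_weights=w).fit()
        print(out.params)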

  17. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    Science.gov (United States)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well-known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓σ) for both ...
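
    For unweighted data, two of the named subcases are easy to compute directly; the sketch below contrasts (O) genuine orthogonal regression, obtained from the minor eigenvector of the sample covariance matrix, with (R) reduced major-axis regression. The data are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.normal(size=300)
        y = 0.8 * x + 0.5 + rng.normal(scale=0.4, size=300)

        xc, yc = x - x.mean(), y - y.mean()

        # (O) orthogonal regression: the line's normal vector is the
        # eigenvector of the smallest eigenvalue of the covariance matrix
        cov = np.cov(xc, yc)
        eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues ascending
        v = eigvec[:, 0]
        slope_o = -v[0] / v[1]
        intercept_o = y.mean() - slope_o * x.mean()

        # (R) reduced major-axis regression: slope = sign(r) * sy / sx
        r = np.corrcoef(x, y)[0, 1]
        slope_r = np.sign(r) * yc.std(ddof=1) / xc.std(ddof=1)
        intercept_r = y.mean() - slope_r * x.mean()

        print(slope_o, slope_r)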

  18. Evaluating Non-Linear Regression Models in Analysis of Persian Walnut Fruit Growth

    Directory of Open Access Journals (Sweden)

    I. Karamatlou

    2016-02-01

    Full Text Available Introduction: Persian walnut (Juglans regia L.) is a large, wind-pollinated, monoecious, dichogamous, long-lived, perennial tree cultivated for its high-quality wood and nuts throughout the temperate regions of the world. Growth model methodology has been widely used in the modeling of plant growth. Mathematical models are important tools to study plant growth and agricultural systems, and can be applied to decision-making and the design of management procedures in horticulture. Through growth analysis, planning for planting systems, fertilization, pruning operations and harvest time, as well as obtaining an economical yield, can become more accessible. Non-linear models are more difficult to specify and estimate than linear models. This research aimed to study non-linear regression models based on data obtained from fruit weight, length and width. Selecting the best models to explain the inherent fruit growth pattern of Persian walnut was a further goal of this study. Materials and Methods: The experimental material comprised 14 Persian walnut genotypes propagated by seed, collected from a walnut orchard in Golestan province, Minoudasht region, Iran, at latitude 37°04'N, longitude 55°32'E, altitude 1060 m, in a silt loam soil type. These genotypes were selected as a representative sampling of the many walnut genotypes available throughout northeastern Iran. The age of the walnut trees ranged from 30 to 50 years. The annual mean temperature at the location is 16.3°C, with annual mean rainfall of 690 mm. The data used here are the averages of walnut fresh fruit measurements, in grams and millimeters per day, taken in 2011. According to the data distribution pattern, several equations have been proposed to describe sigmoidal growth patterns. Here, we used double-sigmoid and logistic-monomolecular models to evaluate fruit growth based on fruit weight, and four different regression models including Richards, Gompertz, Logistic and Exponential growth for evaluation ...
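
    A hedged sketch of how two of the named growth functions might be fitted and compared by residual sum of squares with scipy; the Gompertz and Logistic parameterizations below are common textbook forms, and the data are synthetic stand-ins for the measured fruit weights.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, b, k):
            return A * np.exp(-b * np.exp(-k * t))

        def logistic(t, A, b, k):
            return A / (1 + b * np.exp(-k * t))

        t = np.arange(10, 150, 10, dtype=float)   # days after bloom (synthetic)
        w = gompertz(t, 12.0, 8.0, 0.05) \
            + np.random.default_rng(4).normal(0, 0.3, t.size)

        for f in (gompertz, logistic):
            p, _ = curve_fit(f, t, w, p0=[12.0, 5.0, 0.05], maxfev=10000)
            rss = np.sum((w - f(t, *p)) ** 2)
            print(f.__name__, p, rss)             # compare candidate models by RSS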

  19. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvement that both the newly introduced and the originally existing variables bring to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or a significance test. The simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.

  20. Construction of risk prediction model of type 2 diabetes mellitus based on logistic regression

    Directory of Open Access Journals (Sweden)

    Li Jian

    2017-01-01

    Full Text Available Objective: to construct a multi-factor prediction model for the individual risk of T2DM, and to explore new ideas for early warning, prevention and personalized health services for T2DM. Methods: logistic regression techniques were used to screen the risk factors for T2DM and construct the risk prediction model of T2DM. Results: Male risk prediction model logistic regression equation: logit(P) = BMI × 0.735 + vegetables × (−0.671) + age × 0.838 + diastolic pressure × 0.296 + physical activity × (−2.287) + sleep × (−0.009) + smoking × 0.214. Female risk prediction model logistic regression equation: logit(P) = BMI × 1.979 + vegetables × (−0.292) + age × 1.355 + diastolic pressure × 0.522 + physical activity × (−2.287) + sleep × (−0.010). The area under the ROC curve for males was 0.83, with sensitivity 0.72 and specificity 0.86; the area under the ROC curve for females was 0.84, with sensitivity 0.75 and specificity 0.90. Conclusion: the model data come from a nested case-control study; the risk prediction model was established using the mature logistic regression technique, and the model has high predictive sensitivity, specificity and stability.
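
    The male equation can be transcribed directly into code, as below. Note that the record does not specify the coding or units of the covariates (e.g., whether BMI enters as a category score or a raw value), so the example inputs are placeholders only.

        import math

        # Direct transcription of the male risk equation printed above; the
        # covariate coding is not given in the record, so inputs are placeholders.
        def male_t2dm_risk(bmi, vegetables, age, dbp, activity, sleep, smoking):
            logit_p = (bmi * 0.735 + vegetables * (-0.671) + age * 0.838
                       + dbp * 0.296 + activity * (-2.287) + sleep * (-0.009)
                       + smoking * 0.214)
            return 1 / (1 + math.exp(-logit_p))   # invert the logit

        print(male_t2dm_risk(bmi=1, vegetables=1, age=1, dbp=1,
                             activity=1, sleep=1, smoking=0))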

  1. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  2. Multiple regression models for energy use in air-conditioned office buildings in different climates

    International Nuclear Information System (INIS)

    Lam, Joseph C.; Wan, Kevin K.W.; Liu Dalong; Tsang, C.L.

    2010-01-01

    An attempt was made to develop multiple regression models for office buildings in the five major climates in China: severe cold; cold; hot summer and cold winter; mild; and hot summer and warm winter. A total of 12 key building design variables were identified through parametric and sensitivity analysis and considered as inputs in the regression models. The coefficient of determination R2 varies from 0.89 in Harbin to 0.97 in Kunming, indicating that 89-97% of the variations in annual building energy use can be explained by changes in the 12 parameters. A pseudo-random number generator based on three simple multiplicative congruential generators was employed to generate random designs for evaluation of the regression models. The differences between regression-predicted and DOE-simulated annual building energy use are largely within 10%. It is envisaged that the regression models developed can be used to estimate the likely energy savings/penalty during the initial design stage, when different building schemes and design concepts are being considered.
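
    The record does not say which three multiplicative congruential generators were combined; the classic Wichmann-Hill construction is shown below as one plausible reading, generating uniform variates for random candidate designs over the 12 design variables.

        # Wichmann-Hill: three multiplicative congruential generators whose
        # scaled outputs are summed modulo 1. Shown as an assumption; the
        # paper's exact generator is not specified.
        def wichmann_hill(seed1=171, seed2=172, seed3=170):
            s1, s2, s3 = seed1, seed2, seed3
            while True:
                s1 = (171 * s1) % 30269
                s2 = (172 * s2) % 30307
                s3 = (170 * s3) % 30323
                yield (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0

        gen = wichmann_hill()
        # five random designs over 12 (normalized) design variables
        designs = [[next(gen) for _ in range(12)] for _ in range(5)]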

  3. Testing and Modeling Fuel Regression Rate in a Miniature Hybrid Burner

    Directory of Open Access Journals (Sweden)

    Luciano Fanton

    2012-01-01

    Full Text Available Ballistic characterization of an extended group of innovative HTPB-based solid fuel formulations for hybrid rocket propulsion was performed in a lab-scale burner. An optical time-resolved technique was used to assess the quasi-steady regression history of single-perforation cylindrical samples. The effects of metalized additives and radiant heat transfer on the regression rate of such formulations were assessed. Under the investigated operating conditions and based on phenomenological models from the literature, analyses of the collected experimental data show an appreciable influence of the radiant heat flux from burnt gases and soot for both unloaded and loaded fuel formulations. Pure HTPB regression rate data are satisfactorily reproduced, while the impressive initial regression rates of metalized formulations require further assessment.
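
    Hybrid regression-rate data of this kind are commonly summarized by the phenomenological power law r = a·Gox^n; the sketch below recovers a and n by a log-log least-squares fit. The flux and regression-rate values are illustrative, not the paper's measurements.

        import numpy as np

        # Classical correlation: regression rate vs. oxidizer mass flux,
        # r = a * Gox**n, linearized as log r = log a + n * log Gox.
        gox = np.array([50., 80., 120., 180., 250.])   # kg/(m^2 s), illustrative
        r = np.array([0.55, 0.75, 1.00, 1.30, 1.60])   # mm/s, illustrative

        n, log_a = np.polyfit(np.log(gox), np.log(r), 1)
        a = np.exp(log_a)
        print(f"r = {a:.4f} * Gox^{n:.3f}")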

  4. LINEAR REGRESSION MODEL ESTIMATION FOR RIGHT CENSORED DATA

    Directory of Open Access Journals (Sweden)

    Ersin Yılmaz

    2016-05-01

    Full Text Available In this study, we first define right-censored data: briefly, a right-censored observation is one whose value is known only to lie above a cutoff, which may be related to the limits of a scaling device. We then use a response variable obtained under right censoring together with the explanatory variables, and estimate the linear regression model. Because of the censoring, Kaplan-Meier weights are used in the estimation of the model; with these weights the regression estimator is consistent and unbiased. A semiparametric regression method is also available for censored data and likewise gives useful results. This study may also be useful for health research, because censored data commonly arise in medical studies.
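
    A minimal sketch of the approach under simple assumptions (no ties, random censoring): Kaplan-Meier (Stute-type) weights are computed from the ordered, possibly censored responses and then plugged into weighted least squares.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 300
        x = rng.normal(size=n)
        y_true = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
        c = rng.uniform(0, 6, size=n)               # censoring values
        y_obs = np.minimum(y_true, c)
        delta = (y_true <= c).astype(float)         # 1 = uncensored

        order = np.argsort(y_obs)                   # sort by observed response
        yo, do, xo = y_obs[order], delta[order], x[order]

        # Stute's Kaplan-Meier weights (0-indexed; censored points get weight 0)
        w = np.zeros(n)
        surv = 1.0
        for i in range(n):
            w[i] = surv * do[i] / (n - i)
            surv *= ((n - i - 1) / (n - i)) ** do[i]

        # Weighted least squares with the KM weights
        X = np.column_stack([np.ones(n), xo])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ yo)
        print(beta)                                  # approx. intercept and slope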

  5. Soft-sensing model of temperature for aluminum reduction cell on improved twin support vector regression

    Science.gov (United States)

    Li, Tao

    2018-06-01

    The complexity of the aluminum electrolysis process makes the temperature of aluminum reduction cells hard to measure directly. However, temperature is central to the control of aluminum production. To solve this problem, combining practice data from an aluminum plant, this paper presents a soft-sensing model of temperature for the aluminum electrolysis process based on Improved Twin Support Vector Regression (ITSVR). ITSVR eliminates the slow learning speed of Support Vector Regression (SVR) and the over-fitting risk of Twin Support Vector Regression (TSVR) by introducing a regularization term into the objective function of TSVR, which ensures the structural risk minimization principle and lower computational complexity. Finally, the model, with some other parameters as auxiliary variables, predicts the temperature by ITSVR. The simulation results show that the soft-sensing model based on ITSVR is less time-consuming and generalizes better.

  6. Combination of supervised and semi-supervised regression models for improved unbiased estimation

    DEFF Research Database (Denmark)

    Arenas-Garía, Jeronimo; Moriana-Varo, Carlos; Larsen, Jan

    2010-01-01

    In this paper we investigate the steady-state performance of semisupervised regression models adjusted using a modified RLS-like algorithm, identifying the situations where the new algorithm is expected to outperform standard RLS, by using an adaptive combination of the supervised and semisupervised ...
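
    In the spirit of the adaptive combination mentioned above, the sketch below mixes two predictors through a sigmoid-parameterized convex weight updated by a stochastic gradient on the instantaneous error; this is a generic construction, not the paper's exact RLS-like algorithm.

        import numpy as np

        def adaptive_combination(y, y1, y2, mu=0.1):
            """Convex combination lam*y1 + (1-lam)*y2 with lam = sigmoid(a),
            where a is adapted by gradient descent on the squared error."""
            a = 0.0
            out = np.empty_like(y)
            for t in range(len(y)):
                lam = 1 / (1 + np.exp(-a))
                out[t] = lam * y1[t] + (1 - lam) * y2[t]
                e = y[t] - out[t]
                a += mu * e * (y1[t] - y2[t]) * lam * (1 - lam)  # gradient step
            return out

        rng = np.random.default_rng(6)
        y = np.sin(np.linspace(0, 20, 500))
        y1 = y + rng.normal(scale=0.3, size=500)   # noisier predictor
        y2 = y + rng.normal(scale=0.1, size=500)   # better predictor
        combined = adaptive_combination(y, y1, y2)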

  7. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  8. Multiple regression analysis in modelling of carbon dioxide emissions by energy consumption use in Malaysia

    Science.gov (United States)

    Keat, Sim Chong; Chun, Beh Boon; San, Lim Hwee; Jafri, Mohd Zubir Mat

    2015-04-01

    Climate change due to carbon dioxide (CO2) emissions is one of the most complex challenges threatening our planet. The issue is a matter of great international concern, attributed primarily to different fossil fuels. In this paper, a regression model is used for analyzing the causal relationship among CO2 emissions based on energy consumption in Malaysia, using time series data for the period 1980-2010. The equations were developed using a regression model based on the eight major sources that contribute to CO2 emissions: non-energy, Liquefied Petroleum Gas (LPG), diesel, kerosene, refinery gas, Aviation Turbine Fuel (ATF) and Aviation Gasoline (AV Gas), fuel oil, and motor petrol. Part of the data (1980-2000) was used to build the regression model and part (2001-2010) to validate it. The comparison of the model predictions with the measured data showed a high correlation coefficient (R2=0.9544), indicating the model's accuracy and efficiency. These results are accurate and can be used for early warning so that the population can comply with air quality standards.

  9. Using exploratory regression to identify optimal driving factors for cellular automaton modeling of land use change.

    Science.gov (United States)

    Feng, Yongjiu; Tong, Xiaohua

    2017-09-22

    Defining transition rules is an important issue in cellular automaton (CA)-based land use modeling because these models incorporate highly correlated driving factors. Multicollinearity among correlated driving factors may produce negative effects that must be eliminated from the modeling. Using exploratory regression under pre-defined criteria, we identified all possible combinations of factors from the candidate factors affecting land use change. Three combinations that incorporate five driving factors meeting pre-defined criteria were assessed. With the selected combinations of factors, three logistic regression-based CA models were built to simulate dynamic land use change in Shanghai, China, from 2000 to 2015. For comparative purposes, a CA model with all candidate factors was also applied to simulate the land use change. Simulations using three CA models with multicollinearity eliminated performed better (with accuracy improvements about 3.6%) than the model incorporating all candidate factors. Our results showed that not all candidate factors are necessary for accurate CA modeling and the simulations were not sensitive to changes in statistically non-significant driving factors. We conclude that exploratory regression is an effective method to search for the optimal combinations of driving factors, leading to better land use change models that are devoid of multicollinearity. We suggest identification of dominant factors and elimination of multicollinearity before building land change models, making it possible to simulate more realistic outcomes.
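
    The exploratory regression step can be sketched as an exhaustive search over factor subsets that keeps only combinations meeting pre-defined criteria; the thresholds, the use of logistic regression, and VIF as the multicollinearity screen below are illustrative choices, not the study's exact settings.

        import numpy as np
        from itertools import combinations
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        def exploratory_regression(X, y, names, k=5, p_max=0.05, vif_max=7.5):
            """Enumerate all k-factor subsets; keep those whose coefficients
            are significant and whose VIFs indicate no multicollinearity."""
            passing = []
            for idx in combinations(range(X.shape[1]), k):
                Xs = sm.add_constant(X[:, idx])
                try:
                    fit = sm.Logit(y, Xs).fit(disp=0)
                except Exception:
                    continue                      # skip ill-conditioned subsets
                pvals_ok = (fit.pvalues[1:] < p_max).all()
                vifs = [variance_inflation_factor(Xs, j) for j in range(1, k + 1)]
                if pvals_ok and max(vifs) < vif_max:
                    passing.append(([names[i] for i in idx], fit.prsquared))
            return sorted(passing, key=lambda t: -t[1])

        rng = np.random.default_rng(7)
        X = rng.normal(size=(500, 8))
        X[:, 7] = X[:, 0] + rng.normal(scale=0.05, size=500)   # collinear factor
        y = (X[:, :5] @ np.ones(5) + rng.normal(size=500) > 0).astype(int)
        names = [f"factor{i}" for i in range(8)]
        print(exploratory_regression(X, y, names)[:3])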

  10. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    Science.gov (United States)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to strike a balance between the computational load and the reliability of the estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a slight worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase in uncertainty with the extension of the extrapolation time of the estimation. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.

  11. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty, and statistical methods for resolving these effects (e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models) may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R2 values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
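
    The core mechanics of GWR can be illustrated with a toy implementation: an ordinary regression is refitted at each location with Gaussian kernel weights that decay with distance, yielding spatially varying coefficients. The bandwidth, variables and data below are all illustrative (a real analysis would use a calibrated bandwidth and the wind covariate described above).

        import numpy as np

        def gwr_coefficients(coords, X, y, bandwidth=0.2):
            """Local weighted least squares at every location, with Gaussian
            kernel weights based on squared distance."""
            Xd = np.column_stack([np.ones(len(y)), X])
            betas = np.empty((len(y), Xd.shape[1]))
            for i, c in enumerate(coords):
                d2 = np.sum((coords - c) ** 2, axis=1)
                w = np.exp(-d2 / (2 * bandwidth ** 2))    # Gaussian kernel
                XW = Xd * w[:, None]
                betas[i] = np.linalg.solve(Xd.T @ XW, XW.T @ y)
            return betas                                  # local coefficients

        rng = np.random.default_rng(8)
        coords = rng.uniform(size=(400, 2))
        traffic = rng.normal(size=400)
        # spatially varying effect of traffic on NO2 (non-stationarity)
        no2 = (1 + 2 * coords[:, 0]) * traffic + rng.normal(scale=0.3, size=400)
        local_beta = gwr_coefficients(coords, traffic.reshape(-1, 1), no2)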

  12. Semimechanistic model describing gastric emptying and glucose absorption in healthy subjects and patients with type 2 diabetes

    DEFF Research Database (Denmark)

    Alskär, Oskar; Bagger, Jonatan I; Røge, Rikke M.

    2016-01-01

    The integrated glucose-insulin (IGI) model is a previously published semimechanistic model that describes plasma glucose and insulin concentrations after glucose challenges. The aim of this work was to use knowledge of physiology to improve the IGI model's description of glucose absorption and gastric emptying after tests with varying glucose doses. The developed model's performance was compared to empirical models. To develop our model, data from oral and intravenous glucose challenges in patients with type 2 diabetes and healthy control subjects were used together with present knowledge ... glucose absorption was superior to linear absorption regardless of the gastric emptying model applied. The semiphysiological model developed performed better than previously published empirical models and allows better understanding of the mechanisms underlying glucose absorption. In conclusion, our new model provides a better description and improves the understanding of dynamic glucose tests involving oral glucose.

  13. A standardised graphic method for describing data privacy frameworks in primary care research using a flexible zone model.

    NARCIS (Netherlands)

    Kuchinke, W.; Ohmann, C.; Verheij, R.A.; Veen, E.B. van; Arvanitis, T.N.; Taweel, A.; Delaney, B.C.

    2014-01-01

    Purpose: To develop a model describing core concepts and principles of data flow, data privacy and confidentiality, in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data

  14. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. Best performing models have

  15. A generalized regression model of arsenic variations in the shallow groundwater of Bangladesh

    Science.gov (United States)

    Taylor, Richard G.; Chandler, Richard E.

    2015-01-01

    Localized studies of arsenic (As) in Bangladesh have reached disparate conclusions regarding the impact of irrigation-induced recharge on As concentrations in shallow (≤50 m below ground level) groundwater. We construct generalized regression models (GRMs) to describe observed spatial variations in As concentrations in shallow groundwater both (i) nationally, and (ii) regionally within Holocene deposits where As concentrations in groundwater are generally high (>10 μg L−1). At these scales, the GRMs reveal statistically significant inverse associations between observed As concentrations and two covariates: (1) hydraulic conductivity of the shallow aquifer and (2) net increase in mean recharge between predeveloped and developed groundwater-fed irrigation periods. Further, the GRMs show that the spatial variation of groundwater As concentrations is well explained by not only surface geology but also statistical interactions (i.e., combined effects) between surface geology and mean groundwater recharge, thickness of surficial silt and clay, and well depth. Net increases in recharge result from intensive groundwater abstraction for irrigation, which induces additional recharge where it is enabled by a permeable surface geology. Collectively, these statistical associations indicate that irrigation-induced recharge serves to flush mobile As from shallow groundwater. PMID:27524841

  16. Association of footprint measurements with plantar kinetics: a linear regression model.

    Science.gov (United States)

    Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S

    2014-03-01

    The use of foot measurements to classify morphology and interpret foot function remains one of the focal concepts of lower-extremity biomechanics. However, only 27% to 55% of midfoot variance in foot pressures has been determined in the most comprehensive models. We investigated whether dynamic walking footprint measurements are associated with inter-individual foot loading variability. Thirty individuals (15 men and 15 women; mean ± SD age, 27.17 ± 2.21 years) walked at a self-selected speed over an electronic pedography platform using the midgait technique. Kinetic variables (contact time, peak pressure, pressure-time integral, and force-time integral) were collected for six masked regions. Footprints were digitized for area and linear boundaries using digital photo planimetry software. Six footprint measurements were determined: contact area, footprint index, arch index, truncated arch index, Chippaux-Smirak index, and Staheli index. Linear regression analysis with a Bonferroni adjustment was performed to determine the association between the footprint measurements and each of the kinetic variables. The findings demonstrate that a relationship exists between increased midfoot contact and increased kinetic values in respective locations. Many of these variables produced large effect sizes while describing 38% to 71% of the common variance of select plantar kinetic variables in the medial midfoot region. In addition, larger footprints were associated with larger kinetic values at the medial heel region and both masked forefoot regions. Dynamic footprint measurements are associated with dynamic plantar loading kinetics, with emphasis on the midfoot region.

  17. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    Science.gov (United States)

    Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.

    2018-03-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) to fit classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots, using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than the models using the image composites, with corresponding individual class accuracies between six and 45 percentage points higher.
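
    A per-pixel harmonic regression of the kind described can be sketched as an ordinary least-squares fit of a first-order Fourier series with an annual period; the day-of-year values and vegetation-index series below are illustrative placeholders.

        import numpy as np

        def harmonic_coefficients(doy, values, period=365.25):
            """Fit v ~ a0 + a1*cos(2*pi*t/T) + b1*sin(2*pi*t/T) by least squares;
            the estimated (a0, a1, b1) serve as per-pixel predictor variables."""
            t = 2 * np.pi * doy / period
            A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
            coef, *_ = np.linalg.lstsq(A, values, rcond=None)
            return coef

        doy = np.array([10., 60., 100., 150., 200., 250., 300., 350.])
        ndvi = 0.4 + 0.25 * np.sin(2 * np.pi * (doy - 120) / 365.25)  # synthetic
        print(harmonic_coefficients(doy, ndvi))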

  18. Multiple logistic regression model of signalling practices of drivers on urban highways

    Science.gov (United States)

    Puan, Othman Che; Ibrahim, Muttaka Na'iya; Zakaria, Rozana

    2015-05-01

    Giving a signal is a way of informing other road users, especially conflicting drivers, of a driver's intention to change his/her movement course. Other users are exposed to hazardous situations and risk of accident if a driver who changes course fails to give a signal as required. This paper describes the application of a logistic regression model to the analysis of drivers' signalling practices on multilane highways based on possible factors affecting the driver's decision, such as the driver's gender, vehicle type, vehicle speed and traffic flow intensity. Data pertaining to the analysis of such factors were collected manually. More than 2000 drivers who performed a lane-changing manoeuvre while driving on two sections of multilane highways were observed. Findings from the study show that a relatively large proportion of drivers failed to give any signal when changing lane. The results of the analysis indicate that although the proportion of drivers who failed to signal prior to a lane-changing manoeuvre is high, the degree of compliance of female drivers is better than that of male drivers. A binary logistic model was developed to represent the probability of a driver giving a signal indication prior to a lane-changing manoeuvre. The model indicates that the driver's gender, the type of vehicle driven, vehicle speed and traffic volume influence the driver's decision to signal prior to a lane-changing manoeuvre on a multilane urban highway. In terms of vehicle type, about 97% of motorcyclists failed to comply with the signal indication requirement. The proportion of non-compliant drivers under stable traffic flow conditions is much higher than when the flow is relatively heavy. This is consistent with the data, which indicate a high degree of non-compliance when the average speed of the traffic stream is relatively high.

  19. A comparison of macroscopic models describing the collective response of sedimenting rod-like particles in shear flows

    KAUST Repository

    Helzel, Christiane; Tzavaras, Athanasios

    2016-01-01

    We consider a kinetic model, which describes the sedimentation of rod-like particles in dilute suspensions under the influence of gravity, presented in Helzel and Tzavaras (submitted for publication). Here we restrict our considerations to shear flow and consider a simplified situation, where the particle orientation is restricted to the plane spanned by the direction of shear and the direction of gravity. For this simplified kinetic model we carry out a linear stability analysis and we derive two different nonlinear macroscopic models which describe the formation of clusters of higher particle density. One of these macroscopic models is based on a diffusive scaling, the other one is based on a so-called quasi-dynamic approximation. Numerical computations, which compare the predictions of the macroscopic models with the kinetic model, complete our presentation.

  20. A comparison of macroscopic models describing the collective response of sedimenting rod-like particles in shear flows

    KAUST Repository

    Helzel, Christiane

    2016-07-22

    We consider a kinetic model, which describes the sedimentation of rod-like particles in dilute suspensions under the influence of gravity, presented in Helzel and Tzavaras (submitted for publication). Here we restrict our considerations to shear flow and consider a simplified situation, where the particle orientation is restricted to the plane spanned by the direction of shear and the direction of gravity. For this simplified kinetic model we carry out a linear stability analysis and we derive two different nonlinear macroscopic models which describe the formation of clusters of higher particle density. One of these macroscopic models is based on a diffusive scaling, the other one is based on a so-called quasi-dynamic approximation. Numerical computations, which compare the predictions of the macroscopic models with the kinetic model, complete our presentation.