Sample records for regression analyses adjusting

1. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

Science.gov (United States)

Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

2014-01-01

In logistic regression analysis of binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified; unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438

2. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

DEFF Research Database (Denmark)

Nielsen, Allan Aasbjerg

2007-01-01

This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model (typically 2- or 3-D positions) and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain … the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application, both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between …

Science.gov (United States)

Donoghoe, Mark W; Marschner, Ian C

2016-08-15

4. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

Science.gov (United States)

Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

2009-11-01

G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
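The power computation for correlation tests that G*Power implements rests, in the large-sample case, on Fisher's z transformation. A minimal sketch of that approximation (this is not G*Power's code; the function name and defaults are my own):

```python
import math
from statistics import NormalDist

_N = NormalDist()

def corr_power(rho, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: correlation = 0
    when the true correlation is `rho`, via Fisher's z transformation."""
    if n <= 3:
        raise ValueError("need n > 3")
    z = math.atanh(abs(rho))              # Fisher z of the alternative correlation
    se = 1.0 / math.sqrt(n - 3)           # large-sample standard error of Fisher z
    z_crit = _N.inv_cdf(1.0 - alpha / 2)  # two-sided critical value
    # Probability of rejection under the alternative (far tail ignored).
    return _N.cdf(z / se - z_crit)
```

For example, `corr_power(0.3, 100)` comes out near the conventional 0.86 figure for detecting r = 0.3 with 100 subjects at alpha = 0.05.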

5. Applications of MIDAS regression in analysing trends in water quality

Science.gov (United States)

Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

2014-04-01

We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection: typically, water quality variables are sampled fortnightly, whereas rain data are sampled daily. The advantage of using MIDAS regression is the flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
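The parsimony of MIDAS comes from replacing one free coefficient per daily lag with a low-dimensional weighting scheme, most commonly the exponential Almon lag. A hedged sketch of how 14 days of rainfall could be collapsed into one fortnightly regressor (parameter values are illustrative, not taken from the paper):

```python
import math

def exp_almon_weights(k, theta1, theta2):
    """Normalized exponential Almon lag weights w_j ∝ exp(theta1*j + theta2*j^2),
    j = 0..k-1 — the standard parsimonious MIDAS weighting scheme."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(k)]
    total = sum(raw)
    return [r / total for r in raw]

def midas_covariate(daily, theta1=-0.1, theta2=-0.01):
    """Collapse a window of daily observations (e.g. 14 days of rainfall)
    into a single weighted regressor for a fortnightly water-quality model."""
    w = exp_almon_weights(len(daily), theta1, theta2)
    return sum(wi * xi for wi, xi in zip(w, daily))
```

With negative theta parameters the weights decline smoothly with lag, so recent rainfall dominates while only two parameters need to be estimated.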

6. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

Science.gov (United States)

Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

2017-01-01

Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: in the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R² from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R² improvement detected is only marginal; according to the sample-level performance measures used, omitting a considerable number of morbidity interactions causes no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.

7. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

Science.gov (United States)

Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

2016-01-01

Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively; for the propensity score model, it was 8.74% and >100%. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates fell below the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score methods, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.

8. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies.

Science.gov (United States)

Vatcheva, Kristina P; Lee, MinJae; McCormick, Joseph B; Rahbar, Mohammad H

2016-04-01

The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in analyses of data from epidemiologic studies. We used simulated datasets and real-life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and to encourage researchers to consider diagnostics for multicollinearity as one of the steps in regression analysis.
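A standard diagnostic for the problem the authors describe is the variance inflation factor, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on all the other predictors. A self-contained sketch via the normal equations (pure Python, suitable only for small problems; helper names are mine):

```python
def _solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def _r_squared(y, X):
    # OLS R^2 of y on X (intercept added), via the normal equations.
    Xd = [[1.0] + row for row in X]
    k = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xd, y)) for i in range(k)]
    beta = _solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, row)) for row in Xd]
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def vif(columns):
    """Variance inflation factor per predictor: VIF_j = 1 / (1 - R_j^2),
    where R_j^2 regresses predictor j on all the others."""
    out = []
    for j in range(len(columns)):
        y = columns[j]
        X = [[columns[i][r] for i in range(len(columns)) if i != j]
             for r in range(len(y))]
        out.append(1.0 / (1.0 - _r_squared(y, X)))
    return out
```

A common rule of thumb flags VIF values above about 10 as evidence of problematic multicollinearity; near-independent predictors give values close to 1.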

9. Statistical and regression analyses of detected extrasolar systems

Czech Academy of Sciences Publication Activity Database

Pintr, Pavel; Peřinová, V.; Lukš, A.; Pathak, A.

2013-01-01

Roč. 75, č. 1 (2013), s. 37-45 ISSN 0032-0633 Institutional support: RVO:61389021 Keywords : Exoplanets * Kepler candidates * Regression analysis Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.630, year: 2013 http://www.sciencedirect.com/science/article/pii/S0032063312003066

10. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies

OpenAIRE

Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.

2016-01-01

The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013, illustrated the need for a greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epide...

11. Analysing inequalities in Germany a structured additive distributional regression approach

CERN Document Server

Silbersdorff, Alexander

2017-01-01

This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals' realities and the abstract representation of those realities to be explicitly taken into consideration, rather than being reduced to the arithmetic mean alone. In turn, the method is applied to the question of economic inequality in Germany.

12. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

NARCIS (Netherlands)

Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort

13. An evaluation of bias in propensity score-adjusted non-linear regression models.

Science.gov (United States)

Wan, Fei; Mitra, Nandita

2018-03-01

Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.

14. 10 km running performance predicted by a multiple linear regression model with allometrically adjusted variables.

Science.gov (United States)

Abad, Cesar C C; Barros, Ronaldo V; Bertuzzi, Romulo; Gagliardi, João F L; Lima-Silva, Adriano E; Lambert, Mike I; Pires, Flavio O

2016-06-01

The aim of this study was to verify the power of VO2max, peak treadmill running velocity (PTV), and running economy (RE), unadjusted or allometrically adjusted, in predicting 10 km running performance. Eighteen male endurance runners performed: 1) an incremental test to exhaustion to determine VO2max and PTV; 2) a constant submaximal run at 12 km·h⁻¹ on an outdoor track for RE determination; and 3) a 10 km running race. Unadjusted (VO2max, PTV and RE) and adjusted variables (VO2max^0.72, PTV^0.72 and RE^0.60) were investigated through independent multiple regression models to predict 10 km running race time. There were no significant correlations between 10 km running time and either the adjusted or unadjusted VO2max. Significant correlations (p < 0.05) were found for the other predictors, with r > 0.84 and power > 0.88. The allometrically adjusted predictive model was composed of PTV^0.72 and RE^0.60 and explained 83% of the variance in 10 km running time with a standard error of the estimate (SEE) of 1.5 min. The unadjusted model, composed of PTV alone, accounted for 72% of the variance in 10 km running time (SEE of 1.9 min). Both regression models provided powerful estimates of 10 km running time; however, the unadjusted PTV may provide an uncomplicated estimation.
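Allometric adjustment of the kind used above simply divides a physiological measure by body mass raised to a scaling exponent (0.72 and 0.60 in the abstract). A minimal illustration (the helper and the sample values are mine, not the study's):

```python
def allometric_adjust(value, body_mass_kg, exponent):
    """Express a measure relative to body_mass^exponent, so athletes of
    different size can be compared on a common scale."""
    return value / (body_mass_kg ** exponent)

# e.g. the same absolute VO2max of 4.2 L·min^-1 for a 60 kg vs an 80 kg runner:
light = allometric_adjust(4.2, 60.0, 0.72)
heavy = allometric_adjust(4.2, 80.0, 0.72)
```

The lighter runner gets the larger adjusted value, reflecting that the same absolute capacity goes further at lower body mass.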

15. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

NARCIS (Netherlands)

Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

2011-01-01

Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher

16. New ventures require accurate risk analyses and adjustments.

Science.gov (United States)

Eastaugh, S R

2000-01-01

For new business ventures to succeed, healthcare executives need to conduct robust risk analyses and develop new approaches to balance risk and return. Risk analysis involves examination of objective risks and harder-to-quantify subjective risks. Mathematical principles applied to investment portfolios also can be applied to a portfolio of departments or strategic business units within an organization. The ideal business investment would have a high expected return and a low standard deviation. Nonetheless, both conservative and speculative strategies should be considered in determining an organization's optimal service line and helping the organization manage risk.

17. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes.

Science.gov (United States)

Novick, Richard J; Fox, Stephanie A; Stitt, Larry W; Forbes, Thomas L; Steiner, Stefan

2006-08-01

18. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

International Nuclear Information System (INIS)

Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

2013-01-01

Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► Seasonal item in the data series is verified by the Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by the quartile value comparison. ► Paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted using information from days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified with the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal-item and trend-item forecasting respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
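The split the abstract describes — estimate the seasonal component, remove it, fit a regression to the trend, then recombine — can be sketched with simple ratio-to-mean seasonal indices standing in for SEAM (which this sketch does not reproduce) and an OLS trend:

```python
def seasonal_indices(series, period=7):
    # Ratio-to-overall-mean index per seasonal slot (crude SEAM stand-in).
    overall = sum(series) / len(series)
    return [(sum(series[s::period]) / len(series[s::period])) / overall
            for s in range(period)]

def deseasonalize(series, period=7):
    idx = seasonal_indices(series, period)
    return [x / idx[i % period] for i, x in enumerate(series)]

def linear_trend(y):
    # Closed-form OLS of y on t = 0..n-1; returns (intercept, slope).
    n = len(y)
    tbar, ybar = (n - 1) / 2.0, sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in range(n))
    sxy = sum((t - tbar) * (v - ybar) for t, v in enumerate(y))
    b = sxy / sxx
    return ybar - b * tbar, b

def forecast(series, t, period=7):
    # Deseasonalize, fit the trend, then re-apply the seasonal index at day t.
    idx = seasonal_indices(series, period)
    a, b = linear_trend(deseasonalize(series, period))
    return (a + b * t) * idx[t % period]
```

On a synthetic series with a weekly pattern multiplying a rising trend, the recovered indices track the pattern and the recombined forecast lands close to the true next-day value.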

19. The number of subjects per variable required in linear regression analyses.

Science.gov (United States)

Austin, Peter C; Steyerberg, Ewout W

2015-06-01

20. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

Directory of Open Access Journals (Sweden)

Constantin ANGHELACHE

2011-10-01

The article presents the fundamental aspects of linear regression as a toolbox for macroeconomic analyses. It describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level.
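The parameter estimation the article covers has a closed form in the one-regressor case: the slope is the ratio of the covariance of x and y to the variance of x. A compact pure-Python sketch (variable names are mine):

```python
def simple_ols(x, y):
    """Closed-form OLS estimates for y = a + b*x, plus R^2."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx                       # slope: cov(x, y) / var(x)
    a = ybar - b * xbar                 # intercept through the means
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot  # intercept, slope, R^2
```

For data lying exactly on a line, the estimates recover the line and R² equals 1.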

1. Herd-specific random regression carcass profiles for beef cattle after adjustment for animal genetic merit.

Science.gov (United States)

Englishby, Tanya M; Moore, Kirsty L; Berry, Donagh P; Coffey, Mike P; Banos, Georgios

2017-07-01

Abattoir data are an important source of information for the genetic evaluation of carcass traits, but also for on-farm management purposes. The present study aimed to quantify the contribution of herd environment to beef carcass characteristics (weight, conformation score and fat score) with particular emphasis on generating finishing herd-specific profiles for these traits across different ages at slaughter. Abattoir records from 46,115 heifers and 78,790 steers aged between 360 and 900 days, and from 22,971 young bulls aged between 360 and 720 days, were analysed. Finishing herd-year and animal genetic (co)variance components for each trait were estimated using random regression models. Across slaughter age and gender, the ratio of finishing herd-year to total phenotypic variance ranged from 0.31 to 0.72 for carcass weight, 0.21 to 0.57 for carcass conformation and 0.11 to 0.44 for carcass fat score. These parameters indicate that the finishing herd environment is an important contributor to carcass trait variability and amenable to improvement with management practices. Copyright © 2017 Elsevier Ltd. All rights reserved.

2. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

DEFF Research Database (Denmark)

He, Peng; Eriksson, Frank; Scheike, Thomas H.

2016-01-01

With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight …

3. Regression estimators for generic health-related quality of life and quality-adjusted life years.

Science.gov (United States)

Basu, Anirban; Manca, Andrea

2012-01-01

To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, that account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. The proposed regression estimators are based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regress outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

4. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

Science.gov (United States)

Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

2017-12-01

1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from second until sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability and such a breeding program would have no negative genetic impact on egg production.

5. Reducing Inter-Laboratory Differences between Semen Analyses Using Z Score and Regression Transformations

Directory of Open Access Journals (Sweden)

Esther Leushuis

2016-12-01

Background: Standardization of the semen analysis may improve reproducibility. We assessed variability between laboratories in semen analyses and evaluated whether a transformation using Z scores and regression statistics was able to reduce this variability. Materials and Methods: We performed a retrospective cohort study. We calculated between-laboratory coefficients of variation (CVB) for sperm concentration and for morphology. Subsequently, we standardized the semen analysis results by calculating laboratory-specific Z scores, and by using regression. We used analysis of variance for four semen parameters to assess systematic differences between laboratories before and after the transformations, both in the circulation samples and in the samples obtained in the prospective cohort study in the Netherlands between January 2002 and February 2004. Results: The mean CVB was 7% for sperm concentration (range 3 to 13%) and 32% for sperm morphology (range 18 to 51%). The differences between the laboratories were statistically significant for all semen parameters (all P<0.001). Standardization using Z scores did not reduce the differences in semen analysis results between the laboratories (all P<0.001). Conclusion: There is large between-laboratory variability for sperm morphology and small, but statistically significant, between-laboratory variation for sperm concentration. Standardization using Z scores does not eliminate between-laboratory variability.
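The laboratory-specific Z-score transformation the study evaluated is straightforward: each value is centred and scaled by its own laboratory's mean and SD. (The study found this did not remove between-laboratory differences; the sketch below only shows the transformation itself, with my own function name.)

```python
from statistics import mean, stdev

def lab_z_scores(values_by_lab):
    """Convert each laboratory's semen-parameter values to
    laboratory-specific Z scores: (x - lab mean) / lab SD."""
    out = {}
    for lab, vals in values_by_lab.items():
        m, s = mean(vals), stdev(vals)
        out[lab] = [(v - m) / s for v in vals]
    return out
```

After the transformation, every laboratory's values have mean 0 and unit sample SD, which removes location and scale differences but not, as the study shows, systematic disagreement in how samples are scored.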

6. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

Science.gov (United States)

Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

2016-10-01

In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). Flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
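A quick informal check for the kind of overdispersion the authors test for is the Pearson dispersion statistic: the Pearson chi-square divided by the residual degrees of freedom, which should sit near 1 for a well-specified Poisson model. (This is not the paper's regression-based score test; the helper below is a hypothetical illustration.)

```python
def pearson_dispersion(observed, expected, n_params=1):
    """Pearson chi-square / residual degrees of freedom.
    Values well above 1 suggest overdispersion relative to the
    Poisson assumption that variance equals the mean."""
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2 / (len(observed) - n_params)
```

Counts that scatter far more widely around their fitted means than Poisson variation allows push the statistic well above 1, motivating the quasi-likelihood or negative binomial corrections discussed in the abstract.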

7. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

Science.gov (United States)

Hoos, Anne B.; Patel, Anant R.

1996-01-01

Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

8. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

Science.gov (United States)

Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

2017-12-10

The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.

9. Logistic regression and multiple classification analyses to explore risk factors of under-5 mortality in Bangladesh

International Nuclear Information System (INIS)

Bhowmik, K.R.; Islam, S.

2016-01-01

Logistic regression (LR) analysis is the most common statistical methodology for finding the determinants of childhood mortality. However, the significant predictors cannot be ranked according to their influence on the response variable. Multiple classification (MC) analysis can be applied to identify the significant predictors with a priority index, which helps to rank the predictors. The main objective of the study is to find the socio-demographic determinants of childhood mortality at the neonatal, post-neonatal, and post-infant periods by fitting an LR model, as well as to rank those determinants through MC analysis. The study is conducted using the data of the Bangladesh Demographic and Health Survey 2007, where birth and death information of children was collected from their mothers. Three dichotomous response variables are constructed from children's age at death to fit the LR and MC models. Socio-economic and demographic variables significantly associated with the response variables separately are considered in the LR and MC analyses. Both the LR and MC models identified the same significant predictors for specific childhood mortality. For both neonatal and child mortality, biological factors of children, regional settings, and parents' socio-economic status are found to be the 1st, 2nd, and 3rd most significant groups of predictors, respectively. Mother's education and household environment are detected as major significant predictors of post-neonatal mortality. This study shows that MC analysis, with or without LR analysis, can be applied to detect determinants with rank, which helps policy makers take initiatives on a priority basis. (author)

10. The number of subjects per variable required in linear regression analyses

NARCIS (Netherlands)

P.C. Austin (Peter); E.W. Steyerberg (Ewout)

2015-01-01

Objectives To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression

11. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

Science.gov (United States)

Tay, Louis; Drasgow, Fritz

2012-01-01

Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

12. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

NARCIS (Netherlands)

Kromhout, D.

2009-01-01

Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

13. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

Science.gov (United States)

Li, Spencer D.

2011-01-01

Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

14. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

Science.gov (United States)

Wu, Dane W.

2002-01-01

The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

15. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

Directory of Open Access Journals (Sweden)

Giuliano de Oliveira Freitas

2013-10-01

Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly allocated to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
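The reported regression reduces to a one-line calculator. The function below simply implements %Success = (-APVratio + 1.00) x 100 from the abstract; the function and variable names are ours, not the authors'.

```python
def percent_success(apv_pre, apv_post):
    """Alpins %Success predicted from Thibos astigmatic power vectors (APV):
    %Success = (-APVratio + 1.00) x 100, with APVratio = postop APV / preop APV."""
    apv_ratio = apv_post / apv_pre
    return (1.0 - apv_ratio) * 100.0
```

For example, a preoperative APV of 2.0 D reduced to 0.5 D postoperatively gives `percent_success(2.0, 0.5)` = 75.0, i.e. three quarters of the astigmatic vector was corrected.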

16. Check-all-that-apply data analysed by Partial Least Squares regression

DEFF Research Database (Denmark)

Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

2015-01-01

are analysed by multivariate techniques. CATA data can be analysed both by setting the CATA as the X and the Y. The former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding...

17. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

DEFF Research Database (Denmark)

Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

2010-01-01

Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise ...... when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application....

18. Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.

Directory of Open Access Journals (Sweden)

David S Boukal

Full Text Available Temperature drives development in insects and other ectotherms because their metabolic rate and growth depend directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread, and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.
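The likelihood machinery behind Dirichlet regression can be sketched in a few lines, assuming SciPy is available. The snippet below writes out the Dirichlet log-likelihood of stage-duration proportions and maximizes it numerically; it fits a single shape vector rather than a full regression with temperature covariates, and all data and shape parameters are synthetic.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def dirichlet_loglik(alpha, P):
    """Log-likelihood of compositions P (rows sum to 1) under Dirichlet(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    per_obs = (gammaln(alpha.sum()) - gammaln(alpha).sum()
               + ((alpha - 1.0) * np.log(P)).sum(axis=1))
    return per_obs.sum()

rng = np.random.default_rng(1)
true_alpha = np.array([4.0, 2.0, 6.0])      # hypothetical stage-duration shares
P = rng.dirichlet(true_alpha, size=300)     # simulated individual-level proportions

# MLE via Nelder-Mead on log(alpha), which keeps the parameters positive
nll = lambda t: -dirichlet_loglik(np.exp(t), P)
res = minimize(nll, x0=np.zeros(3), method="Nelder-Mead")
alpha_hat = np.exp(res.x)
```

A full Dirichlet regression would replace the constant alpha with a (log-linear) function of temperature; the log-likelihood above is the building block either way.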

19. Correlation and regression analyses of genetic effects for different types of cells in mammals under radiation and chemical treatment

International Nuclear Information System (INIS)

Slutskaya, N.G.; Mosseh, I.B.

2006-01-01

Data on genetic mutations under radiation and chemical treatment for different types of cells have been analyzed with correlation and regression analyses. Linear correlations between different genetic effects in sex cells and somatic cells have been found. The results may be extrapolated to the sex cells of humans and mammals. (authors)

20. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

DEFF Research Database (Denmark)

Tybjærg-Hansen, Anne

2009-01-01

Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements...... of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study......-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
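The core regression-calibration idea can be illustrated in the simplest univariate case (the paper itself is multivariate and meta-analytic): independent repeat measurements identify the attenuation factor, and the naive slope is divided by it. All data below are synthetic and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
true_x = rng.normal(0, 1, n)               # long-term "usual" risk factor level
y = 0.5 * true_x + rng.normal(0, 0.5, n)   # outcome with true slope 0.5

# two independent error-prone repeat measurements per person
x1 = true_x + rng.normal(0, 1, n)
x2 = true_x + rng.normal(0, 1, n)

# naive slope from a single measurement is attenuated toward zero
beta_naive = np.cov(x1, y, ddof=1)[0, 1] / np.var(x1, ddof=1)

# attenuation (reliability) factor estimated from the repeats:
# cov(x1, x2) estimates the between-person variance of the true exposure
lam = np.cov(x1, x2, ddof=1)[0, 1] / np.var(x1, ddof=1)

# regression-calibration corrected slope
beta_rc = beta_naive / lam
```

With equal true and error variances the reliability is about 0.5, so the naive slope is roughly halved; dividing by the estimated factor recovers the true association.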

1. Correlation, Regression and Path Analyses of Seed Yield Components in Crambe abyssinica, a Promising Industrial Oil Crop

OpenAIRE

Huang, Banglian; Yang, Yiming; Luo, Tingting; Wu, S.; Du, Xuezhu; Cai, Detian; van Loo, E.N.; Huang, Bangquan

2013-01-01

In the present study, correlation, regression and path analyses were carried out to determine correlations among the agronomic traits and their contributions to seed yield per plant in Crambe abyssinica. Partial correlation analysis indicated that plant height (X1) was significantly correlated with branching height and the number of first branches (P <0.01); branching height (X2) was significantly correlated with pod number of primary inflorescence (P <0.01) and number of secondary branch...

2. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

Science.gov (United States)

Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

2016-12-01

MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I2GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I2GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
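A sketch of the proposed statistic, assuming the usual Cochran's Q construction from the SNP-exposure association estimates: I2GX = (Q - (K - 1))/Q, interpreted as the expected relative dilution of the MR-Egger causal estimate. The numbers below are illustrative, not from any real MR study.

```python
import numpy as np

def i2_gx(gamma_hat, se_gamma):
    """I2GX: expected relative dilution of the MR-Egger causal estimate under
    violation of the 'NO Measurement Error' (NOME) assumption. Q is Cochran's Q
    computed from the K SNP-exposure estimates and their standard errors."""
    w = 1.0 / se_gamma**2
    gamma_bar = np.sum(w * gamma_hat) / np.sum(w)
    Q = np.sum(w * (gamma_hat - gamma_bar)**2)
    K = len(gamma_hat)
    return max(0.0, (Q - (K - 1)) / Q)

# strong, precisely estimated SNP-exposure associations -> I2GX near 1 (little dilution)
gamma = np.array([0.10, 0.20, 0.30])
se = np.array([0.01, 0.01, 0.01])
```

Inflating the standard errors tenfold (weak instruments) drives the statistic toward 0, signalling substantial expected dilution of the MR-Egger estimate.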

3. Adjusted Analyses in Studies Addressing Therapy and Harm: Users' Guides to the Medical Literature.

Science.gov (United States)

Agoritsas, Thomas; Merglen, Arnaud; Shah, Nilay D; O'Donnell, Martin; Guyatt, Gordon H

2017-02-21

Observational studies almost always have bias because prognostic factors are unequally distributed between patients exposed or not exposed to an intervention. The standard approach to dealing with this problem is adjusted or stratified analysis. Its principle is to use measurement of risk factors to create prognostically homogeneous groups and to combine effect estimates across groups.The purpose of this Users' Guide is to introduce readers to fundamental concepts underlying adjustment as a way of dealing with prognostic imbalance and to the basic principles and relative trustworthiness of various adjustment strategies.One alternative to the standard approach is propensity analysis, in which groups are matched according to the likelihood of membership in exposed or unexposed groups. Propensity methods can deal with multiple prognostic factors, even if there are relatively few patients having outcome events. However, propensity methods do not address other limitations of traditional adjustment: investigators may not have measured all relevant prognostic factors (or not accurately), and unknown factors may bias the results.A second approach, instrumental variable analysis, relies on identifying a variable associated with the likelihood of receiving the intervention but not associated with any prognostic factor or with the outcome (other than through the intervention); this could mimic randomization. However, as with assumptions of other adjustment approaches, it is never certain if an instrumental variable analysis eliminates bias.Although all these approaches can reduce the risk of bias in observational studies, none replace the balance of both known and unknown prognostic factors offered by randomization.
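The principle described above, i.e. form prognostically homogeneous strata and then combine stratum-specific effect estimates, can be sketched with inverse-variance weighting. The numbers are hypothetical and this is only the pooling step, not a complete adjustment workflow.

```python
import numpy as np

# hypothetical stratum-specific risk differences (exposed minus unexposed)
# and their standard errors; strata are defined by a prognostic factor
effects = np.array([0.08, 0.05, 0.11])
ses = np.array([0.02, 0.03, 0.04])

# fixed-effect (inverse-variance) pooled estimate across strata
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
```

Each stratum contributes in proportion to its precision, so the pooled estimate leans toward the best-estimated stratum; the pooled standard error is smaller than any single stratum's.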

4. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

Science.gov (United States)

Onder, Seyhan; Mutlu, Mert

2017-09-01

Accidents cause major damage for both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study efficiently examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, and opencast coal mine occupational accident records were used for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used in this study for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. Social facilities-area of surface installations, workshops and opencast mining areas are the areas with the highest probability for accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested for such reported accidents that occurred in 2012 for the ELI in Soma and estimated the probability of exposure to accidents with lost workdays correctly by 70%.

5. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

Directory of Open Access Journals (Sweden)

Kevin D. Cashman

2017-05-01

Full Text Available Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

Science.gov (United States)

Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

2017-01-01

Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259
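A hedged sketch of the underlying dose-response modeling, assuming (as is common in this literature) a ln-shaped intake–25(OH)D curve fitted to individual-level data; the DRV-style quantity is then the intake keeping roughly 97.5% of individuals above a 50 nmol/L threshold. All parameters below are invented, not those of the seven RCTs.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 880
intake = rng.uniform(2, 30, n)                    # total vitamin D intake, ug/day
# assumed ln-shaped dose-response with between-person variability (all invented)
true_a, true_b, sd = 20.0, 12.0, 5.0
d25 = true_a + true_b * np.log(intake) + rng.normal(0, sd, n)

# fit 25(OH)D = a + b*ln(intake) by least squares on the individual-level data
X = np.column_stack([np.ones(n), np.log(intake)])
coef, *_ = np.linalg.lstsq(X, d25, rcond=None)
a, b = coef
resid_sd = np.std(d25 - X @ coef, ddof=2)

# intake keeping ~97.5% of individuals above 50 nmol/L:
# solve a + b*ln(x) - 1.96*resid_sd = 50 for x
intake_975 = np.exp((50.0 + 1.96 * resid_sd - a) / b)
```

The key point of the IPD approach is visible here: the requirement depends on the residual (between-person) spread, which aggregate-data meta-regression cannot capture.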

7. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

Directory of Open Access Journals (Sweden)

Luise A Seeker

Full Text Available Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).
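The fixed part of such a random regression model can be sketched with NumPy's Legendre tools: ages are mapped to [-1, 1] (the domain of the Legendre polynomials), a quadratic Legendre basis is built, and the curve coefficients are recovered by least squares. The profile below is synthetic and noiseless; it is not the cattle data, and the random (per-animal) part of the model is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

# ages in months, standardized to [-1, 1] as required for Legendre polynomials
age = np.linspace(0, 120, 200)
t = 2 * (age - age.min()) / (age.max() - age.min()) - 1

# quadratic Legendre basis (orders 0-2), as used for the fixed age curve
basis = np.column_stack([legendre.legval(t, np.eye(3)[k]) for k in range(3)])

# noiseless synthetic "RLTL" profile with known coefficients, then refit
coef_true = np.array([1.0, -0.2, 0.05])
y = basis @ coef_true
coef_hat, *_ = np.linalg.lstsq(basis, y, rcond=None)
```

In the full mixed model the same basis is attached to a per-animal random effect, which is what lets heritability and genetic correlations vary along the age trajectory.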

8. Assessment of regression models for adjustment of iron status biomarkers for inflammation in children with moderate acute malnutrition in Burkina Faso

DEFF Research Database (Denmark)

Cichon, Bernardette; Ritz, Christian; Fabiansen, Christian

2017-01-01

BACKGROUND: Biomarkers of iron status are affected by inflammation. In order to interpret them in individuals with inflammation, the use of correction factors (CFs) has been proposed. OBJECTIVE: The objective of this study was to investigate the use of regression models as an alternative to the CF...... approach. METHODS: Morbidity data were collected during clinical examinations with morbidity recalls in a cross-sectional study in children aged 6-23 mo with moderate acute malnutrition. C-reactive protein (CRP), α1-acid glycoprotein (AGP), serum ferritin (SF), and soluble transferrin receptor (sTfR) were......TfR with the use of the best-performing model led to a 17-percentage-point increase in iron deficiency. CONCLUSION: Regression analysis is an alternative to adjust SF and may be preferable in research settings, because it can take morbidity and severity

9. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

Directory of Open Access Journals (Sweden)

Željko V. Račić

2010-12-01

Full Text Available This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is a function of the foreign trade balance (on one hand) and of credit cards, i.e. indebtedness of the population on this basis (on the other hand), in the USA (from 1999 to 2008). We used the extended application model, which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended for the study of the compatibility of the data and the model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model was carried out with the use of the PASW Statistics 17 program.

10. Interpreting Mini-Mental State Examination Performance in Highly Proficient Bilingual Spanish-English and Asian Indian-English Speakers: Demographic Adjustments, Item Analyses, and Supplemental Measures.

Science.gov (United States)

Milman, Lisa H; Faroqi-Shah, Yasmeen; Corcoran, Chris D; Damele, Deanna M

2018-04-17

Performance on the Mini-Mental State Examination (MMSE), among the most widely used global screens of adult cognitive status, is affected by demographic variables including age, education, and ethnicity. This study extends prior research by examining the specific effects of bilingualism on MMSE performance. Sixty independent community-dwelling monolingual and bilingual adults were recruited from eastern and western regions of the United States in this cross-sectional group study. Independent sample t tests were used to compare 2 bilingual groups (Spanish-English and Asian Indian-English) with matched monolingual speakers on the MMSE, demographically adjusted MMSE scores, MMSE item scores, and a nonverbal cognitive measure. Regression analyses were also performed to determine whether language proficiency predicted MMSE performance in both groups of bilingual speakers. Group differences were evident on the MMSE, on demographically adjusted MMSE scores, and on a small subset of individual MMSE items. Scores on a standardized screen of language proficiency predicted a significant proportion of the variance in the MMSE scores of both bilingual groups. Bilingual speakers demonstrated distinct performance profiles on the MMSE. Results suggest that supplementing the MMSE with a language screen, administering a nonverbal measure, and/or evaluating item-based patterns of performance may assist with test interpretation for this population.

11. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

Energy Technology Data Exchange (ETDEWEB)

Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

1994-01-01

Multiple regression modeling of monitored building energy use data is often faulted as a means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected and synthetic data sequences representative of daily energy use in large institutional buildings were generated in each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified and preliminary recommendations on the use of either modeling approach formulated. (orig.)
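The MRA-versus-PCA comparison rests on a simple fact: regression on all principal components reproduces the ordinary least-squares fit exactly, while truncating small components is what stabilizes coefficients under collinearity. A self-contained sketch with synthetic "daily energy" data mirroring the study's three drivers (the coefficients and noise levels are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 365
# synthetic daily drivers: temperature, humidity, solar (collinear by design)
temp = rng.normal(15, 8, n)
hum = 0.8 * temp + rng.normal(0, 2, n)      # strongly correlated with temperature
solar = rng.normal(200, 50, n)
X = np.column_stack([temp, hum, solar])
y = 100 + 2.0 * temp + 1.0 * hum + 0.1 * solar + rng.normal(0, 5, n)

# centre, then project onto principal components (SVD of the data matrix)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                          # principal-component scores

# regression on all PCs reproduces the MRA (OLS) fit; dropping the smallest
# components would trade a little bias for much more stable coefficients
yc = y - y.mean()
b_pc, *_ = np.linalg.lstsq(scores, yc, rcond=None)
y_hat_pca = scores @ b_pc + y.mean()
b_ols, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
y_hat_ols = Xc @ b_ols + y.mean()
```

Because the scores span the same column space as the centred regressors, predictions agree to machine precision; the approaches only diverge once components are discarded.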

12. Prevalence and predictors of post-stroke mood disorders: A meta-analysis and meta-regression of depression, anxiety and adjustment disorder.

Science.gov (United States)

2017-07-01

To ascertain the prevalence and predictors of mood disorders, determined by structured clinical interviews (ICD or DSM criteria), in people after stroke. Major electronic databases were searched from inception to June 2016 for studies involving major depression (MDD), minor depression (MnD), dysthymia, adjustment disorder, any depressive disorder and anxiety disorders. Studies were combined using both random and fixed effects meta-analysis and results were stratified as appropriate. Depression was examined on 147 occasions from 2 days to 7 years after stroke (mean 6.87 months; N=33 in acute, N=43 in rehabilitation and N=69 in community/outpatient settings). Across 128 analyses involving 15,573 patients assessed for major depressive disorder (MDD), the point prevalence of depression was 17.7% (95% CI=15.6% to 20.0%). In 65 analyses involving 9720 patients, MnD was present in 13.1% in all settings (95% CI=10.9% to 15.8%). Dysthymia was present in 3.1% (95% CI=2.1% to 5.3%), adjustment disorder in 6.9% (95% CI=4.6% to 9.7%) and anxiety in 9.8% (95% CI=5.9% to 14.8%). Any depressive disorder was present in 33.5% (95% CI=30.3% to 36.8%). The relative risk of any depressive disorder was higher following left (dominant) hemisphere stroke, aphasia, and among people with a family history and past history of mood disorders. Depression, adjustment disorder and anxiety are common after stroke. Risk factors are aphasia, dominant hemispheric lesions and past personal/family history of depression, but not time since stroke. Copyright © 2017. Published by Elsevier Inc.
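Pooling prevalences as done here is typically carried out on the logit scale with a random-effects model; below is a minimal DerSimonian-Laird sketch with invented study-level numbers (not the analyses in this review).

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    w = 1.0 / var
    theta_fe = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fe)**2)            # Cochran's Q
    df = len(theta) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                    # between-study variance
    w_re = 1.0 / (var + tau2)
    return np.sum(w_re * theta) / np.sum(w_re)

# hypothetical study-level depression prevalences and sample sizes
p = np.array([0.15, 0.20, 0.18, 0.12])
n = np.array([200, 150, 300, 250])
logit = np.log(p / (1 - p))
var_logit = 1.0 / (n * p) + 1.0 / (n * (1 - p))      # delta-method variance
pooled = 1.0 / (1.0 + np.exp(-dersimonian_laird(logit, var_logit)))
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and stabilizes the variance for small prevalences; back-transforming gives the pooled proportion.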

13. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

Science.gov (United States)

Laszlo, Sarah; Federmeier, Kara D.

2010-01-01

Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

14. Personal, social, and game-related correlates of active and non-active gaming among Dutch gaming adolescents: survey-based multivariable, multilevel logistic regression analyses.

Science.gov (United States)

Simons, Monique; de Vet, Emely; Chinapaw, Mai JM; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

2014-04-04

Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games-active games-seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8; Pgames (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008) and having brothers/sisters (OR 6.7, CI 2.6-17.1; Pgame engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3; P7 h/wk. Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a correlate of both active and non-active gaming

15. Personal, Social, and Game-Related Correlates of Active and Non-Active Gaming Among Dutch Gaming Adolescents: Survey-Based Multivariable, Multilevel Logistic Regression Analyses

Science.gov (United States)

de Vet, Emely; Chinapaw, Mai JM; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

2014-01-01

Background Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games—active games—seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. Objective The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. Methods A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Results Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters who play active games (OR 6.7, CI 2.6-17.1), and a lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3), and having friends who spend time on non-active gaming. Conclusions Distinct sets of correlates were found for active gaming ≥1 h/wk and non-active gaming >7 h/wk. Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a correlate of both active and non-active gaming.

16. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

Science.gov (United States)

Candel, Math J J M; Van Breukelen, Gerard J P

2010-06-30

Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
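
The roughly 14 per cent inflation the abstract reports can be illustrated with a closed-form relative-efficiency approximation known from related work on equal versus unequal cluster sizes; the formula RE ≈ 1 − CV²·λ(1−λ), the parameter values, and the function names below are illustrative assumptions, not this paper's exact derivation.

```python
import math

# Hedged sketch: approximate relative efficiency (RE) of unequal vs equal
# cluster sizes, RE ~ 1 - CV^2 * lam * (1 - lam), where lam depends on the
# mean cluster size and the intraclass correlation (ICC). The formula is
# an assumption borrowed from related work on quasi-likelihood estimation.

def relative_efficiency(mean_size, cv, icc):
    lam = mean_size * icc / (mean_size * icc + (1 - icc))
    return 1 - cv**2 * lam * (1 - lam)

def clusters_needed(k_equal, mean_size, cv, icc):
    # Inflate the planned number of clusters to repair the efficiency loss.
    return math.ceil(k_equal / relative_efficiency(mean_size, cv, icc))

re = relative_efficiency(mean_size=20, cv=0.7, icc=0.05)
print(f"RE = {re:.3f}")
print(f"clusters needed instead of 30: {clusters_needed(30, 20, 0.7, 0.05)}")
```

With a sizeable coefficient of variation (CV = 0.7) the efficiency loss is on the order of 12-14%, which is consistent in spirit with the abstract's rule of thumb of sampling about 14 per cent more clusters.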

17. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

Directory of Open Access Journals (Sweden)

Abdelfattah M. Selim

2018-03-01

Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between being seropositive with BVDV and all risk factors except species of animal. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes (2.237). Likelihood ratio tests showed a significant drop of the -2LL from univariable to multivariable LR models. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was shown to provide more information for association and prediction purposes relative to univariable LR models and Chi-square tests when more than one predictor is available.
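
A univariable odds ratio of the kind the abstract reports (≈2.237 for cattle versus buffaloes) can be reproduced from a 2×2 table; the counts below are hypothetical, chosen only to approximate the reported 40% and 23% seroprevalences, since the abstract does not give the species split.

```python
import math

# Univariable association from a 2x2 table (illustrative counts assumed;
# a 370/370 cattle/buffalo split of the 740 samples is a guess).
#              seropositive  seronegative
a, b = 148, 222   # cattle
c, d = 85, 285    # buffaloes

odds_ratio = (a * d) / (b * c)
# Woolf (log) method for an approximate 95% confidence interval
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI {lo:.2f}-{hi:.2f}")
```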

18. Structural vascular disease in Africans: performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: the SABPA study

OpenAIRE

Botha, J.; De Ridder, J.H.; Potgieter, J.C.; Steyn, H.S.; Malan, L.

2013-01-01

A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fa...

19. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

Science.gov (United States)

Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

2018-02-01

A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
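
A minimal sketch of the two statistics discussed, Bland-Altman bias with limits of agreement and Deming regression under equal error variances, applied to synthetic paired assay values (the data, and the constant and proportional errors built into them, are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(5, 50, 40)                 # e.g. synthetic analyte levels
x = truth + rng.normal(0, 1.0, truth.size)     # previously validated assay
# New assay with deliberate proportional (1.05x) and constant (+2.0) error:
y = 1.05 * truth + 2.0 + rng.normal(0, 1.0, truth.size)

# Bland-Altman: mean bias and 95% limits of agreement
diff = y - x
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Deming regression (error variance ratio lambda = 1)
sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()

print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f})")
print(f"Deming slope={slope:.3f}, intercept={intercept:.2f}")
```

The Deming slope recovers the proportional error and the intercept the constant error, which R² alone would not separate.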

20. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

Science.gov (United States)

Dyer, Betsey D.; Kahn, Michael J.; LeBlanc, Mark D.

2008-01-01

Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results. PMID:19054742
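
The discrimination step can be sketched with a shallow CART model on two tetramer frequencies; the data below are synthetic stand-ins (the idea that purine-loaded tetramers such as GAGA and AGGA are enriched in thermophiles is taken from the abstract, the numbers are not):

```python
# Hedged sketch: CART separating thermophiles from mesophiles using two
# tetramer frequencies. Frequencies are invented, not the study's 195 genomes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 100
# Synthetic per-genome frequencies for (GAGA, AGGA), assumed higher
# in thermophiles for illustration.
thermo = np.column_stack([rng.normal(0.0012, 2e-4, n),
                          rng.normal(0.0011, 2e-4, n)])
meso = np.column_stack([rng.normal(0.0008, 2e-4, n),
                        rng.normal(0.0007, 2e-4, n)])
X = np.vstack([thermo, meso])
y = np.array([1] * n + [0] * n)   # 1 = thermophile, 0 = mesophile

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print("training accuracy:", tree.score(X, y))
```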

1. An Adjusted Likelihood Ratio Approach Analysing Distribution of Food Products to Assist the Investigation of Foodborne Outbreaks

Science.gov (United States)

Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter

2015-01-01

In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time, space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and on control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat products, in order to draw a more general conclusion about the applicability of the developed approach. PMID:26237468

2. ALTERNATIVE ANALYTICAL DIGESTION SCHEME FOR THE DEFENSE WASTE PROCESSING FACILITY (DWPF) SLURRY RECEIPT AND ADJUSTMENT TANK (SRAT) ANALYSES

International Nuclear Information System (INIS)

Click, D; Coleman, C; Pennebaker, F; Zeigler, K; Edwards, T

2007-01-01

As part of the radioactive sludge batch qualification, Savannah River National Laboratory (SRNL) performs a verification of the digestion methods to be used by the Defense Waste Processing Facility (DWPF) Lab for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) receipt process control samples and SRAT product process control samples. Verification of these methods on Sludge Batch 4 (SB4) radioactive sludge slurry indicated SB4 contains a higher concentration of aluminum (Al) than previous sludge batches. Aluminum plays a direct role in vitrification chemistry. At moderate levels, Al assists in glass forming, but at elevated levels Al can increase the viscosity of the molten glass, which can adversely impact glass production rate and the volume of glass produced by limiting waste loading [3]. Most of the Al present in SB4 is in the form of Al hydroxide, as a mixture of gibbsite [α-aluminum trihydroxide, α-Al(OH)₃] and boehmite (α-aluminum oxyhydroxide, α-AlOOH) in an unknown ratio. Testing done at SRNL indicates gibbsite is soluble at low pH, but boehmite has limited solubility in the acid mixture (DWPF Cold Chem (CC) method, 25 mL nitric acid (HNO₃) and 25 mL hydrofluoric acid (HF)) used by DWPF to digest process control samples. Because Al plays such an important part in vitrification chemistry, it is necessary to have a robust digestion method that will dissolve all forms of Al present in the radioactive sludge while not increasing the analytical lab turnaround time. SRNL initially suggested that the DWPF lab use the sodium peroxide/hydroxide fusion (PF) digestion method [4] to digest SRAT receipt and SRAT product radioactive sludge as an alternative to the acid digestion method, to ensure complete digestion based on results obtained from digesting a SB4 radioactive sample [2]. However, this change may have a significant impact on the DWPF lab analytical turnaround time due to the inefficiency in drying the radioactive sludge contained in a peanut

3. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

Science.gov (United States)

Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

2017-01-01

Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while 29 articles containing 3,204 participants were included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p < 0.05), retention performance (d = 0.27, 95% CI = [0.08, 0.46], p < 0.01), and transfer performance (d = 0.34, 95% CI = [0.12, 0.56], p < 0.01). The subsequent meta-regression analyses showed that d_SCL for cueing significantly predicted d_retention for cueing (β = -0.70, 95% CI = [-1.02, -0.38], p < 0.001), as well as d_transfer for cueing (β = -0.60, 95% CI = [-0.92, -0.28], p < 0.001). Thus, in line with CLT, adding cues in multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better the retention and transfer of multimedia learning.

4. Predictors of success of external cephalic version and cephalic presentation at birth among 1253 women with non-cephalic presentation using logistic regression and classification tree analyses.

Science.gov (United States)

Hutton, Eileen K; Simioni, Julia C; Thabane, Lehana

2017-08-01

Among women with a fetus with a non-cephalic presentation, external cephalic version (ECV) has been shown to reduce the rate of breech presentation at birth and cesarean birth. Compared with ECV at term, beginning ECV prior to 37 weeks' gestation decreases the number of infants in a non-cephalic presentation at birth. The purpose of this secondary analysis was to investigate factors associated with a successful ECV procedure and to present this in a clinically useful format. Data were collected as part of the Early ECV Pilot and Early ECV2 Trials, which randomized 1776 women with a fetus in breech presentation to either early ECV (34-36 weeks' gestation) or delayed ECV (at or after 37 weeks). The outcome of interest was successful ECV, defined as the fetus being in a cephalic presentation immediately following the procedure, as well as at the time of birth. The importance of several factors in predicting successful ECV was investigated using two statistical methods: logistic regression and classification and regression tree (CART) analyses. Among nulliparas, non-engagement of the presenting part and an easily palpable fetal head were independently associated with success. Among multiparas, non-engagement of the presenting part, gestation less than 37 weeks and an easily palpable fetal head were found to be independent predictors of success. These findings were consistent with results of the CART analyses. Regardless of parity, descent of the presenting part was the most discriminating factor in predicting successful ECV and cephalic presentation at birth. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

5. The use of Quality-Adjusted Life Years in cost-effectiveness analyses in palliative care: Mapping the debate through an integrative review.

Science.gov (United States)

Wichmann, Anne B; Adang, Eddy Mm; Stalmeier, Peep Fm; Kristanti, Sinta; Van den Block, Lieve; Vernooij-Dassen, Myrra Jfj; Engels, Yvonne

2017-04-01

6. Operational and biological analyses of branched water-adjustment and combined treatment of wastewater from a chemical industrial park.

Science.gov (United States)

Xu, Ming; Cao, Jiashun; Li, Chao; Tu, Yong; Wu, Haisuo; Liu, Weijing

2018-01-01

The combined biological processes of branched water-adjustment, chemical precipitation, hydrolysis acidification, secondary sedimentation, Anoxic/Oxic and activated carbon treatment were used for chemical industrial wastewater treatment in the Taihu Lake Basin. Full-scale treatment resulted in effluent chemical oxygen demand, total nitrogen, NH₃-N and total phosphorus of 35.1, 5.20, 3.10 and 0.15 mg/L, respectively, with a total removal efficiency of 91.1%, 67.1%, 70.5% and 89.3%, respectively. In this process, short-circuited organic carbon from brewery wastewater was beneficial for denitrification and second-sulfate reduction. The concentration of effluent fluoride was 6.22 mg/L, which also met the primary standard. Gas Chromatography-Mass Spectrometry analysis revealed that many types of refractory compounds were present in the inflow. Microbial community analysis performed in the summer by PCR-denaturing gradient gel electrophoresis and MiSeq demonstrated that certain special functional bacteria, such as denitrificans, phosphorus-accumulating bacteria, sulfate- and perhafnate-reducing bacteria, aromatic compound-degrading bacteria and organic fluoride-degrading bacteria, present in the bio-tanks were responsible for the acceptable specific biological pollutant reduction achieved.

7. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis.

Science.gov (United States)

Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O

2017-03-05

The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for PAH (anthracene) and chiral-PAH analogue derivative (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) analyses is reported. The binding constants (K_b), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-squares (PLS) regression analysis of emission spectra data of Me-β-CD guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD guest-host inclusion complex samples. The values of the calculated K_b and negative ΔG suggest the thermodynamic favorability of the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity behaviors and thermodynamic properties. The PLS regression analysis resulted in squared correlation coefficients of 0.997530 or better and low LODs of 3.81×10⁻⁷ M for anthracene and 3.48×10⁻⁸ M for TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations, with average low errors of 2.31% for anthracene, 4.44% for R-TFE, and 3.60% for S-TFE. The results of the study are highly significant because of the high sensitivity and accuracy of the analysis of PAH and chiral PAH analogue derivatives without the need for an expensive chiral column, enantiomeric resolution, or use of a polarized

8. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

Science.gov (United States)

Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

2018-03-01

In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
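
The iteratively reweighted least-squares core of such an estimator can be sketched for the regression part alone; the AR noise model and the updates of ν and σ² that the full ECM scheme performs are omitted here and those parameters are held fixed, which is an assumption for brevity:

```python
# Hedged sketch: IRLS with Student-t weights w = (nu+1)/(nu + r^2/sigma^2),
# which downweights outlier-afflicted observations.
import numpy as np

rng = np.random.default_rng(3)
t_grid = np.linspace(0, 1, 200)
X = np.column_stack([np.ones_like(t_grid), np.sin(2 * np.pi * t_grid)])
beta_true = np.array([1.0, 2.0])
# Heavy-tailed (t with 3 df) errors stand in for outlier-afflicted noise.
y = X @ beta_true + 0.3 * rng.standard_t(df=3, size=t_grid.size)

nu, sigma2 = 3.0, 0.09     # fixed here; estimated in the full algorithm
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(50):
    r = y - X @ beta
    w = (nu + 1) / (nu + r**2 / sigma2)          # t-model weights
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)
```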

9. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

Science.gov (United States)

Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

2014-11-05

Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low doses regardless of sex (β=-0.8%, 95% C.I. -1.7/0.1, P=0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β=0.7%, 95% C.I. 0.2/1.2, P=0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β=67.9, 95% C.I. 3.4/172.5, P=0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males. Overall, low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.
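
A one-knot linear spline of the broken-stick kind used in such dose-response analyses can be fitted by ordinary least squares on a hinge term; the doses, responses, and knot location below are invented to show an inverted-U-shaped pattern:

```python
# Hedged sketch: piecewise-linear (one-knot) spline fit via least squares.
import numpy as np

doses = np.array([0, 2, 4, 10, 40, 120, 400, 1000], dtype=float)   # invented
response = np.array([10, 14, 17, 18, 16, 13, 11, 9], dtype=float)  # invented
knot = 10.0                                                        # assumed

# Design matrix: intercept, dose, and the hinge term (dose - knot)_+
Xs = np.column_stack([np.ones_like(doses),
                      doses,
                      np.clip(doses - knot, 0, None)])
coef, *_ = np.linalg.lstsq(Xs, response, rcond=None)
slope_low = coef[1]              # slope below the knot
slope_high = coef[1] + coef[2]   # slope above the knot
print(f"slope below knot: {slope_low:.3f}, above knot: {slope_high:.4f}")
```

A positive slope below the knot and a negative slope above it is the signature of a low-dose effect that reverses at higher doses.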

10. Exploring reasons for the observed inconsistent trial reports on intra-articular injections with hyaluronic acid in the treatment of osteoarthritis: Meta-regression analyses of randomized trials.

Science.gov (United States)

Johansen, Mette; Bahrt, Henriette; Altman, Roy D; Bartels, Else M; Juhl, Carsten B; Bliddal, Henning; Lund, Hans; Christensen, Robin

2016-08-01

The aim was to identify factors explaining inconsistent observations concerning the efficacy of intra-articular hyaluronic acid compared to intra-articular sham/control, or non-intervention control, in patients with symptomatic osteoarthritis, based on randomized clinical trials (RCTs). A systematic review and meta-regression analyses of available randomized trials were conducted. The outcome, pain, was assessed according to a pre-specified hierarchy of potentially available outcomes. Hedges's standardized mean difference [SMD (95% CI)] served as effect size. Restricted Maximum Likelihood (REML) mixed-effects models were used to combine study results, and heterogeneity was calculated and interpreted as Tau-squared and I-squared, respectively. Overall, 99 studies (14,804 patients) met the inclusion criteria. Of these, only 71 studies (72%), including 85 comparisons (11,216 patients), had adequate data available for inclusion in the primary meta-analysis. Overall, compared with placebo, intra-articular hyaluronic acid reduced pain with an effect size of -0.39 [-0.47 to -0.31] in favour of intra-articular hyaluronic acid. Based on available trial data, intra-articular hyaluronic acid showed a better effect than intra-articular saline on pain reduction in osteoarthritis. Publication bias and the risk of selective outcome reporting suggest only a small clinical effect compared to saline. Copyright © 2016 Elsevier Inc. All rights reserved.
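
The pooling step behind such an analysis can be sketched with DerSimonian-Laird random-effects weights and the I² statistic; note the paper used REML, which generally gives similar but not identical estimates, and the study effect sizes below are invented:

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of SMDs with I².
import numpy as np

d = np.array([-0.5, -0.1, -0.45, -0.2, -0.9, -0.35])   # invented study SMDs
v = np.array([0.02, 0.03, 0.025, 0.04, 0.05, 0.03])    # sampling variances

w = 1 / v
fixed = np.sum(w * d) / w.sum()
q = np.sum(w * (d - fixed) ** 2)                       # Cochran's Q
dfree = d.size - 1
tau2 = max(0.0, (q - dfree) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)                                  # random-effects weights
pooled = np.sum(w_re * d) / w_re.sum()
i2 = max(0.0, (q - dfree) / q) * 100                   # % heterogeneity
print(f"pooled SMD = {pooled:.3f}, tau^2 = {tau2:.4f}, I^2 = {i2:.1f}%")
```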

11. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

Science.gov (United States)

Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

2017-06-01

12. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

Science.gov (United States)

Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

2017-03-28

This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥ 12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥ 40 years, mean BMI ≥ 30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤6 months) and long term (≥12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour predicted the effect at short and long term. The total number of BCTs in both intervention arms and using the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change; and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy supportive and person-centred methods as in Motivational Interviewing, the BCTs goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between study variations in effects at long term. There are similarities, but also differences in effective BCTs promoting change in healthy eating and physical activity and
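
A meta-regression of this kind reduces to weighted least squares of study effect sizes on a moderator; the moderator here is a hypothetical count of BCTs unique to the intervention, and all numbers are invented:

```python
# Hedged sketch: meta-regression as inverse-variance-weighted least squares.
import numpy as np

n_bcts = np.array([2, 4, 5, 7, 8, 10], dtype=float)    # moderator (invented)
es = np.array([0.10, 0.18, 0.25, 0.30, 0.38, 0.45])    # study effect sizes
var = np.array([0.02, 0.015, 0.02, 0.01, 0.02, 0.015]) # sampling variances

w = 1 / var
Xm = np.column_stack([np.ones_like(n_bcts), n_bcts])
W = np.diag(w)
coef = np.linalg.solve(Xm.T @ W @ Xm, Xm.T @ W @ es)
print(f"intercept={coef[0]:.3f}, slope per extra BCT={coef[1]:.3f}")
```

A positive slope would mean each additional unique BCT is associated with a larger intervention effect, which is the kind of relationship the review tests.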

13. Routine antenatal anti-D prophylaxis in women who are Rh(D negative: meta-analyses adjusted for differences in study design and quality.

Directory of Open Access Journals (Sweden)

Rebecca M Turner

BACKGROUND: To estimate the effectiveness of routine antenatal anti-D prophylaxis for preventing sensitisation in pregnant Rhesus negative women, and to explore whether this depends on the treatment regimen adopted. METHODS: Ten studies identified in a previous systematic literature search were included. Potential sources of bias were systematically identified using bias checklists, and their impact and uncertainty were quantified using expert opinion. Study results were adjusted for biases and combined, first in a random-effects meta-analysis and then in a random-effects meta-regression analysis. RESULTS: In a conventional meta-analysis, the pooled odds ratio for sensitisation was estimated as 0.25 (95% CI 0.18, 0.36), comparing routine antenatal anti-D prophylaxis to control, with some heterogeneity (I² = 19%). However, this naïve analysis ignores substantial differences in study quality and design. After adjusting for these, the pooled odds ratio for sensitisation was estimated as 0.31 (95% CI 0.17, 0.56), with no evidence of heterogeneity (I² = 0%). A meta-regression analysis was performed, which used the data available from the ten anti-D prophylaxis studies to inform us about the relative effectiveness of three licensed treatments. This gave an 83% probability that a dose of 1250 IU at 28 and 34 weeks is most effective and a 76% probability that a single dose of 1500 IU at 28-30 weeks is least effective. CONCLUSION: There is strong evidence for the effectiveness of routine antenatal anti-D prophylaxis for prevention of sensitisation, in support of the policy of offering routine prophylaxis to all non-sensitised pregnant Rhesus negative women. All three licensed dose regimens are expected to be effective.

14. Dual Regression

OpenAIRE

2012-01-01

We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

15. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

Science.gov (United States)

Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

2015-01-01

Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.

16. Regression Phalanxes

OpenAIRE

Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

2017-01-01

Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

17. Univariate and multiple linear regression analyses for 23 single nucleotide polymorphisms in 14 genes predisposing to chronic glomerular diseases and IgA nephropathy in Han Chinese.

Science.gov (United States)

Wang, Hui; Sui, Weiguo; Xue, Wen; Wu, Junyong; Chen, Jiejing; Dai, Yong

2014-09-01

Immunoglobulin A nephropathy (IgAN) is a complex trait regulated by the interaction among multiple physiologic regulatory systems and probably involving numerous genes, which leads to inconsistent findings in genetic studies. One possible reason for the failure to replicate some single-locus results is that the underlying genetics of IgAN is based on multiple genes with minor effects. To study the association between 23 single nucleotide polymorphisms (SNPs) in 14 genes predisposing to chronic glomerular diseases and IgAN in Han males, the 23 SNP genotypes of 21 Han males were detected and analyzed with a BaiO gene chip, and their associations were analyzed with univariate analysis and multiple linear regression analysis. Analysis showed that CTLA4 rs231726 and CR2 rs1048971 revealed a significant association with IgAN. These findings support the multi-gene nature of the etiology of IgAN and propose a potential gene-gene interactive model for future studies.

18. Meta-regression analyses to explain statistical heterogeneity in a systematic review of strategies for guideline implementation in primary health care.

Directory of Open Access Journals (Sweden)

Susanne Unverzagt

Full Text Available This study is an in-depth analysis to explain statistical heterogeneity in a systematic review of implementation strategies to improve guideline adherence of primary care physicians in the treatment of patients with cardiovascular diseases. The systematic review included randomized controlled trials from a systematic search in MEDLINE, EMBASE, CENTRAL, conference proceedings and registers of ongoing studies. Implementation strategies were shown to be effective with substantial heterogeneity of treatment effects across all investigated strategies. The primary aim of this study was to explain different effects of eligible trials and to identify methodological and clinical effect modifiers. Random effects meta-regression models were used to simultaneously assess the influence of multimodal implementation strategies and effect modifiers on physician adherence. Effect modifiers included the staff responsible for implementation, level of prevention and definition of the primary outcome, unit of randomization, duration of follow-up and risk of bias. Six clinical and methodological factors were investigated as potential effect modifiers of the efficacy of different implementation strategies on guideline adherence in primary care practices on the basis of information from 75 eligible trials. Five effect modifiers were able to explain a substantial amount of statistical heterogeneity. Physician adherence was improved by 62% (95% confidence interval (95% CI 29 to 104% or 29% (95% CI 5 to 60% in trials where other non-medical professionals or nurses were included in the implementation process. Improvement of physician adherence was more successful in primary and secondary prevention of cardiovascular diseases by around 30% (30%; 95% CI -2 to 71% and 31%; 95% CI 9 to 57%, respectively) compared to tertiary prevention. This study aimed to identify effect modifiers of implementation strategies on physician adherence. Especially the cooperation of different health

19. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

Science.gov (United States)

Duda, David P.; Minnis, Patrick

2009-01-01

Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
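The kind of logistic contrail-occurrence model described above can be fitted by maximum likelihood. The sketch below uses batch gradient ascent on synthetic observations with a single assumed predictor (relative humidity with respect to ice, centred for numerical conditioning); it is not the actual SURFACE or OUTBREAK model specification.

```python
import math, random

def fit_logistic(X, y, lr=2.0, iters=3000):
    """Logistic regression fitted by batch gradient ascent on the
    log-likelihood. Each row of X carries a leading 1 for the intercept."""
    beta = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(beta)
        for xi, yi in zip(X, y):
            z = sum(b * v for b, v in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-z))            # predicted probability
            for j, v in enumerate(xi):
                grad[j] += (yi - mu) * v
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

random.seed(3)
# Synthetic observations: persistence becomes likely as RHi increases.
# Predictor is centred at RHi = 1.0 (ice saturation) for conditioning.
X, y = [], []
for _ in range(300):
    rhi = random.uniform(0.5, 1.3)
    p_true = 1.0 / (1.0 + math.exp(-(-8.0 + 8.0 * rhi)))
    X.append([1.0, rhi - 1.0])
    y.append(1 if random.random() < p_true else 0)
beta = fit_logistic(X, y)
```

Thresholding the fitted probability (at 0.5 or at the climatological frequency) then turns the model into the dichotomous yes/no forecast evaluated in these papers.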

20. The density, the refractive index and the adjustment of the excess thermodynamic properties by means of the multiple linear regression method for the ternary system ethylbenzene–octane–propylbenzene

International Nuclear Information System (INIS)

Lisa, C.; Ungureanu, M.; Cosmaţchi, P.C.; Bolat, G.

2015-01-01

Graphical abstract: - Highlights: • Thermodynamic properties of the ethylbenzene–octane–propylbenzene system. • Equations with much lower standard deviations in comparison with other models. • The prediction of the V^E based on the refractive index by means of the MLR method. - Abstract: The density (ρ) and the refractive index (n) have been experimentally determined for the ethylbenzene (1)–octane (2)–propylbenzene (3) ternary system over the entire composition range, at three temperatures (298.15, 308.15 and 318.15 K) and a pressure of 0.1 MPa. The excess thermodynamic properties calculated from the experimental determinations have been used to build empirical models which, despite the disadvantage of having a greater number of coefficients, result in much lower standard deviations in comparison with the Redlich–Kister type models. The statistical processing of experimental data by means of the multiple linear regression method (MLR) was used to model the excess thermodynamic properties. Lower standard deviations than the Redlich–Kister type models were also obtained. The adjustment of the excess molar volume (V^E) based on the refractive index by means of the multiple linear regression routine of the SigmaPlot 11.2 program was made for the ethylbenzene (1)–octane (2)–propylbenzene (3) ternary system, obtaining a simple mathematical model which correlates the excess molar volume with the refractive index, the normalized temperature and the composition of the ternary mixture: V^E = A₀ + A₁X₁ + A₂X₂ + A₃(T/298.15) + A₄n, for which the standard deviation is 0.03.
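A least-squares fit of a model with the same shape as V^E = A₀ + A₁X₁ + A₂X₂ + A₃(T/298.15) + A₄n can be sketched via the normal equations. Everything below is synthetic: the coefficients A₀-A₄ are invented, not the paper's fitted values, and the "measurements" are generated from them with small noise.

```python
import random

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X carries a leading 1 for the intercept A0."""
    p = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    a = [row[:] + [b] for row, b in zip(xtx, xty)]     # augmented matrix
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, p):
            f = a[r][col] / a[col][col]
            for c in range(col, p + 1):
                a[r][c] -= f * a[col][c]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (a[r][p] - sum(a[r][c] * beta[c]
                                 for c in range(r + 1, p))) / a[r][r]
    return beta

random.seed(1)
true = [-2.0, 0.5, -0.3, 1.2, 0.8]          # invented A0..A4, not the paper's
rows, vol = [], []
for _ in range(50):
    x1 = random.random()                     # mole fractions (toy values)
    x2 = random.random() * (1 - x1)
    tn = random.choice([298.15, 308.15, 318.15]) / 298.15   # T/298.15
    n = 1.40 + 0.05 * random.random()        # refractive index (toy range)
    row = [1.0, x1, x2, tn, n]
    rows.append(row)
    vol.append(sum(b * v for b, v in zip(true, row)) + random.gauss(0, 0.001))
est = fit_mlr(rows, vol)
```

With low noise the fit recovers the generating coefficients; on real data the residual standard deviation plays the role of the paper's reported 0.03.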

1. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

Science.gov (United States)

Crown, William H

2014-02-01

This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.

2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

Science.gov (United States)

Duda, David P.; Minnis, Patrick

2009-01-01

Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
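The two accuracy measures used here, percent correct (PC) and the Hanssen-Kuipers discriminant (HKD), reduce to simple functions of a 2×2 forecast verification table once a probability threshold turns the logistic output into a yes/no forecast. The counts below are made up for illustration.

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD,
    also known as the Peirce skill score) from a 2x2 verification table."""
    n = hits + misses + false_alarms + correct_negatives
    pc = (hits + correct_negatives) / n
    pod = hits / (hits + misses)                              # prob. of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # false detection
    return pc, pod - pofd

# Invented verification counts for a dichotomous contrail forecast
pc, hkd = verification_scores(hits=40, misses=10,
                              false_alarms=20, correct_negatives=130)
```

HKD rewards discrimination between occurrence and non-occurrence, which is why a climatological-frequency threshold, rather than 0.5, tends to maximise it when the event is rare.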

3. Autistic Regression

Science.gov (United States)

Matson, Johnny L.; Kozlowski, Alison M.

2010-01-01

Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

4. Regression Analysis

CERN Document Server

Freund, Rudolf J; Sa, Ping

2006-01-01

The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analyses to be performed in a given situation, and to gain some appreciation of what constitutes good experimental design.

5. Linear regression

CERN Document Server

Olive, David J

2017-01-01

This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

6. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

Science.gov (United States)

Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

2018-02-01

1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed by applying the regression equations to published Cmax data. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference); the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
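The single-point idea can be illustrated with toy data: fit linear and power regressions of AUCinf on Cmax and compare their RMSE. The concentration pairs below are invented for illustration; they are not the 21 published subject pairs.

```python
import math

def ols1(x, y):
    """Simple least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical Cmax (mg/L) and AUCinf (mg*h/L) pairs, not the published data
cmax = [250, 280, 300, 320, 350, 380, 400]
auc = [11000, 12500, 13400, 14300, 15700, 17100, 18000]

a_lin, b_lin = ols1(cmax, auc)                       # linear: AUC = a + b*Cmax
a_pow, b_pow = ols1([math.log(c) for c in cmax],     # power: ln AUC = ln a + b ln Cmax
                    [math.log(v) for v in auc])

pred_lin = [a_lin + b_lin * c for c in cmax]
pred_pow = [math.exp(a_pow) * c ** b_pow for c in cmax]
```

The log-linear variant is the same one-parameter-pair fit with ln(AUC) regressed on Cmax itself; model choice then comes down to comparing RMSE/MAE as in the paper.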

7. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses

DEFF Research Database (Denmark)

Brok, Jesper; Thorlund, Kristian; Wetterslev, Jørn

2008-01-01

BACKGROUND: Random error may cause misleading evidence in meta-analyses. The required number of participants in a meta-analysis (i.e. information size) should be at least as large as an adequately powered single trial. Trial sequential analysis (TSA) may reduce the risk of random errors due to repetitive testing of accumulating data.

DEFF Research Database (Denmark)

Goutte, Cyril; Larsen, Jan

2000-01-01

Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
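A minimal sketch of the idea, under assumptions of our own (Gaussian product kernel, Nadaraya-Watson regression, leave-one-out cross-validation): enlarging the bandwidth of an irrelevant input dimension, which is what adapting the metric amounts to, lowers the cross-validation estimate of the generalisation error.

```python
import math, random

def nw_predict(x0, X, y, h, skip=None):
    """Nadaraya-Watson estimate at x0 with per-dimension bandwidths h."""
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(X, y)):
        if i == skip:                                  # leave one out
            continue
        d2 = sum(((a - b) / hj) ** 2 for a, b, hj in zip(x0, xi, h))
        w = math.exp(-0.5 * d2)
        num += w * yi
        den += w
    return num / den if den > 0 else 0.0

def loo_mse(X, y, h):
    """Leave-one-out cross-validation estimate of the generalisation error."""
    return sum((yi - nw_predict(xi, X, y, h, skip=i)) ** 2
               for i, (xi, yi) in enumerate(zip(X, y))) / len(y)

random.seed(0)
# Toy data: the response depends on x1 only; x2 is an irrelevant dimension
X = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(100)]
y = [math.sin(4 * x1) + random.gauss(0, 0.1) for x1, _ in X]

iso = loo_mse(X, y, (0.1, 0.1))        # isotropic metric
adapted = loo_mse(X, y, (0.1, 10.0))   # adapted metric: x2 washed out
```

A full implementation would minimise loo_mse over the bandwidths (e.g. by gradient descent) rather than comparing two fixed settings as done here.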

DEFF Research Database (Denmark)

Goutte, Cyril; Larsen, Jan

1998-01-01

Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

DEFF Research Database (Denmark)

M. Gaspar, Raquel; Murgoci, Agatha

2010-01-01

A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

11. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

Science.gov (United States)

Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

2011-12-01

Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches (mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials) to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations. Variables included in the statistical models were limited to those

12. Survival analysis II: Cox regression

NARCIS (Netherlands)

Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

2011-01-01

In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the

13. Differentiating regressed melanoma from regressed lichenoid keratosis.

Science.gov (United States)

Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

2017-04-01

Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

14. Regression: A Bibliography.

Science.gov (United States)

Pedrini, D. T.; Pedrini, Bonnie C.

Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

15. Better Autologistic Regression

Directory of Open Access Journals (Sweden)

Mark A. Wolters

2017-11-01

Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.

NARCIS (Netherlands)

2010-01-01

A method of adjusting a signal processing parameter for a first hearing aid and a second hearing aid forming parts of a binaural hearing aid system to be worn by a user is provided. The binaural hearing aid system comprises a user specific model representing a desired asymmetry between a first ear

17. Vectors, a tool in statistical regression theory

NARCIS (Netherlands)

Corsten, L.C.A.

1958-01-01

Using linear algebra, this thesis developed linear regression analysis including analysis of variance, covariance analysis, special experimental designs, linear and fertility adjustments, and analysis of experiments at different places and times. The determination of the orthogonal projection, yielding

CERN Multimedia

HR Department

2008-01-01

In accordance with decisions taken by the Finance Committee and Council in December 2007, salaries are adjusted with effect from 1 January 2008. Scale of basic salaries and scale of stipends paid to fellows (Annex R A 5 and R A 6 respectively): increased by 0.71% with effect from 1 January 2008. As a result of the stability of the Geneva consumer price index, following elements do not increase: a) Family Allowance, Child Allowance and Infant Allowance (Annex R A 3). b) Reimbursement of education fees: maximum amounts of reimbursement (Annex R A 4.01) for the academic year 2007/2008. Related adjustments will be implemented, wherever applicable, to Paid Associates and Students. As in the past, the actual percentage increase of each salary position may vary, due to the application of a constant step value and the rounding effects. Human Resources Department Tel. 73566

Science.gov (United States)

Harry, Herbert H.

1989-01-01

Apparatus and method for the adjustment and alignment of shafts in high power devices. A plurality of adjacent rotatable angled cylinders are positioned between a base and the shaft to be aligned which when rotated introduce an axial offset. The apparatus is electrically conductive and constructed of a structurally rigid material. The angled cylinders allow the shaft such as the center conductor in a pulse line machine to be offset in any desired alignment position within the range of the apparatus.

International Nuclear Information System (INIS)

Carlson, R.W.; Covic, J.; Leininger, G.

1981-01-01

In a rotating fan beam tomographic scanner there is included an adjustable collimator and shutter assembly. The assembly includes a fan angle collimation cylinder having a plurality of different length slots through which the beam may pass for adjusting the fan angle of the beam. It also includes a beam thickness cylinder having a plurality of slots of different widths for adjusting the thickness of the beam. Further, some of the slots have filter materials mounted therein so that the operator may select from a plurality of filters. Also disclosed is a servo motor system which allows the operator to select the desired fan angle, beam thickness and filter from a remote location. An additional feature is a failsafe shutter assembly which includes a spring biased shutter cylinder mounted in the collimation cylinders. The servo motor control circuit checks several system conditions before the shutter is rendered openable. Further, the circuit cuts off the radiation if the shutter fails to open or close properly. A still further feature is a reference radiation intensity monitor which includes a tuning-fork shaped light conducting element having a scintillation crystal mounted on each tine. The monitor is placed adjacent the collimator between it and the source with the pair of crystals to either side of the fan beam.

2. Reduced Rank Regression

DEFF Research Database (Denmark)

Johansen, Søren

2008-01-01

The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

3. Regression analysis with categorized regression calibrated exposure: some interesting findings

Directory of Open Access Journals (Sweden)

Hjartåker Anette

2006-07-01

Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
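The finding that the corrected estimates can be analytically equal to the naive ones has a simple mechanism when calibration is linear: E[X|W] is then a monotone transform of W, so categorizing the calibrated exposure reproduces the naive categories exactly. A sketch of this point, under the simplifying assumption of known normal error variance:

```python
import random, statistics

random.seed(42)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]        # true exposure
w = [xi + random.gauss(0, 0.7) for xi in x]       # error-prone measurement

# Linear regression calibration with known variances (centred variables):
# E[X | W] = lam * W, with lam = var(X) / (var(X) + var(U))
lam = 1.0 / (1.0 + 0.7 ** 2)
x_cal = [lam * wi for wi in w]

def quintile(values):
    """Assign each value its quintile index (0-4) within the sample."""
    cuts = statistics.quantiles(values, n=5)
    return [sum(v > c for c in cuts) for v in values]

# Calibration is a monotone transform, so quintile membership is unchanged:
same = quintile(w) == quintile(x_cal)
```

An analysis on the categorized calibrated exposure therefore inherits the misclassification of the observed exposure, whereas substituting the category medians, as the authors recommend, breaks this equivalence.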

4. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

Science.gov (United States)

Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

2016-07-01

Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations.

DEFF Research Database (Denmark)

Selmer, Jan

2006-01-01

Western business expatriates in China. Three sociocultural adjustment variables were examined: general, interaction, and work adjustment. Although a negative relationship was hypothesized between cultural novelty and the three adjustment variables, results of the hierarchical multiple regression analysis...

6. Vanadium NMR Chemical Shifts of (Imido)vanadium(V) Dichloride Complexes with Imidazolin-2-iminato and Imidazolidin-2-iminato Ligands: Cooperation with Quantum-Chemical Calculations and Multiple Linear Regression Analyses.

Science.gov (United States)

Yi, Jun; Yang, Wenhong; Sun, Wen-Hua; Nomura, Kotohiro; Hada, Masahiko

2017-11-30

The NMR chemical shifts of vanadium ( 51 V) in (imido)vanadium(V) dichloride complexes with imidazolin-2-iminato and imidazolidin-2-iminato ligands were calculated by the density functional theory (DFT) method with GIAO. The calculated 51 V NMR chemical shifts were analyzed by the multiple linear regression (MLR) analysis (MLRA) method with a series of calculated molecular properties. Some of the calculated NMR chemical shifts were incorrect when the optimized molecular geometries from the X-ray structures were used. After the global minimum geometries of all of the molecules were determined, the trend of the observed chemical shifts was well reproduced by the present DFT method. The MLRA method was performed to investigate the correlation between the 51 V NMR chemical shift and the natural charge, band energy gap, and Wiberg bond index of the V═N bond. The 51 V NMR chemical shifts obtained with the present MLR model were well reproduced, with a correlation coefficient of 0.97.

7. Acculturative Stress, Parental and Professor Attachment, and College Adjustment in Asian International Students

Science.gov (United States)

Han, Suejung; Pistole, M. Carole; Caldwell, Jarred M.

2017-01-01

This study examined parental and professor attachment as buffers against acculturative stress and as predictors of college adjustment of 210 Asian international students (AISs). Moderated hierarchical regression analyses revealed that acculturative stress negatively and secure parental and professor attachment positively predicted academic…

8. Regression analysis by example

CERN Document Server

Chatterjee, Samprit

2012-01-01

Praise for the Fourth Edition: "This book is ... an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

9. Quantile Regression Methods

DEFF Research Database (Denmark)

Fitzenberger, Bernd; Wilke, Ralf Andreas

2015-01-01

Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
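The "check" (pinball) loss that defines quantile regression can be illustrated in a few lines. This sketch (simulated data, NumPy only; not from the paper) verifies that the constant minimizing the mean pinball loss at level τ is the τ-th sample quantile:

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss underlying quantile regression."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(1)
y = rng.exponential(1.0, 20_000)          # right-skewed outcome

tau = 0.9
grid = np.linspace(0.0, 6.0, 601)         # candidate constants, spacing 0.01
loss = np.array([pinball(y - c, tau).mean() for c in grid])
c_star = grid[loss.argmin()]

# the constant minimizing mean pinball loss is the tau-th sample quantile
print(c_star, np.quantile(y, tau))
```

Replacing the constant with a linear function of regressors and minimizing the same loss gives linear quantile regression.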

11. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

Science.gov (United States)

Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

2009-04-01

12. Introduction to regression graphics

CERN Document Server

Cook, R Dennis

2009-01-01

Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques such as plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

13. Alternative Methods of Regression

CERN Document Server

Birkes, David

2011-01-01

Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

14. College student engaging in cyberbullying victimization: cognitive appraisals, coping strategies, and psychological adjustments.

Science.gov (United States)

Na, Hyunjoo; Dancy, Barbara L; Park, Chang

2015-06-01

The study's purpose was to explore whether frequency of cyberbullying victimization, cognitive appraisals, and coping strategies were associated with psychological adjustments among college student cyberbullying victims. A convenience sample of 121 students completed questionnaires. Linear regression analyses found frequency of cyberbullying victimization, cognitive appraisals, and coping strategies respectively explained 30%, 30%, and 27% of the variance in depression, anxiety, and self-esteem. Frequency of cyberbullying victimization and approach and avoidance coping strategies were associated with psychological adjustments, with avoidance coping strategies being associated with all three psychological adjustments. Interventions should focus on teaching cyberbullying victims to not use avoidance coping strategies. Copyright © 2015 Elsevier Inc. All rights reserved.

15. On logistic regression analysis of dichotomized responses.

Science.gov (United States)

Lu, Kaifeng

2017-01-01

16. Equações de regressão para estimar valores energéticos do grão de trigo e seus subprodutos para frangos de corte, a partir de análises químicas Regression equations to evaluate the energy values of wheat grain and its by-products for broiler chickens from chemical analyses

Directory of Open Access Journals (Sweden)

F.M.O. Borges

2003-12-01

...which indicated little influence of the methodology on this measure. NDF did not prove to be a better predictor of ME than CF. One experiment was run with broiler chickens to obtain prediction equations for metabolizable energy (ME) based on feedstuffs' chemical analyses and the determined ME of wheat grain and its by-products, using four different methodologies. Seven wheat grain by-products were used in five treatments: wheat grain, wheat germ, white wheat flour, dark wheat flour, wheat bran for human use, wheat bran for animal use, and rough wheat bran. Based on chemical analyses of crude fiber (CF), ether extract (EE), crude protein (CP), ash (AS), and starch (ST) of the feeds and the determined values of apparent energy (MEA), true energy (MEV), apparent corrected energy (MEAn), and true energy corrected by nitrogen balance (MEVn) in five treatments, prediction equations were obtained using the stepwise procedure. CF showed the best relationship with metabolizable energy values; however, this variable alone was not enough for a good estimate of the energy values (R² below 0.80). When EE and CP were included in the equations, R² increased to 0.90 or higher in most estimates. When the equations were calculated with all treatments, the equations for MEA were less precise and R² decreased. When ME data of the traditional or force-feeding methods were used separately, the precision of the equations increased (R² higher than 0.85). For MEV and MEVn values, the best multiple linear equations included CF, EE, and CP (R² > 0.90), independently of using all experimental data or separating by methodology. The estimates of MEVn values showed high precision, and the linear coefficients (a) of the equations were similar for all treatments and methodologies, which explains the small influence of the different methodologies on this parameter. NDF was not a better predictor of ME than CF.

17. A Simulation Investigation of Principal Component Regression.

Science.gov (United States)

Allen, David E.

Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

18. Boosted beta regression.

Directory of Open Access Journals (Sweden)

Matthias Schmid

Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

19. Understanding logistic regression analysis.

Science.gov (United States)

Sperandei, Sandro

2014-01-01

Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
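As a minimal illustration of the odds-ratio interpretation, the following sketch (simulated data; a hand-rolled Newton-Raphson fit in NumPy rather than a statistical package) estimates an adjusted odds ratio for a binary exposure while controlling for a second covariate:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Logistic regression by Newton-Raphson (IRLS); returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X' diag(W) X)^{-1} X'(y - p)
        beta += np.linalg.solve(X.T * W @ X, X.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
n = 50_000
exposure = rng.integers(0, 2, n)
age = rng.normal(0.0, 1.0, n)
logit = -1.0 + 0.7 * exposure + 0.3 * age        # true log-odds model
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), exposure, age])
beta = logistic_fit(X, y)
print(np.exp(beta[1]))   # adjusted odds ratio for the exposure, ≈ exp(0.7) ≈ 2.0
```

Exponentiating a coefficient converts it from a log-odds difference into the odds ratio discussed in the abstract, here adjusted for the other covariate in the model.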

20. Applied linear regression

CERN Document Server

Weisberg, Sanford

2013-01-01

Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

1. Applied logistic regression

CERN Document Server

Hosmer, David W; Sturdivant, Rodney X

2013-01-01

A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

2. Multilingual speaker age recognition: regression analyses on the Lwazi corpus

CSIR Research Space (South Africa)

Feld, M

2009-12-01

Full Text Available Multilinguality represents an area of significant opportunities for automatic speech-processing systems: whereas multilingual societies are commonplace, the majority of speechprocessing systems are developed with a single language in mind. As a step...

3. Understanding poisson regression.

Science.gov (United States)

Hayat, Matthew J; Higgins, Melinda

2014-04-01

Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
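A bare-bones version of the ideas above (simulated data; NumPy-only Newton iterations, not the ENSPIRE analysis) fits a log-linear Poisson model and checks the Pearson dispersion statistic, which is close to 1 when the Poisson assumption holds and well above 1 under overdispersion:

```python
import numpy as np

def poisson_fit(X, y, iters=50):
    """Poisson regression (log link) via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        # score X'(y - mu); Hessian X' diag(mu) X
        beta += np.linalg.solve(X.T * mu @ X, X.T @ (y - mu))
    return beta

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(0, 1, n)
mu_true = np.exp(0.2 + 0.5 * x)
y = rng.poisson(mu_true)                 # count outcome

X = np.column_stack([np.ones(n), x])
beta = poisson_fit(X, y)
mu_hat = np.exp(X @ beta)
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - X.shape[1])
print(beta, dispersion)   # coefficients ≈ [0.2, 0.5]; dispersion ≈ 1 for true Poisson
```

When the dispersion statistic is well above 1, the abstract's suggested remedies apply: add an overdispersion parameter (quasi-Poisson) or move to negative binomial regression.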

4. Vector regression introduced

Directory of Open Access Journals (Sweden)

Mok Tik

2014-06-01

Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
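The complex-number representation of vector regression can be sketched directly with NumPy, whose least-squares routine handles complex matrices. In this toy example (all values invented; not the paper's typhoon data) a single complex coefficient simultaneously encodes the scaling and rotation applied to the independent vector variable:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# represent 2-D vectors as complex numbers: (east, north) -> east + i*north
z = rng.normal(size=n) + 1j * rng.normal(size=n)      # independent vector variable
a_true = 0.8 * np.exp(1j * np.pi / 6)                 # scale by 0.8, rotate by 30 degrees
b_true = 1.0 - 0.5j                                   # constant offset vector
noise = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))
w = a_true * z + b_true + noise                       # dependent vector variable

# complex least squares: the fitted coefficients are themselves vectors
A = np.column_stack([z, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, w, rcond=None)
print(abs(coef[0]), np.degrees(np.angle(coef[0])))    # ≈ 0.8 and ≈ 30 degrees
```

The modulus and argument of the recovered coefficient are exactly the scaling and rotation, which is the property that scalar-coefficient multivariate regression cannot express.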

5. Multicollinearity and Regression Analysis

Science.gov (United States)

Daoud, Jamal I.

2017-12-01

In regression analysis it is expected to have correlation between the response and the predictor(s), but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, such as the available historical data and experience; in the end, the selection of the most important predictors is a judgment left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Inflated standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
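A standard way to quantify this inflation is the variance inflation factor (VIF). The sketch below (simulated data, NumPy only; not from the paper) computes VIF = 1/(1 - R²) for each predictor, where R² comes from regressing that predictor on the others; two nearly collinear predictors show VIFs near 100 while an independent predictor stays near 1:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) from regressing
    X[:, j] on the remaining predictors (with an intercept)."""
    y = X[:, j]
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    coef, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ coef
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(5)
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)        # nearly collinear with x1
x3 = rng.normal(size=n)                   # independent predictor
X = np.column_stack([x1, x2, x3])

print([round(vif(X, j), 1) for j in range(3)])  # x1, x2 inflated (≈100); x3 ≈ 1
```

A common rule of thumb flags VIF values above 5 or 10 as signs of problematic multicollinearity.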

6. Minimax Regression Quantiles

DEFF Research Database (Denmark)

Bache, Stefan Holst

A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question.

7. riskRegression

DEFF Research Database (Denmark)

Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

2017-01-01

In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product [...] functionals. The software presented here is implemented in the riskRegression package.

8. Prediction, Regression and Critical Realism

DEFF Research Database (Denmark)

Næss, Petter

2004-01-01

This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

9. Quantum algorithm for linear regression

Science.gov (United States)

Wang, Guoming

2017-07-01

We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

10. Multiple linear regression analysis

Science.gov (United States)

Edwards, T. R.

1980-01-01

Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
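The FORTRAN program itself is not reproduced here, but the stepwise idea it describes can be sketched as a greedy forward selection (NumPy; the stopping rule below is a simplified relative-SSE threshold rather than the program's confidence-level test):

```python
import numpy as np

def fit_sse(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return coef, r @ r

def forward_select(X, y, min_drop=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    reduces the residual sum of squares; stop when the relative drop is small."""
    n, p = X.shape
    chosen, design = [], np.ones((n, 1))          # start with intercept only
    _, sse = fit_sse(design, y)
    while len(chosen) < p:
        best = min((fit_sse(np.column_stack([design, X[:, j]]), y)[1], j)
                   for j in range(p) if j not in chosen)
        if (sse - best[0]) / sse < min_drop:
            break                                 # remaining predictors add little
        sse = best[0]
        chosen.append(best[1])
        design = np.column_stack([design, X[:, best[1]]])
    return chosen

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 6))
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + rng.normal(size=n)  # only columns 1 and 4 matter

print(sorted(forward_select(X, y)))   # → [1, 4]
```

The final model retains only the predictors that pass the threshold, mirroring the "most statistically significant coefficients" behavior described above.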

11. Bayesian logistic regression analysis

NARCIS (Netherlands)

Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

2012-01-01

In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

12. Linear Regression Analysis

CERN Document Server

Seber, George A F

2012-01-01

Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

13. Nonlinear Regression with R

CERN Document Server

Ritz, Christian; Parmigiani, Giovanni

2009-01-01

R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

14. Bayesian ARTMAP for regression.

Science.gov (United States)

Sasu, L M; Andonie, R

2013-10-01

Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single-epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

15. Bounded Gaussian process regression

DEFF Research Database (Denmark)

Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

2013-01-01

We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...... with the proposed explicit noise-model extension....

16. and Multinomial Logistic Regression

African Journals Online (AJOL)

This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

17. Mechanisms of neuroblastoma regression

Science.gov (United States)

Brodeur, Garrett M.; Bagatell, Rochelle

2014-01-01

Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

18. Introduction to the use of regression models in epidemiology.

Science.gov (United States)

Bender, Ralf

2009-01-01

Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

19. Acculturation, personality, and psychological adjustment.

Science.gov (United States)

2011-12-01

Two studies investigated relationships between traditional indicators of acculturation, cultural distance, acculturation strategies, and basic dimensions of personality as they pertain to psychological adjustment among Hispanic students. Although personality characteristics have been shown to be important determinants of psychological well-being, acculturation research has put less emphasis on the role of personality in the well-being of immigrants. Hierarchical regression analysis showed that basic dimensions of personality such as extraversion and neuroticism were strongly related to psychological adjustment. Acculturation strategies did not mediate the effect of personality variables, but cultural resistance made a small, independent contribution to the explanation of some aspects of negative psychological adjustment. The implications of the results were discussed.

20. Time series regression model for infectious disease and weather.

Science.gov (United States)

Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

2015-10-01

1. Ridge Regression Signal Processing

Science.gov (United States)

Kuhl, Mark R.

1990-01-01

The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
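Apart from the recursive estimator derived in the thesis, the basic ridge estimator has a simple closed form, (X'X + λI)⁻¹X'y. The sketch below (simulated data, NumPy only; not the GPS/EKF simulation) shows it stabilizing coefficients under near-collinear "poor geometry":

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)       # near-collinear: "poor geometry"
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(ols, ridge(X, y, 1.0))   # individual OLS coefficients can swing wildly;
                               # ridge stays near the true [1, 1]
```

The regularization term λI bounds the inverse even when X'X is nearly singular, which is the same mechanism the recursive ridge estimator exploits under poor satellite geometry.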

2. Subset selection in regression

CERN Document Server

Miller, Alan

2002-01-01

Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition:A separate chapter on Bayesian methodsComplete revision of the chapter on estimationA major example from the field of near infrared spectroscopyMore emphasis on cross-validationGreater focus on bootstrappingStochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible Software available on the Internet for implementing many of the algorithms presentedMore examplesSubset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

Science.gov (United States)

Kernberg, O F

1979-02-01

The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and his specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

4. Classification and regression trees

CERN Document Server

Breiman, Leo; Olshen, Richard A; Stone, Charles J

1984-01-01

The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

5. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances

OpenAIRE

KELEŞ, Taliha; ALTUN, Murat

2016-01-01

Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was the trivial presentation of the equation for orthogonal regression (OR) and the comparison of classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were shown by an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa...
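The comparison the abstract describes can be reproduced on synthetic data: orthogonal regression fitted via total least squares (the first principal axis of the centred point cloud) attains a smaller sum of squared perpendicular distances than the classical vertical-distance fit. A sketch, with all data simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

# Classical linear regression: minimises vertical distances.
b_ols, a_ols = np.polyfit(x, y, 1)

# Orthogonal regression via total least squares: the fitted line passes
# through the centroid along the first principal axis of the centred data.
xc, yc = x - x.mean(), y - y.mean()
_, _, Vt = np.linalg.svd(np.column_stack([xc, yc]))
vx, vy = Vt[0]                    # direction of the OR line
b_or = vy / vx
a_or = y.mean() - b_or * x.mean()

def sum_sq_perp(a, b):
    """Sum of squared perpendicular distances to the line y = a + b*x."""
    return np.sum((y - a - b * x) ** 2) / (1.0 + b ** 2)

print(sum_sq_perp(a_ols, b_ols), sum_sq_perp(a_or, b_or))
```

Since orthogonal regression minimises the perpendicular criterion by construction, its sum is never larger than that of the classical fit, matching the study's finding.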

6. Logistic regression models

CERN Document Server

Hilbe, Joseph M

2009-01-01

This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

7. MEVSİMSEL DÜZELTMEDE KULLANILAN İSTATİSTİKİ YÖNTEMLER ÜZERİNE BİR İNCELEME-AN ANALYSE ON STATISTICAL METHODS WHICH ARE USED FOR SEASONAL ADJUSTMENT

Directory of Open Access Journals (Sweden)

Handan YOLSAL

2012-06-01

Full Text Available This paper's aim is to introduce the most commonly applied seasonal adjustment programs developed by official statistical agencies for time series. These programs fall into two main groups. One is the CENSUS II X-11 family, first developed by the NBER, which uses moving-average filters; this family includes the X-11 ARIMA and X-12 ARIMA techniques. The other is the TRAMO/SEATS program, a model-based approach developed by the Bank of Spain. The paper discusses the seasonal decomposition procedures of these techniques, some special effects they accommodate such as trading-day and calendar effects, their advantages and disadvantages, and their forecasting performances.

8. Goal pursuit, goal adjustment, and affective well-being following lower limb amputation

OpenAIRE

Coffey, Laura; Gallagher, Pamela; Desmond, Deirdre; Ryall, Nicola

2014-01-01

Objectives. This study examined the relationships between tenacious goal pursuit (TGP), flexible goal adjustment (FGA), and affective well-being in a sample of individuals with lower limb amputations. Design. Cross-sectional, quantitative. Methods. Ninety-eight patients recently admitted to a primary prosthetic rehabilitation programme completed measures of TGP, FGA, positive affect, and negative affect. Results. Hierarchical regression analyses revealed that TGP and FGA accounted fo...

9. Steganalysis using logistic regression

Science.gov (United States)

Lubenko, Ivans; Ker, Andrew D.

2011-02-01

We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.

10. SEPARATION PHENOMENA LOGISTIC REGRESSION

Directory of Open Access Journals (Sweden)

Ikaro Daniel de Carvalho Barreto

2014-03-01

Full Text Available This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to separation phenomena. Separation generates bias in the estimation, yields different interpretations of the estimates under the different statistical tests (Wald, Likelihood Ratio and Score), and produces different estimates under the different iterative methods (Newton-Raphson and Fisher Scoring). The paper also presents an example that demonstrates the direct implications for the validation of the model and of its variables, and for the estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we briefly present the Firth correction for circumventing the separation phenomena.
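The separation phenomenon is easy to reproduce: on completely separated data the logistic likelihood has no finite maximiser, so the slope estimate grows without bound the longer the optimiser runs. A minimal numpy sketch (plain gradient ascent rather than Newton-Raphson or Fisher scoring, and synthetic data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Completely separated data: x < 0 always gives y = 0, x > 0 gives y = 1.
x = np.concatenate([rng.uniform(-2.0, -0.1, 20), rng.uniform(0.1, 2.0, 20)])
y = (x > 0).astype(float)
X = np.column_stack([np.ones(40), x])

def fit_logistic(X, y, iters):
    """Plain gradient ascent on the logistic log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ beta, -30.0, 30.0)   # guard against overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))
        beta = beta + 0.5 * X.T @ (y - p) / len(y)
    return beta

# Under separation the MLE does not exist: the slope keeps growing
# the longer the optimiser is allowed to run.
b_short = fit_logistic(X, y, 200)
b_long = fit_logistic(X, y, 5000)
print(b_short[1], b_long[1])
```

This divergence is what inflates Wald standard errors and motivates the Firth correction mentioned in the abstract.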

11. riskRegression

DEFF Research Database (Denmark)

Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

2017-01-01

In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface. … As a by-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical …

12. Adjustment of geochemical background by robust multivariate statistics

Science.gov (United States)

Zhou, D.

1985-01-01

Conventional analyses of exploration geochemical data assume that the background is a constant or slowly changing value, equivalent to a plane or a smoothly curved surface. However, it is better to regard the geochemical background as a rugged surface, varying with changes in geology and environment. This rugged surface can be estimated from observed geological, geochemical and environmental properties by using multivariate statistics. A method of background adjustment was developed and applied to groundwater and stream sediment reconnaissance data collected from the Hot Springs Quadrangle, South Dakota, as part of the National Uranium Resource Evaluation (NURE) program. Source-rock lithology appears to be a dominant factor controlling the chemical composition of groundwater or stream sediments. The most efficacious adjustment procedure is to regress uranium concentration on selected geochemical and environmental variables for each lithologic unit, and then to delineate anomalies by a common threshold set as a multiple of the standard deviation of the combined residuals. Robust versions of regression and RQ-mode principal components analysis techniques were used rather than ordinary techniques to guard against distortion caused by outliers. Anomalies delineated by this background adjustment procedure correspond with uranium prospects much better than do anomalies delineated by conventional procedures. The procedure should be applicable to geochemical exploration at different scales for other metals. © 1985.
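A toy sketch of the adjustment procedure described above: regress the element concentration on a covariate within each lithologic unit, pool the residuals, and flag values beyond a common multiple of the combined residual standard deviation. Everything here is simulated, and ordinary rather than robust regression is used, with no principal-components step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the NURE data: uranium concentration driven by a
# geochemical covariate, with a different background level per lithology.
n = 200
lith = rng.integers(0, 3, size=n)           # three lithologic units
covar = rng.normal(size=n)
base = np.array([1.0, 3.0, 5.0])[lith]
uranium = base + 0.8 * covar + rng.normal(scale=0.3, size=n)
uranium[:5] += 4.0                          # a few planted anomalies

# Regress uranium on the covariate separately within each lithologic unit,
# then pool the residuals across units.
residuals = np.empty(n)
for unit in np.unique(lith):
    m = lith == unit
    Xu = np.column_stack([np.ones(m.sum()), covar[m]])
    beta, *_ = np.linalg.lstsq(Xu, uranium[m], rcond=None)
    residuals[m] = uranium[m] - Xu @ beta

# Anomalies: residuals beyond a common threshold, here 3 standard
# deviations of the combined residuals.
threshold = 3.0 * residuals.std()
anomalies = np.flatnonzero(residuals > threshold)
print(anomalies)
```

The per-unit regression removes the lithology-dependent background, so the planted anomalies stand out against a single pooled threshold.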

13. Is the Relationship Between Marital Adjustment and Parenting Stress Mediated or Moderated by Parenting Alliance?

Directory of Open Access Journals (Sweden)

Elena Camisasca

2014-05-01

14. Aid and growth regressions

DEFF Research Database (Denmark)

Hansen, Henrik; Tarp, Finn

2001-01-01

This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy. … investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.

15. Spatial correlation in Bayesian logistic regression with misclassification

DEFF Research Database (Denmark)

Bihrmann, Kristine; Toft, Nils; Nielsen, Søren Saxmose

2014-01-01

Standard logistic regression assumes that the outcome is measured perfectly. In practice, this is often not the case, which could lead to biased estimates if not accounted for. This study presents Bayesian logistic regression with adjustment for misclassification of the outcome applied to data...

Data.gov (United States)

Department of Housing and Urban Development — The Department of Housing and Urban Development establishes the rent adjustment factors - called Annual Adjustment Factors (AAFs) - on the basis of Consumer Price...

17. Modified Regression Correlation Coefficient for Poisson Regression Model

Science.gov (United States)

Kaengthong, Nattacha; Domthong, Uthumporn

2017-09-01

This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and with multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and the root mean square error (RMSE).
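The regression correlation coefficient the abstract discusses is simply the correlation between Y and the fitted E(Y|X). A sketch with simulated Poisson data, fitting the GLM by iteratively reweighted least squares in plain numpy; the paper's proposed modification is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated Poisson-regression data with two predictors.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.4, -0.3])
y = rng.poisson(np.exp(X @ beta_true))

# Fit the Poisson GLM (log link) by iteratively reweighted least squares.
beta = np.zeros(X.shape[1])
for _ in range(50):
    mu = np.exp(X @ beta)
    # Newton step: (X' W X)^(-1) X' (y - mu) with W = diag(mu).
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

mu_hat = np.exp(X @ beta)

# Regression correlation coefficient: correlation between the response
# and its fitted conditional mean E(Y|X).
r = np.corrcoef(y, mu_hat)[0, 1]
print(r)
```

For Poisson data the coefficient is well below 1 even with the true model, because Poisson noise caps how strongly Y can track its conditional mean.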

18. Neutron spectrum adjustment. The role of covariances

International Nuclear Information System (INIS)

Remec, I.

1992-01-01

The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. Adjusted exposure rates are found to be only slightly affected by the covariances of measured reaction rates and activation cross sections, while the multigroup spectra covariances were found important. Approximate spectra covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)

CERN Multimedia

Finance Division

2001-01-01

On 15 June 2001 the Council approved the correction of the discrepancy identified in the net salary adjustment implemented on 1st January 2001 by retroactively increasing the scale of basic salaries to achieve the 2.8% average net salary adjustment approved in December 2000. We should like to inform you that the corresponding adjustment will be made to your July salary. Full details of the retroactive adjustments will consequently be shown on your pay slip.

20. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

Science.gov (United States)

Bonellie, Sandra R

2012-10-01

To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term singleton births from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are of low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.

1. Measurement Error in Education and Growth Regressions

NARCIS (Netherlands)

Portela, M.; Teulings, C.N.; Alessie, R.

The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

2. Measurement error in education and growth regressions

NARCIS (Netherlands)

Portela, Miguel; Teulings, Coen; Alessie, R.

2004-01-01

The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

3. Panel data specifications in nonparametric kernel regression

DEFF Research Database (Denmark)

Czekaj, Tomasz Gerard; Henningsen, Arne

parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

4. Canonical variate regression.

Science.gov (United States)

Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

2016-07-01

In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

5. Case mix adjusted variation in cesarean section rate in Sweden.

Science.gov (United States)

Mesterton, Johan; Ladfors, Lars; Ekenberg Abreu, Anna; Lindgren, Peter; Saltvedt, Sissel; Weichselbraun, Marianne; Amer-Wåhlin, Isis

2017-05-01

Cesarean section (CS) rate is a well-established indicator of performance in maternity care and is also related to resource use. Case mix adjustment of CS rates when performing comparisons between hospitals is important. The objective of this study was to estimate case mix adjusted variation in CS rate between hospitals in Sweden. In total, 139 756 deliveries in 2011 and 2012 were identified in administrative systems in seven regions covering 67% of all deliveries in Sweden. Data were linked to the Medical birth register and population data. Twenty-three different sociodemographic and clinical characteristics were used for adjustment. Analyses were performed for the entire study population as well as for two subgroups. Logistic regression was used to analyze differences between hospitals. The overall CS rate was 16.9% (hospital minimum-maximum 12.1-22.6%). Significant variations in CS rate between hospitals were observed after case mix adjustment: hospital odds ratios for CS varied from 0.62 (95% CI 0.53-0.73) to 1.45 (95% CI 1.37-1.52). In nulliparous, cephalic, full-term, singletons the overall CS rate was 14.3% (hospital minimum-maximum: 9.0-19.0%), whereas it was 4.7% for multiparous, cephalic, full-term, singletons with no previous CS (hospital minimum-maximum: 3.2-6.7%). In both subgroups significant variations were observed in case mix adjusted CS rates. Significant differences in CS rate between Swedish hospitals were found after adjusting for differences in case mix. This indicates a potential for fewer interventions and lower resource use in Swedish childbirth care. Best practice sharing and continuous monitoring are important tools for improving childbirth care. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

6. Negative life events and school adjustment among Chinese nursing students: The mediating role of psychological capital.

Science.gov (United States)

Liu, Chunqin; Zhao, Yuanyuan; Tian, Xiaohong; Zou, Guiyuan; Li, Ping

2015-06-01

7. Polynomial regression analysis and significance test of the regression function

International Nuclear Information System (INIS)

Gao Zhengming; Zhao Juan; He Shengping

2012-01-01

In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and deduces its parameters with the ordinary least squares estimate. A significance test method for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
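A minimal illustration of the workflow described: fit a polynomial by ordinary least squares and test the overall significance of the regression function with the usual F statistic. The decay-heat numbers below are invented for illustration; the statistic would be compared against an F(p, n-p-1) critical value:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical decay-heat-style data: a smooth trend in time plus noise
# (the paper's actual isotope data are not given in the abstract).
t = np.linspace(0.0, 10.0, 40)
power = 5.0 - 0.8 * t + 0.05 * t ** 2 + rng.normal(scale=0.1, size=40)

# Fit a quadratic by ordinary least squares; np.polyfit solves the same
# normal equations as multivariable linear regression on (t, t^2).
degree = 2
coeffs = np.polyfit(t, power, degree)
fitted = np.polyval(coeffs, t)

# Overall significance test of the regression function:
# F = (SSR / p) / (SSE / (n - p - 1)).
n, p = len(t), degree
ssr = np.sum((fitted - power.mean()) ** 2)
sse = np.sum((power - fitted) ** 2)
F = (ssr / p) / (sse / (n - p - 1))
print(F)   # compare against the F(p, n-p-1) critical value
```

Treating the powers of t as separate regressors is exactly the similarity to multivariable linear regression that the abstract exploits for the significance test.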

8. Recursive Algorithm For Linear Regression

Science.gov (United States)

Varanasi, S. V.

1988-01-01

The order of the model is determined easily. The linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits a set of data satisfactorily.
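For comparison, the best-known recursive formulation of linear regression updates the estimate one observation at a time (recursive least squares); the abstract's recursion over model order is related but distinct. A numpy sketch, on synthetic data, showing that the observation-wise recursion reproduces the batch solution:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=n)

# Recursive least squares: update (theta, P) one observation at a time
# instead of re-solving the normal equations from scratch.
theta = np.zeros(2)
P = 1e6 * np.eye(2)                     # large initial covariance (diffuse prior)
for xi, yi in zip(X, y):
    k = P @ xi / (1.0 + xi @ P @ xi)    # gain vector
    theta = theta + k * (yi - xi @ theta)
    P = P - np.outer(k, xi @ P)

# The recursion reproduces the batch least-squares solution.
theta_batch, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta, theta_batch)
```

Each update costs O(p^2) rather than the O(p^3) of re-solving the normal equations, which is the same duplicate-work saving the abstract claims for its order recursion.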

9. Analysis of Palm Oil Production, Export, and Government Consumption to Gross Domestic Product of Five Districts in West Kalimantan by Panel Regression

Science.gov (United States)

Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.

2017-04-01

Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, consisting of cross-section and time series data. Panel regression is a tool which can be utilised to analyse panel data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM). The model is chosen based on the results of the Chow Test, the Hausman Test and the Lagrange Multiplier Test. This research uses panel regression to analyse the effect of palm oil production, export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang. Based on the results of the analyses, it is concluded that REM, whose adjusted coefficient of determination is 0.823, is the best model in this case. Also, according to the results, only export and government consumption influence the GDP of the districts.

Directory of Open Access Journals (Sweden)

Gamze Arman

2009-12-01

Full Text Available Expatriation is a widely studied area of research in work and organizational psychology. After expatriates accomplish their missions in host countries, they return to their countries and this process is called repatriation. Adjustment constitutes a crucial part in repatriation research. In the present literature review, research about repatriation adjustment was reviewed with the aim of defining the whole picture in this phenomenon. Present research was classified on the basis of a theoretical model of repatriation adjustment. Basic frame consisted of antecedents, adjustment, outcomes as main variables and personal characteristics/coping strategies and organizational strategies as moderating variables.

11. Combining Alphas via Bounded Regression

Directory of Open Access Journals (Sweden)

2015-11-01

Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.

12. Regression in autistic spectrum disorders.

Science.gov (United States)

Stefanatos, Gerry A

2008-12-01

A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsitivity, but often entails both. This paper critically reviews the phenomena of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

13. Linear regression in astronomy. I

Science.gov (United States)

Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

1990-01-01

Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
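Three of the five fits discussed (OLS of Y on X, OLS of X on Y, and their bisector) can be computed in a few lines; the bisector slope formula below follows Isobe et al. (1990), applied here to synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(scale=0.8, size=100)

sxx = np.sum((x - x.mean()) ** 2)
syy = np.sum((y - y.mean()) ** 2)
sxy = np.sum((x - x.mean()) * (y - y.mean()))

b1 = sxy / sxx          # OLS(Y|X) slope
b2 = syy / sxy          # OLS(X|Y) slope, expressed in y = a + b*x form

# OLS bisector: the line bisecting the angle between the two OLS lines.
b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
a3 = y.mean() - b3 * x.mean()
print(b1, b3, b2)
```

Because b1/b2 equals the squared correlation, the bisector slope always falls between the two OLS slopes, giving the symmetric treatment of the variables that the paper recommends.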

14. Severity-Adjusted Mortality in Trauma Patients Transported by Police

Science.gov (United States)

Band, Roger A.; Salhi, Rama A.; Holena, Daniel N.; Powell, Elizabeth; Branas, Charles C.; Carr, Brendan G.

2018-01-01

Study objective Two decades ago, Philadelphia began allowing police transport of patients with penetrating trauma. We conduct a large, multiyear, citywide analysis of this policy. We examine the association between mode of out-of-hospital transport (police department versus emergency medical services [EMS]) and mortality among patients with penetrating trauma in Philadelphia. Methods This is a retrospective cohort study of trauma registry data. Patients who sustained any proximal penetrating trauma and presented to any Level I or II trauma center in Philadelphia between January 1, 2003, and December 31, 2007, were included. Analyses were conducted with logistic regression models and were adjusted for injury severity with the Trauma and Injury Severity Score and for case mix with a modified Charlson index. Results Four thousand one hundred twenty-two subjects were identified. Overall mortality was 27.4%. In unadjusted analyses, patients transported by police were more likely to die than patients transported by ambulance (29.8% versus 26.5%; OR 1.18; 95% confidence interval [CI] 1.00 to 1.39). In adjusted models, no significant difference was observed in overall mortality between the police department and EMS groups (odds ratio [OR] 0.78; 95% CI 0.61 to 1.01). In subgroup analysis, patients with severe injury (Injury Severity Score >15) (OR 0.73; 95% CI 0.59 to 0.90), patients with gunshot wounds (OR 0.70; 95% CI 0.53 to 0.94), and patients with stab wounds (OR 0.19; 95% CI 0.08 to 0.45) were more likely to survive if transported by police. Conclusion We found no significant overall difference in adjusted mortality between patients transported by the police department compared with EMS but found increased adjusted survival among 3 key subgroups of patients transported by police. This practice may augment traditional care. PMID:24387925

15. Advanced statistics: linear regression, part I: simple linear regression.

Science.gov (United States)

Marill, Keith A

2004-01-01

Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

DEFF Research Database (Denmark)

2009-01-01

An adjustable microchip holder for holding a microchip is provided having a plurality of displaceable interconnection pads for connecting the connection holes of a microchip with one or more external devices or equipment. The adjustable microchip holder can fit different sizes of microchips...

17. Sickness presence, sick leave and adjustment latitude

Directory of Open Access Journals (Sweden)

Joachim Gerich

2014-10-01

Full Text Available Objectives: Previous research on the association between adjustment latitude (defined as the opportunity to adjust work efforts in case of illness) and sickness absence and sickness presence has produced inconsistent results. In particular, low adjustment latitude has been identified as both a risk factor and a deterrent of sick leave. The present study uses an alternative analytical strategy with the aim of joining these results together. Material and Methods: Using a cross-sectional design, a random sample of employees covered by the Upper Austrian Sickness Fund (N = 930) was analyzed. Logistic and ordinary least square (OLS) regression models were used to examine the association between adjustment latitude and days of sickness absence, sickness presence, and an estimator for the individual sickness absence and sickness presence propensity. Results: A high level of adjustment latitude was found to be associated with a reduced number of days of sickness absence and sickness presence, but an elevated propensity for sickness absence. Conclusions: Employees with high adjustment latitude experience fewer days of health complaints, associated with lower rates of sick leave and sickness presence, compared to those with low adjustment latitude. In case of illness, however, high adjustment latitude is associated with a higher probability of taking sick leave rather than sickness presence.

18. Linear regression in astronomy. II

Science.gov (United States)

Feigelson, Eric D.; Babu, Gutti J.

1992-01-01

A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

DEFF Research Database (Denmark)

Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

2008-01-01

An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
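
The linear optimization formulation mentioned here can be sketched for a static fit. This is a minimal hypothetical example using `scipy.optimize.linprog` on simulated data, not the authors' simplex-updating scheme: minimize the check loss tau*sum(u) + (1-tau)*sum(v) subject to X @ beta + u - v = y with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: median (tau = 0.5) regression of a simple linear model.
rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), x])
tau = 0.5
p = X.shape[1]

# Decision variables: beta split into beta+ - beta- (both >= 0), then u, v.
c = np.concatenate([np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * p + 2 * n))
beta = res.x[:p] - res.x[p:2 * p]
```

The recovered `beta` approximates the conditional median line; the time-adaptive algorithm in the paper instead updates such a solution as new observations arrive.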

20. The contribution of personality traits and academic and social adjustment to life satisfaction and depression in college freshmen

Directory of Open Access Journals (Sweden)

Sanja Smojver-Ažić

2010-11-01

The aim of this study is to investigate the contribution of personality traits and of students' academic and social adjustment to college to their overall life satisfaction and depression. A sample of 492 freshmen completed a battery of measures. Hierarchical regression analyses were applied to analyze the contribution of the predictor variables to life satisfaction and depression in male and female students. After controlling for personality traits, college adjustment made a significant contribution to student depression and life satisfaction. Optimism had a significant protective role for male, but not for female, students.

1. Retro-regression--another important multivariate regression improvement.

Science.gov (United States)

Randić, M

2001-01-01

We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

2. Quantile regression theory and applications

CERN Document Server

Davino, Cristina; Vistocco, Domenico

2013-01-01

A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on validity of the model and diagnostic tools. Each methodological aspect is explored and

3. Logistic regression applied to natural hazards: rare event logistic regression with replications

Directory of Open Access Journals (Sweden)

M. Guns

2012-06-01

Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.
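
The replication idea can be sketched with a toy Monte Carlo: refit a logistic regression on bootstrap resamples and check how stable a candidate controlling factor remains across resamples. Everything below (the data, the ridge-stabilized Newton fitter, the replication count) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def fit_logistic(X, y, lam=1e-2, iters=25):
    """Newton-Raphson logistic regression with a small ridge penalty,
    which keeps the solve stable on resamples with few events."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        grad = X.T @ (y - p) - lam * beta
        H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(X.shape[1])
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(1)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
# Rare events: the -3 intercept gives roughly 5-10% positives; x2 is noise.
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-3.0 + 1.5 * x1))))

# "Replications": refit on bootstrap resamples and record the sign of the
# coefficient of the true controlling factor x1.
signs = []
for _ in range(100):
    idx = rng.integers(0, n, n)
    signs.append(np.sign(fit_logistic(X[idx], y[idx])[1]))
stability = np.mean(np.array(signs) == 1.0)
```

A factor whose coefficient keeps its sign across nearly all resamples is a robust selection; a factor whose sign flips frequently is the sample-dependent case the abstract warns about.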

4. Logistic regression applied to natural hazards: rare event logistic regression with replications

Science.gov (United States)

Guns, M.; Vanacker, V.

2012-06-01

Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.

OpenAIRE

Masao Ueki; Kaoru Fueda

2007-01-01

This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

Science.gov (United States)

Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.

1991-01-01

Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.

7. Panel Smooth Transition Regression Models

DEFF Research Database (Denmark)

González, Andrés; Terasvirta, Timo; Dijk, Dick van

We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

8. Testing discontinuities in nonparametric regression

KAUST Repository

Dai, Wenlin

2017-01-19

In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

9. Testing discontinuities in nonparametric regression

KAUST Repository

Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

2017-01-01

In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

10. Logistic Regression: Concept and Application

Science.gov (United States)

Cokluk, Omay

2010-01-01

The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
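
One concrete fact that makes the group-membership interpretation tangible: with a single binary predictor, the fitted logistic regression slope equals the log odds ratio of the 2x2 table. A minimal sketch with hypothetical counts:

```python
import numpy as np

# Hypothetical 2x2 table: rows = group (exposed / unexposed),
# columns = outcome (event / no event).
a, b = 30, 70   # exposed: 30 events, 70 non-events
c, d = 10, 90   # unexposed: 10 events, 90 non-events

# With one binary predictor x, logistic regression fits
#   logit P(y=1|x) = b0 + b1*x,
# and the maximum-likelihood slope b1 is exactly the log odds ratio.
b1 = np.log((a * d) / (b * c))
b0 = np.log(c / d)                      # log odds in the unexposed group

# Group-membership probability recovered from the model for the exposed group:
p_exposed = 1.0 / (1.0 + np.exp(-(b0 + b1)))
```

Here `p_exposed` reproduces the raw proportion a / (a + b) = 0.3, which is the classification-by-group idea of the abstract in its simplest form.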

11. Appraisal of transplant-related stressors, coping strategies, and psychosocial adjustment following kidney transplantation.

Science.gov (United States)

Pisanti, Renato; Lombardo, Caterina; Luszczynska, Aleksandra; Poli, Luca; Bennardi, Linda; Giordanengo, Luca; Berloco, Pasquale Bartolomeo; Violani, Cristiano

2017-10-01

12. The association between adjustment disorder diagnosed at psychiatric treatment facilities and completed suicide

DEFF Research Database (Denmark)

Gradus, Jaimie L; Qin, Ping; Lincoln, Alisa K

2010-01-01

Adjustment disorder is a diagnosis given following a significant psychosocial stressor from which an individual has difficulty recovering. The individual's reaction to this event must exceed what would be observed among similar people experiencing the same stressor. Adjustment disorder is associated with suicidal ideation and suicide attempt; however, the association between adjustment disorder and completed suicide has yet to be examined. The current study is a population-based case-control study examining this association in the population of Denmark aged 15 to 90 years. All suicides in Denmark from 1994 to 2006 were included, resulting in 9,612 cases. For each case, up to 30 controls were matched on gender, exact date of birth, and calendar time, yielding 199,306 controls. An adjustment disorder diagnosis was found in 7.6% of suicide cases and 0.52% of controls. Conditional logistic regression analyses revealed that those diagnosed with adjustment disorder had 12 times the rate of suicide of those without an adjustment disorder diagnosis, after controlling for history of depression diagnosis, marital status, income, and the matched factors.

13. Fungible weights in logistic regression.

Science.gov (United States)

Jones, Jeff A; Waller, Niels G

2016-06-01

In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

14. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

International Nuclear Information System (INIS)

Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

2007-01-01

Regression analysis, especially the ordinary least squares method, which assumes that errors are confined to the dependent variable, has seen a fair share of applications in aerosol science. The ordinary least squares approach, however, can be problematic because atmospheric data often do not lend themselves to calling one variable independent and the other dependent: errors often exist in both measurements. In this work, we examine two regression approaches available to accommodate this situation, orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
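
The contrast among the three estimators can be sketched numerically. The simulation below is hypothetical (not from this paper) and uses the standard slope formulas: OLS slope r*sy/sx, geometric mean slope sign(r)*sy/sx, and the orthogonal (total least squares) slope from the principal axis of the covariance matrix:

```python
import numpy as np

# Errors-in-both-variables data: the latent relationship has slope 2.
rng = np.random.default_rng(2)
t = rng.uniform(0, 10, 200)             # latent "true" values
x = t + rng.normal(0, 1.0, 200)         # measurement error in x
y = 2.0 * t + rng.normal(0, 2.0, 200)   # measurement error in y

sx, sy = x.std(ddof=1), y.std(ddof=1)
r = np.corrcoef(x, y)[0, 1]

slope_ols = r * sy / sx                 # attenuated toward zero by x-error
slope_gmr = np.sign(r) * sy / sx        # geometric mean regression

# Orthogonal regression: slope of the principal axis of the covariance matrix.
cov = np.cov(x, y)
evals, evecs = np.linalg.eigh(cov)
v = evecs[:, np.argmax(evals)]
slope_orth = v[1] / v[0]
```

With error in x, the OLS slope underestimates the latent slope, while the geometric mean slope sits between the OLS slope and its inverse-regression counterpart; which estimator is least biased depends on the error variance ratio, which is the paper's point.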

15. Tumor regression patterns in retinoblastoma

International Nuclear Information System (INIS)

Zafar, S.N.; Siddique, S.N.; Zaheer, N.

2016-01-01

To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record response to the treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) of tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

16. Regression to Causality : Regression-style presentation influences causal attribution

DEFF Research Database (Denmark)

Bordacconi, Mats Joe; Larsen, Martin Vinæs

2014-01-01

Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation.

17. Complex regression Doppler optical coherence tomography

Science.gov (United States)

Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.

2018-04-01

We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (˜100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm / s to 374 μm / s, whereas the maximum velocity of 400 mm / s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.
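
The core idea, replacing a two-point phase difference with a regression over a dense block of samples, can be sketched independently of the OCT hardware. The numbers below are hypothetical, and the paper's actual method operates on complex OCT signals in polar coordinates rather than a pre-unwrapped phase:

```python
import numpy as np

# A Doppler shift appears as a linear phase ramp across A-lines.
rng = np.random.default_rng(5)
n = 64
t = np.arange(n)
true_slope = 0.05                        # phase shift per A-line (rad)
phase = true_slope * t + rng.normal(0, 0.2, n)

# Conventional estimate: difference of two adjacent samples (high variance).
two_point = phase[1] - phase[0]

# Regression estimate: least-squares slope over the whole dense block.
A = np.column_stack([np.ones(n), t])
coef, *_ = np.linalg.lstsq(A, phase, rcond=None)
regressed = coef[1]
```

The regression slope averages over all n samples, so its standard error shrinks roughly as n^(-3/2) compared with the fixed, noise-limited variance of the two-point difference, which is why the regression approach lowers the minimum detectable velocity.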

18. Augmenting Data with Published Results in Bayesian Linear Regression

Science.gov (United States)

de Leeuw, Christiaan; Klugkist, Irene

2012-01-01

In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

19. Predicting Word Reading Ability: A Quantile Regression Study

Science.gov (United States)

McIlraith, Autumn L.

2018-01-01

Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…

20. Advanced statistics: linear regression, part II: multiple linear regression.

Science.gov (United States)

Marill, Keith A

2004-01-01

The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
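
The dependence of the multivariate coefficients on the choice of predictors, stressed in this abstract, can be sketched directly. The simulated data below are hypothetical:

```python
import numpy as np

# Two correlated predictors; the true model is y = 1 + 2*x1 + 1*x2 + noise.
rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # correlated with x1
y = 1.0 + 2.0 * x1 + 1.0 * x2 + rng.normal(0, 0.5, n)

# Full multiple regression recovers both coefficients.
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Dropping x2 shifts the x1 coefficient toward 2 + 1*0.8 = 2.8,
# because x1 now absorbs the effect of the omitted correlated predictor.
X_reduced = np.column_stack([np.ones(n), x1])
beta_reduced, *_ = np.linalg.lstsq(X_reduced, y, rcond=None)
```

This is the univariate-versus-multivariate contrast the abstract describes: the simple regression coefficient of x1 is "mathematically correct but clinically misleading" as an estimate of its effect with x2 held fixed.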

1. Logic regression and its extensions.

Science.gov (United States)

Schwender, Holger; Ruczinski, Ingo

2010-01-01

Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
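
The flavor of the search over Boolean combinations can be conveyed with a deliberately tiny toy: exhaustively score AND-combinations of binary predictors against a binary outcome. Real logic regression searches logic trees with simulated annealing inside a generalized linear model, so this hypothetical sketch only illustrates the kind of term being sought:

```python
import numpy as np
from itertools import combinations

# Simulated SNP-like binary predictors; the outcome depends on X0 AND X2.
rng = np.random.default_rng(4)
n, p = 500, 6
X = rng.integers(0, 2, size=(n, p))
signal = X[:, 0] & X[:, 2]
y = np.where(signal, rng.binomial(1, 0.9, n), rng.binomial(1, 0.1, n))

# Exhaustive search over pairwise AND terms for the best-matching one.
best = None
for i, j in combinations(range(p), 2):
    term = X[:, i] & X[:, j]
    acc = np.mean(term == y)
    if best is None or acc > best[0]:
        best = (acc, i, j)
```

The winning term recovers the interacting pair, which is the "Boolean combination that best explains the variability in the outcome" that logic regression formalizes for larger pools of predictors and arbitrary logic trees.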

2. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar.

Science.gov (United States)

Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald

2006-11-01

We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.

3. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study

Directory of Open Access Journals (Sweden)

Tania Dehesh

2015-01-01

Background. The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

4. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

Science.gov (United States)

Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

2015-01-01

The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least square (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

5. Relationship between Parenting Styles and Marital Adjustment of ...

African Journals Online (AJOL)

The data obtained from these instruments were subjected to multiple regression analysis using SPSS and the results showed that there was a low, positive and significant relationship between authoritative parenting style and marital adjustment. The relationship between authoritarian parenting style and marital adjustment ...

6. Abstract Expression Grammar Symbolic Regression

Science.gov (United States)

Korns, Michael F.

This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

7. Quantile Regression With Measurement Error

KAUST Repository

Wei, Ying; Carroll, Raymond J.

2009-01-01

The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

8. From Rasch scores to regression

DEFF Research Database (Denmark)

Christensen, Karl Bang

2006-01-01

Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

9. Testing Heteroscedasticity in Robust Regression

Czech Academy of Sciences Publication Activity Database

Kalina, Jan

2011-01-01

Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others: GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

10. Regression methods for medical research

CERN Document Server

Tai, Bee Choo

2013-01-01

Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demands an understanding of more complex and sophisticated analytic procedures.The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

11. Forecasting with Dynamic Regression Models

CERN Document Server

Pankratz, Alan

2012-01-01

One of the most widely used tools in statistical forecasting, single equation regression models is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the auto correlation patterns of regression disturbance. It also includes six case studies.

12. Convexity Adjustments for ATS Models

DEFF Research Database (Denmark)

Murgoci, Agatha; Gaspar, Raquel M.

As a result we classify convexity adjustments into forward adjustments and swap adjustments. We then focus on affine term structure (ATS) models and, in this context, conjecture that convexity adjustments should be related to affine functionals. In the case of forward adjustments, we show how to obtain exact...

DEFF Research Database (Denmark)

Quitzau, Maj-Britt; Jensen, Jens Stissing; Elle, Morten

2013-01-01

The endogenous agency that urban governments increasingly portray by making conscious and planned efforts to adjust the regimes they operate within is currently not well captured in transition studies. There is a need to acknowledge the ambiguity of regime enactment at the urban scale. This direc...

14. Logistic regression for dichotomized counts.

Science.gov (United States)

Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

2016-12-01

Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
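The mismatch between the count model and the logistic model for its zero indicator can be checked numerically: if Y is Poisson with a log-linear mean, then P(Y > 0 | x) = 1 − exp(−μ(x)), a complementary log-log rather than a logistic curve, which is one reason ordinary logistic regression on the dichotomized outcome can lose efficiency. A minimal numpy sketch of that identity (the coefficients are illustrative, not the trial's):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
x = rng.normal(size=n)
mu = np.exp(-0.3 + 0.7 * x)        # Poisson mean, log-linear in x
y = rng.poisson(mu)
positive = y > 0

# If Y ~ Poisson(mu(x)), then P(Y > 0 | x) = 1 - exp(-mu(x)): a cloglog law.
bins = np.digitize(x, [-1.0, 0.0, 1.0])
for b in range(4):
    m = bins == b
    empirical = float(positive[m].mean())
    implied = float((1.0 - np.exp(-mu[m])).mean())
    print(round(empirical, 3), round(implied, 3))
```

Within each bin of x, the empirical proportion of positive counts matches the Poisson-implied probability, not a logistic curve fitted in x.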

15. Attachment style and adjustment to divorce.

Science.gov (United States)

Yárnoz-Yaben, Sagrario

2010-05-01

Divorce is becoming increasingly widespread in Europe. In this study, I present an analysis of the role played by attachment style (secure, dismissing, preoccupied and fearful, plus the dimensions of anxiety and avoidance) in adaptation to divorce. Participants comprised divorced parents (N = 40) from a medium-sized city in the Basque Country. The results reveal a lower proportion of people with secure attachment in the sample group of divorcees. Attachment style and dependence (emotional and instrumental) are closely related. I also found associations between measures that indicated poor adjustment to divorce and the preoccupied and fearful attachment styles. Adjustment is related to a dismissing attachment style and to the avoidance dimension. Multiple regression analysis confirmed that secure attachment and the avoidance dimension predicted adjustment to divorce and positive affectivity, while preoccupied attachment and the anxiety dimension predicted negative affectivity. Implications for research and interventions with divorcees are discussed.

16. Producing The New Regressive Left

DEFF Research Database (Denmark)

Crone, Christine

members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...

17. The stress-buffering effects of hope on adjustment to multiple sclerosis.

Science.gov (United States)

2014-12-01

Hope is an important resource for coping with chronic illness; however, the role of hope in adjusting to multiple sclerosis (MS) has been neglected, and the mechanisms by which hope exerts beneficial impacts are not well understood. This study aims to examine the direct and stress-moderating effects of dispositional hope and its components (agency and pathways) on adjustment to MS. A total of 296 people with MS completed questionnaires at time 1 and, 12 months later, at time 2. Focal predictors were stress, hope, agency and pathways, and the adjustment outcomes were anxiety, depression, positive affect, positive states of mind and life satisfaction. Results of regression analyses showed that, as predicted, greater hope was associated with better adjustment after controlling for the effects of time 1 adjustment and relevant demographic and illness variables. However, these direct effects of hope were subsumed by stress-buffering effects. Regarding the hope components, the beneficial impacts of agency emerged via a direct-effects mechanism, whereas the effects of pathways were evidenced via a moderating mechanism. Findings highlight hope as an important protective resource for coping with MS and accentuate the roles of both agency and pathways thinking and their different modes of influence in this process.

18. A Matlab program for stepwise regression

Directory of Open Access Journals (Sweden)

Yanhong Qi

2016-03-01

Full Text Available The stepwise linear regression is a multi-variable regression technique for identifying statistically significant variables in the linear regression equation. In the present study, we present a Matlab program for stepwise regression.
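The paper's program is in Matlab, but the core of forward stepwise selection is easy to sketch. Below is a hedged Python/numpy version: greedily add the predictor whose inclusion most reduces the residual sum of squares while its partial F-statistic exceeds an entry threshold. The threshold `f_in`, the helper `sse`, and the simulated data are illustrative choices, not the authors' code:

```python
import numpy as np

def forward_stepwise(X, y, f_in=4.0):
    """Forward stepwise linear regression: at each step add the candidate
    predictor that most reduces SSE, while its partial F exceeds f_in."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def sse(cols):
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    current = sse([])
    while remaining:
        best_sse, best_j = min((sse(selected + [j]), j) for j in remaining)
        df_resid = n - len(selected) - 2          # intercept + candidate terms
        f_stat = (current - best_sse) / (best_sse / df_resid)
        if f_stat < f_in:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current = best_sse
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)   # only column 1 matters
print(forward_stepwise(X, y))
```

On this toy data the truly informative column is selected first; noise columns rarely clear the threshold.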

19. Correlation and simple linear regression.

Science.gov (United States)

Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

2003-06-01

In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
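The two coefficients compared in the tutorial differ only in that Spearman's rho is the Pearson correlation computed on ranks, which is why it captures monotone nonlinear association that Pearson's r understates. A small numpy sketch with illustrative data (not the CT study's):

```python
import numpy as np

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks (no ties assumed)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

x = np.linspace(1.0, 10.0, 50)
y = np.exp(x)          # strictly monotone but highly nonlinear in x
print(round(float(spearman(x, y)), 3), round(float(pearson(x, y)), 3))
```

Spearman's rho is exactly 1 for any strictly increasing relationship, while Pearson's r is noticeably below 1 here because the relationship is far from linear.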

20. Regression filter for signal resolution

International Nuclear Information System (INIS)

Matthes, W.

1975-01-01

The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma ray spectrum or a Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure as a computer programme, which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
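The unfolding step described above — expressing a measured spectrum as a weighted sum of library spectra — is an ordinary least-squares problem once the library is stacked as a design matrix. A minimal numpy sketch with synthetic spectra (all shapes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_components = 128, 3

# Library of component spectra (one column per constituent).
S = rng.random((n_channels, n_components))
w_true = np.array([5.0, 2.0, 0.5])                    # mixing weights
measured = S @ w_true + rng.normal(scale=0.05, size=n_channels)  # + noise

# Unfolding the mixture reduces to multiple linear regression on the library.
w_hat, *_ = np.linalg.lstsq(S, measured, rcond=None)
print(np.round(w_hat, 2))
```

With many channels and modest noise, the recovered weights are close to the true mixing proportions; a stepwise variant would simply add library spectra one at a time, as in the procedure the entry describes.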

1. Nonparametric Mixture of Regression Models.

Science.gov (United States)

Huang, Mian; Li, Runze; Wang, Shaoli

2013-07-01

Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.
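The EM idea behind mixture-of-regression fitting can be sketched in its simplest parametric form: two linear components with a shared noise scale. This is an analogue of, not the paper's kernel-based estimator for, the procedure described above; the initial values and data are guesses for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 600
x = rng.uniform(-2.0, 2.0, size=n)
comp = rng.random(n) < 0.5
y = np.where(comp, 1.0 + 2.0 * x, -1.0 - 2.0 * x) + rng.normal(scale=0.3, size=n)

A = np.column_stack([np.ones(n), x])
betas = np.array([[0.5, 1.0], [-0.5, -1.0]])   # rough initial guesses
sigma, pi = 1.0, 0.5
for _ in range(60):
    # E-step: responsibility of component 0 for each observation.
    d = np.stack([y - A @ b for b in betas])
    dens = np.exp(-0.5 * (d / sigma) ** 2) / sigma
    w = pi * dens[0] / (pi * dens[0] + (1.0 - pi) * dens[1])
    # M-step: weighted least squares per component, then update pi and sigma.
    for k, wk in enumerate([w, 1.0 - w]):
        WA = A * wk[:, None]
        betas[k] = np.linalg.solve(A.T @ WA, WA.T @ y)
    d = np.stack([y - A @ b for b in betas])
    sigma = float(np.sqrt((w * d[0] ** 2 + (1.0 - w) * d[1] ** 2).mean()))
    pi = float(w.mean())
print(np.round(betas, 1))
```

The fitted component lines recover the two generating regressions (up to label order); the paper's method replaces the global linear fits with kernel-weighted local fits.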

2. Do daily retail gasoline prices adjust asymmetrically?

NARCIS (Netherlands)

Bettendorf, L.; van der Geest, S. A.; Kuper, G. H.

2009-01-01

This paper analyses adjustments in the Dutch retail gasoline prices. We estimate an error correction model on changes in the daily retail price for gasoline (taxes excluded) for the period 1996-2004, taking care of volatility clustering by estimating an EGARCH model. It turns out that the volatility

African Journals Online (AJOL)

In this paper, we analyse insurance premium adjustment in the context of an epidemiological model where the insurer's future financial liability is greater than the premium from patients. In this situation, it becomes extremely difficult for the insurer since a negative reserve would severely increase its risk of insolvency, ...

Directory of Open Access Journals (Sweden)

Joana Jaureguizar

2018-05-01

5. Cactus: An Introduction to Regression

Science.gov (United States)

Hyde, Hartley

2008-01-01

When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

6. Regression Models for Repairable Systems

Czech Academy of Sciences Publication Activity Database

Novák, Petr

2015-01-01

Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

7. Kernel regression with functional response

OpenAIRE

Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

2011-01-01

We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

8. Linear regression and the normality assumption.

Science.gov (United States)

Schmidt, Amand F; Finan, Chris

2017-12-16

Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. By contrast, assumptions on the parametric model, the absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
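The central claim — that non-normal errors leave OLS point estimates unbiased — is easy to verify by simulation. A short numpy sketch using strongly skewed (centered exponential) errors; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, true_slope = 500, 400, 1.5
slopes = []
for _ in range(reps):
    x = rng.normal(size=n)
    eps = rng.exponential(scale=1.0, size=n) - 1.0   # skewed, mean-zero errors
    y = 2.0 + true_slope * x + eps
    A = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    slopes.append(beta[1])
print(round(float(np.mean(slopes)), 3))
```

Averaged over replications, the estimated slope sits on the true value despite the clearly non-normal error distribution; what normality affects is the finite-sample validity of standard errors, not the unbiasedness of the coefficients.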

9. Regression calibration with more surrogates than mismeasured variables

KAUST Repository

Kipnis, Victor

2012-06-29

In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
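The standard regression calibration approach discussed above can be sketched for the simplest case: one mismeasured exposure observed through two surrogate replicates, with E[X | W̄] estimated from moments. This is an illustrative simulation with made-up parameters, not the paper's data or its logistic setting (a linear outcome is used so the attenuation is exact):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 20000, 2.0
x = rng.normal(size=n)                        # true exposure (unobserved)
w1 = x + rng.normal(scale=0.8, size=n)        # two error-prone surrogates
w2 = x + rng.normal(scale=0.8, size=n)
y = beta * x + rng.normal(size=n)

wbar = (w1 + w2) / 2
naive = np.cov(wbar, y)[0, 1] / np.var(wbar, ddof=1)   # attenuated slope

# Regression calibration: replace wbar by an estimate of E[X | wbar].
var_x = np.cov(w1, w2)[0, 1]                  # cov of replicates estimates var(X)
var_u = np.var(w1 - w2, ddof=1) / 2           # measurement-error variance
lam = var_x / (var_x + var_u / 2)             # reliability of the mean surrogate
calibrated = naive / lam

print(round(float(naive), 2), round(float(calibrated), 2))
```

The naive slope is biased toward zero by the reliability factor, and dividing by the estimated reliability restores the true coefficient; the paper's two-stage alternative instead combines slopes fitted on the surrogates directly.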

10. Regression calibration with more surrogates than mismeasured variables

KAUST Repository

Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

2012-01-01

In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

11. The application of the Ten Group classification system (TGCS) in caesarean delivery case-mix adjustment. A multicenter prospective study.

Directory of Open Access Journals (Sweden)

Gianpaolo Maso

Full Text Available BACKGROUND: Caesarean delivery (CD) rates are commonly used as an indicator of quality in obstetric care, and risk adjustment evaluation is recommended to assess inter-institutional variations. The aim of this study was to evaluate whether the Ten Group classification system (TGCS) can be used in case-mix adjustment. METHODS: Standardized data on 15,255 deliveries from 11 different regional centers were prospectively collected. Crude Risk Ratios of CDs were calculated for each center. Two multiple logistic regression models were herein considered by using: Model 1 - maternal variables (age, Body Mass Index), obstetric variables (gestational age, fetal presentation, single or multiple, previous scar, parity, neonatal birth weight) and presence of risk factors; Model 2 - TGCS either with or without maternal characteristics and presence of risk factors. Receiver Operating Characteristic (ROC) curves of the multivariate logistic regression analyses were used to assess the diagnostic accuracy of each model. The null hypothesis that Areas under the ROC Curve (AUC) were not different from each other was verified with a Chi Square test and post hoc pairwise comparisons using a Bonferroni correction. RESULTS: Crude evaluation of CD rates showed all centers had significantly higher Risk Ratios than the referent. Both multiple logistic regression models reduced these variations. However the two methods ranked institutions differently: model 1 and model 2 (adjusted for TGCS) identified respectively nine and eight centers with significantly higher CD rates than the referent, with slightly different AUCs (0.8758 and 0.8929 respectively). In the model adjusted for TGCS and maternal characteristics/presence of risk factors, three centers had CD rates similar to the referent, with the best AUC (0.9024). CONCLUSIONS: The TGCS might be considered as a reliable variable to adjust CD rates. The addition of maternal characteristics and risk factors to TGCS substantially increases the
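The AUC used above to compare the case-mix models equals the Mann-Whitney probability that a randomly chosen case is scored above a randomly chosen non-case. A compact numpy sketch of that equivalence, on toy scores rather than the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability a random positive is scored above a random negative,
    counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0])
print(auc(scores, labels))
```

Here 8 of the 9 positive-negative pairs are correctly ordered, so the AUC is 8/9; a model assigning every case a higher score than every non-case would score exactly 1.0.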

12. BOX-COX REGRESSION METHOD IN TIME SCALING

Directory of Open Access Journals (Sweden)

ATİLLA GÖKTAŞ

2013-06-01

Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. Obtaining the smallest mean square error when the optimum power transformation λj, for j = 1, 2, ..., k, of Y is applied has been discussed. The Box-Cox regression method is especially appropriate for adjusting skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed in the differentiation and differential analysis of the time scale concept.
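Choosing the power λ is usually done by maximizing the Box-Cox profile log-likelihood under a normal model. A grid-search sketch in numpy; the grid, sample size, and lognormal test data are illustrative (lognormal data make λ = 0, i.e. the log transform, the ideal choice):

```python
import numpy as np

def boxcox_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox power parameter under normality:
    -n/2 * log(sigma^2(lam)) + (lam - 1) * sum(log y)."""
    z = np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam
    n = len(y)
    return -0.5 * n * np.log(np.var(z)) + (lam - 1.0) * np.log(y).sum()

rng = np.random.default_rng(4)
y = np.exp(rng.normal(size=2000))      # lognormal data: lambda = 0 is ideal
grid = np.round(np.linspace(-1.0, 1.0, 41), 2)
best = max(grid, key=lambda lam: boxcox_loglik(y, float(lam)))
print(float(best))
```

The selected λ lands at (or next to) zero, recovering the log transformation that normalizes lognormal data; `scipy.stats.boxcox` performs the same maximization with a continuous optimizer.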

International Nuclear Information System (INIS)

1992-01-01

This patent describes a downhole adjustable apparatus for creating a bend angle in order to affect the inclination of a drilled borehole. It comprises an upper tubular member having an upper portion and a lower portion; a lower tubular member having an upper portion and a lower portion; one of the portions being received within the other for relative rotational movement about an axis that is inclined with respect to the longitudinal axes of the members, whereby in a first rotational position the longitudinal axes have one geometrical relationship, and in a second rotational position the longitudinal axes have a second, different geometrical relationship

14. Length bias correction in gene ontology enrichment analysis using logistic regression.

Science.gov (United States)

Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

2012-01-01

When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
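The adjustment the authors propose — adding transcript length as a covariate in the logistic regression — can be illustrated with a small simulation in which length drives both category membership and significance, so the unadjusted category effect is spurious. This is a hedged sketch with made-up coefficients, fitted by plain Newton-Raphson rather than any particular package:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Logistic regression via Newton-Raphson (X includes an intercept column)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])               # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(5)
n = 5000
log_len = rng.normal(size=n)                      # transcript length (standardized)
go = (log_len + rng.normal(size=n) > 0).astype(float)  # membership tracks length
# Significance depends on length only: the GO category has no direct effect.
p_sig = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * log_len)))
sig = rng.binomial(1, p_sig).astype(float)

b_unadj = logistic_fit(np.column_stack([np.ones(n), go]), sig)
b_adj = logistic_fit(np.column_stack([np.ones(n), go, log_len]), sig)
print(round(float(b_unadj[1]), 2), round(float(b_adj[1]), 2))
```

Without the length covariate the category appears strongly "enriched"; once length enters the model, its coefficient collapses toward zero, which is exactly the confounding structure the entry describes.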

15. RELATIONSHIP BETWEEN LIFE BUILDING SKILLS AND SOCIAL ADJUSTMENT OF STUDENTS WITH HEARING IMPAIRMENT: IMPLICATIONS FOR COUNSELING

Directory of Open Access Journals (Sweden)

2017-10-01

16. Quantile Regression With Measurement Error

KAUST Repository

Wei, Ying

2009-08-27

Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
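Regression quantiles are defined through the asymmetric "check" (pinball) loss. As a minimal illustration of that building block, minimizing the loss over a constant recovers the corresponding sample quantile; the simulated data here are illustrative and unrelated to the Perinatal Project:

```python
import numpy as np

def pinball(u, tau):
    """Check loss: tau * u for positive residuals, (tau - 1) * u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(9)
y = rng.normal(size=5000)
tau = 0.9
grid = np.linspace(-3.0, 3.0, 601)
losses = [float(pinball(y - q, tau).mean()) for q in grid]
q_hat = float(grid[int(np.argmin(losses))])
print(round(q_hat, 1))
```

The minimizer sits near 1.28, the 90th percentile of the standard normal; full quantile regression replaces the constant with a linear predictor, and the paper's contribution is correcting the bias this procedure suffers when covariates are measured with error.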

17. Multivariate and semiparametric kernel regression

OpenAIRE

Härdle, Wolfgang; Müller, Marlene

1997-01-01

The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

18. Regression algorithm for emotion detection

OpenAIRE

Berthelon , Franck; Sander , Peter

2013-01-01

International audience; We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex system model of emotion (Scherer 2000, 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor m...

19. Directional quantile regression in R

Czech Academy of Sciences Publication Activity Database

Boček, Pavel; Šiman, Miroslav

2017-01-01

Roč. 53, č. 3 (2017), s. 480-492 ISSN 0023-5954 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : multivariate quantile * regression quantile * halfspace depth * depth contour Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

Directory of Open Access Journals (Sweden)

Zelviene P

2018-01-01

Full Text Available Paulina Zelviene, Evaldas Kazlauskas Department of Clinical and Organizational Psychology, Vilnius University, Vilnius, Lithuania Abstract: Adjustment disorder (AjD) is among the most often diagnosed mental disorders in clinical practice. This paper reviews the current status of AjD research and discusses scientific and clinical issues associated with AjD. AjD has been included in diagnostic classifications for over 50 years. Still, the diagnostic criteria for AjD remain vague and cause difficulties for mental health professionals. Controversies in definition have resulted in a lack of reliable and valid measures of AjD. Epidemiological data on the prevalence of AjD are scarce and not reliable, because prevalence data are biased by the diagnostic algorithm, which is usually developed for each study, as no established diagnostic standards for AjD are available. Considerable changes in the field of AjD could follow the release of the 11th edition of the International Classification of Diseases (ICD-11). A new AjD symptom profile was introduced in ICD-11 with 2 main symptoms as follows: (1) preoccupation and (2) failure to adapt. However, differences between the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition and ICD-11 AjD diagnostic criteria could result in diverse research findings in the future. The best treatment approach for AjD remains unclear, and further treatment studies are needed to provide AjD treatment guidelines to clinicians. Keywords: adjustment disorder, review, diagnosis, prevalence, treatment, DSM, ICD

Science.gov (United States)

Jacobs, Ken; Karpf, Ron

2011-03-01

A number of Pulfrich 3-D movies and TV shows have been produced, but the standard implementation has inherent drawbacks. The movie and TV industries have correctly concluded that the standard Pulfrich 3-D implementation is not a useful 3-D technique. Continuously Adjustable Pulfrich Spectacles (CAPS) is a new implementation of the Pulfrich effect that allows any scene containing movement in a standard 2-D movie (which is most scenes) to be optionally viewed in 3-D using inexpensive viewing specs. Recent scientific results in the fields of human perception, optoelectronics, video compression and video format conversion are translated into a new implementation of Pulfrich 3-D. CAPS uses these results to continuously adjust to the movie so that the viewing spectacles always conform to the optical density that optimizes the Pulfrich stereoscopic illusion. CAPS instantly provides 3-D immersion to any moving scene in any 2-D movie. Without the glasses, the movie will appear as a normal 2-D image. CAPS works on any viewing device, and with any distribution medium. CAPS is appropriate for viewing Internet-streamed movies in 3-D.

Directory of Open Access Journals (Sweden)

SONG Yingchun

2015-02-01

Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.

3. Polylinear regression analysis in radiochemistry

International Nuclear Information System (INIS)

Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

1995-01-01

A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented, which is realized using standard software for regression analysis

4. Gaussian Process Regression Model in Spatial Logistic Regression

Science.gov (United States)

Sofro, A.; Oktaviarina, A.

2018-01-01

Spatial analysis has developed very quickly in the last decade. One of the most popular approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss inference, namely how to estimate the parameters and hyper-parameters and how to predict, as well. Furthermore, simulation studies will be explained in the last section.
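The GPR machinery the paper builds on rests on kernel matrices. A minimal numpy sketch of the Gaussian-process posterior mean for ordinary squared-error regression — not the binomial/logit model of the paper, whose non-Gaussian likelihood requires approximate inference — with an illustrative RBF length scale and noise level:

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(6)
x_train = np.linspace(0.0, 2.0 * np.pi, 15)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=15)
x_test = np.array([1.0, 3.0, 5.0])

noise = 0.05 ** 2
K = rbf(x_train, x_train) + noise * np.eye(15)    # kernel of the training points
K_s = rbf(x_test, x_train)                        # test-train cross-kernel
mean = K_s @ np.linalg.solve(K, y_train)          # posterior mean: K_* K^{-1} y
print(np.round(mean, 2))
```

The posterior mean tracks the underlying sine function at the test locations, which is the prediction capability the authors invoke GPR for; for binomial outcomes the Gaussian likelihood above would be replaced by a Bernoulli/logit one and the posterior approximated (e.g. by Laplace or EP methods).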

5. Goal pursuit, goal adjustment, and affective well-being following lower limb amputation.

Science.gov (United States)

Coffey, Laura; Gallagher, Pamela; Desmond, Deirdre; Ryall, Nicola

2014-05-01

This study examined the relationships between tenacious goal pursuit (TGP), flexible goal adjustment (FGA), and affective well-being in a sample of individuals with lower limb amputations. Cross-sectional, quantitative. Ninety-eight patients recently admitted to a primary prosthetic rehabilitation programme completed measures of TGP, FGA, positive affect, and negative affect. Hierarchical regression analyses revealed that TGP and FGA accounted for a significant proportion of the variance in both positive and negative affect, controlling for sociodemographic and clinical characteristics. TGP was significantly positively associated with positive affect, while FGA was significantly negatively associated with negative affect. Moderated regression analyses indicated that the beneficial effect of FGA on negative affect was strongest at high levels of amputation-related pain intensity and low levels of TGP. TGP and FGA appear to influence subjective well-being in different ways, with TGP promoting the experience of positive affect and FGA buffering against negative affect. TGP and FGA may prove useful in identifying individuals at risk of poor affective outcomes following lower limb amputation and represent important targets for intervention in this patient group. What is already known on this subject? The loss of a limb has a significant impact on several important life domains. Although some individuals experience emotional distress following amputation, the majority adjust well to their limb loss, with some achieving positive change or growth as a result of their experiences. Theories of self-regulation propose that disruptions in goal attainment have negative affective consequences. The physical, social, and psychological upheaval caused by limb loss is likely to threaten the attainment of valued goals, which may leave individuals vulnerable to negative psychosocial outcomes if they do not regulate their goals in response to these challenges. According to the dual

6. Background stratified Poisson regression analysis of cohort data.

Science.gov (United States)

Richardson, David B; Langholz, Bryan

2012-03-01

Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

7. Background stratified Poisson regression analysis of cohort data

International Nuclear Information System (INIS)

Richardson, David B.; Langholz, Bryan

2012-01-01

Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
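The profiling device described in these two records can be sketched numerically: in a log-linear model the stratum intercepts have a closed-form maximum given the dose coefficient, so the fit reduces to a one-dimensional search regardless of the number of strata. The sketch below is a minimal illustration with simulated data, not the authors' software; the stratum count, person-time, dose range, and true coefficient are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort tabulated into 30 background strata with person-time T,
# a dose covariate x, and a true log-linear model rate = exp(alpha_s + 0.5*x).
S, n_per = 30, 40
strata = np.repeat(np.arange(S), n_per)
T = rng.uniform(50.0, 150.0, S * n_per)        # person-years per cell
x = rng.uniform(0.0, 2.0, S * n_per)           # dose
alpha = rng.normal(-5.0, 0.3, S)               # nuisance stratum effects
d = rng.poisson(T * np.exp(alpha[strata] + 0.5 * x))   # observed counts

def profile_score(beta):
    """Score of the Poisson log-likelihood after profiling out the
    stratum intercepts (they have a closed-form maximum given beta)."""
    w = T * np.exp(beta * x)
    score = float(np.dot(d, x))
    for s in range(S):
        m = strata == s
        score -= d[m].sum() * np.dot(w[m], x[m]) / w[m].sum()
    return score

# One-dimensional Newton search for the dose coefficient; the derivative
# is taken by finite differences for brevity.
beta_hat = 0.0
for _ in range(40):
    g = profile_score(beta_hat)
    h = (profile_score(beta_hat + 1e-5) - g) / 1e-5
    beta_hat -= g / h

print(round(beta_hat, 3))   # close to the simulated value 0.5
```

Because the stratum intercepts are eliminated analytically, the search stays one-dimensional however many strata there are, which is the practical point the abstract makes about models with a large number of strata.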

DEFF Research Database (Denmark)

Kjær, Line; Fode, Mikkel; Nørgaard, Nis

2012-01-01

Abstract Objective. This study aimed to evaluate the results of the Danish experience with the ProACT urinary continence device inserted in men with stress urinary incontinence. Material and methods. The ProACT was inserted in 114 patients. Data were registered prospectively. The main endpoints...... in urinary leakage > 50% was seen in 72 patients (80%). Complications were seen in 23 patients. All of these were treated successfully by removal of the device in the outpatient setting followed by replacement of the device. Another eight patients had a third balloon inserted to improve continence further....... Fourteen patients (12%) ended up with an artificial sphincter or a urethral sling. Sixty patients (63%) experienced no discomfort and 58 (61%) reported being dry or markedly improved. Overall, 50 patients (53%) reported being very or predominantly satisfied. Conclusions. Adjustable continence balloons seem...

DEFF Research Database (Denmark)

Hansen, Frank

2008-01-01

We extend the concept of Wigner-Yanase-Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova-Chentsov functions describing the possible…

10. Tutorial on Using Regression Models with Count Outcomes Using R

Directory of Open Access Journals (Sweden)

A. Alexander Beaujean

2016-02-01

Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
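The tutorial itself is in R; as a language-neutral illustration, the Poisson member of the model family it covers can be fit by iteratively reweighted least squares in a few lines. The data below are a simulated stand-in for the skipped-class example, not the article's dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-in for a "times skipped class" outcome with one predictor.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
y = rng.poisson(np.exp(X @ np.array([0.3, 0.7])))      # true coefficients

# Iteratively reweighted least squares (IRLS) for a Poisson GLM, log link.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                 # current fitted means
    z = X @ beta + (y - mu) / mu          # working response
    XtWX = X.T @ (mu[:, None] * X)        # working weights W = mu for Poisson
    beta = np.linalg.solve(XtWX, X.T @ (mu * z))

print(beta.round(2))   # recovers roughly [0.3, 0.7]
```

Fitting the same data by ordinary least squares would give biased, hard-to-interpret coefficients, which is the tutorial's motivating point.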

11. Adjustment of the 235U Fission Spectrum

International Nuclear Information System (INIS)

GRIFFIN, PATRICK J.; WILLIAMS, J.G.

1999-01-01

The latest nuclear data are used to examine the sensitivity of the least squares adjustment of the 235U fission spectrum to the measured reaction rates, dosimetry cross sections, and prior spectrum covariance matrix. All of these parameters were found to be very important in the spectrum adjustment. The most significant deficiency in the nuclear data is the absence of a good prior covariance matrix. Covariance matrices generated from analytic models of the fission spectra have been used in the past. This analysis reveals some unusual features in the covariance matrix produced with this approach. Specific needs are identified for improved nuclear data to better determine the 235U spectrum. An improved 235U covariance matrix and adjusted spectrum are recommended for use in radiation transport sensitivity analyses.
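The least-squares adjustment itself has a compact linear-algebra form: a prior spectrum and its covariance are combined with measured reaction rates through the dosimetry cross-section matrix. The sketch below uses made-up five-group numbers purely to illustrate the update equations; it is not the evaluation described in the abstract.

```python
import numpy as np

# Toy 5-group prior "spectrum" and 3 dosimetry reactions; all values invented.
phi0 = np.array([1.0, 2.0, 3.0, 2.0, 1.0])        # prior group fluxes
C_phi = np.diag((0.15 * phi0) ** 2)               # prior covariance (15%)
A = np.array([[0.1, 0.3, 0.5, 0.3, 0.1],          # group-averaged cross
              [0.0, 0.1, 0.4, 0.6, 0.2],          # sections per reaction
              [0.5, 0.3, 0.1, 0.0, 0.0]])
m = np.array([2.9, 2.6, 1.4])                     # measured reaction rates
C_m = np.diag((0.05 * m) ** 2)                    # measurement covariance

# Linear least-squares adjustment: update the prior toward the measurements.
S = A @ C_phi @ A.T + C_m
K = C_phi @ A.T @ np.linalg.inv(S)                # gain matrix
phi = phi0 + K @ (m - A @ phi0)                   # adjusted spectrum
C_post = C_phi - K @ A @ C_phi                    # adjusted covariance

print(phi.round(3))
```

The posterior variances can only shrink, and the covariance-weighted residual against the measured rates cannot increase; both facts follow directly from these equations, which is why the quality of the prior covariance matrix matters so much in the abstract's analysis.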

12. Spontaneous regression of pulmonary bullae

International Nuclear Information System (INIS)

Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

2002-01-01

The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult because of gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bulla in both lungs and/or have a history of pulmonary emphysema. Others have a giant bulla without emphysematous change in the lungs. Our present case had treated lung cancer with no evidence of local recurrence. He had no emphysematous change in lung function test and had no complaints, although the high resolution CT scan shows evidence of underlying minimal changes of emphysema. Ortin and Gurney presented three cases of spontaneous reduction in size of bulla. Interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the wall of the bulla, which was observed in our patient. This case we describe is of interest, not only because of the rarity with which regression of pulmonary bulla has been reported in the literature, but also because of the spontaneous improvements in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

13. Interpretation of commonly used statistical regression models.

Science.gov (United States)

Kasza, Jessica; Wolfe, Rory

2014-01-01

A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

14. Linear regression methods according to objective functions

OpenAIRE

Yasemin Sisman; Sebahattin Bektas

2012-01-01

The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods grouped according to the objective function are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.

15. Regression analysis of a chemical reaction fouling model

International Nuclear Information System (INIS)

Vasak, F.; Epstein, N.

1996-01-01

A previously reported mathematical model for the initial chemical reaction fouling of a heated tube is critically examined in the light of the experimental data for which it was developed. A regression analysis of the model with respect to that data shows that the reference point upon which the two adjustable parameters of the model were originally based was well chosen, albeit fortuitously. (author). 3 refs., 2 tabs., 2 figs

16. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

CERN Document Server

Keith, Timothy Z

2014-01-01

Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

17. Gender adjustment or stratification in discerning upper extremity musculoskeletal disorder risk?

Science.gov (United States)

Silverstein, Barbara; Fan, Z Joyce; Smith, Caroline K; Bao, Stephen; Howard, Ninica; Spielholz, Peregrin; Bonauto, David; Viikari-Juntura, Eira

2009-03-01

The aim was to explore whether "adjustment" for gender masks important exposure differences between men and women in a study of rotator cuff syndrome (RCS), carpal tunnel syndrome (CTS), and work exposures. This cross-sectional study of 733 subjects in 12 health care and manufacturing workplaces used detailed individual health and work exposure assessment methods. Multiple logistic regression analysis was used to compare gender-stratified and gender-adjusted models. Prevalence of RCS and CTS among women was 7.1% and 11.3% respectively, and among men 7.8% and 6.4%. In adjusted (gender, age, body mass index) multivariate analyses of RCS and CTS, gender was not statistically significant. For RCS, upper arm flexion ≥45 degrees and forceful pinch increased the odds in the gender-adjusted model (OR 2.66, 95% CI 1.26-5.59) but primarily among women in the stratified analysis (OR 6.68, 95% CI 1.81-24.66 versus OR 1.45, 95% CI 0.53-4.00). For CTS, with wrist radial/ulnar deviation ≥4% of the time and lifting ≥4.5 kg >3% of the time, the adjusted OR was higher for women (OR 4.85, 95% CI 2.12-11.11), and in the gender-stratified analyses the odds were increased for both genders (women OR 5.18, 95% CI 1.70-15.81 and men OR 3.63, 95% CI 1.08-12.18). Gender differences in response to physical work exposures may reflect gender segregation in work and potential differences in pinch and lifting capacity. Reduction in these exposures may reduce the prevalence of upper extremity disorders for all workers.

18. [The Relationship Between Marital Adjustment and Psychological Symptoms in Women: The Mediator Roles of Coping Strategies and Gender Role Attitudes].

Science.gov (United States)

Yüksel, Özge; Dağ, İhsan

2015-01-01

The aim of this study was to investigate the mediator role of coping strategies and gender role attitudes in the relationship between women's marital adjustment and psychological symptoms. 248 married women participated in the study. Participants completed the Marital Adjustment Scale, Ways of Coping Questionnaire, Brief Symptom Inventory, Gender Role Attitudes Scale and a Demographic Information Form. Regression analyses revealed that the submissive coping style (Sobel z = -2.47, p < .05) had a mediator role in the relationship between marital relationship score and psychological symptom level. Also, having an egalitarian gender role attitude affects psychological symptoms in relation to the marital relationship, but this effect is not strong enough to play a mediator role (Sobel z = -1.21, p > .05). Regression analysis showed a statistically significant correlation between women's marital adjustment and their psychological symptoms, indicating that marital adjustment decreases as psychological symptoms increase. It was also found that the submissive and helpless coping approaches have mediator roles in this relationship. Also, contrary to expectations, having an egalitarian gender role attitude affects psychological symptoms in relation to the marital relationship, but this effect does not seem to play a mediator role. The effects of marriage and couple therapy approaches that consider couples' problem solving and coping styles should be examined in further studies.

19. [Relationship between family variables and conjugal adjustment].

Science.gov (United States)

Jiménez-Picón, Nerea; Lima-Rodríguez, Joaquín-Salvador; Lima-Serrano, Marta

2018-04-01

Science.gov (United States)

Samuels, Valerie Jarvis; And Others

1994-01-01

Examined adolescent mothers' adjustment to parenting, self-esteem, social support, and perceptions of baby. Subjects (n=52) responded to questionnaires at two time periods approximately six months apart. Mothers with higher self-esteem at Time 1 had better adjustment at Time 2. Adjustment was predicted by Time 2 variables; contact with baby's…

International Nuclear Information System (INIS)

Trnkova, L.; Rulik, P.

2008-01-01

2. Analysing changes of health inequalities in the Nordic welfare states

DEFF Research Database (Denmark)

Lahelma, Eero; Kivelä, Katariina; Roos, Eva

2002-01-01

This study examined changes over time in relative health inequalities among men and women in four Nordic countries: Denmark, Finland, Norway and Sweden. A serious economic recession burst out in the early 1990s, particularly in Finland and Sweden. We ask whether this adverse social structural development influenced health inequalities by employment status and educational attainment, i.e. whether the trends in health inequalities were similar or dissimilar between the Nordic countries. The data derived from comparable interview surveys carried out in 1986/87 and 1994/95 in the four countries. Limiting long-standing illness and perceived health were analysed by age, gender, employment status and educational attainment. First, age-adjusted overall prevalence percentages were calculated. Second, changes in the magnitude of relative health inequalities were studied using logistic regression analysis. Within each country…

3. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

Science.gov (United States)

Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

2013-04-01

Economic evaluation analyses can be enhanced by employing regression methods, allowing identification of important sub-groups, adjustment for imperfect randomisation in clinical trials, or analysis of non-randomised data. To explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed, and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. Results: The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the level used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need for incorporation of econometric methods into cost-effectiveness analyses using the NB approach.

4. On Weighted Support Vector Regression

DEFF Research Database (Denmark)

Han, Xixuan; Clemmensen, Line Katrine Harder

2014-01-01

We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF‐weights) and call the combination of weights the doubly weighted SVR. We illustrate...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...

5. Socio-emotional regulation in children with intellectual disability and typically developing children, and teachers' perceptions of their social adjustment.

Science.gov (United States)

Baurain, Céline; Nader-Grosbois, Nathalie; Dionne, Carmen

2013-09-01

6. Sports practice, resilience, body and sexual esteem, and higher educational level are associated with better sexual adjustment in men with acquired paraplegia.

Science.gov (United States)

Dos Passos Porto, Isabela; Cardoso, Fernando Luiz; Sacomori, Cinara

2016-10-12

To analyse the association of team sports practice and physical and psychological factors with sexual adjustment in men with paraplegia. More specifically, we aimed to compare athletes and non-athletes regarding sexual adjustment, resilience, body and sexual self-esteem, and functional independence. Cross-sectional study with a paired design. The study included 60 men with paraplegia (30 athletes and 30 non-athletes). We used a sociodemographic questionnaire (age, education, and time since injury); a physical and sexual esteem questionnaire; a resilience questionnaire; and the Functional Independence Measure (FIM). The dependent variable, sexual adjustment, was determined by the sum of 5 questions about sexual frequency, desire, and satisfaction and physical and psychological adjustment. Data were analysed using the χ2 test, Wilcoxon's test, Spearman's correlation test, and hierarchical multiple linear regression analysis, with p < 0.05 considered significant. Athletes had significantly higher sexual adjustment (p = 0.001) and higher body and sexual esteem (p < 0.05). Sexual adjustment was associated with higher body and sexual esteem, higher educational level, and higher resilience levels (R2 = 58%). There was an interaction between sports practice and body and sexual esteem (p = 0.024; R2 = 62%). Participation in sports influenced the sexual adjustment of the men with paraplegia, even when controlled for psychological (resilience and body and sexual esteem) and physical (functional independence) aspects.

7. Regression-Based Norms for the Symbol Digit Modalities Test in the Dutch Population: Improving Detection of Cognitive Impairment in Multiple Sclerosis?

Science.gov (United States)

Burggraaff, Jessica; Knol, Dirk L; Uitdehaag, Bernard M J

2017-01-01

Appropriate and timely screening instruments that sensitively capture the cognitive functioning of multiple sclerosis (MS) patients are urgently needed. We evaluated newly derived regression-based norms for the Symbol Digit Modalities Test (SDMT) in a Dutch-speaking sample, as an indicator of the cognitive state of MS patients. Regression-based norms for the SDMT were created from a healthy control sample (n = 96) and used to convert MS patients' (n = 157) raw scores to demographically adjusted Z-scores, correcting for the effects of age, age², gender, and education. Conventional and regression-based norms were compared on their impairment-classification rates and related to other neuropsychological measures. The regression analyses revealed that age was the only significantly influencing demographic in our healthy sample. Regression-based norms for the SDMT more readily detected impairment in MS patients than conventional normalization methods (32 patients instead of 15). Patients changing from an SDMT-preserved to -impaired status (n = 17) were also impaired on other cognitive domains (p < 0.05), except for visuospatial memory (p = 0.34). Regression-based norms for the SDMT more readily detect abnormal performance in MS patients than conventional norms, identifying those patients at highest risk for cognitive impairment, which was supported by a worse performance on other neuropsychological measures. © 2017 S. Karger AG, Basel.
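Regression-based norming as described here amounts to fitting a normative model on healthy controls and expressing a patient's raw score as a standardized residual from the demographically expected score. A minimal sketch with simulated control data (the coefficients, sample values, and any impairment cut-off are illustrative assumptions, not the study's figures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated healthy controls: SDMT raw score declining with age.
n = 96
ctrl_age = rng.uniform(20.0, 70.0, n)
ctrl_sdmt = 75.0 - 0.4 * ctrl_age + rng.normal(0.0, 6.0, n)

# Fit the normative regression on controls only.
X = np.column_stack([np.ones(n), ctrl_age])
b, *_ = np.linalg.lstsq(X, ctrl_sdmt, rcond=None)
resid_sd = np.std(ctrl_sdmt - X @ b, ddof=2)

def z_score(raw, patient_age):
    """Demographically adjusted Z: observed minus age-expected, scaled."""
    return (raw - (b[0] + b[1] * patient_age)) / resid_sd

# A 65-year-old patient scoring 38 is judged against the age-expected score,
# not the whole-sample mean, so the same raw score can be normal at one age
# and abnormal at another.
z = z_score(38.0, 65.0)
print(round(z, 2))
```

This age-conditional comparison is what lets regression-based norms flag patients that a single conventional cut-off would miss.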

8. Credit Scoring Problem Based on Regression Analysis

OpenAIRE

2014-01-01

ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....

9. Variable selection and model choice in geoadditive regression models.

Science.gov (United States)

Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

2009-06-01

Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

10. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

DEFF Research Database (Denmark)

Merlo, Juan; Wagner, Philippe; Ghith, Nermin

2016-01-01

BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that disting...

11. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

Science.gov (United States)

Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

2012-01-01

Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

12. Regularized Label Relaxation Linear Regression.

Science.gov (United States)

Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

2018-04-01

Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

13. An interpersonal perspective on depression: the role of marital adjustment, conflict communication, attributions, and attachment within a clinical sample.

Science.gov (United States)

Heene, Els; Buysse, Ann; Van Oost, Paulette

2007-12-01

Previous studies have focused on the difficulties in psychosocial functioning in depressed persons, underscoring the distress experienced by both spouses. We selected conflict communication, attribution, and attachment as important domains of depression in the context of marital adjustment, and we analyzed two hypotheses in one single study. First, we analyzed whether a clinical sample of couples with a depressed patient would differ significantly from a control group on these variables. Second, we explored to what degree these variables mediate/moderate the relationship between depressive symptoms and marital adjustment. The perspectives of both spouses were taken into account, as well as gender differences. In total, 69 clinical and 69 control couples were recruited, and a series of multivariate analyses of variance and regression analyses were conducted to test both hypotheses. Results indicated that both patients and their partners reported less marital adjustment associated with more negative perceptions on conflict communication, causal attributions, and insecure attachment. In addition, conflict communication and causal attributions were significant mediators of the association between depressive symptoms and marital adjustment for both depressed men and women, and causal attributions also moderated this link. Ambivalent attachment was a significant mediator only for the female identified patients. Several sex differences and clinical implications are discussed.

14. Estimating the exceedance probability of rain rate by logistic regression

Science.gov (United States)

Chiu, Long S.; Kedem, Benjamin

1990-01-01

Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
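The modeling step described above (estimating the conditional probability that rain rate exceeds a fixed threshold given covariates) is ordinary logistic regression. A self-contained sketch with simulated pixel data (the single covariate and all coefficients are invented stand-ins for the radiometer channels):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated pixels: one covariate t, outcome = indicator that the pixel
# rain rate exceeds the fixed threshold (true model logit = -1.0 + 1.5*t).
n = 2000
t = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * t))))

# Newton-Raphson for logistic regression (intercept + covariate).
X = np.column_stack([np.ones(n), t])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta += np.linalg.solve(X.T @ ((p * (1 - p))[:, None] * X), X.T @ (y - p))

# Estimated exceedance probability for a pixel with covariate t = 0.5:
p_hat = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * 0.5)))
print(beta.round(2), round(p_hat, 2))
```

Averaging such fitted probabilities over the pixels of a scene gives an estimate of the fractional rainy area, the quantity the abstract links to area-averaged rain rate.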

15. Neonatal Sleep-Wake Analyses Predict 18-month Neurodevelopmental Outcomes.

Science.gov (United States)

Shellhaas, Renée A; Burns, Joseph W; Hassan, Fauziya; Carlson, Martha D; Barks, John D E; Chervin, Ronald D

2017-11-01

16. Use of probabilistic weights to enhance linear regression myoelectric control.

Science.gov (United States)

Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

2015-12-01

Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) relative to standard linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.

17. Independent contrasts and PGLS regression estimators are equivalent.

Science.gov (United States)

Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

2012-05-01

We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression, and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
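The equivalence is easy to check numerically. The sketch below (my own construction, not from the paper) uses the three-taxon tree ((A:1,B:1):1,C:2): the GLS slope under the Brownian-motion covariance matrix coincides with the through-origin regression slope on Felsenstein's contrasts.

```python
import numpy as np

# Tree ((A:1,B:1):1,C:2); under Brownian motion, trait covariance between
# tips equals their shared path length from the root.
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
x = np.array([1.0, 2.0, 5.0])
y = np.array([2.0, 3.0, 9.0])

# GLS slope with intercept: beta = (X' V^-1 X)^-1 X' V^-1 y
X = np.column_stack([np.ones(3), x])
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
b_gls = beta[1]

# Felsenstein's contrasts: tip pair (A,B), then their ancestor vs C,
# each standardized by the square root of its variance.
cx1, cy1 = (x[0] - x[1]) / np.sqrt(2), (y[0] - y[1]) / np.sqrt(2)
xa, ya = (x[0] + x[1]) / 2, (y[0] + y[1]) / 2   # ancestral estimate at the node
va = 1.0 + 0.5                                  # branch to root + correction 1*1/(1+1)
cx2, cy2 = (xa - x[2]) / np.sqrt(va + 2.0), (ya - y[2]) / np.sqrt(va + 2.0)
# Through-origin regression on the contrasts:
b_pic = (cx1 * cy1 + cx2 * cy2) / (cx1**2 + cx2**2)
```

With these numbers both estimators give exactly 7/4.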

18. Capital adjustment cost and bias in income based dynamic panel models with fixed effects

OpenAIRE

Yoseph Yilma Getachew; Keshab Bhattarai; Parantap Basu

2012-01-01

The fixed effects (FE) estimator of "conditional convergence" in income based dynamic panel models could be biased downward when capital adjustment cost is present. Such a capital adjustment cost means a rising marginal cost of investment, which could slow down the convergence. The standard FE regression fails to take this capital adjustment cost into account and thus could overestimate the rate of convergence. Using a Ramsey model with long-run adjustment cost of capital, we characterize...

DEFF Research Database (Denmark)

Liang, Cai; Hansen, Frank

2010-01-01

We give a truly elementary proof of the convexity of metric-adjusted skew information following an idea of Effros. We extend earlier results of weak forms of superadditivity to general metric-adjusted skew information. Recently, Luo and Zhang introduced the notion of semi-quantum states on a bipartite system and proved superadditivity of the Wigner-Yanase-Dyson skew informations for such states. We extend this result to the general metric-adjusted skew information. We finally show that a recently introduced extension to parameter values 1 ... of (unbounded) metric-adjusted skew information ...

20. Factors associated with positive adjustment in siblings of children with severe emotional disturbance: the role of family resources and community life.

Science.gov (United States)

Kilmer, Ryan P; Cook, James R; Munsell, Eylin Palamaro; Salvador, Samantha Kane

2010-10-01

1. Simple and multiple linear regression: sample size considerations.

Science.gov (United States)

Hanley, James A

2016-11-01

The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
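The closed-form variance formula alluded to here is, for a single slope in multiple regression, Var(b_j) = sigma^2 / (n * Var(x_j) * (1 - R2_j)), where R2_j measures the collinearity of x_j with the other covariates. A small helper (the input numbers below are made up for illustration) inverts it to get a sample-size requirement for a target standard error:

```python
import math

def n_for_slope_se(sigma, sd_x, r2_other, target_se):
    """Smallest n with SE(b_j) <= target_se, using the closed-form
    Var(b_j) = sigma^2 / (n * Var(x_j) * (1 - R2_j))."""
    n = sigma**2 / (sd_x**2 * (1 - r2_other) * target_se**2)
    return math.ceil(n)

# Residual SD 10, exposure SD 2, modest collinearity (R2_j = 0.25),
# target SE of 0.5 for the slope:
n_needed = n_for_slope_se(10, 2, 0.25, 0.5)
```

Note how the required n is inflated by the factor 1/(1 - R2_j), the variance inflation factor, relative to the collinearity-free case.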

2. Principal component regression analysis with SPSS.

Science.gov (United States)

Liu, R X; Kuang, J; Gong, Q; Hou, X L

2003-06-01

The paper introduces indices for multicollinearity diagnosis, the basic principle of principal component regression, and a method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
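The same procedure can be sketched outside SPSS: standardize the predictors, extract principal components, regress the response on the leading components, and map the coefficients back. This is a generic numpy sketch of the technique, not a transcription of the paper's SPSS steps; the data are synthetic and deliberately collinear.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k PCs of
    standardized X, then express the fit in terms of the standardized columns."""
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # rows of Vt = principal axes
    T = Z @ Vt[:k].T                                  # component scores
    g = np.linalg.lstsq(T, y - y.mean(), rcond=None)[0]
    b_std = Vt[:k].T @ g                              # back to predictor space
    return b_std, mu, sd, y.mean()

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)    # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.1, size=200)
b_std, mu, sd, ybar = pcr_fit(X, y, k=1)      # one component absorbs the shared variation
```

Dropping the trailing, noise-dominated component is what stabilizes the coefficients that ordinary least squares would estimate erratically under multicollinearity.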

3. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

Science.gov (United States)

Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

2011-01-01

The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

African Journals Online (AJOL)

Data collection was done by using a structured questionnaire which contained the locus of control, self-concept, social support and coping scales. Multiple regression was used to test the independent and joint influence of these factors on adjustment. The result revealed significant influence of self-concept (t = 0.07, β = 0.03 ...

5. Temporal trends in sperm count: a systematic review and meta-regression analysis.

Science.gov (United States)

Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson; Mendiola, Jaime; Weksler-Derri, Dan; Mindlis, Irina; Pinotti, Rachel; Swan, Shanna H

2017-11-01

Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. To provide a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. PubMed/MEDLINE and EMBASE were searched for English language studies of human SC published in 1981-2013. Following a predefined protocol, 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42,935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years of sample collection and covariates [fertility group ('Unselected by fertility' versus 'Fertile'), geographic group ('Western', including North America, Europe, Australia and New Zealand, versus 'Other', including South America, Asia and Africa), age, ejaculation abstinence time, semen collection method, method of measuring SC and semen volume, exclusion criteria and indicators of completeness of covariate data]. The slopes of SC and TSC were estimated as functions of sample collection year using both simple linear regression and weighted meta-regression models, and the latter were adjusted for pre-determined covariates and modification by fertility and geographic group. Assumptions were examined using multiple sensitivity analyses and nonlinear models. SC declined significantly between 1973 and 2011 (slope in unadjusted simple regression models -0.70 million/ml/year; 95% CI: -0.72 to -0.69). This meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand.

6. Comparing parametric and nonparametric regression methods for panel data

DEFF Research Database (Denmark)

Czekaj, Tomasz Gerard; Henningsen, Arne

We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

7. A Machine Learning Framework for Plan Payment Risk Adjustment.

Science.gov (United States)

Rose, Sherri

2016-12-01

To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R2. Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
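The yardstick used above, cross-validated R2, is straightforward to implement. The sketch below compares two toy "formulas" (intercept-only versus simple linear regression) on synthetic data; it is a minimal illustration of the comparison logic, not the MarketScan analysis or the super learner itself.

```python
import random

def cv_r2(fit, predict, xs, ys, folds=5):
    """K-fold cross-validated R^2: pool out-of-fold squared errors."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    sse = sst = 0.0
    ybar = sum(ys) / len(ys)
    for f in range(folds):
        test = set(idx[f::folds])
        model = fit([xs[i] for i in idx if i not in test],
                    [ys[i] for i in idx if i not in test])
        for i in test:
            sse += (ys[i] - predict(model, xs[i])) ** 2
            sst += (ys[i] - ybar) ** 2
    return 1 - sse / sst

# Candidate "formulas": demographic-style intercept only vs a richer linear model.
def fit_mean(xs, ys): return sum(ys) / len(ys)
def pred_mean(m, x): return m
def fit_ols(xs, ys):
    xb, yb = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum((x - xb) ** 2 for x in xs)
    return (yb - b * xb, b)
def pred_ols(m, x): return m[0] + m[1] * x

rng = random.Random(42)
xs = [rng.gauss(0, 1) for _ in range(300)]
ys = [2 * x + rng.gauss(0, 1) for x in xs]
r2_mean = cv_r2(fit_mean, pred_mean, xs, ys)
r2_ols = cv_r2(fit_ols, pred_ols, xs, ys)
```

Because every error is computed on held-out folds, a formula that merely memorizes the training data gains nothing, which is the point of using cross-validated rather than in-sample R2 for formula selection.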

8. Discrimination and adjustment among Chinese American adolescents: family conflict and family cohesion as vulnerability and protective factors.

Science.gov (United States)

Juang, Linda P; Alvarez, Alvin A

2010-12-01

We examined racial/ethnic discrimination experiences of Chinese American adolescents to determine how discrimination is linked to poor adjustment (i.e., loneliness, anxiety, and somatization) and how the context of the family can buffer or exacerbate these links. We collected survey data from 181 Chinese American adolescents and their parents in Northern California. We conducted hierarchical regression analyses to examine main effects and 2-way interactions of perceived discrimination with family conflict and family cohesion. Discrimination was related to poorer adjustment in terms of loneliness, anxiety, and somatization, but family conflict and cohesion modified these relations. Greater family conflict exacerbated the negative effects of discrimination, and greater family cohesion buffered the negative effects of discrimination. Our findings highlight the importance of identifying family-level moderators to help adolescents and their families handle experiences of discrimination.

9. A complete generalized adjustment criterion

NARCIS (Netherlands)

Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.

2015-01-01

Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent

10. Small sample GEE estimation of regression parameters for longitudinal data.

Science.gov (United States)

Paul, Sudhir; Zhang, Xuemao

2014-09-28

Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

11. Parental Divorce and Children's Adjustment.

Science.gov (United States)

Lansford, Jennifer E

2009-03-01

12. Unbalanced Regressions and the Predictive Equation

DEFF Research Database (Denmark)

Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical...

13. Semiparametric regression during 2003–2007

KAUST Repository

Ruppert, David; Wand, M.P.; Carroll, Raymond J.

2009-01-01

Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

14. Gaussian process regression analysis for functional data

CERN Document Server

Shi, Jian Qing

2011-01-01

Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables. Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high-dimensional...

15. The relationship between the C-statistic of a risk-adjustment model and the accuracy of hospital report cards: a Monte Carlo Study.

Science.gov (United States)

Austin, Peter C; Reeves, Mathew J

2013-03-01

Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Monte Carlo simulations were used to examine this issue. We examined the influence of 3 factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card.
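The c-statistic has a simple rank interpretation: the probability that a randomly chosen patient with the outcome receives a higher predicted risk than one without it. A minimal sketch with hypothetical risk scores (not the Ontario data):

```python
def c_statistic(scores_pos, scores_neg):
    """C-statistic (area under the ROC curve) as the fraction of
    event/non-event pairs the model ranks correctly, ties counted half."""
    wins = ties = 0
    for sp in scores_pos:          # predicted risks of patients with the outcome
        for sn in scores_neg:      # predicted risks of patients without it
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

c = c_statistic([0.9, 0.8, 0.6], [0.7, 0.4, 0.2])   # 8 of 9 pairs ranked correctly
```

As the abstract notes, this pairwise-discrimination summary says little by itself about whether hospital-level rankings derived from the model are accurate.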

16. Regression Analysis by Example. 5th Edition

Science.gov (United States)

2012-01-01

Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

17. Standards for Standardized Logistic Regression Coefficients

Science.gov (United States)

Menard, Scott

2011-01-01

Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

18. A Seemingly Unrelated Poisson Regression Model

OpenAIRE

King, Gary

1989-01-01

This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

19. Unemployment and psychosocial outcomes to age 30: A fixed-effects regression analysis.

Science.gov (United States)

Fergusson, David M; McLeod, Geraldine F; Horwood, L John

2014-08-01

We aimed to examine the associations between exposure to unemployment and psychosocial outcomes over the period from 16 to 30 years, using data from a well-studied birth cohort. Data were collected over the course of the Christchurch Health and Development Study, a longitudinal study of a birth cohort of 1265 children, born in Christchurch in 1977, who have been studied to age 30. Assessments of unemployment and psychosocial outcomes (mental health, substance abuse/dependence, criminal offending, adverse life events and life satisfaction) were obtained at ages 18, 21, 25 and 30. Prior to adjustment, an increasing duration of unemployment was associated with significant increases in the risk of all psychosocial outcomes. These associations were adjusted for confounding using conditional, fixed-effects regression techniques. The analyses showed significant associations between unemployment and major depression (p = 0.05), alcohol abuse/dependence (p = 0.043), illicit substance abuse/dependence (p = 0.017), and property/violent offending. The findings suggested that the association between unemployment and psychosocial outcomes was likely to involve a causal process in which unemployment led to increased risks of adverse psychosocial outcomes. Effect sizes were estimated using attributable risk; exposure to unemployment accounted for between 4.2 and 14.0% (median 10.8%) of the risk of experiencing the significant psychosocial outcomes. The findings of this study suggest that exposure to unemployment had small but pervasive effects on psychosocial adjustment in adolescence and young adulthood. © The Royal Australian and New Zealand College of Psychiatrists 2014.
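The key property of fixed-effects adjustment is that demeaning within person removes any time-invariant confounder. A linear within-estimator sketch on simulated panel data (the study itself used conditional fixed-effects models suited to its outcomes; this is only the underlying idea):

```python
import random

def within_slope(panel):
    """Fixed-effects (within) estimator: demean x and y inside each person,
    then run pooled OLS through the origin on the deviations."""
    sxy = sxx = 0.0
    for obs in panel.values():               # obs = list of (x, y) per person
        xb = sum(x for x, _ in obs) / len(obs)
        yb = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            sxy += (x - xb) * (y - yb)
            sxx += (x - xb) ** 2
    return sxy / sxx

# Simulated cohort: a fixed person effect a_i confounds exposure and outcome
# (naive pooled OLS would be biased upward); the true causal slope is 0.5.
rng = random.Random(7)
panel = {}
for i in range(200):
    a = rng.gauss(0, 2)                      # person-specific confounder
    obs = []
    for _ in range(4):                       # four assessment waves
        x = a + rng.gauss(0, 1)              # exposure correlated with a
        y = 0.5 * x + a + rng.gauss(0, 1)
        obs.append((x, y))
    panel[i] = obs
b_fe = within_slope(panel)
```

Because a_i cancels in the within-person deviations, the estimator recovers the causal slope despite the unmeasured confounder.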

20. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams

Directory of Open Access Journals (Sweden)

Yuanyuan Yu

2017-12-01

1. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

Science.gov (United States)

Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

2017-12-28

2. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

Science.gov (United States)

The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

3. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan

Directory of Open Access Journals (Sweden)

Weiner Jonathan P

2010-01-01

Abstract Background Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. Methods A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. Results The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. Conclusions Given the

4. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

Science.gov (United States)

Chang, Hsien-Yen; Weiner, Jonathan P

2010-01-18

Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. Given the widespread availability of claims data and the superior explanatory

5. Regression with Sparse Approximations of Data

DEFF Research Database (Denmark)

2012-01-01

We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

6. Spontaneous regression of a congenital melanocytic nevus

Directory of Open Access Journals (Sweden)

Amiya Kumar Nath

2011-01-01

Congenital melanocytic nevus (CMN) may rarely regress, which may also be associated with a halo or vitiligo. We describe a 10-year-old girl who presented with CMN on the left leg since birth, which recently started to regress spontaneously with associated depigmentation in the lesion and at a distant site. Dermoscopy performed at different sites of the regressing lesion demonstrated loss of epidermal pigments first, followed by loss of dermal pigments. Histopathology and Masson-Fontana staining demonstrated lymphocytic infiltration and loss of pigment production in the regressing area. Immunohistochemistry staining (S100 and HMB-45), however, showed that nevus cells were present in the regressing areas.

7. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

DEFF Research Database (Denmark)

Czekaj, Tomasz Gerard

This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression function. The first paper compares parametric and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric...

8. Marital status integration and suicide: A meta-analysis and meta-regression.

Science.gov (United States)

Kyung-Sook, Woo; SangSoo, Shin; Sangjin, Shin; Young-Jeon, Shin

2018-01-01

Marital status is an index of the phenomenon of social integration within social structures and has long been identified as an important predictor of suicide. However, previous meta-analyses have focused only on a particular marital status, or have not sufficiently explored moderators. A meta-analysis of observational studies was conducted to explore the relationships between marital status and suicide and to understand the important moderating factors in this association. Electronic databases were searched to identify studies conducted between January 1, 2000 and June 30, 2016. We performed a meta-analysis, subgroup analysis, and meta-regression of 170 suicide risk estimates from 36 publications. Using a random effects model with adjustment for covariates, the study found that the suicide risk for non-married versus married was OR = 1.92 (95% CI: 1.75-2.12). In the analysis by gender, non-married men exhibited a greater risk of suicide than their married counterparts in all sub-analyses, but women aged 65 years or older showed no significant association between marital status and suicide. The suicide risk in divorced individuals was higher than for non-married individuals in both men and women. The meta-regression showed that gender, age, and sample size affected between-study variation. The results of the study indicated that non-married individuals have an aggregate higher suicide risk than married ones. In addition, gender and age were confirmed as important moderating factors in the relationship between marital status and suicide. Copyright © 2017 Elsevier Ltd. All rights reserved.
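Random-effects pooling of log odds ratios of the kind reported here is commonly done with the DerSimonian-Laird estimator. The sketch below uses invented study ORs and standard errors, not the 170 estimates from the paper, and omits the covariate adjustment:

```python
import math

def dersimonian_laird(log_ors, ses):
    """Random-effects pooling (DerSimonian-Laird) of study log odds ratios.
    Returns the pooled OR and the between-study variance tau^2."""
    w = [1 / se**2 for se in ses]
    fixed = sum(wi * b for wi, b in zip(w, log_ors)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, log_ors))   # heterogeneity
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)
    w_re = [1 / (se**2 + tau2) for se in ses]                     # widened weights
    pooled = sum(wi * b for wi, b in zip(w_re, log_ors)) / sum(w_re)
    return math.exp(pooled), tau2

# Hypothetical study ORs (non-married vs married) and SEs of log(OR).
ors = [1.8, 2.1, 1.6, 2.4, 1.9]
ses = [0.10, 0.15, 0.12, 0.20, 0.08]
pooled_or, tau2 = dersimonian_laird([math.log(o) for o in ors], ses)
```

Meta-regression extends this by modelling the study log ORs as a function of moderators (here gender, age, sample size) with the same inverse-variance weights.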

9. Model building strategy for logistic regression: purposeful selection.

Science.gov (United States)

Zhang, Zhongheng

2016-03-01

Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to assess whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjuster of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used test for the logistic regression model.
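
The likelihood-ratio step of purposeful selection can be sketched as follows (in Python rather than the article's R; the log-likelihood values are hypothetical):

```python
import math

def lr_test_1df(llf_full, llf_reduced):
    """Likelihood-ratio test for dropping a single covariate (df = 1)."""
    lr = 2 * (llf_full - llf_reduced)        # test statistic
    # chi-square survival function with 1 df: P(X > lr) = erfc(sqrt(lr / 2))
    p = math.erfc(math.sqrt(lr / 2))
    return lr, p

# Hypothetical log-likelihoods of a full and a reduced logistic model:
lr, p = lr_test_1df(-120.3, -123.1)
# lr = 5.6, p ≈ 0.018 → dropping the variable significantly worsens model fit
```

SciPy users would write `scipy.stats.chi2.sf(lr, 1)`; the `erfc` identity above is an exact equivalent for one degree of freedom.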

10. Boosting structured additive quantile regression for longitudinal childhood obesity data.

Science.gov (United States)

Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

2013-07-25

Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
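
A minimal illustration of the quantile-regression building block underlying this approach (not the authors' boosting algorithm): the check, or pinball, loss at level tau is minimized among constant predictors by the empirical tau-quantile.

```python
def pinball_loss(y, pred, tau):
    """Check (pinball) loss minimized by quantile regression at level tau."""
    return sum((yi - pred) * (tau - (1 if yi < pred else 0)) for yi in y) / len(y)

# Among constant predictors, the empirical tau-quantile minimizes the loss:
y = [1, 2, 3, 4, 5]
best = min(y, key=lambda c: pinball_loss(y, c, 0.75))   # the 0.75 quantile: 4
```

Replacing the constant with an additive predictor and minimizing this loss by gradient boosting gives the upper BMI quantile curves described in the abstract.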

11. Best friend attachment versus peer attachment in the prediction of adolescent psychological adjustment.

Science.gov (United States)

Wilkinson, Ross B

2010-10-01

This study examined the utility of the newly developed Adolescent Friendship Attachment Scale (AFAS) for the prediction of adolescent psychological health and school attitude. High school students (266 males, 229 females) were recruited from private and public schools in the Australian Capital Territory with ages of participants ranging from 13 to 19 years. Self-report measures of depression, self-esteem, self-competence and school attitude were administered in addition to the AFAS and a short-form of the Inventory of Parental and Peer Attachment (IPPA). Regression analyses revealed that the AFAS Anxious and Avoidant scales added to the prediction of depression, self-esteem, self-competence, and school attitude beyond the contribution of the IPPA. It is concluded that the AFAS taps aspects of adolescent attachment relationships not assessed by the IPPA and provides a useful contribution to research and practice in the area of adolescent psycho-social adjustment.

12. Normal Stress or Adjustment Disorder?

Science.gov (United States)

An adjustment disorder is a type of stress-related mental illness that can affect your feelings, thoughts and behaviors. Signs and symptoms of an adjustment disorder can include: anxiety, poor school or work performance, relationship problems, sadness ...

Science.gov (United States)

Heyser, R. C.

1972-01-01

A timing mechanism was developed to effect an extremely precise, highly resistant fixed resistor. Switches shunt all or a portion of the resistor; the effective resistance is varied over a time interval by adjusting the switch closure rate.

14. Analyses of developmental rate isomorphy in ectotherms: Introducing the Dirichlet regression

Czech Academy of Sciences Publication Activity Database

Boukal S., David; Ditrich, Tomáš; Kutcherov, D.; Sroka, Pavel; Dudová, Pavla; Papáček, M.

2015-01-01

Vol. 10, No. 6 (2015), e0129341 E-ISSN 1932-6203 R&D Projects: GA ČR GAP505/10/0096 Grant - others: European Fund(CZ) PERG04-GA-2008-239543; GA JU(CZ) 145/2013/P Institutional support: RVO:60077344 Keywords: ectotherms Subject RIV: ED - Physiology Impact factor: 3.057, year: 2015 http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0129341

15. The benefits of using quantile regression for analysing the effect of weeds on organic winter wheat

NARCIS (Netherlands)

Casagrande, M.; Makowski, D.; Jeuffroy, M.H.; Valantin-Morison, M.; David, C.

2010-01-01

In organic farming, weeds are one of the threats that limit crop yield. An early prediction of weed effect on yield loss and the size of late weed populations could help farmers and advisors to improve weed management. Numerous studies predicting the effect of weeds on yield have already been

16. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

Science.gov (United States)

Gilstrap, Donald L.

2013-01-01

In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

17. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

DEFF Research Database (Denmark)

Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.

2010-01-01

Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise...

18. Adjustable chain trees for proteins

DEFF Research Database (Denmark)

Winter, Pawel; Fonseca, Rasmus

2012-01-01

A chain tree is a data structure for changing protein conformations. It enables very fast detection of clashes and free energy potential calculations. A modified version of chain trees that adjust themselves to the changing conformations of folding proteins is introduced. This results in much tighter bounding volume hierarchies and therefore fewer intersection checks. Computational results indicate that the efficiency of the adjustable chain trees is significantly improved compared to the traditional chain trees.

19. Electronic gaming and psychosocial adjustment.

Science.gov (United States)

Przybylski, Andrew K

2014-09-01

20. Intermediate and advanced topics in multilevel logistic regression analysis.

Science.gov (United States)

Austin, Peter C; Merlo, Juan

2017-09-10

Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

1. Meta-analyses on viral hepatitis

DEFF Research Database (Denmark)

Gluud, Lise L; Gluud, Christian

2009-01-01

This article summarizes the meta-analyses of interventions for viral hepatitis A, B, and C. Some of the interventions assessed are described in small trials with unclear bias control. Other interventions are supported by large, high-quality trials. Although attempts have been made to adjust...

2. Self-Esteem, Coping Efforts and Marital Adjustment

Directory of Open Access Journals (Sweden)

Claude Bélanger

2014-11-01

Full Text Available The main objective of this study is to investigate the relationship between self-esteem, specific coping strategies and marital adjustment. The sample consists of 216 subjects from 108 couples who completed the Dyadic Adjustment Scale, the Rosenberg Self-Esteem Scale and the Ways of Coping Checklist. The results confirm the presence of a relationship between self-esteem, specific coping strategies and marital adjustment in men and women. High self-esteem and marital adjustment are associated with the use of problem solving strategies and less avoidance as a way of coping. Moreover, cross analyses reveal that one’s feelings of self-worth are associated with his/her spouse's marital adjustment. The theoretical implications of these results are discussed.

3. Applied regression analysis a research tool

CERN Document Server

Pantula, Sastry; Dickey, David

1998-01-01

Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

4. Potential misinterpretation of treatment effects due to use of odds ratios and logistic regression in randomized controlled trials.

Directory of Open Access Journals (Sweden)

Mirjam J Knol

Full Text Available BACKGROUND: In randomized controlled trials (RCTs), the odds ratio (OR) can substantially overestimate the risk ratio (RR) if the incidence of the outcome is over 10%. This study determined the frequency of use of ORs, the frequency of overestimation of the OR as compared with its accompanying RR in published RCTs, and we assessed how often regression models that calculate RRs were used. METHODS: We included 288 RCTs published in 2008 in five major general medical journals (Annals of Internal Medicine, British Medical Journal, Journal of the American Medical Association, Lancet, New England Journal of Medicine). If an OR was reported, we calculated the corresponding RR, and we calculated the percentage of overestimation by using the formula . RESULTS: Of 193 RCTs with a dichotomous primary outcome, 24 (12.4%) presented a crude and/or adjusted OR for the primary outcome. In five RCTs (2.6%), the OR differed more than 100% from its accompanying RR on the log scale. Forty-one of all included RCTs (n = 288; 14.2%) presented ORs for other outcomes, or for subgroup analyses. Nineteen of these RCTs (6.6%) had at least one OR that deviated more than 100% from its accompanying RR on the log scale. Of 53 RCTs that adjusted for baseline variables, 15 used logistic regression. Alternative methods to estimate RRs were only used in four RCTs. CONCLUSION: ORs and logistic regression are often used in RCTs and in many articles the OR did not approximate the RR. Although the authors did not explicitly misinterpret these ORs as RRs, misinterpretation by readers can seriously affect treatment decisions and policy making.
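
The record elides the authors' exact formula; independently of it, the standard conversion from an OR to an RR given the control-group risk p0 (a sketch, not taken from the paper) shows how the overestimation grows with incidence:

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a risk ratio given the control-group risk p0."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a 50% control-group incidence, OR = 2.0 overstates the RR considerably:
rr = or_to_rr(2.0, 0.50)                  # ≈ 1.33
overestimation = (2.0 - rr) / rr * 100    # percent by which the OR exceeds the RR
```

At low incidence (p0 near 0) the same OR approaches the RR, which is why the 10% threshold in the abstract matters.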

International Nuclear Information System (INIS)

Shen Hong

2008-01-01

Miniaturization is a requirement in engineering to produce competitive products in the fields of the optical and electronic industries. Laser micro-adjustment is a new and promising technology for sheet metal actuator systems. Efforts have been made to understand the mechanisms of metal plate forming using a laser heating source. Three mechanisms have been proposed for describing laser forming processes in different scenarios, namely the temperature gradient mechanism (TGM), the buckling mechanism and the upsetting mechanism (UM). However, none of these mechanisms can fully describe the deformation mechanisms involved in laser micro-adjustment. Based on thermal and elastoplastic analyses, a coupled TGM and UM is presented in this paper to illustrate the thermal mechanical behaviours of two-bridge actuators when applying a laser forming process. To validate the proposed coupling mechanism, numerical simulations are carried out and the corresponding results demonstrate the proposed mechanism. The mechanism of laser micro-adjustment could be taken as a supplement to the laser forming process.

6. Alcohol use longitudinally predicts adjustment and impairment in college students with ADHD: The role of executive functions.

Science.gov (United States)

Langberg, Joshua M; Dvorsky, Melissa R; Kipperman, Kristen L; Molitor, Stephen J; Eddy, Laura D

2015-06-01

7. Regression models of reactor diagnostic signals

International Nuclear Information System (INIS)

Vavrin, J.

1989-01-01

The application of an autoregression model, the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. A method of diagnostics using a regression-type diagnostic data base and regression spectral diagnostics is described, along with the diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)
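
A toy sketch of the autoregression idea (illustrative, not reactor data): the AR(1) coefficient of a signal can be estimated by least squares on the lagged series.

```python
def fit_ar1(x):
    """Least-squares estimate of phi in the model x[t] = phi * x[t-1] + e[t]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# A noise-free AR(1) signal with phi = 0.8 is recovered exactly:
signal = [1.0]
for _ in range(20):
    signal.append(0.8 * signal[-1])
phi = fit_ar1(signal)   # ≈ 0.8
```

In diagnostic use, departures of the fitted coefficients (or the model's residual spectrum) from their normal-condition baseline flag anomalous modes.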

8. Attachment to Parents and Depressive Symptoms in College Students: The Mediating Role of Initial Emotional Adjustment and Psychological Needs

Directory of Open Access Journals (Sweden)

Sanja Smojver-Ažić

2015-04-01

Full Text Available The aim of the present study was to explore the role of parental attachment in students' depressive symptoms. We examined whether initial emotional adjustment and psychological needs would serve as mediators of the relationship between attachment dimensions (anxiety and avoidance) and depressive symptoms. The sample consisted of 219 students (143 females) randomly selected from the University of Rijeka, Croatia, with a mean age of 19.02 years. Participants provided self-reports on the Experiences in Close Relationship Inventory and The Student Adaptation to College Questionnaire at the beginning of the first year of college, and The Basic Psychological Needs Satisfaction Scale and Beck Depression Inventory-II in the third year of college. Results of hierarchical regression analyses confirm that emotional adjustment had a full mediation effect on the anxiety dimension and a partial mediation effect on the avoidance dimension. Only a partial mediation effect of the psychological needs for autonomy and relatedness between attachment and depressive symptoms was found. The findings of this study support research indicating the importance of parental attachment for college students, not only through its direct effects on depressive symptoms, but also through effects on initial emotional adjustment and the satisfaction of psychological needs. The results of the mediation analysis suggest that both attachment dimensions and emotional adjustment, as well as psychological need satisfaction, have a substantial shared variance when predicting depressive symptoms and that each variable also gives a unique contribution to depressive symptoms.

9. Self-discrepancies in work-related upper extremity pain: relation to emotions and flexible-goal adjustment.

Science.gov (United States)

Goossens, Mariëlle E; Kindermans, Hanne P; Morley, Stephen J; Roelofs, Jeffrey; Verbunt, Jeanine; Vlaeyen, Johan W

2010-08-01

Recurrent pain not only has an impact on disability, but on the long term it may become a threat to one's sense of self. This paper presents a cross-sectional study of patients with work-related upper extremity pain and focuses on: (1) the role of self-discrepancies in this group, (2) the associations between self-discrepancies, pain, emotions and (3) the interaction between self-discrepancies and flexible-goal adjustment. Eighty-nine participants completed standardized self-report measures of pain intensity, pain duration, anxiety, depression and flexible-goal adjustment. A Selves Questionnaire was used to generate self-discrepancies. A series of hierarchical regression analyses showed relationships between actual-ought other, actual-ought self, actual-feared self-discrepancies and depression as well as a significant association between actual-ought other self-discrepancy and anxiety. Furthermore, significant interactions were found between actual-ought other self-discrepancies and flexibility, indicating that less flexible participants with large self-discrepancies score higher on depression. This study showed that self-discrepancies are related to negative emotions and that flexible-goal adjustment served as a moderator in this relationship. The view of self in pain and flexible-goal adjustment should be considered as important variables in the process of chronic pain. Copyright (c) 2009 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.

10. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

Science.gov (United States)

Bulcock, J. W.

The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

11. Multivariate Regression Analysis and Slaughter Livestock,

Science.gov (United States)

(*AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS, ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY

12. [From clinical judgment to linear regression model].

Science.gov (United States)

Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

2013-01-01

When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
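
The quantities described above (intercept "a", slope "b", and R²) can be computed directly; a short sketch with illustrative data:

```python
def linear_regression(xs, ys):
    """Ordinary least squares for the line Y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # slope (regression coefficient): covariance of x and y over variance of x
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar            # intercept: value of Y when x = 0
    # R²: share of the variance in Y explained by x
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

a, b, r2 = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])   # exact line y = 1 + 2x
```

For these points the fit is exact, so R² = 1; with clinical data R² quantifies how much of the outcome the predictor explains.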

13. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

Science.gov (United States)

Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

2016-04-01

The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control), an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.

14. Use of probabilistic weights to enhance linear regression myoelectric control

Science.gov (United States)

Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

2015-12-01

Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
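
The probability-weighting idea can be sketched in one dimension: fit a Gaussian model per movement class, then scale the regression output by the posterior probability that movement was intended (hypothetical single-feature class models, not the authors' multichannel EMG system):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian density, the class likelihood model."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def weighted_velocity(feature, reg_slope, class_models):
    """Scale a linear-regression velocity estimate by the posterior probability
    that the user intended any movement at this DOF (equal priors assumed)."""
    likes = {c: gauss_pdf(feature, mu, sd) for c, (mu, sd) in class_models.items()}
    p_move = 1 - likes["rest"] / sum(likes.values())   # posterior P(movement)
    return reg_slope * feature * p_move

# Hypothetical 1-D models: rest near 0, flexion/extension at higher amplitudes
models = {"rest": (0.0, 0.5), "flex": (3.0, 1.0), "ext": (-3.0, 1.0)}
v_rest = weighted_velocity(0.1, 1.0, models)   # near zero: movement suppressed
v_move = weighted_velocity(3.0, 1.0, models)   # large: movement passed through
```

This reproduces the behavior described in the results: small, rest-like activity is attenuated (fewer extraneous movements), while clearly intended movement passes through the regression output nearly unchanged.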

15. Regression modeling methods, theory, and computation with SAS

CERN Document Server

Panik, Michael

2009-01-01

Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

16. Combination of supervised and semi-supervised regression models for improved unbiased estimation

DEFF Research Database (Denmark)

Arenas-Garía, Jeronimo; Moriana-Varo, Carlos; Larsen, Jan

2010-01-01

In this paper we investigate the steady-state performance of semisupervised regression models adjusted using a modified RLS-like algorithm, identifying the situations where the new algorithm is expected to outperform standard RLS. By using an adaptive combination of the supervised and semisupervised…

17. The mediating role of shame in the relationship between childhood bullying victimization and adult psychosocial adjustment.

Science.gov (United States)

Strøm, Ida Frugård; Aakvaag, Helene Flood; Birkeland, Marianne Skogbrott; Felix, Erika; Thoresen, Siri

2018-01-01

Background: Psychological distress following bullying victimization in childhood has been well documented. Less is known about the impact of bullying victimization on psychosocial adjustment problems in young adulthood and about potential pathways, such as shame. Moreover, bullying victimization is often studied in isolation from other forms of victimization. Objective: This study investigated (1) whether childhood experiences of bullying victimization and violence were associated with psychosocial adjustment (distress, impaired functioning, social support barriers) in young adulthood; (2) the unique effect of bullying victimization on psychosocial adjustment; and (3) whether shame mediated the relationship between bullying victimization and these outcomes in young adulthood. Method: The sample included 681 respondents (aged 19-37 years) from a follow-up study (2017) conducted via phone interviews derived from a community telephone survey collected in 2013. Results: The regression analyses showed that both bullying victimization and severe violence were significantly and independently associated with psychological distress, impaired functioning, and increased barriers to social support in young adulthood. Moreover, causal mediation analyses indicated that when childhood physical violence, sexual abuse, and sociodemographic factors were controlled, shame mediated 70% of the association between bullying victimization and psychological distress, 55% of the association between bullying victimization and impaired functioning, and 40% of the association between bullying victimization and social support barriers. Conclusions: Our findings support the growing literature acknowledging bullying victimization as a trauma with severe and long-lasting consequences and indicate that shame may be an important pathway to continue to explore. The unique effect of bullying victimization, over and above the effect of violence, supports the call to integrate the two

African Journals Online (AJOL)

Fiscal adjustment is an essential element of macro-economic stability and economic growth. Given that economic growth is the most powerful weapon in the fight for higher living standards, poor growth performance in African countries, has been a challenge to economists, policy makers and international development ...

19. GPU Parallel Bundle Block Adjustment

Directory of Open Access Journals (Sweden)

ZHENG Maoteng

2017-09-01

Full Text Available To deal with massive data in photogrammetry, we introduce the GPU parallel computing technology. The preconditioned conjugate gradient and inexact Newton method are also applied to decrease the iteration times while solving the normal equation. A brand new workflow of bundle adjustment is developed to utilize GPU parallel computing technology. Our method can avoid the storage and inversion of the big normal matrix, and compute the normal matrix in real time. The proposed method can not only largely decrease the memory requirement of normal matrix, but also largely improve the efficiency of bundle adjustment. It also achieves the same accuracy as the conventional method. Preliminary experiment results show that the bundle adjustment of a dataset with about 4500 images and 9 million image points can be done in only 1.5 minutes while achieving sub-pixel accuracy.
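
The preconditioned conjugate gradient step mentioned in the abstract can be sketched as follows (a plain-Python, Jacobi-preconditioned version on a toy SPD system; the authors' GPU implementation is of course far more elaborate):

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Jacobi-preconditioned conjugate gradient for a small SPD system Ax = b."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(matvec(A, x))]   # initial residual
    z = [r[i] / A[i][i] for i in range(n)]               # preconditioner M = diag(A)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# A tiny SPD "normal equation" stand-in:
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)   # x ≈ [1/11, 7/11]
```

The attraction for bundle adjustment is exactly what the abstract states: only matrix-vector products with the normal matrix are needed, so the matrix never has to be stored or inverted explicitly.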

20. Psychosocial Predictors of Adjustment among First Year College of Education Students

Science.gov (United States)

Salami, Samuel O.

2011-01-01

The purpose of this study was to examine the contribution of psychological and social factors to the prediction of adjustment to college. A total of 250 first year students from colleges of education in Kwara State, Nigeria, completed measures of self-esteem, emotional intelligence, stress, social support and adjustment. Regression analyses…

Science.gov (United States)

Hickman, Gregory P.; Bartholomae, Suzanne; McKenry, Patrick C.

2000-01-01

Examines the relationship between parenting styles and academic achievement and adjustment of traditional college freshmen (N=101). Multiple regression models indicate that authoritative parenting style was positively related to student's academic adjustment. Self-esteem was significantly predictive of social, personal-emotional, goal…

2. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

Science.gov (United States)

Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

2012-06-21

Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but
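
The paper estimates adjusted relative risks with Poisson regression; as a lightweight stand-in that conveys the same crude-versus-adjusted logic, a Mantel-Haenszel stratified risk ratio can be computed by hand (hypothetical counts, not the study's data):

```python
def mantel_haenszel_rr(strata):
    """Mantel-Haenszel risk ratio adjusted across strata.
    Each stratum: (events_exposed, n_exposed, events_unexposed, n_unexposed)."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Hypothetical CS counts for one hospital vs. a reference hospital,
# stratified into two Robson-style groups:
strata = [(30, 100, 20, 100),   # stratum 1: 30/100 vs 20/100
          (10, 50, 5, 50)]      # stratum 2: 10/50 vs 5/50
rr_adj = mantel_haenszel_rr(strata)   # 1.6
```

Comparing this stratum-adjusted RR with the crude pooled RR is the hand-computable analogue of the paper's M1-versus-crude comparison; Poisson regression generalizes it to many simultaneous covariates.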

3. Prediction of radiation levels in residences: A methodological comparison of CART [Classification and Regression Tree Analysis] and conventional regression

International Nuclear Information System (INIS)

Janssen, I.; Stebbings, J.H.

1990-01-01

In environmental epidemiology, trace and toxic substance concentrations frequently have very highly skewed distributions ranging over one or more orders of magnitude, and prediction by conventional regression is often poor. Classification and Regression Tree Analysis (CART) is an alternative in such contexts. To compare the techniques, two Pennsylvania data sets and three independent variables are used: house radon progeny (RnD) and gamma levels as predicted by construction characteristics in 1330 houses; and ∼200 house radon (Rn) measurements as predicted by topographic parameters. CART may identify structural variables of interest not identified by conventional regression, and vice versa, but in general the regression models are similar. CART has major advantages in dealing with other common characteristics of environmental data sets, such as missing values, continuous variables requiring transformations, and large sets of potential independent variables. CART is most useful in the identification and screening of independent variables, greatly reducing the need for cross-tabulations and nested breakdown analyses. There is no need to discard cases with missing values for the independent variables because surrogate variables are intrinsic to CART. The tree-structured approach is also independent of the scale on which the independent variables are measured, so that transformations are unnecessary. CART identifies important interactions as well as main effects. The major advantages of CART appear to be in exploring data. Once the important variables are identified, conventional regression leads to similar results that are more interpretable by most audiences. 12 refs., 8 figs., 10 tabs.
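A minimal sketch of the CART-versus-regression comparison on synthetic data. The data are invented (not the Pennsylvania data sets): the response depends on an interaction between two predictors, which a plain linear fit cannot capture but a shallow tree can, echoing the abstract's point about CART handling interactions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic data: response driven by an interaction between a binary-acting
# structural feature (X0 > 0.5) and a continuous feature X1
X = rng.uniform(0, 1, size=(500, 2))
y = 2.0 * (X[:, 0] > 0.5) * X[:, 1] + rng.normal(0, 0.1, 500)

lin = LinearRegression().fit(X, y)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(f"linear R^2: {lin.score(X, y):.2f}, tree R^2: {tree.score(X, y):.2f}")
```

The tree recovers the interaction with a handful of splits; the linear model can only average over it.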

4. Body mass index adjustments to increase the validity of body fatness assessment in UK Black African and South Asian children.

Science.gov (United States)

Hudda, M T; Nightingale, C M; Donin, A S; Fewtrell, M S; Haroun, D; Lum, S; Williams, J E; Owen, C G; Rudnicka, A R; Wells, J C K; Cook, D G; Whincup, P H

2017-07-01

Body mass index (BMI) (weight/height²) is the most widely used marker of childhood obesity and total body fatness (BF). However, its validity is limited, especially in children of South Asian and Black African origins. We aimed to quantify BMI adjustments needed for UK children of Black African and South Asian origins so that adjusted BMI related to BF in the same way as for White European children. We used data from four recent UK studies that made deuterium dilution BF measurements in UK children of White European, South Asian and Black African origins. A height-standardized fat mass index (FMI) was derived to represent BF. Linear regression models were then fitted, separately for boys and girls, to quantify ethnic differences in BMI-FMI relationships and to provide ethnic-specific BMI adjustments. We restricted analyses to 4-12-year-olds, to whom a single consistent FMI (fat mass/height⁵) could be applied. BMI consistently underestimated BF in South Asians, requiring positive BMI adjustments of +1.12 kg m⁻² (95% confidence interval (CI): 0.83, 1.41 kg m⁻²; P < …). BMI overestimated BF in Black Africans, requiring negative BMI adjustments for Black African children. However, these were complex because there were statistically significant interactions between Black African ethnicity and FMI (P=0.004 boys; P=0.003 girls) and also between FMI and age group (P < …) in Black Africans. Ethnic-specific adjustments, increasing BMI in South Asians and reducing BMI in Black Africans, can improve the accuracy of BF assessment in these children.

5. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

Science.gov (United States)

This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

6. Hierarchical regression analysis in structural Equation Modeling

NARCIS (Netherlands)

de Jong, P.F.

1999-01-01

In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main
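The fixed-order idea can be sketched directly: enter blocks of predictors in a prespecified order and report the increment in R² at each step. The data and block structure below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)             # block 1: control variable, entered first
x2 = 0.5 * x1 + rng.normal(size=n)  # block 2: predictor of interest
y = 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)

def r2(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_block1 = r2(x1[:, None], y)                    # step 1: controls only
r2_block2 = r2(np.column_stack([x1, x2]), y)      # step 2: add predictor
delta_r2 = r2_block2 - r2_block1                  # extra variance due to x2
print(f"R2 step 1: {r2_block1:.3f}, step 2: {r2_block2:.3f}, delta: {delta_r2:.3f}")
```

The increment ΔR² is the quantity the abstract refers to: the extra variance accounted for by the variable entered last, over and above the earlier blocks.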

7. Categorical regression dose-response modeling

Science.gov (United States)

The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

8. Variable importance in latent variable regression models

NARCIS (Netherlands)

Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

2014-01-01

The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

9. Stepwise versus Hierarchical Regression: Pros and Cons

Science.gov (United States)

Lewis, Mitzi

2007-01-01

Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

10. Suppression Situations in Multiple Linear Regression

Science.gov (United States)

Shieh, Gwowen

2006-01-01

This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

11. Gibrat’s law and quantile regressions

DEFF Research Database (Denmark)

Distante, Roberta; Petrella, Ivan; Santoro, Emiliano

2017-01-01

The nexus between firm growth, size and age in U.S. manufacturing is examined through the lens of quantile regression models. This methodology allows us to overcome serious shortcomings entailed by linear regression models employed by much of the existing literature, unveiling a number of important...
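The appeal of quantile regression here is that the size effect can differ across the conditional growth distribution. A rough sketch on invented firm-level data follows, using the standard linear-programming formulation of quantile regression (minimize the check loss); the variable names and effect sizes are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n = 300
# Synthetic data: "growth" declines with "size", with noise that widens as
# firms grow, so the size slope differs across quantiles
size = rng.uniform(0, 10, n)
growth = 1.0 - 0.1 * size + rng.normal(0, 0.2 + 0.1 * size, n)
X = np.column_stack([np.ones(n), size])

def quantile_fit(tau):
    """Quantile regression as a linear program:
    minimize tau*u_pos + (1-tau)*u_neg  s.t.  X beta + u_pos - u_neg = y."""
    k = X.shape[1]
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=growth, bounds=bounds, method="highs")
    return res.x[:k]

slopes = {tau: quantile_fit(tau)[1] for tau in (0.1, 0.5, 0.9)}
print({tau: round(s, 3) for tau, s in slopes.items()})
```

The fitted size slope varies monotonically across quantiles, exactly the kind of heterogeneity a single OLS slope would average away.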

12. Regression Analysis and the Sociological Imagination

Science.gov (United States)

De Maio, Fernando

2014-01-01

Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

13. Repeated Results Analysis for Middleware Regression Benchmarking

Czech Academy of Sciences Publication Activity Database

Bulej, Lubomír; Kalibera, T.; Tůma, P.

2005-01-01

Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

14. Principles of Quantile Regression and an Application

Science.gov (United States)

Chen, Fang; Chalhoub-Deville, Micheline

2014-01-01

Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

15. ON REGRESSION REPRESENTATIONS OF STOCHASTIC-PROCESSES

NARCIS (Netherlands)

RUSCHENDORF, L; DEVALK

We construct a.s. nonlinear regression representations of general stochastic processes (X(n))n is-an-element-of N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

16. Regression of environmental noise in LIGO data

International Nuclear Information System (INIS)

Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

2015-01-01

We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
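The core regression step, predicting the GW-channel noise from a witness (PEM) channel, can be sketched with a least-squares FIR filter, a simple time-domain cousin of the Wiener filter described above. Everything below is synthetic single-channel data, not LIGO data.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000
witness = rng.normal(size=n)             # simulated PEM channel
h_true = np.array([0.5, 0.3, -0.2])      # assumed coupling to the GW channel
coupled = np.convolve(witness, h_true, mode="full")[:n]
gw = coupled + 0.1 * rng.normal(size=n)  # GW channel = coupled noise + floor

# Least-squares FIR filter: regress the GW channel on lagged witness samples
lags = 3
X = np.column_stack([np.roll(witness, k) for k in range(lags)])
X[:lags] = 0                             # drop wrapped-around samples
h, *_ = np.linalg.lstsq(X, gw, rcond=None)
residual = gw - X @ h                    # GW channel after noise subtraction
print(f"rms before: {gw.std():.3f}, after: {residual.std():.3f}")
```

The recovered filter taps approximate the coupling, and the residual drops to the uncorrelated floor; the paper's frequency-domain Wiener filter banks generalize this to many channels and time-frequency tiles.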

17. Pathological assessment of liver fibrosis regression

Directory of Open Access Journals (Sweden)

WANG Bingqiong

2017-03-01

Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

18. Should metacognition be measured by logistic regression?

Science.gov (United States)

Rausch, Manuel; Zehetleitner, Michael

2017-03-01

Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
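The quantity at issue, the logistic regression slope of task accuracy on confidence ratings, can be sketched on simulated data. The generative model below is the simple independent-evidence case with equal-variance Gaussian confidence distributions (my assumption for illustration, not the paper's hierarchical model), in which the population slope equals the separation of the two confidence distributions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
correct = rng.integers(0, 2, n)  # primary task accuracy (0/1)
# Confidence carries information about accuracy: unit-variance Gaussians
# centered at 0 (incorrect) and 1 (correct), so the true logistic slope is 1
conf = rng.normal(loc=correct.astype(float), scale=1.0, size=n)

model = LogisticRegression().fit(conf[:, None], correct)
slope = model.coef_[0, 0]  # candidate metacognitive-sensitivity measure
print(f"logistic slope: {slope:.2f}")
```

A steeper slope means confidence discriminates correct from incorrect responses better; the paper's point is that this number is only criterion-free under specific model assumptions.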

19. The Unified Levelling Network of Sarawak and its Adjustment

Science.gov (United States)

Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.

2016-09-01

1. A SURVEY OF DEATH ADJUSTMENT IN THE INDIAN SUBCONTINENT.

Science.gov (United States)

Hossain, Mohammad Samir; Irfan, Muhammad; Balhara, Yatan Pal Singh; Giasuddin, Noor Ahmed; Sultana, Syeda Naheed

2015-01-01

2. Accounting for standard errors of vision-specific latent trait in regression models.

Science.gov (United States)

Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

2014-07-11

To demonstrate the effectiveness of Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. The effectiveness of this novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously and was compared with the frequently used two-stage "separate-analysis" approach in our simulation study (Rasch analysis followed by traditional statistical analyses without adjustment for SE of latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration SEs of the estimated Rasch-scaled scores. The two models on real data differed for effect size estimations and the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure despite accounting for greater uncertainty due to the latent trait. Patient-reported data, using Rasch analysis techniques, do not take into account the SE of latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits

3. Logistic Regression Analysis of Operational Errors and Routine Operations Using Sector Characteristics

National Research Council Canada - National Science Library

Pfleiderer, Elaine M; Scroggins, Cheryl L; Manning, Carol A

2009-01-01

Two separate logistic regression analyses were conducted for low- and high-altitude sectors to determine whether a set of dynamic sector characteristics variables could reliably discriminate between operational error (OE...

4. Is antipsychotic polypharmacy associated with metabolic syndrome even after adjustment for lifestyle effects?: a cross-sectional study

Directory of Open Access Journals (Sweden)

Okumura Yasuyuki

2011-07-01

Background Although the validity and safety of antipsychotic polypharmacy remains unclear, it is commonplace in the treatment of schizophrenia. This study aimed to investigate the degree to which antipsychotic polypharmacy contributed to metabolic syndrome in outpatients with schizophrenia, after adjustment for the effects of lifestyle. Methods A cross-sectional survey was carried out between April 2007 and October 2007 at Yamanashi Prefectural KITA hospital in Japan. 334 patients consented to this cross-sectional study. We measured the components of metabolic syndrome, and interviewed the participants about their lifestyle. We classified metabolic syndrome into four groups according to the severity of metabolic disturbance: the metabolic syndrome; the pre-metabolic syndrome; the visceral fat obesity; and the normal group. We used multinomial logistic regression models to assess the association of metabolic syndrome with antipsychotic polypharmacy, adjusting for lifestyle. Results Seventy-four (22.2%) patients were in the metabolic syndrome group, 61 (18.3%) patients were in the pre-metabolic syndrome group, and 41 (12.3%) patients were in the visceral fat obesity group. Antipsychotic polypharmacy was present in 167 (50.0%) patients. In multinomial logistic regression analyses, antipsychotic polypharmacy was significantly associated with the pre-metabolic syndrome group (adjusted odds ratio [AOR], 2.348; 95% confidence interval [CI], 1.181-4.668), but not with the metabolic syndrome group (AOR, 1.269; 95% CI, 0.679-2.371). Conclusions These results suggest that antipsychotic polypharmacy, compared with monotherapy, may be independently associated with an increased risk of having pre-metabolic syndrome, even after adjusting for patients' lifestyle characteristics. As metabolic syndrome is associated with an increased risk of cardiovascular mortality, further studies are needed to clarify the validity and safety of antipsychotic polypharmacy.
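A multinomial logistic model of this shape can be sketched on simulated data. The covariates, effect sizes, and category coding below are invented for illustration; note that scikit-learn's softmax coefficients are not reference-coded, so the adjusted odds ratio versus the baseline category is an exponentiated coefficient difference.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
polypharmacy = rng.integers(0, 2, n)
smoking = rng.integers(0, 2, n)  # stands in for the lifestyle covariates

# Outcome categories: 0 normal, 1 visceral fat obesity, 2 pre-MetS, 3 MetS.
# Simulated so that polypharmacy raises the odds of pre-MetS only.
logits = np.zeros((n, 4))
logits[:, 2] = 0.85 * polypharmacy + 0.3 * smoking
logits[:, 3] = 0.3 * smoking
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(4, p=p) for p in probs])

X = np.column_stack([polypharmacy, smoking])
model = LogisticRegression(max_iter=1000).fit(X, outcome)
# AOR of polypharmacy for each outcome category versus the "normal" baseline
aor_pre = np.exp(model.coef_[2, 0] - model.coef_[0, 0])
aor_mets = np.exp(model.coef_[3, 0] - model.coef_[0, 0])
print(f"AOR pre-MetS: {aor_pre:.2f}, AOR MetS: {aor_mets:.2f}")
```

The fit recovers an elevated AOR for the pre-MetS category and a null AOR for full MetS, mirroring the pattern of results reported in the abstract.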

5. Kinematic adjustments to seismic recordings

Energy Technology Data Exchange (ETDEWEB)

Telegin, A.N.; Levii, N.V.; Volovik, U.M.

1981-01-01

The introduction of kinematic adjustments by adding the displaced blocks is studied theoretically and in test seismograms. The advantage of this method, which results from the weight variation in the trace, is demonstrated together with its kinematic drawback. A variation on the displaced-block addition method that does not involve realignment of the travel time curves and that has improved amplitude characteristics is proposed.

6. Risk-adjusted hospital outcomes for children's surgery.

Science.gov (United States)

Saito, Jacqueline M; Chen, Li Ern; Hall, Bruce L; Kraemer, Kari; Barnhart, Douglas C; Byrd, Claudia; Cohen, Mark E; Fei, Chunyuan; Heiss, Kurt F; Huffman, Kristopher; Ko, Clifford Y; Latus, Melissa; Meara, John G; Oldham, Keith T; Raval, Mehul V; Richards, Karen E; Shah, Rahul K; Sutton, Laura C; Vinocur, Charles D; Moss, R Lawrence

2013-09-01

BACKGROUND The American College of Surgeons National Surgical Quality Improvement Program-Pediatric was initiated in 2008 to drive quality improvement in children's surgery. Low mortality and morbidity in previous analyses limited differentiation of hospital performance. Participating institutions included children's units within general hospitals and free-standing children's hospitals. Cases selected by Current Procedural Terminology codes encompassed procedures within pediatric general, otolaryngologic, orthopedic, urologic, plastic, neurologic, thoracic, and gynecologic surgery. Trained personnel abstracted demographic, surgical profile, preoperative, intraoperative, and postoperative variables. Incorporating procedure-specific risk, hierarchical models for 30-day mortality and morbidities were developed with significant predictors identified by stepwise logistic regression. Reliability was estimated to assess the balance of information versus error within models. In 2011, 46,281 patients from 43 hospitals were accrued; 1467 codes were aggregated into 226 groupings. Overall mortality was 0.3%, composite morbidity 5.8%, and surgical site infection (SSI) 1.8%. Hierarchical models revealed outlier hospitals with above or below expected performance for composite morbidity in the entire cohort, pediatric abdominal subgroup, and spine subgroup; SSI in the entire cohort and pediatric abdominal subgroup; and urinary tract infection in the entire cohort. Based on reliability estimates, mortality discriminates performance poorly due to very low event rate; however, reliable model construction for composite morbidity and SSI that differentiate institutions is feasible. The National Surgical Quality Improvement Program-Pediatric expansion has yielded risk-adjusted models to differentiate hospital performance in composite and specific morbidities. However, mortality has low utility as a children's surgery performance indicator. Programmatic improvements have resulted in…

7. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

Directory of Open Access Journals (Sweden)

Roland Pfister

2013-10-01

Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
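The two-step workflow these tools automate, fit one regression per participant, then analyze the slopes at the group level, can be sketched with plain NumPy on simulated repeated-measures data (participant count, trial count, and effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# 10 simulated participants, 20 trials each, each with an individual slope
true_slopes = rng.normal(1.0, 0.3, 10)
slopes = []
for s in true_slopes:
    x = np.arange(20, dtype=float)
    y = s * x + rng.normal(0, 1.0, 20)
    slopes.append(np.polyfit(x, y, 1)[0])  # leading coefficient = slope

slopes = np.array(slopes)
# Second-level analysis: are slopes reliably positive across participants?
t = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(len(slopes)))
print(f"mean slope {slopes.mean():.2f}, one-sample t = {t:.1f}")
```

Each fitted slope is one participant's regression coefficient; the vector of slopes then feeds any standard group-level test.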

8. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

Science.gov (United States)

Boone, Spencer

2017-01-01

This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

9. Regression modeling of ground-water flow

Science.gov (United States)

Cooley, R.L.; Naff, R.L.

1985-01-01

Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

10. Logistic Regression in the Identification of Hazards in Construction

Science.gov (United States)

Drozd, Wojciech

2017-10-01

The construction site and its elements create circumstances that are conducive to the formation of risks to safety during the execution of works. Analysis indicates the critical importance of these factors in the set of characteristics that describe the causes of accidents in the construction industry. This article attempts to analyse the characteristics related to the construction site, in order to indicate their importance in defining the circumstances of accidents at work. The study includes sites inspected in 2014-2016 by the employees of the District Labour Inspectorate in Krakow (Poland). The analysed set of detailed (disaggregated) data includes both quantitative and qualitative characteristics. The substantive task focused on classification modelling in the identification of hazards in construction and on identifying those of the analysed characteristics that are important in an accident. Methodologically, the data were analysed using statistical classifiers in the form of logistic regression.

11. Genetic analysis of body weights of individually fed beef bulls in South Africa using random regression models.

Science.gov (United States)

Selapa, N W; Nephawe, K A; Maiwashe, A; Norris, D

2012-02-08

The aim of this study was to estimate genetic parameters for body weights of individually fed beef bulls measured at centralized testing stations in South Africa using random regression models. Weekly body weights of Bonsmara bulls (N = 2919) tested between 1999 and 2003 were available for the analyses. The model included a fixed regression of the body weights on fourth-order orthogonal Legendre polynomials of the actual days on test (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, and 84) for starting age and contemporary group effects. Random regressions on fourth-order orthogonal Legendre polynomials of the actual days on test were included for additive genetic effects and additional uncorrelated random effects of the weaning-herd-year and the permanent environment of the animal. Residual effects were assumed to be independently distributed with heterogeneous variance for each test day. Variance ratios for additive genetic, permanent environment and weaning-herd-year for weekly body weights at different test days ranged from 0.26 to 0.29, 0.37 to 0.44 and 0.26 to 0.34, respectively. The weaning-herd-year was found to have a significant effect on the variation of body weights of bulls despite a 28-day adjustment period. Genetic correlations amongst body weights at different test days were high, ranging from 0.89 to 1.00. Heritability estimates were comparable to literature using multivariate models. Therefore, random regression model could be applied in the genetic evaluation of body weight of individually fed beef bulls in South Africa.
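The fixed and random regressions in this model rest on a design matrix of Legendre polynomials evaluated at the standardized test days. A sketch of that matrix (test days taken from the abstract; the rest is illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

days = np.array([7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84], dtype=float)
# Standardize test day to [-1, 1], the natural domain of Legendre polynomials
t = 2 * (days - days.min()) / (days.max() - days.min()) - 1

# Design matrix with Legendre polynomials P_0 .. P_4 (fourth order)
order = 4
Phi = np.column_stack(
    [legendre.legval(t, np.eye(order + 1)[k]) for k in range(order + 1)]
)
print(Phi.shape)  # (12, 5)
```

Each animal's body-weight trajectory is then modeled as this basis times fixed coefficients plus animal-specific random coefficients, which is what lets genetic variances change smoothly over the test period.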

12. Variable and subset selection in PLS regression

DEFF Research Database (Denmark)

Høskuldsson, Agnar

2001-01-01

The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion...... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

13. Applied Regression Modeling A Business Approach

CERN Document Server

Pardoe, Iain

2012-01-01

An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

14. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

Science.gov (United States)

Navarro-Mateu, Fernando; Escámez, Teresa; Koenen, Karestan C; Alonso, Jordi; Sánchez-Meca, Julio

2013-01-01

To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) up until December 2011. Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots, Cochran's Q statistic, and the I² index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. Current evidence does not support a direct effect of 5-HTTLPR polymorphisms on PTSD. Further analyses of gene-environment interactions, epigenetic modulation and new studies with large samples
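The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator. The study-level log odds ratios and variances below are invented (six hypothetical studies), chosen to produce a null pooled OR like the abstract's result:

```python
import numpy as np

# Illustrative log odds ratios and within-study variances for k studies
log_or = np.array([0.10, -0.05, 0.20, 0.00, -0.12, 0.08])
var = np.array([0.04, 0.06, 0.05, 0.03, 0.08, 0.05])

# Fixed-effect weights and Cochran's Q heterogeneity statistic
w = 1 / var
q = np.sum(w * (log_or - np.sum(w * log_or) / w.sum()) ** 2)
k = len(log_or)
# DerSimonian-Laird between-study variance tau^2 (truncated at zero)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

# Random-effects pooled estimate and 95% CI
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR {np.exp(pooled):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

With these homogeneous inputs, Q falls below its degrees of freedom, tau² is truncated to zero, and the confidence interval straddles OR = 1, the "no association" pattern reported above.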

15. How do Dutch regional labour markets adjust to demand shocks?

OpenAIRE

Broersma, Lourens; Dijk, Jouke van

2002-01-01

This paper analyses the response of regional labour markets in The Netherlands to region-specific labour demand shocks. Whereas previous studies analyse only average patterns of all regions in a country, this paper also provides a more in-depth analysis of within-country differences in labour market adjustment processes. Previous studies show remarkable differences in response between regions in European countries and regions in the United States. The analysis in the present paper shows that i...

16. Birth-Order Complementarity and Marital Adjustment.

Science.gov (United States)

Vos, Cornelia J. Vanderkooy; Hayden, Delbert J.

1985-01-01

Tested the influence of birth-order complementarity on marital adjustment among 327 married women using the Spanier Dyadic Adjustment Scale (1976). Birth-order complementarity was found to be unassociated with marital adjustment. (Author/BL)

17. Linear regression metamodeling as a tool to summarize and present simulation model results.

Science.gov (United States)

Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

2013-10-01

Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
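The metamodeling idea, regress the probabilistic-sensitivity-analysis (PSA) outcomes on standardized inputs so the intercept estimates the base-case result and the coefficients rank parameter influence, can be sketched with a toy decision model. All distributions and numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
# PSA draws for two hypothetical model inputs
p_cure = rng.beta(20, 80, n)            # probability of cure
cost_tx = rng.normal(10_000, 1_000, n)  # treatment cost
# Toy decision-model output: net monetary benefit per draw
nmb = 50_000 * p_cure - cost_tx + rng.normal(0, 500, n)

# Standardize inputs so coefficients are comparable per-SD sensitivities
Z = np.column_stack([(v - v.mean()) / v.std() for v in (p_cure, cost_tx)])
X = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(X, nmb, rcond=None)
print(f"base case (intercept): {beta[0]:.0f}, "
      f"per-SD effects: p_cure={beta[1]:.0f}, cost={beta[2]:.0f}")
```

Because the regressors are mean-centered, the intercept equals the mean PSA outcome, and each coefficient is the outcome change per standard deviation of that input, which is the one-number sensitivity summary the article advocates.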

18. An epidemiological survey on road traffic crashes in Iran: application of the two logistic regression models.

Science.gov (United States)

Bakhtiyari, Mahmood; Mehmandar, Mohammad Reza; Mirbagheri, Babak; Hariri, Gholam Reza; Delpisheh, Ali; Soori, Hamid

2014-01-01

Risk factors of human-related traffic crashes are the most important and preventable challenges for community health due to their noteworthy burden in developing countries in particular. The present study aims to investigate the role of human risk factors of road traffic crashes in Iran. Through a cross-sectional study using the COM 114 data collection forms, the police records of almost 600,000 crashes that occurred in 2010 are investigated. The binary logistic regression and proportional odds regression models are used. The odds ratio for each risk factor is calculated. These models are adjusted for known confounding factors including age, sex and driving time. The traffic crash reports of 537,688 men (90.8%) and 54,480 women (9.2%) are analysed. The mean age is 34.1 ± 14 years. Not maintaining eyes on the road (53.7%) and losing control of the vehicle (21.4%) are the main causes of drivers' deaths in traffic crashes within cities. Not maintaining eyes on the road is also the most frequent human risk factor for road traffic crashes outside cities. Sudden lane excursion (OR = 9.9, 95% CI: 8.2-11.9), seat belt non-compliance (OR = 8.7, CI: 6.7-10.1), exceeding authorised speed (OR = 17.9, CI: 12.7-25.1) and exceeding safe speed (OR = 9.7, CI: 7.2-13.2) are the most significant human risk factors for traffic crashes in Iran. The high mortality rate of 39 people for every 100,000 population emphasises the importance of traffic crashes in Iran. Considering the important role of human risk factors in traffic crashes, sustained efforts are required to control dangerous driving behaviours such as exceeding speed, illegal overtaking and not maintaining eyes on the road.
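
The adjusted odds ratios reported above come from standard binary logistic regression. A minimal sketch of that computation, on invented data (not the Iranian police records), looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Invented crash records: one exposure (speeding) and two confounders.
speeding = rng.integers(0, 2, n)
age = rng.normal(35, 12, n)
male = rng.integers(0, 2, n)

# Simulate a fatal-outcome indicator with a true speeding odds ratio of 3.
logit = -3.0 + np.log(3.0) * speeding + 0.01 * (age - 35) + 0.2 * male
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([speeding, age, male])
# A very large C makes the fit close to unpenalized maximum likelihood.
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)

# Adjusted odds ratio for speeding, controlling for age and sex.
or_speeding = float(np.exp(model.coef_[0][0]))
print(round(or_speeding, 2))  # close to the true value of 3
```

Exponentiating a fitted coefficient gives the odds ratio for that risk factor holding the listed confounders fixed, which is exactly the "adjusted OR" the abstract reports.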

19. Sparse multivariate factor analysis regression models and its applications to integrative genomics analysis.

Science.gov (United States)

Zhou, Yan; Wang, Pei; Wang, Xianlong; Zhu, Ji; Song, Peter X-K

2017-01-01

The multivariate regression model is a useful tool to explore complex associations between two kinds of molecular markers, which enables the understanding of the biological pathways underlying disease etiology. For a set of correlated response variables, accounting for such dependency can increase statistical power. Motivated by integrative genomic data analyses, we propose a new methodology-sparse multivariate factor analysis regression model (smFARM), in which correlations of response variables are assumed to follow a factor analysis model with latent factors. This proposed method not only allows us to address the challenge that the number of association parameters is larger than the sample size, but also to adjust for unobserved genetic and/or nongenetic factors that potentially conceal the underlying response-predictor associations. The proposed smFARM is implemented by the EM algorithm and the blockwise coordinate descent algorithm. The proposed methodology is evaluated and compared to the existing methods through extensive simulation studies. Our results show that accounting for latent factors through the proposed smFARM can improve sensitivity of signal detection and accuracy of sparse association map estimation. We illustrate smFARM by two integrative genomics analysis examples, a breast cancer dataset, and an ovarian cancer dataset, to assess the relationship between DNA copy numbers and gene expression arrays to understand genetic regulatory patterns relevant to the disease. We identify two trans-hub regions: one in cytoband 17q12 whose amplification influences the RNA expression levels of important breast cancer genes, and the other in cytoband 9q21.32-33, which is associated with chemoresistance in ovarian cancer. © 2016 WILEY PERIODICALS, INC.

20. Belgium: risk adjustment and financial responsibility in a centralised system.

Science.gov (United States)

Schokkaert, Erik; Van de Voorde, Carine

2003-07-01

Since 1995, Belgian sickness funds have been partially financed through a risk-adjustment system and held partially financially responsible for the difference between their actual and risk-adjusted expenditures. However, they have not been given the instruments needed to exert real influence on expenditures, and the health insurance market has not been opened to new entrants. At the same time, the sickness funds have powerful tools for risk selection, because they also dominate the market for supplementary health insurance. The present risk-adjustment system is based on the results of a regression analysis with aggregate data. The main proclaimed purpose of this system is to guarantee fair treatment to all sickness funds. Until now the danger of risk selection has not been taken seriously, and consumer mobility has remained rather low. However, since the degree of financial responsibility is scheduled to increase in the near future, the potential profits from cream skimming will increase.

1. Genetics Home Reference: caudal regression syndrome

Science.gov (United States)

... umbilical artery: Further support for a caudal regression-sirenomelia spectrum. Am J Med Genet A. 2007 Dec ... AK, Dickinson JE, Bower C. Caudal dysgenesis and sirenomelia-single centre experience suggests common pathogenic basis. Am ...

2. Dynamic travel time estimation using regression trees.

Science.gov (United States)

2008-10-01

This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...

3. Two Paradoxes in Linear Regression Analysis

Science.gov (United States)

FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

2016-01-01

Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

4. Discriminative Elastic-Net Regularized Linear Regression.

Science.gov (United States)

Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

2017-03-01

In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
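
The zero-one regression-target baseline that this paper generalizes can be sketched with off-the-shelf elastic-net regressions, one per class column of a one-hot target matrix. This is an illustrative stand-in on invented data, not the authors' ENLR with its relaxed targets and singular-value regularization.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)

# Invented 3-class data: Gaussian clusters around distinct 5-D means.
means = 3.0 * np.eye(3, 5)
X = np.vstack([rng.normal(m, 1.0, (100, 5)) for m in means])
labels = np.repeat([0, 1, 2], 100)

# Conventional zero-one (one-hot) regression targets.
Y = np.eye(3)[labels]

# One elastic-net regression per class column; the argmax of the fitted
# scores gives the predicted class.
scores = np.column_stack([
    ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X, Y[:, k]).predict(X)
    for k in range(3)
])
accuracy = float((scores.argmax(axis=1) == labels).mean())
print(accuracy)
```

The paper's argument is that these rigid 0/1 targets limit the margins between classes, which motivates relaxing them into a learnable variable matrix.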

5. Fuzzy multiple linear regression: A computational approach

Science.gov (United States)

Juang, C. H.; Huang, X. H.; Fleming, J. W.

1992-01-01

This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

6. Computing multiple-output regression quantile regions

Czech Academy of Sciences Publication Activity Database

Paindaveine, D.; Šiman, Miroslav

2012-01-01

Roč. 56, č. 4 (2012), s. 840-853 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M06047 Institutional research plan: CEZ:AV0Z10750506 Keywords : halfspace depth * multiple-output regression * parametric linear programming * quantile regression Subject RIV: BA - General Mathematics Impact factor: 1.304, year: 2012 http://library.utia.cas.cz/separaty/2012/SI/siman-0376413.pdf

7. There is No Quantum Regression Theorem

International Nuclear Information System (INIS)

Ford, G.W.; O'Connell, R.F.

1996-01-01

The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. copyright 1996 The American Physical Society

8. Caudal regression syndrome : a case report

International Nuclear Information System (INIS)

Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun

1998-01-01

Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging.

9. Caudal regression syndrome : a case report

Energy Technology Data Exchange (ETDEWEB)

Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun [Chungang Gil Hospital, Incheon (Korea, Republic of)

1998-07-01

Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging.

10. Spontaneous regression of metastatic Merkel cell carcinoma.

LENUS (Irish Health Repository)

Hassan, S J

2010-01-01

Merkel cell carcinoma is a rare aggressive neuroendocrine carcinoma of the skin predominantly affecting elderly Caucasians. It has a high rate of local recurrence and regional lymph node metastases. It is associated with a poor prognosis. Complete spontaneous regression of Merkel cell carcinoma has been reported but is a poorly understood phenomenon. Here we present a case of complete spontaneous regression of metastatic Merkel cell carcinoma demonstrating a markedly different pattern of events from those previously published.

11. Forecasting exchange rates: a robust regression approach

OpenAIRE

Preminger, Arie; Franck, Raphael

2005-01-01

The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. Robust linear autoregressive (RAR) and robust neural network (RNN) models are estimated to study the predictabil...

12. Marginal longitudinal semiparametric regression via penalized splines

KAUST Repository

2010-08-01

We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

13. Marginal longitudinal semiparametric regression via penalized splines

KAUST Repository

Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

2010-01-01

We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate approaches to efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

14. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

Science.gov (United States)

Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

2015-03-01

A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble-of-trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have recently been shown to have excellent predictive performance in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICUs). We show that these ensemble-of-trees methods outperform logistic regression in predicting mortality among babies treated in NICUs, and provide a superior method of risk adjustment compared to logistic regression.
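
The kind of comparison the paper reports can be sketched on synthetic data: when the outcome depends on an interaction of patient characteristics, a main-effects logistic regression underfits while a random forest picks the structure up. All data here are invented (not the NICU cohort), and BART is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 4000

# Invented patient mix: two risk factors with a nonlinear interaction
# that a main-effects logistic regression cannot represent.
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(x1 * x2 + 0.5 * x1)))
y = rng.random(n) < p

X = np.column_stack([x1, x2])
train, test = slice(0, 3000), slice(3000, None)

lr = LogisticRegression().fit(X[train], y[train])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], y[train])

# Discrimination on held-out cases: the forest should dominate here.
lr_auc = roc_auc_score(y[test], lr.predict_proba(X[test])[:, 1])
rf_auc = roc_auc_score(y[test], rf.predict_proba(X[test])[:, 1])
print(round(lr_auc, 3), round(rf_auc, 3))
```

The same `predict_proba` outputs serve as the expected outcome rates used for risk adjustment, so better-calibrated probabilities translate directly into fairer hospital comparisons.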

Science.gov (United States)

Attar-Schwartz, Shalhevet; Tan, Jo-Pei; Buchanan, Ann; Flouri, Eirini; Griggs, Julia

2009-02-01

16. Post-processing through linear regression

Science.gov (United States)

van Schaeybroeck, B.; Vannitsem, S.

2011-03-01

Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
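
The simplest of the schemes compared here, OLS post-processing, amounts to regressing past observations on past forecasts and applying the fitted line to new forecasts. A minimal sketch on invented data (not the Lorenz experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Invented verification sample: truth plus a biased, mis-scaled forecast.
truth = rng.normal(10.0, 3.0, n)
forecast = 1.4 * truth - 2.0 + rng.normal(0, 1.0, n)

# OLS post-processing: regress observations on the forecast, then
# replace the raw forecast by the fitted value.
A = np.column_stack([np.ones(n), forecast])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
corrected = A @ coef

rmse_raw = float(np.sqrt(np.mean((forecast - truth) ** 2)))
rmse_cor = float(np.sqrt(np.mean((corrected - truth) ** 2)))
print(round(rmse_raw, 2), round(rmse_cor, 2))  # correction shrinks the error
```

OLS removes conditional bias but, as the abstract notes, also damps forecast variability; the TDTR and EVMOS variants discussed above are designed to restore the correct variability.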

17. Post-processing through linear regression

Directory of Open Access Journals (Sweden)

B. Van Schaeybroeck

2011-03-01

Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.

18. Unbalanced Regressions and the Predictive Equation

DEFF Research Database (Denmark)

Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest...

19. Adjusting the Danish industrial relations system after Laval

DEFF Research Database (Denmark)

Refslund, Bjarke

2015-01-01

Adjusting the Danish IR-system after Laval: re-calibration rather than erosion. Following some significant rulings from the European Court of Justice (ECJ), later known as the Laval quartet, many analyses have been concerned with the future implications for the highly regulated Nordic...

20. Parenting Perfectionism and Parental Adjustment

OpenAIRE

Lee, Meghan A.; Schoppe-Sullivan, Sarah J.; Kamp Dush, Claire M.

2012-01-01

The parental role is expected to be one of the most gratifying and rewarding roles in life. As expectations of parenting become ever higher, the implications of parenting perfectionism for parental adjustment warrant investigation. Using longitudinal data from 182 couples, this study examined the associations between societal- and self-oriented parenting perfectionism and new mothers’ and fathers’ parenting self-efficacy, stress, and satisfaction. For mothers, societal-oriented parenting perf...

1. Longitudinal Psychosocial Adjustment of Women to Human Papillomavirus Infection.

Science.gov (United States)

Hsu, Yu-Yun; Wang, Wei-Ming; Fetzer, Susan Jane; Cheng, Ya-Min; Hsu, Keng-Fu

2018-05-29

2. An introduction to using Bayesian linear regression with clinical data.

Science.gov (United States)

Baldwin, Scott A; Larson, Michael J

2017-11-01

Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods, as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
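
The article's workflow uses R; as a language-agnostic sketch of the core idea, the conjugate Gaussian case of Bayesian linear regression has a closed-form posterior. Data here are invented (not the EEG/anxiety study), and the noise variance is assumed known for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Invented predictor/outcome pair with a true slope of 0.5.
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x])
sigma2 = 1.0  # assumed-known noise variance (the conjugate special case)
tau2 = 10.0   # weakly informative N(0, tau2) prior on each coefficient

# Posterior beta ~ N(mean, cov) under the Gaussian prior and likelihood.
cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mean = cov @ (X.T @ y / sigma2)

# 95% credible interval for the slope from the Gaussian posterior.
sd = float(np.sqrt(cov[1, 1]))
lo, hi = float(mean[1] - 1.96 * sd), float(mean[1] + 1.96 * sd)
print(round(float(mean[1]), 2), (round(lo, 2), round(hi, 2)))
```

With unknown variance or non-Gaussian priors there is no closed form, which is where the MCMC machinery (convergence checks, posterior visualization) described in the article comes in.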

3. Economic Analyses of Ware Yam Production in Orlu Agricultural ...

African Journals Online (AJOL)

Economic Analyses of Ware Yam Production in Orlu Agricultural Zone of Imo State. ... International Journal of Agriculture and Rural Development ... statistics, gross margin analysis, marginal analysis and multiple regression analysis. Results ...

4. Positive and negative meanings are simultaneously ascribed to colorectal cancer: relationship to quality of life and psychosocial adjustment.

Science.gov (United States)

Camacho, Aldo Aguirre; Garland, Sheila N; Martopullo, Celestina; Pelletier, Guy

2014-08-01

Experiencing cancer can give rise to existential concerns causing great distress, and consequently drive individuals to make sense of what cancer may mean to their lives. To date, meaning-based research in the context of cancer has largely focused on one possible outcome of this process, the emergence of positive meanings (e.g. post-traumatic growth). However, negative meanings may also be ascribed to cancer, simultaneously with positive meanings. This study focused on the nature of the co-existence of positive and negative meanings in a sample of individuals diagnosed with colorectal cancer to find out whether negative meaning had an impact on quality of life and psychosocial adjustment above and beyond positive meaning. Participants were given questionnaires measuring meaning-made, quality of life, and psychological distress. Semi-structured interviews were conducted with a subgroup from the original sample. Hierarchical multiple regression analyses revealed that negative meaning-made (i.e. helplessness) was a significant predictor of poor quality of life and increased levels of depression/anxiety above and beyond positive meaning-made (i.e. life meaningfulness, acceptance, and perceived benefits). Correlational analyses and interview data revealed that negative meaning-made was mainly associated with physical and functional disability, while positive meaning-made was mostly related to emotional and psychological well-being. Meanings of varying valence may simultaneously be ascribed to cancer as it impacts different life dimensions, and they may independently influence quality of life and psychosocial adjustment. The presence of positive meaning was not enough to prevent the detrimental effects of negative meaning on psychosocial adjustment and quality of life among individuals taking part in this study. Future attention to negative meaning is warranted, as it may be at least as important as positive meaning in predicting psychosocial adjustment and quality of
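
The hierarchical-regression logic used above (does negative meaning predict outcomes above and beyond positive meaning?) reduces to comparing R² across nested models. A sketch on invented scores, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300

# Invented scores: positive meaning, negative meaning, quality of life.
positive = rng.normal(size=n)
negative = rng.normal(size=n)
qol = 0.3 * positive - 0.5 * negative + rng.normal(0, 1.0, n)

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(1 - resid.var() / y.var())

# Step 1: positive meaning only; step 2: add negative meaning.
r2_step1 = r_squared(positive[:, None], qol)
r2_step2 = r_squared(np.column_stack([positive, negative]), qol)
print(round(r2_step1, 2), round(r2_step2, 2))
```

A meaningful jump from step 1 to step 2 is the "above and beyond" evidence the abstract describes; in practice the increment is tested with an F-test rather than eyeballed.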

5. The best of both worlds: Phylogenetic eigenvector regression and mapping

Directory of Open Access Journals (Sweden)

José Alexandre Felizola Diniz Filho

2015-09-01

Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regressions, correlations or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models, and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven, empirical eigenfunction analyses with the sound insights provided by evolutionary models well known in comparative analyses.

6. Sexual satisfaction, sexual compatibility, and relationship adjustment in couples: the role of sexual behaviors, orgasm, and men's discernment of women's intercourse orgasm.

Science.gov (United States)

Klapilová, Kateřina; Brody, Stuart; Krejčová, Lucie; Husárová, Barbara; Binter, Jakub

2015-03-01

7. El “Anclaje y Ajuste”, una herramienta de Marketing para analizar el poder de las referencias en el Arte, el Diseño y la Arquitectura = "Anchoring and Adjustment", a Marketing tool to analyse references in Art, Design and Architecture

Directory of Open Access Journals (Sweden)

2014-12-01

…it is taken for granted that judgments of artistic and creative works are also influenced by references. However, the way those judgments are made has received very little study. From an economic point of view, we would like to describe how a product would sell knowing only how it has been designed. In Marketing terms, however, it makes little sense to consider the selling consequences without first studying Consumer Behaviour prior to the definitive choice. The "Anchoring and Adjustment" effect describes, from a Marketing point of view, how references are needed to judge any product. The purpose of this paper is therefore to explain how the "Anchoring and Adjustment" effect works and how it could be used to further analyses of Art, Design, and Architecture.

8. Association between intake of dairy products and short-term memory with and without adjustment for genetic and family environmental factors: A twin study.

Science.gov (United States)

Ogata, Soshiro; Tanaka, Haruka; Omura, Kayoko; Honda, Chika; Hayakawa, Kazuo

2016-04-01

Previous studies have indicated associations between intake of dairy products and better cognitive function and reduced risk of dementia. However, these studies did not adjust for genetic and family environmental factors that may influence food intake, cognitive function, and metabolism of dairy product nutrients. In the present study, we investigated the association between intake of dairy products and short-term memory with and without adjustment for almost all genetic and family environmental factors, using a genetically informative sample of twin pairs. A cross-sectional study was conducted among twin pairs aged between 20 and 74. Short-term memory was assessed as the primary outcome variable, intake of dairy products was analyzed as the predictive variable, and sex, age, education level, marital status, current smoking status, body mass index, dietary alcohol intake, and medical history of hypertension or diabetes were included as possible covariates. Generalized estimating equations (GEE) were performed treating twins as individuals, and regression analyses on within-pair differences were used to adjust for genetic and family environmental factors. Data are reported as standardized coefficients and 95% confidence intervals (CI). Analyses were performed on data from 78 men and 278 women. Among men, high intake of dairy products was significantly associated with better short-term memory after adjustment for the possible covariates (standardized coefficients = 0.22; 95% CI, 0.06-0.38) and almost all genetic and family environmental factors (standardized coefficients = 0.38; 95% CI, 0.07-0.69). Among women, no significant associations were found between intake of dairy products and short-term memory. Subsequent sensitivity analyses adjusted for the small samples and showed similar results. Intake of dairy products may prevent cognitive decline regardless of genetic and family environmental factors in men. Copyright © 2015 Elsevier Ltd
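
The within-pair logic of the twin design can be sketched directly: differencing co-twins removes everything a pair shares (genes, family environment), so the remaining slope reflects the individual-level association. All data below are invented (not the study's cohort), and the paper's GEE step is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
pairs = 150

# Invented twin pairs: a shared family/genetic effect per pair, plus
# each twin's own dairy intake with a true individual effect of 0.4.
family = rng.normal(0, 2.0, pairs)
dairy = rng.normal(3.0, 1.0, (pairs, 2))
memory = family[:, None] + 0.4 * dairy + rng.normal(0, 1.0, (pairs, 2))

# Within-pair differencing cancels the shared family term entirely.
d_dairy = dairy[:, 0] - dairy[:, 1]
d_memory = memory[:, 0] - memory[:, 1]

# No-intercept OLS on the differences recovers the individual effect.
slope = float((d_dairy * d_memory).sum() / (d_dairy ** 2).sum())
print(round(slope, 2))
```

An individual-level regression that ignored pairing would attribute part of the shared family effect to dairy intake whenever intake clusters within families; the differenced estimate is immune to that confounding.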

9. Parental divorce and adjustment in adulthood: findings from a community sample. The ALSPAC Study Team. Avon Longitudinal Study of Pregnancy and Childhood.

Science.gov (United States)

O'Connor, T G; Thorpe, K; Dunn, J; Golding, J

1999-07-01

The current study examines the link between the experience of divorce in childhood and several indices of adjustment in adulthood in a large community sample of women. Results replicated previous research on the long-term correlation between parental divorce and depression and divorce in adulthood. Results further suggested that parental divorce was associated with a wide range of early risk factors, life course patterns, and several indices of adult adjustment. Regression analyses indicated that the long-term correlation between parental divorce and depression in adulthood is explained by quality of parent-child and parental marital relations (in childhood), concurrent levels of stressful life events and social support, and cohabitation. The long-term association between parental divorce and experiencing a divorce in adulthood was partly mediated through quality of parent-child relations, teenage pregnancy, leaving home before 18 years, and educational attainment.

10. Body mass index adjustments to increase the validity of body fatness assessment in UK Black African and South Asian children

Science.gov (United States)

Hudda, M T; Nightingale, C M; Donin, A S; Fewtrell, M S; Haroun, D; Lum, S; Williams, J E; Owen, C G; Rudnicka, A R; Wells, J C K; Cook, D G; Whincup, P H

2017-01-01

Background/Objectives: Body mass index (BMI) (weight per height2) is the most widely used marker of childhood obesity and total body fatness (BF). However, its validity is limited, especially in children of South Asian and Black African origins. We aimed to quantify BMI adjustments needed for UK children of Black African and South Asian origins so that adjusted BMI related to BF in the same way as for White European children. Methods: We used data from four recent UK studies that made deuterium dilution BF measurements in UK children of White European, South Asian and Black African origins. A height-standardized fat mass index (FMI) was derived to represent BF. Linear regression models were then fitted, separately for boys and girls, to quantify ethnic differences in BMI–FMI relationships and to provide ethnic-specific BMI adjustments. Results: We restricted analyses to 4–12 year olds, to whom a single consistent FMI (fat mass per height5) could be applied. BMI consistently underestimated BF in South Asian children, requiring positive BMI adjustments of +1.12 kg m−2 (95% confidence interval (CI): 0.83, 1.41 kg m−2). BMI overestimated BF in Black African children, requiring negative BMI adjustments. However, these were complex because there were statistically significant interactions between Black African ethnicity and FMI (P=0.004 boys; P=0.003 girls) and also between FMI and age group; negative adjustments were largest in younger children with higher unadjusted BMI and smallest in older children with lower unadjusted BMI. Conclusions: BMI underestimated BF in South Asians and overestimated BF in Black Africans. Ethnic-specific adjustments, increasing BMI in South Asians and reducing BMI in Black Africans, can improve the accuracy of BF assessment in these children. PMID:28325931

11. Regression analysis using dependent Polya trees.

Science.gov (United States)

2013-11-30

Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

12. Is past life regression therapy ethical?

Science.gov (United States)

2017-01-01

Past life regression therapy is used by some physicians for patients with certain mental disorders. Anxiety disorders, mood disorders, and gender dysphoria have all been treated using past life regression therapy by some doctors on the assumption that they reflect problems in past lives. Although it is not supported by psychiatric associations, few medical associations have actually condemned it as unethical. In this article, I argue that past life regression therapy is unethical for two basic reasons. First, it is not evidence-based. Past life regression is based on the reincarnation hypothesis, but this hypothesis is not supported by evidence, and in fact, it faces some insurmountable conceptual problems. If patients are not fully informed about these problems, they cannot provide informed consent, and hence, the principle of autonomy is violated. Second, past life regression therapy carries a great risk of implanting false memories in patients, and thus causing significant harm. This is a violation of the principle of non-maleficence, which is surely the most important principle in medical ethics.

13. 12 CFR 19.240 - Inflation adjustments.

Science.gov (United States)

2010-01-01

... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Inflation adjustments. 19.240 Section 19.240... PROCEDURE Civil Money Penalty Inflation Adjustments § 19.240 Inflation adjustments. (a) The maximum amount... Civil Penalties Inflation Adjustment Act of 1990 (28 U.S.C. 2461 note) as follows: ER10NO08.001 (b) The...

14. Interpret with caution: multicollinearity in multiple regression of cognitive data.

Science.gov (United States)

Morrison, Catriona M

2003-08-01

Shibihara and Kondo in 2002 reported a reanalysis of the 1997 Kanji picture-naming data of Yamazaki, Ellis, Morrison, and Lambon-Ralph in which independent variables were highly correlated. Their addition of the variable visual familiarity altered the previously reported pattern of results, indicating that visual familiarity, but not age of acquisition, was important in predicting Kanji naming speed. The present paper argues that caution should be taken when drawing conclusions from multiple regression analyses in which the independent variables are so highly correlated, as such multicollinearity can lead to unreliable output.
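The multicollinearity caution above can be quantified with the variance inflation factor (VIF): for each predictor, VIF = 1/(1 − R²), where R² comes from regressing that predictor on the others; with two predictors this reduces to 1/(1 − r²) for their correlation r. A minimal sketch with invented predictor values (not the actual Kanji naming data):

```python
# Two highly correlated predictors (e.g. a frequency-like and a
# familiarity-like rating, made up for illustration) yield a large VIF,
# signalling that their individual regression coefficients are unstable.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(xs, ys):
    """Variance inflation factor for either of two predictors."""
    r = correlation(xs, ys)
    return 1.0 / (1.0 - r * r)

freq = [1.0, 2.0, 3.0, 4.0, 5.0]
famil = [1.1, 2.1, 2.9, 4.2, 4.8]
print(vif_two_predictors(freq, famil))  # large VIF = unreliable coefficients
```

VIFs above roughly 5-10 are a common rule of thumb for problematic collinearity of the kind the paper warns about.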

15. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

DEFF Research Database (Denmark)

Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

2015-01-01

This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...

16. Nonparametric regression using the concept of minimum energy

International Nuclear Information System (INIS)

Williams, Mike

2011-01-01

It has recently been shown that an unbinned distance-based statistic, the energy, can be used to construct an extremely powerful nonparametric multivariate two sample goodness-of-fit test. An extension to this method that makes it possible to perform nonparametric regression using multiple multivariate data sets is presented in this paper. The technique, which is based on the concept of minimizing the energy of the system, permits determination of parameters of interest without the need for parametric expressions of the parent distributions of the data sets. The application and performance of this new method is discussed in the context of some simple example analyses.
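The energy statistic underlying the method above has a simple one-dimensional form: for samples X and Y it is 2·E|X − Y| − E|X − X′| − E|Y − Y′|, which is zero when the two distributions coincide and positive otherwise. A toy sketch (illustrative numbers, not the paper's multivariate construction):

```python
# Energy distance between two 1-D samples: zero for identical samples,
# positive when the samples are drawn from different distributions.

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x in a for y in b) / (len(a) * len(b))

def energy_distance(x, y):
    return 2 * mean_abs_diff(x, y) - mean_abs_diff(x, x) - mean_abs_diff(y, y)

x = [0.1, 0.5, 0.9, 1.3]
same = energy_distance(x, list(x))              # identical samples -> 0
shift = energy_distance(x, [v + 2 for v in x])  # shifted sample -> positive
print(same, shift)
```

Minimizing this quantity over model parameters is the "minimum energy" idea: the fitted distribution is pulled toward the data without assuming a parametric form.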

17. Schizotypy and Behavioural Adjustment and the Role of Neuroticism

Science.gov (United States)

Völter, Christoph; Strobach, Tilo; Aichert, Désirée S.; Wöstmann, Nicola; Costa, Anna; Möller, Hans-Jürgen; Schubert, Torsten; Ettinger, Ulrich

2012-01-01

Objective In the present study the relationship between behavioural adjustment following cognitive conflict and schizotypy was investigated using a Stroop colour naming paradigm. Previous research has found deficits with behavioural adjustment in schizophrenia patients. Based on these findings, we hypothesized that individual differences in schizotypy, a personality trait reflecting the subclinical expression of the schizophrenia phenotype, would be associated with behavioural adjustment. Additionally, we investigated whether such a relationship would be explained by individual differences in neuroticism, a non-specific measure of negative trait emotionality known to be correlated with schizotypy. Methods 106 healthy volunteers (mean age: 25.1, 60% females) took part. Post-conflict adjustment was measured in a computer-based version of the Stroop paradigm. Schizotypy was assessed using the Schizotypal Personality Questionnaire (SPQ) and Neuroticism using the NEO-FFI. Results We found a negative correlation between schizotypy and post-conflict adjustment (r = −.30, p<.01); this relationship remained significant when controlling for effects of neuroticism. Regression analysis revealed that particularly the subscale No Close Friends drove the effect. Conclusion Previous findings of deficits in cognitive control in schizophrenia patients were extended to the subclinical personality expression of the schizophrenia phenotype and found to be specific to schizotypal traits over and above the effects of negative emotionality. PMID:22363416

18. Schizotypy and behavioural adjustment and the role of neuroticism.

Directory of Open Access Journals (Sweden)

Christoph Völter

Full Text Available OBJECTIVE: In the present study the relationship between behavioural adjustment following cognitive conflict and schizotypy was investigated using a Stroop colour naming paradigm. Previous research has found deficits with behavioural adjustment in schizophrenia patients. Based on these findings, we hypothesized that individual differences in schizotypy, a personality trait reflecting the subclinical expression of the schizophrenia phenotype, would be associated with behavioural adjustment. Additionally, we investigated whether such a relationship would be explained by individual differences in neuroticism, a non-specific measure of negative trait emotionality known to be correlated with schizotypy. METHODS: 106 healthy volunteers (mean age: 25.1, 60% females) took part. Post-conflict adjustment was measured in a computer-based version of the Stroop paradigm. Schizotypy was assessed using the Schizotypal Personality Questionnaire (SPQ) and Neuroticism using the NEO-FFI. RESULTS: We found a negative correlation between schizotypy and post-conflict adjustment (r = -.30, p<.01); this relationship remained significant when controlling for effects of neuroticism. Regression analysis revealed that particularly the subscale No Close Friends drove the effect. CONCLUSION: Previous findings of deficits in cognitive control in schizophrenia patients were extended to the subclinical personality expression of the schizophrenia phenotype and found to be specific to schizotypal traits over and above the effects of negative emotionality.

19. On Solving Lq-Penalized Regressions

Directory of Open Access Journals (Sweden)

Tracy Zhou Wu

2007-01-01

Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on a space transformation that admits efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
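The nonsmoothness mentioned above comes from the |β| term in the q = 1 (LASSO) case. For a single standardized predictor the penalized solution has the well-known closed "soft-thresholding" form: shrink the unpenalized least-squares coefficient toward zero by λ, and set it exactly to zero if it is within λ of zero. A minimal sketch (not the paper's space-transformation method):

```python
# Soft-thresholding: the 1-D LASSO solution sign(b) * max(|b| - lam, 0),
# where b is the unpenalized OLS coefficient for a standardized predictor.
# This is what makes the L1 penalty produce exact zeros (parsimony).

def soft_threshold(b, lam):
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

print(soft_threshold(2.0, 0.5))   # shrunk toward zero: 1.5
print(soft_threshold(0.3, 0.5))   # small coefficient set exactly to zero: 0.0
```

Coordinate-descent LASSO solvers apply exactly this update one coefficient at a time, which is one of the standard algorithmic alternatives to the approach the paper proposes.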

20. Refractive regression after laser in situ keratomileusis.

Science.gov (United States)

Yan, Mabel K; Chang, John Sm; Chan, Tommy Cy

2018-04-26

Uncorrected refractive errors are a leading cause of visual impairment across the world. In today's society, laser in situ keratomileusis (LASIK) has become the most commonly performed surgical procedure to correct refractive errors. However, regression of the initially achieved refractive correction has been a widely observed phenomenon following LASIK since its inception more than two decades ago. Despite technological advances in laser refractive surgery and various proposed management strategies, post-LASIK regression is still frequently observed and has significant implications for the long-term visual performance and quality of life of patients. This review explores the mechanism of refractive regression after both myopic and hyperopic LASIK, predisposing risk factors and its clinical course. In addition, current preventative strategies and therapies are also reviewed. © 2018 Royal Australian and New Zealand College of Ophthalmologists.

1. Influence diagnostics in meta-regression model.

Science.gov (United States)

Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

2017-09-01

This paper studies the influence diagnostics in meta-regression model including case deletion diagnostic and local influence analysis. We derive the subset deletion formulae for the estimation of regression coefficient and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measure are defined. The local influence analysis based on case-weights perturbation scheme, responses perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme are explored. We introduce a method by simultaneous perturbing responses, covariate, and within-variance to obtain the local influence measure, which has an advantage of capable to compare the influence magnitude of influential studies from different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
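The case-deletion idea above can be illustrated with a toy leave-one-out computation on a simple regression slope (a DFBETA-style measure). The data are invented, and the full meta-regression machinery (heterogeneity variance, study weights) is deliberately omitted:

```python
# Case-deletion diagnostic sketch: refit with each observation left out
# and record the change in the OLS slope. A large change flags an
# influential study. Data below are hypothetical.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def case_deletion_influence(xs, ys):
    """Full-data slope minus leave-one-out slope, for each observation."""
    full = ols_slope(xs, ys)
    return [full - ols_slope(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
            for i in range(len(xs))]

x = [1.0, 2.0, 3.0, 4.0, 10.0]   # last "study" sits far out in x
y = [0.5, 0.9, 1.4, 2.1, 9.0]    # and pulls the slope strongly
infl = case_deletion_influence(x, y)
print(infl)
```

The high-leverage last point dominates the list, which is the kind of signal the paper's subset-deletion formulae compute efficiently without literally refitting the model n times.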

2. Principal component regression for crop yield estimation

CERN Document Server

Suryanarayana, T M V

2016-01-01

This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as a dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers starting their work on climate and agriculture, mainly focussing on estimation models. The flow of chapters takes readers along a smooth path, from understanding climate and weather and the impact of climate change, gradually proceeding towards downscaling techniques and then finally towards development of ...
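PCR as described above replaces correlated predictors with their leading principal components before regressing. A compact sketch with two synthetic, perfectly collinear predictors (power iteration on the 2×2 covariance matrix stands in for a full PCA; none of this reproduces the book's climatological models):

```python
# Principal component regression sketch: extract the leading principal
# component of two correlated predictors, then regress the response on
# the component scores. All data are synthetic.

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def pcr_slope(x1, x2, y, iters=200):
    a, b = center(x1), center(x2)
    n = len(a)
    # 2x2 covariance matrix of the predictors
    c11 = sum(u * u for u in a) / n
    c22 = sum(u * u for u in b) / n
    c12 = sum(u * v for u, v in zip(a, b)) / n
    # power iteration for the leading eigenvector (the first PC direction)
    w1, w2 = 1.0, 0.5
    for _ in range(iters):
        v1 = c11 * w1 + c12 * w2
        v2 = c12 * w1 + c22 * w2
        norm = (v1 * v1 + v2 * v2) ** 0.5
        w1, w2 = v1 / norm, v2 / norm
    scores = [w1 * u + w2 * v for u, v in zip(a, b)]
    yc = center(y)
    return sum(s * t for s, t in zip(scores, yc)) / sum(s * s for s in scores)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 4.0, 6.0, 8.0, 10.0]   # perfectly collinear with x1
y = [1.2, 2.1, 2.9, 4.2, 5.1]
print(round(pcr_slope(x1, x2, y), 3))
```

With collinear predictors ordinary MLR has no unique solution, but the PC score is well defined, which is exactly the motivation for PCR given in the abstract.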

3. Regression Models for Market-Shares

DEFF Research Database (Denmark)

Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

2005-01-01

On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.

4. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: A Quantile Regression Approach

Science.gov (United States)

Shafiq, M. Najeeb

2013-01-01

Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…
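The quantile-regression machinery referenced above rests on the "pinball" loss: the constant c minimizing the summed loss ρ_τ(y − c) is the τ-th sample quantile, which is why fitting it at several τ values traces out the whole achievement distribution rather than just the mean gap. A tiny grid-search demonstration on made-up test scores (not the actual PISA-style data):

```python
# Pinball (check) loss: tau-weighted absolute error. Minimizing it over a
# constant recovers the tau-th sample quantile; an optimum is always
# attained at one of the data points, so searching the data suffices.

def pinball_loss(c, ys, tau):
    return sum(tau * (y - c) if y >= c else (1 - tau) * (c - y) for y in ys)

def best_constant(ys, tau):
    return min(ys, key=lambda c: pinball_loss(c, ys, tau))

scores = [4.0, 5.0, 5.5, 6.0, 7.0, 8.0, 9.0]
print(best_constant(scores, 0.5))   # median
print(best_constant(scores, 0.9))   # upper quantile
```

Full quantile regression replaces the constant with a linear predictor but minimizes exactly this loss, so gender gaps can differ between low- and high-achieving students.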

5. The analysis of nonstationary time series using regression, correlation and cointegration

DEFF Research Database (Denmark)

Johansen, Søren

2012-01-01

There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from the US on interest rates as an illustration of the methods.

6. The Analysis of Nonstationary Time Series Using Regression, Correlation and Cointegration

Directory of Open Access Journals (Sweden)

Søren Johansen

2012-06-01

Full Text Available There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from the US on interest rates as an illustration of the methods.

7. Adjustable extender for instrument module

International Nuclear Information System (INIS)

Sevec, J.B.; Stein, A.D.

1975-01-01

A blank extender module used to mount an instrument module in front of its console for repair or test purposes has been equipped with a rotatable mount and means for locking the mount at various angles of rotation for easy accessibility. The rotatable mount includes a horizontal conduit supported by bearings within the blank module. The conduit is spring-biased in a retracted position within the blank module and in this position a small gear mounted on the conduit periphery is locked by a fixed pawl. The conduit and instrument mount can be pulled into an extended position with the gear clearing the pawl to permit rotation and adjustment of the instrument

8. Adjustable Tooling for Bending Brake

Science.gov (United States)

Ellis, J. M.

1986-01-01

Deep metal boxes and other parts easily fabricated. Adjustable tooling jig for bending brake accommodates spacing blocks and either standard male press-brake die or bar die. Holds spacer blocks, press-brake die, bar window die, or combination of three. Typical bending operations include bending of cut metal sheet into box and bending of metal strip into bracket with multiple inward 90 degree bends. By increasing free space available for bending sheet-metal parts jig makes it easier to fabricate such items as deep metal boxes or brackets with right-angle bends.

9. Ancestry-Adjusted Vitamin D Metabolite Concentrations in Association With Cytochrome P450 3A Polymorphisms.

Science.gov (United States)

Wilson, Robin Taylor; Masters, Loren D; Barnholtz-Sloan, Jill S; Salzberg, Anna C; Hartman, Terryl J

2018-04-01

We investigated the association between genetic polymorphisms in cytochrome P450 (CYP2R1, CYP24A1, and the CYP3A family) with nonsummer plasma concentrations of vitamin D metabolites (25-hydroxyvitamin D3 (25(OH)D3) and proportion 24,25-dihydroxyvitamin D3 (24,25(OH)2D3)) among healthy individuals of sub-Saharan African and European ancestry, matched on age (within 5 years; n = 188 in each ancestral group), in central suburban Pennsylvania (2006-2009). Vitamin D metabolites were measured using high-performance liquid chromatography with tandem mass spectrometry. Paired multiple regression and adjusted least-squares mean analyses were used to test for associations between genotype and log-transformed metabolite concentrations, adjusted for age, sex, proportion of West-African genetic ancestry, body mass index, oral contraceptive (OC) use, tanning bed use, vitamin D intake, days from summer solstice, time of day of blood draw, and isoforms of the vitamin D receptor (VDR) and vitamin D binding protein. Polymorphisms in CYP2R1, CYP3A43, vitamin D binding protein, and genetic ancestry proportion remained associated with plasma 25(OH)D3 after adjustment. Only CYP3A43 and VDR polymorphisms were associated with proportion 24,25(OH)2D3. Magnitudes of association with 25(OH)D3 were similar for CYP3A43, tanning bed use, and OC use. Significant least-squares mean interactions (CYP2R1/OC use (P = 0.030) and CYP3A43/VDR (P = 0.013)) were identified. A CYP3A43 genotype, previously implicated in cancer, is strongly associated with biomarkers of vitamin D metabolism. Interactive associations should be further investigated.

10. On directional multiple-output quantile regression

Czech Academy of Sciences Publication Activity Database

Paindaveine, D.; Šiman, Miroslav

2011-01-01

Roč. 102, č. 2 (2011), s. 193-212 ISSN 0047-259X R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:Commision EC(BE) Fonds National de la Recherche Scientifique Institutional research plan: CEZ:AV0Z10750506 Keywords : multivariate quantile * quantile regression * multiple-output regression * halfspace depth * portfolio optimization * value-at risk Subject RIV: BA - General Mathematics Impact factor: 0.879, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/siman-0364128.pdf

11. Removing Malmquist bias from linear regressions

Science.gov (United States)

Verter, Frances

1993-01-01

Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. A demonstration of the corrected linear regression that is computed in four steps is presented.

12. Robust median estimator in logistic regression

Czech Academy of Sciences Publication Activity Database

Hobza, T.; Pardo, L.; Vajda, Igor

2008-01-01

Roč. 138, č. 12 (2008), s. 3822-3840 ISSN 0378-3758 R&D Projects: GA MŠk 1M0572 Grant - others:Instituto Nacional de Estadistica (ES) MPO FI - IM3/136; GA MŠk(CZ) MTM 2006-06872 Institutional research plan: CEZ:AV0Z10750506 Keywords : Logistic regression * Median * Robustness * Consistency and asymptotic normality * Morgenthaler * Bianco and Yohai * Croux and Hasellbroeck Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.679, year: 2008 http://library.utia.cas.cz/separaty/2008/SI/vajda-robust%20median%20estimator%20in%20logistic%20regression.pdf

13. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

Science.gov (United States)

Bowden, Jack; Jackson, Chris

2016-10-01

The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
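The center-of-mass analogy above corresponds to the fixed-effect meta-analysis estimate: a weighted mean of study estimates with weights equal to inverse variances, so precise studies "weigh" more. A minimal sketch with illustrative numbers (not data from the Meta-Analyser application):

```python
# Fixed-effect pooling: weight each study estimate by the inverse of its
# variance; the pooled variance is the reciprocal of the total weight.

def fixed_effect(estimates, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

ests = [0.30, 0.10, 0.25]     # hypothetical study effect estimates
vars_ = [0.01, 0.04, 0.02]    # hypothetical within-study variances
pooled, pv = fixed_effect(ests, vars_)
print(round(pooled, 4), round(pv, 4))
```

In the physical analogy, the estimates are positions along a beam and the weights are masses; the pooled estimate is the balance point, and removing an outlying study shifts it just as removing a mass would.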

14. Laser Beam Focus Analyser

DEFF Research Database (Denmark)

Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

2007-01-01

The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

15. Wheat flour dough Alveograph characteristics predicted by Mixolab regression models.

Science.gov (United States)

Codină, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel; Popa, Ciprian N; Tamba-Berehoiu, Radiana

2012-02-01

In Romania, the Alveograph is the most used device to evaluate the rheological properties of wheat flour dough, but lately the Mixolab device has begun to play an important role in the breadmaking industry. These two instruments are based on different principles but there are some correlations that can be found between the parameters determined by the Mixolab and the rheological properties of wheat dough measured with the Alveograph. Statistical analysis on 80 wheat flour samples using the backward stepwise multiple regression method showed that Mixolab values using the ‘Chopin S’ protocol (40 samples) and ‘Chopin + ’ protocol (40 samples) can be used to elaborate predictive models for estimating the value of the rheological properties of wheat dough: baking strength (W), dough tenacity (P) and extensibility (L). The correlation analysis confirmed significant findings, with R²(adjusted) > 0.70 for P, R²(adjusted) > 0.70 for W and R²(adjusted) > 0.38 for L, at a 95% confidence interval. Copyright © 2011 Society of Chemical Industry.

16. Global Land Use Regression Model for Nitrogen Dioxide Air Pollution.

Science.gov (United States)

Larkin, Andrew; Geddes, Jeffrey A; Martin, Randall V; Xiao, Qingyang; Liu, Yang; Marshall, Julian D; Brauer, Michael; Hystad, Perry

2017-06-20

Nitrogen dioxide is a common air pollutant with growing evidence of health impacts independent of other common pollutants such as ozone and particulate matter. However, the worldwide distribution of NO2 exposure and associated impacts on health is still largely uncertain. To advance global exposure estimates we created a global nitrogen dioxide (NO2) land use regression model for 2011 using annual measurements from 5,220 air monitors in 58 countries. The model captured 54% of global NO2 variation, with a mean absolute error of 3.7 ppb. Regional performance varied from R² = 0.42 (Africa) to 0.67 (South America). Repeated 10% cross-validation using bootstrap sampling (n = 10,000) demonstrated a robust performance with respect to air monitor sampling in North America, Europe, and Asia (adjusted R² within 2%) but not for Africa and Oceania (adjusted R² within 11%) where NO2 monitoring data are sparse. The final model included 10 variables that captured both between and within-city spatial gradients in NO2 concentrations. Variable contributions differed between continental regions, but major roads within 100 m and satellite-derived NO2 were consistently the strongest predictors. The resulting model can be used for global risk assessments and health studies, particularly in countries without existing NO2 monitoring data or models.

17. Contesting Citizenship: Comparative Analyses

DEFF Research Database (Denmark)

Siim, Birte; Squires, Judith

2007-01-01

importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach....... Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

18. Demonstration of a Fiber Optic Regression Probe

Science.gov (United States)

Korman, Valentin; Polzin, Kurt A.

2010-01-01

The capability to provide localized, real-time monitoring of material regression rates in various applications has the potential to provide a new stream of data for development testing of various components and systems, as well as serving as a monitoring tool in flight applications. These applications include, but are not limited to, the regression of a combusting solid fuel surface, the ablation of the throat in a chemical rocket or the heat shield of an aeroshell, and the monitoring of erosion in long-life plasma thrusters. The rate of regression in the first application is very fast, while the second and third are increasingly slower. A recent fundamental sensor development effort has led to a novel regression, erosion, and ablation sensor technology (REAST). The REAST sensor allows for measurement of real-time surface erosion rates at a discrete surface location. The sensor is optical, using two different, co-located fiber-optics to perform the regression measurement. The disparate optical transmission properties of the two fiber-optics makes it possible to measure the regression rate by monitoring the relative light attenuation through the fibers. As the fibers regress along with the parent material in which they are embedded, the relative light intensities through the two fibers changes, providing a measure of the regression rate. The optical nature of the system makes it relatively easy to use in a variety of harsh, high temperature environments, and it is also unaffected by the presence of electric and magnetic fields. In addition, the sensor could be used to perform optical spectroscopy on the light emitted by a process and collected by fibers, giving localized measurements of various properties. The capability to perform an in-situ measurement of material regression rates is useful in addressing a variety of physical issues in various applications. An in-situ measurement allows for real-time data regarding the erosion rates, providing a quick method for

19. Household water treatment in developing countries: comparing different intervention types using meta-regression.

Science.gov (United States)

Hunter, Paul R

2009-12-01

Household water treatment (HWT) is being widely promoted as an appropriate intervention for reducing the burden of waterborne disease in poor communities in developing countries. A recent study has raised concerns about the effectiveness of HWT, in part because of concerns over the lack of blinding and in part because of considerable heterogeneity in the reported effectiveness of randomized controlled trials. This study set out to investigate the causes of this heterogeneity and so identify factors associated with good health gains. Studies identified in an earlier systematic review and meta-analysis were supplemented with more recently published randomized controlled trials. A total of 28 separate studies of randomized controlled trials of HWT with 39 intervention arms were included in the analysis. Heterogeneity was studied using the "metareg" command in Stata. Initial analyses with single candidate predictors were undertaken, and variables significant in these single-predictor analyses were carried forward to a multipredictor meta-regression model. The overall effect size of all unblinded studies was relative risk = 0.56 (95% confidence intervals 0.51-0.63), but after adjusting for bias due to lack of blinding the effect size was much lower (RR = 0.85, 95% CI = 0.76-0.97). Four main variables were significant predictors of effectiveness of intervention in the multipredictor meta-regression model: log duration of study follow-up (regression coefficient of log effect size = 0.186, standard error (SE) = 0.072), whether or not the study was blinded (coefficient 0.251, SE 0.066), being conducted in an emergency setting (coefficient -0.351, SE 0.076), and intervention type. Compared to the ceramic filter, all other interventions were much less effective (Biosand 0.247, 0.073; chlorine and safe waste storage 0.295, 0.061; combined coagulant-chlorine 0.2349, 0.067; SODIS 0.302, 0.068). A Monte Carlo model predicted that over 12 months

20. Poisson Regression Analysis of Illness and Injury Surveillance Data

Energy Technology Data Exchange (ETDEWEB)

Frome E.L., Watkins J.P., Ellis E.D.

2012-12-12

The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database and are used to obtain stratified tables of health event counts and person-time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and an interpretation of the model parameters is provided. An analysis of deviance table is used to evaluate the importance of each explanatory variable for the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra
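The log-linear rate model with a person-time offset can be sketched by iteratively reweighted least squares, with a Pearson chi-square over df statistic as a crude over-dispersion check. The stratified counts below are made up for illustration; they are not the DOE surveillance data.

```python
import numpy as np

# Poisson log-linear rate model fit by IRLS on MADE-UP stratified counts
# (events and person-time per cell); not the DOE data.
events = np.array([4.0, 7.0, 12.0, 20.0, 9.0, 15.0])
ptime  = np.array([100.0, 120.0, 150.0, 200.0, 110.0, 160.0])
age_hi = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])    # one explanatory factor

X = np.column_stack([np.ones(6), age_hi])
offset = np.log(ptime)                                # log person-time at risk
beta = np.zeros(2)
for _ in range(25):                                   # IRLS iterations
    mu = np.exp(X @ beta + offset)                    # fitted event counts
    z = X @ beta + (events - mu) / mu                 # working response
    XW = X * mu[:, None]                              # weights = mu
    beta = np.linalg.solve(X.T @ XW, XW.T @ z)

mu = np.exp(X @ beta + offset)
rate_ratio = float(np.exp(beta[1]))                   # effect of the factor
# Pearson chi-square / df: values well above 1 suggest over-dispersion.
dispersion = float(np.sum((events - mu) ** 2 / mu) / (len(events) - 2))
```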

1. Reconstruction of missing daily streamflow data using dynamic regression models

Science.gov (United States)

Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

2015-12-01

River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even short gaps in these records can lead to substantially different analysis outputs. Reconstructing the missing values of incomplete data sets is therefore an important step for environmental modelling, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighboring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
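The dynamic-regression idea — regress the target station on a correlated neighbor, then model the residuals with an AR/ARIMA structure — can be sketched as a two-step fit on synthetic flows. The paper estimates both parts jointly; the station values and AR(1) structure here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic flows: a "neighbor" station and a target whose regression
# errors follow an AR(1) process (all values invented).
n = 300
neighbor = 50.0 + 10.0 * np.sin(np.linspace(0, 12, n)) + rng.normal(0, 1.0, n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(0, 1.0)
target = 5.0 + 0.8 * neighbor + eps

# Step 1: regression on the correlated neighbor station
X = np.column_stack([np.ones(n), neighbor])
b = np.linalg.lstsq(X, target, rcond=None)[0]
resid = target - X @ b

# Step 2: fit AR(1) to the residuals (the "ARIMA adjustment"); a Yule-Walker
# style estimate of the lag-1 coefficient
phi = float(resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1]))

# Reconstruct a "missing" day from the regression part plus the
# AR(1) carry-over of the previous day's residual
t = 150
estimate = float(b[0] + b[1] * neighbor[t] + phi * resid[t - 1])
```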

2. Do effects of common case-mix adjusters on patient experiences vary across patient groups?

Science.gov (United States)

de Boer, Dolf; van der Hoek, Lucas; Rademakers, Jany; Delnoij, Diana; van den Berg, Michael

2017-11-22

Many survey studies in health care adjust for demographic characteristics such as age, gender, educational attainment and general health when performing statistical analyses. Whether the effects of these demographic characteristics are consistent between patient groups remains to be determined. This is important as the rationale for adjustment is often that demographic sub-groups differ in their so-called 'response tendency'. This rationale may be less convincing if the effects of response tendencies vary across patient groups. The present paper examines whether the impact of these characteristics on patients' global rating of care varies across patient groups. Secondary analyses using multi-level regression models were performed on a dataset including 32 different patient groups and 145,578 observations. For each demographic variable, the 95% expected range of case-mix coefficients across patient groups is presented. In addition, we report whether the variance of coefficients for demographic variables across patient groups is significant. Overall, men, elderly, lower educated people and people in good health tend to give higher global ratings. However, these effects varied significantly across patient groups and included the possibility of no effect or an opposite effect in some patient groups. The response tendency attributed to demographic characteristics - such as older respondents being milder, or higher educated respondents being more critical - is not general or universal. As such, the mechanism linking demographic characteristics to survey results on patient experiences with quality of care is more complicated than a general response tendency. It is possible that the response tendency interacts with patient group, but it is also possible that other mechanisms are at play.
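The core finding — that a case-mix coefficient can differ, even in sign, between patient groups — can be illustrated with per-group regressions on toy data. The groups, effect sizes, and ratings below are invented; they are not the study's 32 groups or 145,578 observations.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented example: the age effect on a 0-10 global rating differs in sign
# between two patient groups, so a single pooled case-mix coefficient
# would misrepresent both groups.
true_slope = {"group_a": 0.5, "group_b": -0.5}   # rating change per decade
slopes = {}
for group, s in true_slope.items():
    age = rng.uniform(20, 80, 200)
    rating = 7.0 + s * (age - 50.0) / 10.0 + rng.normal(0, 0.5, 200)
    X = np.column_stack([np.ones(200), age])
    slopes[group] = float(np.linalg.lstsq(X, rating, rcond=None)[0][1])
# slopes now holds per-group age coefficients with opposite signs.
```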

3. Do effects of common case-mix adjusters on patient experiences vary across patient groups?

Directory of Open Access Journals (Sweden)

Dolf de Boer

2017-11-01

Full Text Available Abstract Background Many survey studies in health care adjust for demographic characteristics such as age, gender, educational attainment and general health when performing statistical analyses. Whether the effects of these demographic characteristics are consistent between patient groups remains to be determined. This is important as the rationale for adjustment is often that demographic sub-groups differ in their so-called ‘response tendency’. This rationale may be less convincing if the effects of response tendencies vary across patient groups. The present paper examines whether the impact of these characteristics on patients’ global rating of care varies across patient groups. Methods Secondary analyses using multi-level regression models were performed on a dataset including 32 different patient groups and 145,578 observations. For each demographic variable, the 95% expected range of case-mix coefficients across patient groups is presented. In addition, we report whether the variance of coefficients for demographic variables across patient groups is significant. Results Overall, men, elderly, lower educated people and people in good health tend to give higher global ratings. However, these effects varied significantly across patient groups and included the possibility of no effect or an opposite effect in some patient groups. Conclusion The response tendency attributed to demographic characteristics – such as older respondents being milder, or higher educated respondents being more critical – is not general or universal. As such, the mechanism linking demographic characteristics to survey results on patient experiences with quality of care is more complicated than a general response tendency. It is possible that the response tendency interacts with patient group, but it is also possible that other mechanisms are at play.

4. Child behaviour problems, parenting behaviours and parental adjustment in mothers and fathers in Sweden.

Science.gov (United States)

Salari, Raziye; Wells, Michael B; Sarkadi, Anna

2014-11-01

We aim to examine the relationship between child behavioural problems and several parental factors, particularly parental behaviours as reported by both mothers and fathers, in a sample of preschool children in Sweden. Participants were mothers and fathers of 504 3- to 5-year-olds who were recruited through preschools. They completed a set of questionnaires including the Eyberg Child Behavior Inventory, Parenting Sense of Competence Scale, Parenting Scale, Parent Problem Checklist, Dyadic Adjustment Scale and Depression Anxiety Stress Scale. Correlational analyses showed that parent-reported child behaviour problems were positively associated with ineffective parenting practices and interparental conflicts and negatively related to parental competence. Regression analyses showed that, for both mothers and fathers, higher levels of parental over-reactivity and interparental conflict over child-rearing issues and lower levels of parental satisfaction were the most salient factors in predicting their reports of disruptive child behaviour. This study revealed that Swedish parents' perceptions of their parenting are related to their ratings of child behaviour problems, which implies that parent training programs can be useful in addressing behavioural problems in Swedish children. © 2014 the Nordic Societies of Public Health.

5. Do Afterlife Beliefs Affect Psychological Adjustment to Late-Life Spousal Loss?

Science.gov (United States)

2014-01-01

Objectives. We explore whether beliefs about the existence and nature of an afterlife affect 5 psychological symptoms (anxiety, anger, depression, intrusive thoughts, and yearning) among recently bereaved older spouses. Method. We conduct multivariate regression analyses using data from the Changing Lives of Older Couples (CLOC), a prospective study of spousal loss. The CLOC obtained data from bereaved persons prior to loss and both 6 and 18 months postloss. All analyses are adjusted for health, sociodemographic characteristics, and preloss marital quality. Results. Bleak or uncertain views about the afterlife are associated with multiple aspects of distress postloss. Uncertainty about the existence of an afterlife is associated with elevated intrusive thoughts, a symptom similar to posttraumatic distress. Widowed persons who do not expect to be reunited with loved ones in the afterlife report significantly more depressive symptoms, anger, and intrusive thoughts at both 6 and 18 months postloss. Discussion. Beliefs in an afterlife may be maladaptive for coping with late-life spousal loss, particularly if one is uncertain about its existence or holds a pessimistic view of what the afterlife entails. Our findings are broadly consistent with recent work suggesting that “continuing bonds” with the decedent may not be adaptive for older bereaved spouses. PMID:23811692

6. Method for nonlinear exponential regression analysis

Science.gov (United States)

Junkin, B. G.

1972-01-01

Two computer programs for conducting nonlinear exponential regression analysis, developed according to two general types of exponential models, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
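The Taylor-series linearization (Gauss-Newton) scheme the abstract describes can be sketched in Python for the model y = a·exp(bx). The FORTRAN originals are not reproduced; the data are synthetic and noise-free, and starting values come from a log-linear fit.

```python
import numpy as np

# Gauss-Newton for y = a * exp(b * x): linearize via a first-order Taylor
# expansion around the current (a, b) and solve a least-squares step.
x = np.linspace(0.0, 2.0, 20)
y = 3.0 * np.exp(1.2 * x)                       # synthetic noise-free data

# Starting values from the log-linear fit (exact for noise-free data)
X = np.column_stack([np.ones_like(x), x])
c = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
a, b = float(np.exp(c[0])), float(c[1])

for _ in range(5):                              # Gauss-Newton refinement
    f = a * np.exp(b * x)                       # current model values
    J = np.column_stack([np.exp(b * x),         # partial df/da
                         a * x * np.exp(b * x)])  # partial df/db
    step = np.linalg.lstsq(J, y - f, rcond=None)[0]
    a, b = a + step[0], b + step[1]
```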

7. Measurement Error in Education and Growth Regressions

NARCIS (Netherlands)

Portela, Miguel; Alessie, Rob; Teulings, Coen

2010-01-01

The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
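The attenuation bias from measurement error is easy to demonstrate by simulation: the OLS slope shrinks toward zero by the reliability ratio var(true)/var(measured). All numbers below are illustrative, not the paper's education data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative simulation of attenuation bias, with error variance chosen
# equal to the signal variance so the reliability ratio is 4 / (4 + 4) = 0.5.
n = 50_000
educ = rng.normal(10.0, 2.0, n)               # "true" schooling
growth = 1.0 + 0.3 * educ + rng.normal(0, 1.0, n)
measured = educ + rng.normal(0, 2.0, n)       # schooling measured with error

slope = float(np.cov(measured, growth)[0, 1] / np.var(measured, ddof=1))
reliability = 4.0 / (4.0 + 4.0)
# Expect slope near 0.3 * 0.5 = 0.15 rather than the true 0.3.
```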

8. The M Word: Multicollinearity in Multiple Regression.

Science.gov (United States)

Morrow-Howell, Nancy

1994-01-01

Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…
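Multicollinearity among intercorrelated independent variables is commonly quantified with variance inflation factors (VIFs), computed from the R² of regressing each predictor on the others. A numpy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic predictors: x2 is nearly collinear with x1; x3 is independent.
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)

def vif(target, others):
    """Variance inflation factor: 1 / (1 - R^2) from regressing one
    predictor on the remaining predictors."""
    X = np.column_stack([np.ones(len(target))] + others)
    b = np.linalg.lstsq(X, target, rcond=None)[0]
    resid = target - X @ b
    tss = (target - target.mean()) @ (target - target.mean())
    r2 = 1.0 - (resid @ resid) / tss
    return float(1.0 / (1.0 - r2))

vif_x1 = vif(x1, [x2, x3])   # large: x1 is nearly determined by x2
vif_x3 = vif(x3, [x1, x2])   # near 1: x3 is unrelated to the others
```

A common rule of thumb flags VIFs above 5 or 10 as problematic.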

9. Regression Discontinuity Designs Based on Population Thresholds

DEFF Research Database (Denmark)

Eggers, Andrew C.; Freier, Ronny; Grembi, Veronica

In many countries, important features of municipal government (such as the electoral system, mayors' salaries, and the number of councillors) depend on whether the municipality is above or below arbitrary population thresholds. Several papers have used a regression discontinuity design (RDD...

10. Deriving the Regression Line with Algebra

Science.gov (United States)

Quintanilla, John A.

2017-01-01

Exploration with spreadsheets and reliance on previous skills can lead students to determine the line of best fit. To perform linear regression on a set of data, students in Algebra 2 (or, in principle, Algebra 1) do not have to settle for using the mysterious "black box" of their graphing calculators (or other classroom technologies).…

11. Piecewise linear regression splines with hyperbolic covariates

International Nuclear Information System (INIS)

Cologne, John B.; Sposto, Richard

1992-09-01

Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
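The substitution of a hyperbola for the broken-stick term max(0, x − c) can be sketched as follows. For simplicity the join point c and curvature delta are treated as known and the data are generated from the smooth model itself, whereas the paper estimates the curvature by nonlinear regression.

```python
import numpy as np

rng = np.random.default_rng(4)

def hyper(x, c, delta):
    """Hyperbolic covariate: a smooth version of max(0, x - c) whose
    curvature near the join point x = c is controlled by delta."""
    return 0.5 * (x - c + np.sqrt((x - c) ** 2 + 4.0 * delta ** 2))

# Two-phase data with a smooth transition at x = 5
x = np.linspace(0.0, 10.0, 200)
c, delta = 5.0, 0.5
y = 1.0 + 0.5 * x + 2.0 * hyper(x, c, delta) + rng.normal(0, 0.2, 200)

# With c and delta fixed, the spline is linear in its coefficients
X = np.column_stack([np.ones_like(x), x, hyper(x, c, delta)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
# beta[1] is the first-phase slope; beta[1] + beta[2] the second-phase slope.
```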

12. Targeting: Logistic Regression, Special Cases and Extensions

Directory of Open Access Journals (Sweden)

Helmut Schaeben

2014-12-01

Full Text Available Logistic regression is a classical linear model for logit-transformed conditional probabilities of a binary target variable. It recovers the true conditional probabilities if the joint distribution of predictors and the target is of log-linear form. Weights-of-evidence is an ordinary logistic regression with parameters equal to the differences of the weights of evidence if all predictor variables are discrete and conditionally independent given the target variable. The hypothesis of conditional independence can be tested in terms of log-linear models. If the assumption of conditional independence is violated, the application of weights-of-evidence corrupts not only the predicted conditional probabilities but also their rank transform. Logistic regression models including interaction terms can account for the lack of conditional independence; appropriate interaction terms compensate exactly for violations of conditional independence. Multilayer artificial neural nets may be seen as nested regression-like models with some sigmoidal activation function, most often the logistic function. If the net topology, i.e., its control, is sufficiently versatile to mimic interaction terms, artificial neural nets are able to account for violations of conditional independence and yield very similar results. Weights-of-evidence cannot reasonably include interaction terms; subsequent modifications of the weights, as often suggested, cannot emulate the effect of interaction terms.
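A small simulation of the paper's point: when two binary predictors are not conditionally independent given the target, a logistic model with an interaction term recovers the generating coefficients, while a main-effects-only model (the weights-of-evidence situation) cannot. The data are generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
# Two binary predictors whose joint effect on the logit includes an
# interaction, i.e. they are NOT conditionally independent given the target.
n = 5000
x1 = rng.integers(0, 2, n).astype(float)
x2 = rng.integers(0, 2, n).astype(float)
lin = -1.0 + 1.0 * x1 + 1.0 * x2 - 1.5 * x1 * x2    # true logit
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

# Logistic regression WITH the interaction term, fit by Newton-Raphson
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta = np.zeros(4)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    H = X.T @ (X * (p * (1.0 - p))[:, None])         # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))
# beta[3] estimates the interaction (about -1.5); a main-effects-only fit,
# as weights-of-evidence implies, would distort the predicted probabilities.
```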

13. Functional data analysis of generalized regression quantiles

KAUST Repository

Guo, Mengmeng

2013-11-05

Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.
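A single-curve sketch of the asymmetric-loss idea: a linear expectile fit by iteratively reweighted least squares, where residuals above the fit get weight tau and those below get 1 − tau. This is only the building block, not the paper's joint functional-principal-component estimator, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 1.0, 2000)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 2000)

def expectile_fit(x, y, tau, iters=50):
    """Linear tau-expectile via iteratively least asymmetrically
    weighted squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS
    for _ in range(iters):
        w = np.where(y > X @ beta, tau, 1.0 - tau)  # asymmetric weights
        XW = X * w[:, None]
        beta = np.linalg.solve(X.T @ XW, XW.T @ y)
    return beta

b_lo = expectile_fit(x, y, 0.1)
b_hi = expectile_fit(x, y, 0.9)    # upper-tail curve sits above the lower one
b_mid = expectile_fit(x, y, 0.5)   # tau = 0.5 reduces to ordinary least squares
```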

14. Regression testing Ajax applications : Coping with dynamism

NARCIS (Netherlands)

Roest, D.; Mesbah, A.; Van Deursen, A.

2009-01-01

Note: This paper is a pre-print of: Danny Roest, Ali Mesbah and Arie van Deursen. Regression Testing AJAX Applications: Coping with Dynamism. In Proceedings of the 3rd International Conference on Software Testing, Verification and Validation (ICST’10), Paris, France. IEEE Computer Society, 2010.

15. Group-wise partial least square regression

NARCIS (Netherlands)

Camacho, José; Saccenti, Edoardo

2018-01-01

This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

16. Functional data analysis of generalized regression quantiles

KAUST Repository

Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Hä rdle, Wolfgang Karl

2013-01-01

Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

17. Finite Algorithms for Robust Linear Regression

DEFF Research Database (Denmark)

1990-01-01

The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...

18. Function approximation with polynomial regression splines

International Nuclear Information System (INIS)

Urbanski, P.

1996-01-01

Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)

19. Assessing risk factors for periodontitis using regression

Science.gov (United States)

Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

2013-10-01

Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients are obtained along with the p-values from the corresponding significance tests. The classification of a case (individual) adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss greater than or equal to 4 mm at 25% or more (AL≥4mm/≥25%) of the sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
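Odds ratios with 95% confidence intervals from a logistic fit can be sketched as below. The two predictors stand in for IVs such as Smoking Status and Plaque Index, but the data and coefficients are hypothetical, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical stand-ins for two IVs; values are invented for illustration.
n = 3000
smoker = rng.integers(0, 2, n).astype(float)
plaque = rng.uniform(0.0, 3.0, n)
lin = -2.0 + 0.8 * smoker + 0.5 * plaque
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

# Logistic regression by Newton-Raphson
X = np.column_stack([np.ones(n), smoker, plaque])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    H = X.T @ (X * (p * (1.0 - p))[:, None])       # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))

# Standard errors from the inverse information; OR = exp(coefficient)
p = 1.0 / (1.0 + np.exp(-(X @ beta)))
cov = np.linalg.inv(X.T @ (X * (p * (1.0 - p))[:, None]))
se = np.sqrt(np.diag(cov))
or_smoker = float(np.exp(beta[1]))
ci = (float(np.exp(beta[1] - 1.96 * se[1])),
      float(np.exp(beta[1] + 1.96 * se[1])))       # 95% confidence interval
```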

20. Predicting Social Trust with Binary Logistic Regression

Science.gov (United States)

2015-01-01

This study used binary logistic regression to predict social trust with five demographic variables from a national sample of adult individuals who participated in The General Social Survey (GSS) in 2012. The five predictor variables were respondents' highest degree earned, race, sex, general happiness and the importance of personally assisting…

1. Yet another look at MIDAS regression

NARCIS (Netherlands)

Ph.H.B.F. Franses (Philip Hans)

2016-01-01

textabstractA MIDAS regression involves a dependent variable observed at a low frequency and independent variables observed at a higher frequency. This paper relates a true high frequency data generating process, where also the dependent variable is observed (hypothetically) at the high frequency,

2. Revisiting Regression in Autism: Heller's "Dementia Infantilis"

Science.gov (United States)

Westphal, Alexander; Schelinski, Stefanie; Volkmar, Fred; Pelphrey, Kevin

2013-01-01

Theodor Heller first described a severe regression of adaptive function in normally developing children, something he termed dementia infantilis, over 100 years ago. Dementia infantilis is most closely related to the modern diagnosis, childhood disintegrative disorder. We translate Heller's paper, Über Dementia Infantilis, and discuss…

3. Fast multi-output relevance vector regression

OpenAIRE

Ha, Youngmin

2017-01-01

This paper aims to decrease the time complexity of multi-output relevance vector regression from O(VM^3) to O(V^3+M^3), where V is the number of output dimensions, M is the number of basis functions, and V

4. Regression Equations for Birth Weight Estimation using ...

African Journals Online (AJOL)

In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...

5. Superquantile Regression: Theory, Algorithms, and Applications

Science.gov (United States)

2014-12-01

Keywords (recovered from the report documentation page): Navy submariners, reliability engineering, uncertainty quantification, financial risk management; superquantile, superquantile regression.

6. transformation of independent variables in polynomial regression ...

African Journals Online (AJOL)

preferable when possible to work with a simple functional form in transformed variables rather than with a more complicated form in the original variables. In this paper, it is shown that linear transformations applied to independent variables in polynomial regression models affect the t ratio and hence the statistical ...

7. Multiple Linear Regression: A Realistic Reflector.

Science.gov (United States)

Nutt, A. T.; Batsell, R. R.

Examples of the use of Multiple Linear Regression (MLR) techniques are presented. This is done to show how MLR aids data processing and decision-making by providing the decision-maker with freedom in phrasing questions and by accurately reflecting the data on hand. A brief overview of the rationale underlying MLR is given, some basic definitions…

8. Risk analysis of fuel pontoons [Risico-analyse brandstofpontons]

NARCIS (Netherlands)

Uijt de Haag P; Post J; LSO

2001-01-01

To determine the risks of fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. It is assumed that the pontoon is located in a

9. Fast multichannel analyser

Energy Technology Data Exchange (ETDEWEB)

Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

1982-10-01

A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.

10. A fast multichannel analyser

International Nuclear Information System (INIS)

Berry, A.; Przybylski, M.M.; Sumner, I.

1982-01-01

A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format. (orig.)

Science.gov (United States)

Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen

2017-03-01

Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
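The bandwidth-adaptation idea — a small kernel where samples are dense (to preserve texture) and a wider kernel across gaps (to fill them) — can be sketched in one dimension. The paper's algorithm operates on 3D voxel grids of tracked ultrasound images; this is only the core interpolation idea on synthetic samples.

```python
import numpy as np

rng = np.random.default_rng(8)
# Samples dense on [0, 4] and [8, 10], with a gap in between,
# mimicking irregularly distributed ultrasound data.
xs = np.concatenate([rng.uniform(0.0, 4.0, 200), rng.uniform(8.0, 10.0, 100)])
vals = np.sin(xs)

def interpolate(q, xs, vals, base_h=0.1, k=10):
    """Gaussian-kernel regression whose bandwidth adapts to local sampling
    density: at least wide enough to reach the k nearest samples."""
    d = np.abs(xs - q)
    h = max(base_h, float(np.sort(d)[k - 1]))   # widen over sparse regions
    w = np.exp(-0.5 * (d / h) ** 2)
    return float(np.sum(w * vals) / np.sum(w))

dense_est = interpolate(1.0, xs, vals)  # dense region: small kernel, fine detail
gap_est = interpolate(6.0, xs, vals)    # inside the gap: kernel expands to fill
```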

12. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

Directory of Open Access Journals (Sweden)

Minh Vu Trieu

2017-03-01

Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

13. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

Science.gov (United States)

Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

2017-03-01

This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

14. Physics constrained nonlinear regression models for time series

International Nuclear Information System (INIS)

Majda, Andrew J; Harlim, John

2013-01-01

A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

15. Relationship between postural control and fine motor skills in preterm infants at 6 and 12 months adjusted age.

Science.gov (United States)

Wang, Tien-Ni; Howe, Tsu-Hsin; Hinojosa, Jim; Weinberg, Sharon L

2011-01-01

We examined the relationship between postural control and fine motor skills of preterm infants at 6 and 12 mo adjusted age. The Alberta Infant Motor Scale was used to measure postural control, and the Peabody Developmental Motor Scales II was used to measure fine motor skills. The data analyzed were taken from 105 medical records from a preterm infant follow-up clinic at an urban academic medical center in south Taiwan. Using multiple regression analyses, we found that the development of postural control is related to the development of fine motor skills, especially in the group of preterm infants with delayed postural control. This finding supports the theoretical assumption of proximal-distal development used by many occupational therapists to guide intervention. Further research is suggested to corroborate findings.

16. Tax System in Poland – Progressive or Regressive?

Directory of Open Access Journals (Sweden)

Jacek Tomkiewicz

2016-03-01

Purpose: To analyse the impact of the Polish fiscal regime on the general revenue of the country, and specifically to establish whether the cumulative tax burden borne by Polish households is progressive or regressive. Methodology: On the basis of Eurostat and OECD data, the author has analysed fiscal regimes in EU Member States and in OECD countries. The tax burden of households within different income groups has also been examined pursuant to applicable fiscal laws and data pertaining to the revenue and expenditure of households published by the Central Statistical Office (CSO). Conclusions: The fiscal regime in Poland is regressive; that is, the relative fiscal burden decreases as the taxpayer's income increases. Research Implications: The article contributes to the on-going discussion on social cohesion, in particular with respect to economic policy instruments aimed at the redistribution of income within the economy. Originality: The author presents an analysis of data pertaining to fiscal policies in EU Member States and OECD countries and assesses the impact of the legal environment (fiscal regime and social security system) in Poland on income distribution within the economy. The impact of the total tax burden (direct and indirect taxes, social security contributions) on the economic situation of households from different income groups has been calculated using an original formula.

17. Detecting overdispersion in count data: A zero-inflated Poisson regression analysis

Science.gov (United States)

Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

2017-09-01

This study analyses count data on butterfly communities in Jasin, Melaka. For count-valued dependent variables, the Poisson regression model is the benchmark for regression analysis. Building on previous literature that used Poisson regression, this study applies zero-inflated Poisson (ZIP) regression to the same count data on butterfly communities in Jasin, Melaka to obtain more precise estimates. When excess zeros are present, Poisson regression should be abandoned in favour of count-data models capable of accounting for the extra zeros explicitly; the ZIP regression model is among the most popular of these. The data, collected in Jasin, Melaka, comprise 131 recorded subjects spanning five butterfly families, which define the five variables in the analysis (the types of subjects). The ZIP analysis used the SAS overdispersion procedure to handle the zero values, and the purpose of extending the previous study was to determine which model performs better when zero values occur in the observed count data. Model comparison used AIC, BIC, and the Vuong test at the 5% significance level. The findings indicate the presence of overdispersion in the zero-valued data, and that the ZIP regression model outperforms the Poisson regression model when zero values exist.
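
As a hedged illustration of the model comparison described in this record, the sketch below fits an intercept-only Poisson model and an intercept-only zero-inflated Poisson model by maximum likelihood to simulated count data (not the butterfly data; parameter values and variable names are our own assumptions) and compares them by AIC, using numpy and scipy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulated zero-inflated counts (illustrative only): with probability
# pi_true emit a structural zero, otherwise draw from Poisson(lam_true).
n, pi_true, lam_true = 500, 0.4, 3.0
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def poisson_nll(lam):
    # negative log-likelihood of an intercept-only Poisson model
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def zip_nll(params):
    # negative log-likelihood of an intercept-only zero-inflated Poisson model
    pi, lam = params
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))                  # P(Y = 0)
    ll_pos = np.log1p(-pi) + y * np.log(lam) - lam - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

lam_hat = y.mean()                                   # Poisson MLE
fit = minimize(zip_nll, x0=[0.3, 2.0],
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
aic_poisson = 2 * 1 + 2 * poisson_nll(lam_hat)
aic_zip = 2 * 2 + 2 * fit.fun
```

With genuine excess zeros in the data, the ZIP model attains the lower AIC, mirroring the record's conclusion.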

18. Culture, emotion regulation, and adjustment.

Science.gov (United States)

Matsumoto, David; Yoo, Seung Hee; Nakagawa, Sanae

2008-06-01

This article reports differences across 23 countries on 2 processes of emotion regulation--reappraisal and suppression. Cultural dimensions were correlated with country means on both and the relationship between them. Cultures that emphasized the maintenance of social order--that is, those that were long-term oriented and valued embeddedness and hierarchy--tended to have higher scores on suppression, and reappraisal and suppression tended to be positively correlated. In contrast, cultures that minimized the maintenance of social order and valued individual Affective Autonomy and Egalitarianism tended to have lower scores on Suppression, and Reappraisal and Suppression tended to be negatively correlated. Moreover, country-level emotion regulation was significantly correlated with country-level indices of both positive and negative adjustment. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

19. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

Science.gov (United States)

Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

2014-12-01

Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby, judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
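
The comparison in this record can be sketched with scikit-learn (our choice of library; the authors' implementation details are not given): a set of separate one-vs-rest binary logistic models versus a single multinomial model, evaluated on synthetic three-class "pixel" data standing in for land-use classes.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for pixels: 2 driving factors, 3 land-use classes.
X, y = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Separate binary logistic suitability models, one per land-use class
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, ytr)
# One multinomial model allocating each pixel among all classes jointly
mnl = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

acc_ovr = ovr.score(Xte, yte)   # proportion of correctly allocated pixels
acc_mnl = mnl.score(Xte, yte)
```

On real land-use data the paper reports a sizeable accuracy gap in favour of the multinomial model; on easy synthetic blobs the two can be close, so this sketch only demonstrates the mechanics of the comparison.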

20. A risk-adjusted financial model to estimate the cost of a video-assisted thoracoscopic surgery lobectomy programme.

Science.gov (United States)

Brunelli, Alessandro; Tentzeris, Vasileios; Sandri, Alberto; McKenna, Alexandra; Liew, Shan Liung; Milton, Richard; Chaudhuri, Nilanjan; Kefaloyannis, Emmanuel; Papagiannopoulos, Kostas

2016-05-01

1. Possible future HERA analyses

International Nuclear Information System (INIS)

Geiser, Achim

2015-12-01

A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented on. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution targets not only experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

2. Biomass feedstock analyses

Energy Technology Data Exchange (ETDEWEB)

Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

1996-12-31

The overall objectives of the project `Feasibility of electricity production from biomass by pressurized gasification systems` within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

3. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

Science.gov (United States)

Vaeth, Michael; Skovlund, Eva

2004-06-15

For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
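
A minimal sketch of the recipe stated above for the logistic case, assuming a standard normal approximation for the resulting two-proportion comparison (the paper's exact derivation may differ); the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def logistic_power(beta, sd_x, p_bar, n, alpha=0.05):
    """Approximate power for testing beta = 0 in logistic regression with one
    covariate, via the equivalent two-sample problem: two equally sized groups
    whose logits differ by delta = 2 * beta * sd_x, with the overall event
    probability held at p_bar."""
    delta = 2.0 * beta * sd_x
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    # choose the first group's logit so the average event probability is p_bar
    l1 = brentq(lambda t: 0.5 * (expit(t) + expit(t + delta)) - p_bar, -30, 30)
    p1, p2 = expit(l1), expit(l1 + delta)
    m = n / 2.0                                       # subjects per group
    se = np.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / m)
    return norm.cdf(abs(p1 - p2) / se - norm.ppf(1 - alpha / 2))

power_100 = logistic_power(beta=0.5, sd_x=1.0, p_bar=0.3, n=100)
power_400 = logistic_power(beta=0.5, sd_x=1.0, p_bar=0.3, n=400)
```

As expected, power increases monotonically with the total sample size.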

4. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

Science.gov (United States)

Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

2015-12-01

The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
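
The cross-sectional versus longitudinal contrast described above can be illustrated with a hedged numpy simulation (not the study's data; the decline rates are chosen only to echo the reported 1.33 vs 0.95 ml/min/year, and averaging within-subject OLS slopes is a crude stand-in for a random coefficient model).

```python
import numpy as np

rng = np.random.default_rng(1)

# Each subject declines at a true within-subject rate, but older birth
# cohorts also start from a lower baseline, which steepens the
# cross-sectional gradient relative to the longitudinal one.
n_subj = 300
true_slope = -0.95        # within-subject decline per year (assumed)
cohort_gradient = -1.33   # cross-sectional gradient across ages (assumed)
age0 = rng.uniform(30, 80, n_subj)
baseline = 130 + cohort_gradient * age0 + rng.normal(0, 5, n_subj)

first_visit, subj_slopes = [], []
for i in range(n_subj):
    t = np.arange(4) * 2.0                           # four visits, 2 years apart
    v = baseline[i] + true_slope * t + rng.normal(0, 2, 4)
    first_visit.append(v[0])
    subj_slopes.append(np.polyfit(t, v, 1)[0])       # per-subject OLS slope

# Cross-sectional analysis: one observation per subject, regressed on age
cs_slope = np.polyfit(age0, first_visit, 1)[0]
# Longitudinal analysis: pool the within-subject slopes
lt_slope = float(np.mean(subj_slopes))
```

The cross-sectional slope absorbs the cohort effect and so overstates the within-subject decline, which is the qualitative pattern the study reports.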

5. Controlling attribute effect in linear regression

KAUST Repository

Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang

2013-01-01

In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
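
As a simplified analogue of the mean-outcome constraint described above (not the paper's propensity-based derivation), the sketch below solves least squares subject to one linear constraint that equalizes the two groups' mean fitted values, via the KKT system; the data and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: binary attribute g (e.g. batch or gender proxy)
n = 400
g = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n),
                     rng.normal(0, 1, n),
                     g + 0.5 * rng.normal(0, 1, n)])   # feature correlated with g
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(0, 1, n)

# Constraint a^T w = 0 with a = mean(X | g=1) - mean(X | g=0):
# this forces equal mean predictions for the two groups.
a = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)

# Solve the KKT system for  min ||Xw - y||^2  s.t.  a^T w = 0
p = X.shape[1]
K = np.zeros((p + 1, p + 1))
K[:p, :p] = 2 * X.T @ X
K[:p, p] = a
K[p, :p] = a
rhs = np.concatenate([2 * X.T @ y, [0.0]])
w = np.linalg.solve(K, rhs)[:p]

# Unconstrained OLS for comparison
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
gap = a @ w          # group gap in mean predictions, ~0 by construction
gap_ols = a @ w_ols  # nonzero: OLS lets the attribute drive the outcome
```

The constrained fit sacrifices a little squared error in exchange for removing the attribute's effect on the mean outcome.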

6. Stochastic development regression using method of moments

DEFF Research Database (Denmark)

Kühnel, Line; Sommer, Stefan Horst

2017-01-01

This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite-dimensional landmark manifolds.

7. Beta-binomial regression and bimodal utilization.

Science.gov (United States)

Liu, Chuan-Fen; Burgess, James F; Manning, Willard G; Maciejewski, Matthew L

2013-10-01

To illustrate how the analysis of bimodal U-shaped distributed utilization can be modeled with beta-binomial regression, which is rarely used in health services research. Veterans Affairs (VA) administrative data and Medicare claims in 2001-2004 for 11,123 Medicare-eligible VA primary care users in 2000. We compared means and distributions of VA reliance (the proportion of all VA/Medicare primary care visits occurring in VA) predicted from beta-binomial, binomial, and ordinary least-squares (OLS) models. Beta-binomial model fits the bimodal distribution of VA reliance better than binomial and OLS models due to the nondependence on normality and the greater flexibility in shape parameters. Increased awareness of beta-binomial regression may help analysts apply appropriate methods to outcomes with bimodal or U-shaped distributions. © Health Research and Educational Trust.
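
A hedged sketch of the modeling idea above, using `scipy.stats.betabinom` on simulated bimodal "reliance" counts (not the VA/Medicare data): a beta-binomial with both shape parameters below 1 places mass near both 0 and the maximum, which a plain binomial cannot capture.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom

rng = np.random.default_rng(3)

# U-shaped utilization: out of m = 20 visits, how many occurred in system A?
# Beta-binomial with a, b < 1 is bimodal at the endpoints.
m, n = 20, 800
k = betabinom.rvs(m, 0.4, 0.4, size=n, random_state=rng)

def bb_nll(params):
    # negative log-likelihood of the beta-binomial model
    a, b = params
    return -betabinom.logpmf(k, m, a, b).sum()

res = minimize(bb_nll, x0=[1.0, 1.0], bounds=[(1e-4, None)] * 2)
nll_bb = res.fun

# Binomial benchmark with a single pooled proportion
p_hat = k.mean() / m
nll_bin = -binom.logpmf(k, m, p_hat).sum()

aic_bb = 2 * 2 + 2 * nll_bb
aic_bin = 2 * 1 + 2 * nll_bin
```

On bimodal data the beta-binomial fit dominates by AIC, echoing the record's conclusion about distributional flexibility.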

8. Testing homogeneity in Weibull-regression models.

Science.gov (United States)

Bolfarine, Heleno; Valença, Dione M

2005-10-01

In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.

9. Are increases in cigarette taxation regressive?

Science.gov (United States)

Borren, P; Sutton, M

1992-12-01

Using the latest published data from Tobacco Advisory Council surveys, this paper re-evaluates the question of whether or not increases in cigarette taxation are regressive in the United Kingdom. The extended data set shows no evidence of increasing price elasticity by social class as found in a major previous study. To the contrary, there appears to be no clear pattern in the price responsiveness of smoking behaviour across different social classes. Increases in cigarette taxation, while reducing smoking levels in all groups, fall most heavily on men and women in the lowest social class. Men and women in social class five can expect to pay eight and eleven times more of a tax increase, respectively, than their social class one counterparts. Taken as a proportion of relative incomes, the regressive nature of increases in cigarette taxation is even more pronounced.

11. Regression Models For Multivariate Count Data.

Science.gov (United States)

Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

2017-01-01

Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

12. Model selection in kernel ridge regression

DEFF Research Database (Denmark)

Exterkate, Peter

2013-01-01

Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study [...]
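
In the spirit of the grid-plus-cross-validation guideline above, the hedged sketch below tunes a Gaussian-kernel ridge regression with scikit-learn on simulated data (library choice and grid values are our own assumptions, not the paper's).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)

# Smooth nonlinear signal with light noise (illustrative data)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

# Small grid over the ridge penalty and the Gaussian kernel width,
# selected by 5-fold cross-validation
grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 0.5, 1.0, 2.0]}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
```

`search.best_params_` then holds the tuning values the cross-validated grid prefers, and `search.best_score_` their out-of-fold R-squared.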

13. Confidence bands for inverse regression models

International Nuclear Information System (INIS)

Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

2010-01-01

We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract.

Science.gov (United States)

Brownbridge, G; Fielding, D M

1994-12-01

Sixty children and adolescents in end-stage renal failure who were undergoing either haemodialysis or continuous ambulatory peritoneal dialysis at one of five United Kingdom dialysis centres were assessed on psychosocial adjustment and adherence to their fluid intake, diet and medication regimes. Parental adjustment was also measured and data on sociodemographic and treatment history variables collected. A structured family interview and standardised questionnaire measures of anxiety, depression and behavioural disturbance were used. Multiple measures of treatment adherence were obtained, utilising children's and parents' self-reports, weight gain between dialysis sessions, blood pressure, serum potassium level, blood urea level, dietitians' surveys and consultants' ratings. Correlational analyses showed that low treatment adherence was associated with poor adjustment to diagnosis and dialysis by children and parents (P [...]), with adolescents showing lower adherence than younger children (P [...]), and with longer time on dialysis (P [...]). These findings have implications for the treatment of this group of children. Future research should develop and evaluate psychosocial interventions aimed at improving treatment adherence.

15. Regressing Atherosclerosis by Resolving Plaque Inflammation

Science.gov (United States)

2017-07-01

Regression requires the alteration of macrophages in the plaques to a tissue-repair "alternatively" activated state. This switch in activation state requires the action of the TH2 cytokines interleukin (IL)-4 or IL-13.

16. Determination of regression laws: Linear and nonlinear

International Nuclear Information System (INIS)

Onishchenko, A.M.

1994-01-01

A detailed mathematical determination of regression laws is presented in the article. Particular emphasis is placed on determining the laws of X_j on X_l to account for source nuclei decay and detector errors in nuclear physics instrumentation. Both linear and nonlinear relations are presented. Linearization of 19 functions is tabulated, including graph, relation, variable substitution, obtained linear function, and remarks. 6 refs., 1 tab
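
One row of such a linearization table can be illustrated as follows (hypothetical coefficients, numpy assumed): the nonlinear relation y = a·exp(b·x), e.g. a decay law, becomes linear in the substituted variable Y = ln y, giving Y = ln a + b·x, which ordinary least squares can fit.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical decay-law data with multiplicative noise
a_true, b_true = 2.0, -0.7
x = np.linspace(0, 5, 50)
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(0, 0.02, 50))

# Variable substitution Y = ln y linearizes the relation; fit Y = ln a + b*x
b_hat, log_a_hat = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(log_a_hat)
```

The fitted `(a_hat, b_hat)` recover the original nonlinear parameters from the linearized fit.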

17. Directional quantile regression in Octave (and MATLAB)

Czech Academy of Sciences Publication Activity Database

Boček, Pavel; Šiman, Miroslav

2016-01-01

Roč. 52, č. 1 (2016), s. 28-51 ISSN 0023-5954 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : quantile regression * multivariate quantile * depth contour * Matlab Subject RIV: IN - Informatics, Computer Science Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/bocek-0458380.pdf

18. Logistic regression a self-learning text

CERN Document Server

Kleinbaum, David G

1994-01-01

This textbook provides students and professionals in the health sciences with a presentation of the use of logistic regression in research. The text is self-contained, and designed to be used both in class or as a tool for self-study. It arises from the author's many years of experience teaching this material and the notes on which it is based have been extensively used throughout the world.

19. Theory of Work Adjustment Personality Constructs.

Science.gov (United States)

Lawson, Loralie

1993-01-01

To measure Theory of Work Adjustment personality and adjustment style dimensions, content-based scales were analyzed for homogeneity and successively reanalyzed for reliability improvement. Three sound scales were developed: inflexibility, activeness, and reactiveness. (SK)

20. Multitask Quantile Regression under the Transnormal Model.

Science.gov (United States)

Fan, Jianqing; Xue, Lingzhou; Zou, Hui

2016-01-01

We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected estimator. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides a provable prediction interval in the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application analyzing protein mass spectroscopy data.
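
One ingredient of the rank-based covariance regularization above can be sketched in a few lines (illustrative simulation, scipy assumed): under a Gaussian-copula (transnormal) model, the latent correlation is recovered from Kendall's tau via sin(π/2 · tau), regardless of unknown monotone transforms of the margins.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)

# Latent bivariate normal with correlation rho_true, observed only through
# arbitrary monotone transformations of each coordinate.
n, rho_true = 2000, 0.6
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], n)
x1 = np.exp(z[:, 0])        # monotone transform of the first margin
x2 = z[:, 1] ** 3           # monotone transform of the second margin

tau, _ = kendalltau(x1, x2)
rho_hat = np.sin(np.pi / 2 * tau)   # rank-based latent-correlation estimate
```

Because Kendall's tau is invariant to monotone transforms, `rho_hat` estimates the latent correlation without knowing the margins, which is what makes the rank-based covariance estimate robust to nonnormality.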