WorldWideScience

Sample records for regression analyses confirmed

  1. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
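
As an illustration of the kind of calculation behind such power analyses, here is a minimal sketch of the power of a two-sided test for a bivariate correlation, using the textbook Fisher z approximation (not G*Power's exact routines; the effect size and sample size are illustrative):

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_power(rho: float, n: int) -> float:
    """Approximate power of a two-sided 5% test of H0: rho = 0,
    based on the Fisher z transformation of the sample correlation."""
    fisher_z = 0.5 * math.log((1 + rho) / (1 - rho))  # atanh(rho)
    se = 1.0 / math.sqrt(n - 3)                       # SE of Fisher z
    z_crit = 1.959964                                 # two-sided 5% critical value
    shift = fisher_z / se
    # Power = P(reject H0) under the alternative rho != 0
    return (1 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

print(round(correlation_power(0.3, 84), 2))  # ≈ 0.8
```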

  2. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a tool that can be used in macroeconomic analyses. It describes the estimation of the parameters, the statistical tests used, and the concepts of homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor in guaranteeing the quality of the models, analyses, results and interpretations that can be drawn at this level.
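
The parameter estimation mentioned above can be sketched with the closed-form OLS formulas for a simple linear regression; the data below are hypothetical macroeconomic figures, purely for illustration:

```python
def ols_fit(x, y):
    """Closed-form OLS estimates for the model y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx            # slope
    a = mean_y - b * mean_x  # intercept
    return a, b

# Hypothetical data: investment rate (%) vs. GDP growth (%)
a, b = ols_fit([20, 22, 25, 27, 30], [2.0, 2.5, 3.1, 3.4, 4.0])
print(round(a, 2), round(b, 2))
```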

  3. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water quality, hydrological and meteorological ones. We analyse the effect of rainfall and flow on trends in water quality using a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequencies in the data collection: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of MIDAS regression is the flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed sampling frequencies of the data.
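
A minimal sketch of the MIDAS idea, assuming the commonly used exponential Almon weighting scheme: many daily observations are collapsed into a single weighted regressor for the lower-frequency model. The parameter values and rainfall series here are illustrative, not those of the Shoalhaven study:

```python
import math

def exp_almon_weights(k: int, theta1: float, theta2: float):
    """Exponential Almon lag weights used in MIDAS regressions:
    w_j proportional to exp(theta1*j + theta2*j^2), normalised to sum to 1."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(1, k + 1)]
    total = sum(raw)
    return [r / total for r in raw]

def midas_aggregate(daily, theta1=0.1, theta2=-0.05):
    """Collapse k daily observations (e.g. rainfall) into one weighted
    regressor for a fortnightly water-quality model."""
    w = exp_almon_weights(len(daily), theta1, theta2)
    return sum(wi * xi for wi, xi in zip(w, daily))

rain_14d = [0, 3, 12, 5, 0, 0, 1, 0, 0, 8, 2, 0, 0, 0]  # mm/day, hypothetical
x = midas_aggregate(rain_14d)
print(0 <= x <= max(rain_14d))  # weighted average stays within the data range
```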

  4. Activation analyses of authenticated hairs of Napoleon Bonaparte confirm arsenic poisoning.

    Science.gov (United States)

    Weider, B; Fournier, J H

    1999-12-01

    In 1960, activation analyses performed at the Harwell Nuclear Research Laboratory (University of Glasgow, London) on authenticated hairs of Napoleon Bonaparte, taken immediately after his death, confirmed Napoleon's chronic arsenic poisoning on the island of St. Helena. Timeline correlation with his clinical symptomatology over the preceding 4 months, as reported in the written diaries of his exiled companions, further supports the effect of fluctuating, elevated toxic levels of arsenic on his health. Independent analyses of authenticated hairs of Napoleon by the Toxicology Crime Laboratory of the United States Federal Bureau of Investigation in 1995 revealed toxic levels of arsenic. The successful assassination of Napoleon included both a cosmetic and a lethal phase. The cosmetic phase consisted of arsenic poisoning over time to weaken Napoleon, making the associated debility appear to be a natural illness and thus allaying any suspicions prior to instituting the lethal phase. On May 3, 1821, at 5:30 P.M., the lethal phase was carried out. Napoleon was given Calomel (HgCl), a cathartic, and a popular orange-flavored drink called orgeat, which was flavored with the oil of bitter almonds. Together they formed mercury cyanide, which is lethal. Napoleon lost consciousness and died two days later.

  5. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies.

    Science.gov (United States)

    Vatcheva, Kristina P; Lee, MinJae; McCormick, Joseph B; Rahbar, Mohammad H

    2016-04-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity can result in misleading interpretations of the results. A review of the epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in analyses of data from epidemiologic studies. We used simulated datasets and real-life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and encourage researchers to consider diagnostics for multicollinearity as one of the steps in regression analysis.
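
A standard diagnostic for multicollinearity is the variance inflation factor (VIF). The sketch below computes VIFs from first principles on simulated data (not the Cameron County cohort data); a VIF well above 10 is a common rule-of-thumb warning sign:

```python
import numpy as np

def vif(X: np.ndarray) -> list:
    """Variance inflation factor for each column of X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)                   # independent
v = vif(np.column_stack([x1, x2, x3]))
print(v[0] > 10 and v[1] > 10 and v[2] < 2)  # collinear pair flagged
```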

  6. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

    NARCIS (Netherlands)

    Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

    2011-01-01

    Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher

  7. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and the estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R², although adjusted R² estimates behaved well. The bias in estimating the model R² statistic was inversely proportional to the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals.
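
The Monte Carlo approach described above can be sketched as follows. This is a simplified illustration (standard normal predictors, unit error variance, one coefficient tracked), not the authors' simulation code; it shows that even at two subjects per variable the coefficient estimate is nearly unbiased on average:

```python
import numpy as np

def mean_estimate(spv: int, p: int = 5, beta: float = 1.0,
                  reps: int = 2000, seed: int = 1) -> float:
    """Monte Carlo average of the OLS estimate of one coefficient
    when the sample has `spv` subjects per variable."""
    rng = np.random.default_rng(seed)
    n = spv * p
    estimates = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = X @ np.full(p, beta) + rng.normal(size=n)
        A = np.column_stack([np.ones(n), X])          # add intercept
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        estimates.append(b[1])                        # first slope
    return float(np.mean(estimates))

# With only 2 subjects per variable, relative bias is well under 10%:
print(abs(mean_estimate(2) - 1.0) < 0.1)
```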

  8. Statistical and regression analyses of detected extrasolar systems

    Czech Academy of Sciences Publication Activity Database

    Pintr, Pavel; Peřinová, V.; Lukš, A.; Pathak, A.

    2013-01-01

    Vol. 75, No. 1 (2013), pp. 37-45. ISSN 0032-0633. Institutional support: RVO:61389021. Keywords: Exoplanets * Kepler candidates * Regression analysis. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 1.630, year: 2013. http://www.sciencedirect.com/science/article/pii/S0032063312003066

  9. Reducing Inter-Laboratory Differences between Semen Analyses Using Z Score and Regression Transformations

    Directory of Open Access Journals (Sweden)

    Esther Leushuis

    2016-12-01

    Background: Standardization of the semen analysis may improve reproducibility. We assessed variability between laboratories in semen analyses and evaluated whether a transformation using Z scores and regression statistics was able to reduce this variability. Materials and Methods: We performed a retrospective cohort study. We calculated between-laboratory coefficients of variation (CVB) for sperm concentration and for morphology. Subsequently, we standardized the semen analysis results by calculating laboratory-specific Z scores, and by using regression. We used analysis of variance for four semen parameters to assess systematic differences between laboratories before and after the transformations, both in the circulation samples and in the samples obtained in the prospective cohort study in the Netherlands between January 2002 and February 2004. Results: The mean CVB was 7% for sperm concentration (range 3 to 13%) and 32% for sperm morphology (range 18 to 51%). The differences between the laboratories were statistically significant for all semen parameters (all P<0.001). Standardization using Z scores did not reduce the differences in semen analysis results between the laboratories (all P<0.001). Conclusion: There is large between-laboratory variability for sperm morphology and small, but statistically significant, between-laboratory variation for sperm concentration. Standardization using Z scores does not eliminate between-laboratory variability.
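
The laboratory-specific Z-score transformation evaluated above can be sketched as follows; the concentrations, laboratory labels, and reference means/SDs below are hypothetical:

```python
def z_transform(values, lab_means, lab_sds, lab_ids):
    """Replace each semen-analysis result by its laboratory-specific
    Z score: (value - lab mean) / lab SD."""
    return [(v - lab_means[lab]) / lab_sds[lab]
            for v, lab in zip(values, lab_ids)]

# Hypothetical sperm concentrations (million/mL) from two labs whose
# calibrations differ systematically:
conc  = [60.0, 80.0, 30.0, 50.0]
labs  = ["A", "A", "B", "B"]
means = {"A": 70.0, "B": 40.0}
sds   = {"A": 10.0, "B": 10.0}
print(z_transform(conc, means, sds, labs))  # → [-1.0, 1.0, -1.0, 1.0]
```

After the transformation the two laboratories are on a common scale, although, as the study found, this alone need not remove all between-laboratory differences.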

  10. Performance Confirmation Plan

    International Nuclear Information System (INIS)

    Lindner, E.N.

    2000-01-01

    As described, the purpose of the Performance Confirmation Plan is to specify monitoring, testing, and analysis activities for evaluating the accuracy and adequacy of the information used to determine that performance objectives for postclosure will be met. This plan defines a number of specific performance confirmation activities and associated test concepts in support of the MGR that will be implemented to fulfill this purpose. In doing so, the plan defines an approach to identify key factors and processes, predict performance, establish tolerances and test criteria, collect data (through monitoring, testing, and experiments), analyze these data, and recommend appropriate action. The process of defining which factors to address under performance confirmation incorporates input from several areas. In all cases, key performance confirmation factors are those factors which are: (1) important to safety, (2) measurable and predictable, and (3) relevant to the program (i.e., a factor that is affected by construction, emplacement, or is a time-dependent variable). For the present version of the plan, performance confirmation factors important to safety are identified using the principal factors from the RSS (CRWMS M and O 2000a) (which is derived from TSPA analyses) together with other available performance assessment analyses. With this basis, key performance confirmation factors have been identified, and test concepts and test descriptions have been developed in the plan. Other activities are also incorporated into the performance confirmation program outside of these key factors. Additional activities and tests have been incorporated when they are prescribed by requirements and regulations or are necessary to address data needs and model validation requirements relevant to postclosure safety. These other activities have been included with identified factors to construct the overall performance confirmation program

  12. Correlation and regression analyses of genetic effects for different types of cells in mammals under radiation and chemical treatment

    International Nuclear Information System (INIS)

    Slutskaya, N.G.; Mosseh, I.B.

    2006-01-01

    Data about genetic mutations under radiation and chemical treatment for different types of cells have been analyzed with correlation and regression analyses. Linear correlation between different genetic effects in sex cells and somatic cells have found. The results may be extrapolated on sex cells of human and mammals. (authors)

  13. Paradox of spontaneous cancer regression: implications for fluctuational radiothermy and radiotherapy

    International Nuclear Information System (INIS)

    Roy, Prasun K.; Dutta Majumder, D.; Biswas, Jaydip

    1999-01-01

    Spontaneous regression of malignant tumours without treatment is a most enigmatic phenomenon with immense therapeutic potentialities. We analyse such cases to find that the commonest cause is a preceding episode of high fever-induced thermal fluctuation which produce fluctuation of biochemical and immunological parameters. Using Prigogine-Glansdorff thermodynamic stability formalism and biocybernetic principles, we develop the theoretical foundation of tumour regression induced by thermal, radiational or oxygenational fluctuations. For regression, a preliminary threshold condition of fluctuations is derived, namely σ > 2.83. We present some striking confirmation of such fluctuation-induced regression of various therapy-resistant masses as Ewing tumour, neurogranuloma and Lewis lung carcinoma by utilising σ > 2.83. Our biothermodynamic stability model of malignancy appears to illuminate the marked increase of aggressiveness of mammalian malignancy which occurred around 250 million years ago when homeothermic warm-blooded pre-mammals evolved. Using experimental data, we propose a novel approach of multi-modal hyper-fluctuation therapy involving modulation of radiotherapeutic hyper-fractionation, temperature, radiothermy and immune-status. (author)

  14. Exploring Selective Exposure and Confirmation Bias as Processes Underlying Employee Work Happiness: An Intervention Study.

    Science.gov (United States)

    Williams, Paige; Kern, Margaret L; Waters, Lea

    2016-01-01

    Employee psychological capital (PsyCap), perceptions of organizational virtue (OV), and work happiness have been shown to be associated within and over time. This study examines selective exposure and confirmation bias as potential processes underlying the PsyCap, OV, and work happiness associations. As part of a quasi-experimental study design, school staff (N = 69) completed surveys at three time points. After the first assessment, some staff (n = 51) completed a positive psychology training intervention. Results of descriptive statistics, correlation, and regression analyses on the intervention group provide some support for selective exposure and confirmation bias as explanatory mechanisms. In focusing on the processes through which employee attitudes may influence work happiness, this study advances theoretical understanding, specifically of selective exposure and confirmation bias in a field study context.

  15. The number of subjects per variable required in linear regression analyses

    NARCIS (Netherlands)

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression

  16. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies

    OpenAIRE

    Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.

    2016-01-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013, illustrated the need for a greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epide...

  17. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, using opencast coal mine occupational accident records for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used for logistic regression analyses, which predicted the probability of accidents resulting in more or fewer than 3 lost workdays for non-fatal injuries. Social facilities (area of surface installations), workshops and opencast mining areas are the areas with the highest probability of accidents with more than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on reported accidents that occurred in 2012 at the ELI in Soma and correctly estimated the probability of accidents with lost workdays in 70% of cases.

  18. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Science.gov (United States)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulation into rare event logistic regression. This technique, called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and allows some of the limitations of previous developments to be overcome through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
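
A minimal sketch of the replication idea: each replicate pairs all events with an equal-size random sample of non-events, a logistic model is fitted, and coefficients are averaged over replicates so that results do not hinge on one particular subsample. This uses synthetic data and a plain Newton-Raphson fit, not the authors' implementation:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression by Newton-Raphson (intercept included)."""
    A = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ b, -30, 30)))
        W = p * (1 - p)
        H = A.T @ (A * W[:, None])          # Hessian
        b += np.linalg.solve(H, A.T @ (y - p))
    return b

def replicated_logit(X, y, reps=200, seed=0):
    """Average coefficients over rebalanced replicates."""
    rng = np.random.default_rng(seed)
    ev, nev = np.where(y == 1)[0], np.where(y == 0)[0]
    coefs = []
    for _ in range(reps):
        idx = np.concatenate([ev, rng.choice(nev, size=len(ev), replace=False)])
        coefs.append(fit_logit(X[idx], y[idx]))
    return np.mean(coefs, axis=0)

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 2))
# Rare events driven by the first covariate only:
p = 1.0 / (1.0 + np.exp(-(-4.0 + 2.0 * X[:, 0])))
y = (rng.uniform(size=2000) < p).astype(float)
b = replicated_logit(X, y)
print(b[1] > 0.5 and abs(b[2]) < 0.5)  # true controlling factor detected
```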

  19. Correlation, Regression and Path Analyses of Seed Yield Components in Crambe abyssinica, a Promising Industrial Oil Crop

    OpenAIRE

    Huang, Banglian; Yang, Yiming; Luo, Tingting; Wu, S.; Du, Xuezhu; Cai, Detian; van Loo, E.N.; Huang, Bangquan

    2013-01-01

    In the present study, correlation, regression and path analyses were carried out to determine correlations among the agronomic traits and their contributions to seed yield per plant in Crambe abyssinica. Partial correlation analysis indicated that plant height (X1) was significantly correlated with branching height and the number of first branches (P <0.01); branching height (X2) was significantly correlated with pod number of the primary inflorescence (P <0.01) and number of secondary branch...

  20. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulation into rare event logistic regression. This technique, called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and allows some of the limitations of previous developments to be overcome through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.

  1. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Background: Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposures analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods: We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results: In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion: Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a

  2. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
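
The Legendre-polynomial covariates underlying such random regression models can be sketched as follows: the time axis (here weeks 2 to 6 of egg production, matching the study's description) is mapped onto [-1, 1] and the polynomials are evaluated by the standard recurrence:

```python
def legendre_basis(t: float, t_min: float, t_max: float, order: int):
    """Legendre polynomial covariates used in random regression models;
    time t is first mapped onto the interval [-1, 1]."""
    x = -1.0 + 2.0 * (t - t_min) / (t_max - t_min)
    # P0..P_order via the recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    P = [1.0, x]
    for k in range(1, order):
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return P[: order + 1]

# Third-order basis at week 4 (midpoint of weeks 2..6):
print([round(v, 2) for v in legendre_basis(4.0, 2.0, 6.0, 3)])
```

Each animal's genetic and permanent environmental effects are then modelled as random regressions on these covariates.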

  3. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
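
The OLS bisector discussed above can be sketched as follows, using the slope formula from Isobe et al. (1990); the data points are made up for illustration:

```python
import math

def ols_slopes(x, y):
    """OLS(Y|X), OLS(X|Y) expressed on the Y scale, and their bisector."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx   # OLS regression of Y on X
    b2 = syy / sxy   # OLS regression of X on Y, inverted to the Y scale
    # Slope of the bisector of the two OLS lines (Isobe et al. 1990):
    b3 = (b1 * b2 - 1.0 + math.sqrt((1 + b1 ** 2) * (1 + b2 ** 2))) / (b1 + b2)
    return b1, b2, b3

x = [1, 2, 3, 4, 5]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
b1, b2, b3 = ols_slopes(x, y)
print(b1 <= b3 <= b2)  # bisector lies between the two OLS slopes
```

For positively correlated data, OLS(Y|X) understates and OLS(X|Y) overstates the symmetric slope, and the bisector lies between them.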

  4. Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.

    Directory of Open Access Journals (Sweden)

    David S Boukal

    Temperature drives development in insects and other ectotherms because their metabolic rate and growth depend directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread, and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.

  5. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances

    OpenAIRE

    KELEŞ, Taliha; ALTUN, Murat

    2016-01-01

    Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was the trivial presentation of the equation for orthogonal regression (OR) and the comparison of classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were shown by an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa...
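
A minimal sketch of the comparison described above, on synthetic data: fit both a classical OLS line and an orthogonal (total least squares) line, then compare their sums of squared perpendicular distances. By construction the orthogonal fit minimizes that criterion, so its sum is never larger:

```python
import math

def sum_sq_perp(x, y, a, b):
    """Sum of squared perpendicular distances from points to y = a + b*x."""
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (1 + b * b)

def fit_lines(x, y):
    """Return (intercept, slope) for classical OLS and orthogonal regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    b_clr = sxy / sxx  # classical OLS slope
    # Orthogonal (total least squares) slope:
    b_or = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return (my - b_clr * mx, b_clr), (my - b_or * mx, b_or)

x = [1, 2, 3, 4, 5, 6]
y = [1.1, 2.4, 2.8, 4.5, 4.9, 6.3]
(clr_a, clr_b), (or_a, or_b) = fit_lines(x, y)
print(sum_sq_perp(x, y, or_a, or_b) <= sum_sq_perp(x, y, clr_a, clr_b))
```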

  6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Science.gov (United States)

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259

  7. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Directory of Open Access Journals (Sweden)

    Kevin D. Cashman

    2017-05-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

  8. Analysing inequalities in Germany a structured additive distributional regression approach

    CERN Document Server

    Silbersdorff, Alexander

    2017-01-01

    This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals’ realities and the abstract representation of those realities to be explicitly taken into consideration, rather than being reduced to the arithmetic mean alone. In turn, the method is applied to the question of economic inequality in Germany.

  9. Development of the Performance Confirmation Program at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    G.D. LeCain; D. Barr; D. Weaver; R. Snell; S.W. Goodin; F.D. Hansen

    2006-01-01

    The Yucca Mountain Performance Confirmation program consists of tests, monitoring activities, experiments, and analyses to evaluate the adequacy of assumptions, data, and analyses that form the basis of the conceptual and numerical models of flow and transport associated with a proposed radioactive waste repository at Yucca Mountain, Nevada. The Performance Confirmation program uses an eight-stage risk-informed, performance-based approach. Selection of the Performance Confirmation activities (a parameter and a test method) for inclusion in the Performance Confirmation program was done using a risk-informed, performance-based decision analysis. The result of this analysis and review was a Performance Confirmation base portfolio that consists of 20 activities. The 20 Performance Confirmation activities include geologic, hydrologic, and construction/engineering testing. Several of the activities were initiated during site characterization and are ongoing. Other activities will commence during construction and/or post-emplacement and will continue until repository closure.

  10. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  11. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP) regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.

  12. Check-all-that-apply data analysed by Partial Least Squares regression

    DEFF Research Database (Denmark)

    Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

    2015-01-01

    are analysed by multivariate techniques. CATA data can be analysed both by setting the CATA as the X and the Y. The former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding...

  13. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression function. The first paper compares parametric and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric...

  14. Determination of benzo(a)pyrene content in PM10 using regression methods

    Directory of Open Access Journals (Sweden)

    Jacek Gębicki

    2015-12-01

    The paper presents an attempt to apply multidimensional linear regression to estimate an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The investigations confirm that the reasons for the B(a)P concentration increase are low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependences between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity, and air humidity. A measure of model fitting to the actual B(a)P concentration in PM10 was the coefficient of determination of the model. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season. This parameter ranged from 0.54 to 0.80 during the analyzed period.

  15. Detecting overdispersion in count data: A zero-inflated Poisson regression analysis

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah

    2017-09-01

    This study focuses on analysing count data on butterfly communities in Jasin, Melaka. In analysing a count dependent variable, the Poisson regression model has long been the benchmark model for regression analysis. Continuing from previous literature that used Poisson regression analysis, this study applies zero-inflated Poisson (ZIP) regression analysis to gain greater precision in analysing the count data of the butterfly communities in Jasin, Melaka. Poisson regression should be abandoned in favour of count data models capable of taking the extra zeros into account explicitly; by far one of the most popular of these is the ZIP regression model. The data on butterfly communities, referred to as the number of subjects in this study, were collected in Jasin, Melaka and consisted of 131 subjects. Since the researchers are considering the number of subjects, the data set covers five families of butterfly, represented by the five variables involved in the analysis, namely the types of subjects. The ZIP analysis used the SAS procedure for overdispersion in analysing zero values, and the main purpose of continuing the previous study was to compare which model performs better when zero values exist in the observed count data. The analysis used the AIC, BIC and Vuong test at the 5% significance level to achieve the objectives. The findings indicate the presence of overdispersion when analysing zero values. The ZIP regression model is better than the Poisson regression model when zero values exist.
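As a rough illustration of the overdispersion question this abstract addresses, the following sketch screens a count sample before committing to a ZIP model. The counts are hypothetical, not the Jasin butterfly data:

```python
import math

def dispersion_diagnostics(counts):
    """Quick screen for overdispersion / excess zeros before fitting a ZIP model."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    dispersion = var / mean            # approximately 1 under a Poisson model
    zero_obs = sum(1 for c in counts if c == 0) / n
    zero_poisson = math.exp(-mean)     # P(Y = 0) under Poisson(mean)
    return dispersion, zero_obs, zero_poisson

# Hypothetical counts with many structural zeros (not the study's data).
counts = [0, 0, 0, 0, 0, 0, 1, 1, 2, 3, 4, 7, 9, 12]
disp, z_obs, z_poi = dispersion_diagnostics(counts)
assert disp > 1          # variance exceeds the mean: overdispersion
assert z_obs > z_poi     # more zeros than a Poisson model predicts
```

A dispersion index well above 1 together with an observed zero fraction exceeding the Poisson prediction exp(-mean) is exactly the pattern that motivates a ZIP model over plain Poisson regression.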

  16. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assorted among two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  17. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
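A minimal sketch of the metamodeling idea described above, assuming a toy simulation in which net benefit depends linearly on one input parameter; the model, parameter values, and names are invented for illustration and are not the article's cancer cure model:

```python
import random

def standardize(v):
    """Center to mean 0 and scale to standard deviation 1."""
    n = len(v)
    m = sum(v) / n
    s = (sum((x - m) ** 2 for x in v) / n) ** 0.5
    return [(x - m) / s for x in v]

def ols(z, y):
    """Simple OLS of y on one standardized predictor z (mean 0, sd 1)."""
    n = len(z)
    my = sum(y) / n
    slope = sum(zi * (yi - my) for zi, yi in zip(z, y)) / sum(zi ** 2 for zi in z)
    return my, slope  # intercept (base-case estimate), sensitivity coefficient

random.seed(1)
# Hypothetical PSA draws: a cure probability p drives net benefit 1000*p plus noise.
p = [random.gauss(0.7, 0.05) for _ in range(10_000)]
nb = [1000 * pi + random.gauss(0, 5) for pi in p]
intercept, beta = ols(standardize(p), nb)
assert abs(intercept - 700) < 5   # intercept recovers the base-case outcome
assert abs(beta - 50) < 2         # slope = outcome change per SD of the parameter
```

As in the article, the intercept of the regression on standardized inputs recovers the estimated base-case outcome, and each coefficient gives the outcome change per standard deviation of the corresponding parameter.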

  18. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism

    Directory of Open Access Journals (Sweden)

    Philip L. Reno

    2015-04-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or “Lucy” and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or “Kadanuumuu”) can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Prevention of the dominance of a single specimen’s contribution to calculations of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.

  19. A new Python library to analyse skeleton images confirms malaria parasite remodelling of the red blood cell membrane skeleton

    Directory of Open Access Journals (Sweden)

    Juan Nunez-Iglesias

    2018-02-01

    We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the “analyse skeletons” plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan’s measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions.

  20. A new Python library to analyse skeleton images confirms malaria parasite remodelling of the red blood cell membrane skeleton.

    Science.gov (United States)

    Nunez-Iglesias, Juan; Blanch, Adam J; Looker, Oliver; Dixon, Matthew W; Tilley, Leann

    2018-01-01

    We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the "analyse skeletons" plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan's measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions.

  1. Clinical efficacy of BG-12 (dimethyl fumarate) in patients with relapsing-remitting multiple sclerosis: subgroup analyses of the CONFIRM study.

    LENUS (Irish Health Repository)

    Hutchinson, Michael

    2013-09-01

    In the phase 3, randomized, placebo-controlled and active reference (glatiramer acetate) comparator CONFIRM study in patients with relapsing-remitting multiple sclerosis, oral BG-12 (dimethyl fumarate) reduced the annualized relapse rate (ARR; primary endpoint), as well as the proportion of patients relapsed, magnetic resonance imaging lesion activity, and confirmed disability progression, compared with placebo. We investigated the clinical efficacy of BG-12 240 mg twice daily (BID) and three times daily (TID) in patient subgroups stratified according to baseline demographic and disease characteristics including gender, age, relapse history, McDonald criteria, treatment history, Expanded Disability Status Scale score, T2 lesion volume, and gadolinium-enhancing lesions. BG-12 treatment demonstrated generally consistent benefits on relapse-related outcomes across patient subgroups, reflecting the positive findings in the overall CONFIRM study population. Treatment with BG-12 BID and TID reduced the ARR and the proportion of patients relapsed at 2 years compared with placebo in all subgroups analyzed. Reductions in ARR with BG-12 BID versus placebo ranged from 34% [rate ratio 0.664 (95% confidence interval 0.422-1.043)] to 53% [0.466 (0.313-0.694)] and from 13% [0.870 (0.551-1.373)] to 67% [0.334 (0.226-0.493)] with BG-12 TID versus placebo. Treatment with glatiramer acetate reduced the ARR and the proportion of patients relapsed at 2 years compared with placebo in most patient subgroups. The results of these analyses indicate that treatment with BG-12 is effective on relapses across a broad range of patients with relapsing-remitting multiple sclerosis with varied demographic and disease characteristics.

  2. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.

  3. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects.
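The I²GX statistic described above is an adaptation of the standard Higgins I² computed from Cochran's Q. The sketch below computes that generic meta-analysis quantity from invented summary estimates; it illustrates the underlying I² formula, not the paper's exact SNP-exposure weighting:

```python
def i_squared(estimates, ses):
    """Generic Higgins I² from Cochran's Q (the quantity adapted as I²GX)."""
    w = [1.0 / se ** 2 for se in ses]                       # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Hypothetical per-variant estimates and standard errors (not from the paper).
est = [0.10, 0.12, 0.25, 0.05, 0.18]
se = [0.02, 0.03, 0.02, 0.04, 0.03]
i2 = i_squared(est, se)
assert 0.0 <= i2 <= 1.0
```

Values near 1 indicate that between-estimate heterogeneity dominates sampling error; in the MR-Egger setting, low values of the analogous statistic warn of dilution of the causal estimate.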

  4. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

    Science.gov (United States)

    Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

    2017-01-01

    Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while 29 articles containing 3,204 participants were included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p < 0.05), retention performance (d = 0.27, 95% CI = [0.08, 0.46], p < 0.01), and transfer performance (d = 0.34, 95% CI = [0.12, 0.56], p < 0.01). The subsequent meta-regression analyses showed that d_SCL for cueing significantly predicted d_retention for cueing (β = -0.70, 95% CI = [-1.02, -0.38], p < 0.001), as well as d_transfer for cueing (β = -0.60, 95% CI = [-0.92, -0.28], p < 0.001). Thus, in line with CLT, adding cues in multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better the retention and transfer of multimedia learning.

  5. Efficiency of insurance companies: Application of DEA and Tobit analyses

    Directory of Open Access Journals (Sweden)

    Eva Grmanová

    2017-10-01

    The aim of this paper is to determine the relationship between the technical efficiency and profitability of insurance companies. The profitability of insurance companies was expressed by such indicators as ROA, ROE and the size of assets. We analysed 15 commercial insurance companies in Slovakia in the period 2013-2015. Technical efficiency scores were expressed using DEA models. The relationship between the technical efficiency score and the indicators of profitability was expressed using censored regression, i.e. the Tobit regression model, and the Mann-Whitney U-test. The relationship between the technical efficiency score in the CCR and BCC models and all the groups formed on the basis of the return on assets and the group formed based on the return on equity was not confirmed. Statistically significant difference between the average technical efficiency score in the CCR model in the group of insurance companies with ROA...

  6. Predicting Word Reading Ability: A Quantile Regression Study

    Science.gov (United States)

    McIlraith, Autumn L.

    2018-01-01

    Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…
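The idea behind the quantile regression used in this study can be illustrated with the check (pinball) loss in its simplest, intercept-only form, where the minimizer is the corresponding sample quantile; the data and search grid below are illustrative assumptions:

```python
def check_loss(residual, tau):
    """Pinball/check loss used in quantile regression."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def best_constant(y, tau, grid):
    """Constant prediction minimizing total check loss = the tau-th sample quantile."""
    return min(grid, key=lambda c: sum(check_loss(yi - c, tau) for yi in y))

y = [1, 2, 3, 4, 5, 6, 7, 8, 9]          # toy scores
grid = [i / 10 for i in range(0, 101)]    # candidate constants 0.0 .. 10.0
assert best_constant(y, 0.5, grid) == 5.0  # tau = 0.5 recovers the median
assert best_constant(y, 0.9, grid) == 9.0  # tau = 0.9 recovers the upper quantile
```

Fitting the same loss with predictors yields regression lines for different quantiles of the outcome, which is how the study examined predictive relationships at different points in the distribution of word reading, rather than only at the conditional mean.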

  7. Structural vascular disease in Africans: performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: the SABPA study

    OpenAIRE

    Botha, J.; De Ridder, J.H.; Potgieter, J.C.; Steyn, H.S.; Malan, L.

    2013-01-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fa...

  8. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they took to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7 to 0.1, P = 0.076, at ≤120 μg bisphenol-A), whereas only bisphenol-A males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2 to 1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, 95% C.I. 3.4 to 172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males. Overall, low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  9. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
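As a minimal, hedged illustration of the shrinkage at the heart of ridge regression (a scalar analogue, not the recursive ridge estimator derived in the thesis), the one-predictor ridge estimate on centered data is Sxy / (Sxx + λ); the data below are invented:

```python
def ridge_slope(x, y, lam):
    """One-predictor ridge estimate on centered data: Sxy / (Sxx + lambda)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / (sxx + lam)

# Small-spread predictor: the scalar counterpart of poor geometry conditions.
x = [0.1, 0.2, 0.3, 0.4]
y = [1.1, 1.9, 3.2, 3.9]
b0 = ridge_slope(x, y, 0.0)   # lambda = 0 reduces to ordinary least squares
b1 = ridge_slope(x, y, 0.5)
b2 = ridge_slope(x, y, 5.0)
assert abs(b2) < abs(b1) < abs(b0)   # shrinkage grows with the penalty
```

Increasing λ pulls the estimate toward zero, stabilizing it when Sxx is small; this is the scalar counterpart of how ridge regularization tames an estimator under poor geometry.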

  10. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

    Science.gov (United States)

    Dyer, Betsey D.; Kahn, Michael J.; LeBlanc, Mark D.

    2008-01-01

    Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results. PMID:19054742
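Computing the genomic signature that feeds such a CART analysis is straightforward; the sketch below counts overlapping tetranucleotide frequencies, using a tiny synthetic sequence (not a real genome) for illustration:

```python
from collections import Counter

def tetramer_frequencies(seq):
    """Relative frequencies of overlapping 4-mers: a simple genomic signature."""
    seq = seq.upper()
    tetramers = [seq[i:i + 4] for i in range(len(seq) - 3)]
    total = len(tetramers)
    return {t: c / total for t, c in Counter(tetramers).items()}

# Tiny synthetic, purine-loaded sequence with GAGA/AGGA motifs for illustration.
freqs = tetramer_frequencies("GAGAGGAGAGAGGATC")
assert abs(sum(freqs.values()) - 1.0) < 1e-9   # frequencies sum to 1
assert freqs["GAGA"] > freqs["GATC"]           # GAGA occurs more often than GATC
```

In the study, vectors of such frequencies over all 256 possible tetramers were the genome-wide inputs from which CART selected discriminating tetramers such as GAGA and AGGA.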

  11. Marital status integration and suicide: A meta-analysis and meta-regression.

    Science.gov (United States)

    Kyung-Sook, Woo; SangSoo, Shin; Sangjin, Shin; Young-Jeon, Shin

    2018-01-01

    Marital status is an index of the phenomenon of social integration within social structures and has long been identified as an important predictor of suicide. However, previous meta-analyses have focused only on a particular marital status, or have not sufficiently explored moderators. A meta-analysis of observational studies was conducted to explore the relationships between marital status and suicide and to understand the important moderating factors in this association. Electronic databases were searched to identify studies conducted between January 1, 2000 and June 30, 2016. We performed a meta-analysis, subgroup analysis, and meta-regression of 170 suicide risk estimates from 36 publications. Using a random effects model with adjustment for covariates, the study found that the suicide risk for non-married versus married was OR = 1.92 (95% CI: 1.75-2.12). The suicide risk was higher for non-married individuals in younger age groups. In the analysis by gender, non-married men exhibited a greater risk of suicide than their married counterparts in all sub-analyses, but women aged 65 years or older showed no significant association between marital status and suicide. The suicide risk in divorced individuals was higher than for non-married individuals in both men and women. The meta-regression showed that gender, age, and sample size affected between-study variation. The results of the study indicated that non-married individuals have an aggregate higher suicide risk than married ones. In addition, gender and age were confirmed as important moderating factors in the relationship between marital status and suicide. Copyright © 2017 Elsevier Ltd. All rights reserved.
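
    The random-effects pooling used in a meta-analysis such as this one can be sketched compactly. The example below applies the standard DerSimonian-Laird estimator to invented study-level odds ratios and 95% confidence intervals (none of the numbers come from the study above).

```python
import math

def pooled_or(ors, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooled OR from per-study ORs and 95% CIs."""
    logs = [math.log(o) for o in ors]
    # Back out each study's standard error from its 95% CI width on the log scale.
    ses = [(math.log(h) - math.log(l)) / (2 * 1.96) for l, h in zip(ci_los, ci_his)]
    w = [1 / s ** 2 for s in ses]                     # fixed-effect weights
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    k = len(ors)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in ses]         # random-effects weights
    pooled = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Invented study results (OR, 95% CI) for illustration.
or_hat, lo, hi = pooled_or([1.8, 2.1, 1.6], [1.4, 1.7, 1.1], [2.3, 2.6, 2.3])
```

    When the heterogeneity statistic Q falls below its degrees of freedom, tau² is truncated at zero and the random-effects result coincides with the fixed-effect one.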

  12. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
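
    The two core computations referred to above are simple enough to sketch. The example below computes the Bland-Altman bias and 95% limits of agreement, plus an ordinary least squares line, for paired measurements from two hypothetical assays (the variant allele fractions are invented). Deming regression, used in the study for assay comparison, would additionally model measurement error in the reference assay and is omitted here.

```python
import statistics as st

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def ols_line(x, y):
    """Simple linear regression: slope (proportional error) and intercept (constant error)."""
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical variant allele fractions: a reference assay versus an NGS assay
# exhibiting a purely constant error of +0.02.
ref = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50]
ngs = [0.07, 0.12, 0.22, 0.32, 0.42, 0.52]
bias, loa_lo, loa_hi = bland_altman(ngs, ref)
slope, intercept = ols_line(ref, ngs)
```

    Here the slope of 1 and intercept of 0.02 cleanly separate proportional from constant error, which R² alone cannot do: R² is 1.0 for these data despite the systematic offset.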

  14. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  15. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
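
    The IGAFC index defined above is just a product of two Pearson correlations, which makes it easy to sketch. The example below computes it on synthetic time courses in which both a voxel and a seed load heavily on the global average signal (all signals are invented for illustration).

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def igafc(gas, voxel, seed):
    """IGAFC index: corr(GAS, voxel time course) * corr(GAS, seed time course)."""
    return pearson(gas, voxel) * pearson(gas, seed)

# Synthetic time courses: voxel and seed both track the global average signal.
t = range(40)
gas = [math.sin(0.3 * i) for i in t]
voxel = [0.9 * g + 0.05 * math.cos(1.1 * i) for i, g in zip(t, gas)]
seed = [0.8 * g + 0.05 * math.sin(1.7 * i + 1.0) for i, g in zip(t, gas)]
impact = igafc(gas, voxel, seed)
```

    A large IGAFC value flags a voxel whose seed-based correlation is highly sensitive to GAS regression; comparing it against the impact threshold identifies where spurious negative correlations could be introduced.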

  16. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.

  17. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test

  18. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Prenatal diagnosis of Caudal Regression Syndrome : a case report

    Directory of Open Access Journals (Sweden)

    Celikaslan Nurgul

    2001-12-01

    Full Text Available Abstract Background Caudal regression is a rare syndrome which has a spectrum of congenital malformations ranging from simple anal atresia to absence of sacral, lumbar and possibly lower thoracic vertebrae, to the most severe form which is known as sirenomelia. Maternal diabetes, genetic predisposition and vascular hypoperfusion have been suggested as possible causative factors. Case presentation We report a case of caudal regression syndrome diagnosed in utero at 22 weeks of gestation. Prenatal ultrasound examination revealed a sudden interruption of the spine and "frog-like" position of lower limbs. Termination of pregnancy and autopsy findings confirmed the diagnosis. Conclusion Prenatal ultrasonographic diagnosis of caudal regression syndrome is possible at 22 weeks of gestation by ultrasound examination.

  20. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

    Science.gov (United States)

    Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified. Unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438

  1. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
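
    The boosting algorithm in this record operates on penalized splines and their tensor products; the sketch below substitutes plain componentwise linear base learners, which is enough to show how boosting performs variable selection: at each step only the best-fitting covariate is updated, so covariates that are never selected keep a zero coefficient. Data are simulated.

```python
import numpy as np

def componentwise_l2_boost(X, y, steps=200, nu=0.1):
    """L2 boosting with componentwise linear base learners.

    At each step, fit every single covariate to the current residuals by OLS
    and update only the best-fitting one, shrunk by the step size nu.
    Covariates never selected keep a zero coefficient (implicit selection).
    """
    n, p = X.shape
    beta = np.zeros(p)
    offset = y.mean()
    resid = y - offset
    for _ in range(steps):
        best_j, best_rss, best_b = 0, np.inf, 0.0
        for j in range(p):
            xj = X[:, j]
            b = xj @ resid / (xj @ xj)
            rss = np.sum((resid - b * xj) ** 2)
            if rss < best_rss:
                best_j, best_rss, best_b = j, rss, b
        beta[best_j] += nu * best_b
        resid = resid - nu * best_b * X[:, best_j]
    return offset, beta

# Five candidate covariates; only the first and third actually matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=100)
offset, beta = componentwise_l2_boost(X, y)
```

    Early stopping (choosing `steps` by cross-validation rather than running to convergence) is what gives boosting its regularization; the decomposition into parametric and one-degree-of-freedom smooth components in the paper serves the same fair-comparison purpose for spline terms.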

  2. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  3. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADAPTIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  4. Complex regression Doppler optical coherence tomography

    Science.gov (United States)

    Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.

    2018-04-01

    We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (~100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm/s to 374 μm/s, whereas the maximum velocity of 400 mm/s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.
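
    The core idea above (replace a two-point phase difference with a regression over many A-lines) can be sketched on a synthetic complex signal. This is a simplification of the paper's method: it regresses unwrapped phase on the A-line index rather than performing the full complex regression in polar coordinates, and the signal parameters are invented.

```python
import numpy as np

def doppler_two_point(z):
    """Conventional estimate: mean phase difference between adjacent A-lines."""
    return np.angle(z[1:] * np.conj(z[:-1])).mean()

def doppler_regression(z):
    """Regression estimate: slope of unwrapped phase over the A-line index."""
    phase = np.unwrap(np.angle(z))
    idx = np.arange(len(z))
    return np.polyfit(idx, phase, 1)[0]

# Synthetic signal: a slow Doppler phase ramp buried in complex noise,
# sampled over a high-density scan (2000 A-lines).
rng = np.random.default_rng(2)
true_shift = 0.02                    # radians per A-line
n = np.arange(2000)
z = np.exp(1j * true_shift * n) + 0.3 * (rng.normal(size=2000)
                                         + 1j * rng.normal(size=2000))

err_two_point = abs(doppler_two_point(z) - true_shift)
err_regression = abs(doppler_regression(z) - true_shift)
```

    Fitting a slope to thousands of samples averages down the phase noise, which is why the regression estimate detects far smaller shifts than the two-point difference at low signal-to-noise ratio.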

  5. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Full Text Available Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).
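
    A quadratic Legendre basis over age, of the kind fitted in the random regression model above, can be constructed as follows. This is only a sketch of the design matrix; the sampling ages are invented and the mixed-model fitting itself is not shown.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(age, age_min, age_max, order=2):
    """Design matrix of Legendre polynomials P0..P_order on age scaled to [-1, 1]."""
    x = 2 * (np.asarray(age, dtype=float) - age_min) / (age_max - age_min) - 1
    # legval with coefficient vector e_k evaluates the k-th Legendre polynomial.
    cols = [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
    return np.column_stack(cols)

ages = [0, 30, 60, 90, 120]           # hypothetical sampling ages in months
Z = legendre_design(ages, 0, 120)     # columns: P0 (intercept), P1, P2
```

    In a random regression model, one such matrix enters the fixed part (mean age trajectory) and another multiplies per-animal random coefficients, giving each animal its own smooth deviation curve; orthogonality of the Legendre basis helps keep those coefficients well conditioned.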

  6. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  7. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  8. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  9. Splicing analysis for exonic and intronic mismatch repair gene variants associated with Lynch syndrome confirms high concordance between minigene assays and patient RNA analyses

    Science.gov (United States)

    van der Klift, Heleen M; Jansen, Anne M L; van der Steenstraten, Niki; Bik, Elsa C; Tops, Carli M J; Devilee, Peter; Wijnen, Juul T

    2015-01-01

    A subset of DNA variants causes genetic disease through aberrant splicing. Experimental splicing assays, either RT-PCR analyses of patient RNA or functional splicing reporter minigene assays, are required to evaluate the molecular nature of the splice defect. Here, we present minigene assays performed for 17 variants in the consensus splice site regions, 14 exonic variants outside these regions, and two deep intronic variants, all in the DNA mismatch-repair (MMR) genes MLH1, MSH2, MSH6, and PMS2, associated with Lynch syndrome. We also included two deep intronic variants in APC and PKD2. For one variant (MLH1 c.122A>G), our minigene assay and patient RNA analysis could not confirm the previously reported aberrant splicing. The aim of our study was to further investigate the concordance between minigene splicing assays and patient RNA analyses. For 30 variants results from patient RNA analyses were available, either performed by our laboratory or presented in literature. Some variants were deliberately included in this study because they resulted in multiple aberrant transcripts in patient RNA analysis, or caused a splice effect other than the prevalent exon skip. While the two methods were concordant in detecting whether a splice effect was present, four variants exhibited major differences in the aberrant splice patterns observed. Based on the present and earlier studies, together showing an almost 100% concordance of minigene assays with patient RNA analyses, we discuss the weight given to minigene splicing assays in the current criteria proposed by InSiGHT for clinical classification of MMR variants. PMID:26247049

  10. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  11. Culture confirmation of tuberculosis cases in Birmingham, UK.

    Science.gov (United States)

    Hayer, Kalbir S; Sitch, Alice J; Dedicoat, Martin; Wood, Annette L

    2013-10-01

    The proportion of culture-confirmed tuberculosis (TB) cases in Birmingham had gradually decreased to less than 65% in 2008. Reasons for this were unclear, therefore this study assessed diagnostic methods used for confirming TB and reviewed factors involved in positive culture. A cross-sectional study was carried out. A list of notified TB cases for Birmingham in those aged 16 years and over in 2009 was collated. Where no positive culture was recorded, further data were collected from hospital databases and case notes. Of 449 TB cases, 419 (93%) had samples taken for culture testing. Of all cases, 309 (69%) were confirmed by culture testing; of those receiving culture testing, 73% were confirmed. Pulmonary TB was identified as a predictor of positive culture in both the unadjusted and adjusted analyses: odds ratio (OR) 2.05, 95% confidence interval (CI) 1.32-3.19, and OR 2.32, 95% CI 1.29-4.17, respectively. Gender, age, ethnicity, UK born, and treatment delay were not significantly associated with positive culture. Of 140 cases not confirmed by culture, 129 (92%) had their diagnosis supported by at least one other test. The vast majority of TB cases had microbiological specimens taken to help confirm the disease. Furthermore, culture confirmation rates in Birmingham were meeting national targets in 2009. However, culture confirmation rates were significantly lower in extrapulmonary TB, therefore further work is suggested in this group. The role of other investigations (e.g. interferon-gamma release assay (IGRA), Mantoux) is unclear. Further collaboration between clinicians, histopathologists, and microbiologists is advised to ensure samples are sent appropriately and culture confirmation is optimized.
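
    An unadjusted odds ratio with a Wald confidence interval, like the OR 2.05 (95% CI 1.32-3.19) reported above, is computed from a 2x2 table as sketched below. The counts here are invented for illustration and are not the study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with event,   b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_hat = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_hat) - 1.96 * se)
    hi = math.exp(math.log(or_hat) + 1.96 * se)
    return or_hat, lo, hi

# Invented counts: pulmonary vs extrapulmonary TB by culture positivity.
or_hat, lo, hi = odds_ratio(180, 60, 120, 80)
```

    The adjusted OR in the study comes instead from a multivariable logistic regression, where the exposure coefficient is exponentiated after conditioning on the other covariates.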

  12. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  13. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
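
    The paper's modified coefficient is not specified in this abstract, so the sketch below computes only the traditional regression correlation coefficient, corr(Y, E(Y|X)), after fitting a one-covariate Poisson regression by iteratively reweighted least squares (IRLS) on simulated data. All data and parameters are invented.

```python
import numpy as np

def poisson_irls(x, y, iters=50):
    """Poisson regression with log link fitted by IRLS; returns (beta, fitted means)."""
    X = np.column_stack([np.ones(len(y)), x])   # add an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        w = mu                                   # IRLS weights for the Poisson/log link
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta, np.exp(X @ beta)

def regression_corr(y, mu):
    """Traditional regression correlation coefficient: corr(Y, E(Y|X))."""
    return np.corrcoef(y, mu)[0, 1]

# Simulated data with true model E(Y|X) = exp(0.5 + 0.8 x).
rng = np.random.default_rng(3)
x = rng.uniform(0, 2, size=200)
y = rng.poisson(np.exp(0.5 + 0.8 * x))
beta, mu_hat = poisson_irls(x, y)
r = regression_corr(y, mu_hat)
```

    Even with the model correctly specified, r stays well below 1 because Poisson noise around E(Y|X) is irreducible; this is one reason naive correlation-based indicators of predictive power need care in GLMs.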

  14. Registered error between PET and CT images confirmed by a water model

    International Nuclear Information System (INIS)

    Chen Yangchun; Fan Mingwu; Xu Hao; Chen Ping; Zhang Chunlin

    2012-01-01

    The registered error between the PET and CT imaging systems was confirmed by a water model simulating clinical cases. A barrel of 6750 mL was filled with 59.2 MBq [18F]-FDG and scanned after 80 min by PET/CT in 2-dimensional mode. The CT images were used to attenuation-correct the PET images. The CT/PET images were obtained by image morphological processing analyses without the barrel wall. The relationship of the water image centroids of the CT and PET images was established by linear regression analysis, and the registered error between the PET and CT images could be computed slice by slice. The alignment program was run 4 times following the protocol given by GE Healthcare. Compared with the centroids of the water CT images, the centroids of the PET images were shifted along the X-axis by (0.011×slice+0.63) mm and along the Y-axis by (0.022×slice+1.35) mm. To match the CT images, the PET images should be translated along the X-axis by (-2.69±0.15) mm, the Y-axis by (0.43±0.11) mm, and the Z-axis by (0.86±0.23) mm, and rotated about the X-axis by (0.06±0.07)°, the Y-axis by (-0.01±0.08)°, and the Z-axis by (0.11±0.07)°. Thus, the systematic registered error was not affected by load and its distribution. By determining the registered error between the PET and CT images, allowing for random error in coordinate rotation, the water model could confirm the registration results of the PET-CT system corrected by the alignment parameters. (authors)

  15. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is a function of the foreign trade balance on one hand and of credit cards, i.e. indebtedness of the population on this basis, on the other hand, in the USA (from 1999 to 2008). We used the extended application model, which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended for the study of the compatibility of the data and model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model is carried out with the use of the PASW Statistics 17 program.

  16. Evaluation of histologically confirmed carcinoma of the cervix in ...

    African Journals Online (AJOL)

    Data collected was analysed with SPSS version 20.0 software and presented in tables and charts. Results: Sixty two patients with histological confirmation of ... The commonest histological type of cervical cancer was squamous cell carcinoma accounting for 88.9%. Twenty (44.4%) patients were referred for radiotherapy and ...

  17. Selecting a Regression Saturated by Indicators

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren; Santos, Carlos

    We consider selecting a regression model, using a variant of Gets, when there are more variables than observations, in the special case that the variables are impulse dummies (indicators) for every observation. We show that the setting is unproblematic if tackled appropriately, and obtain the finite-sample distribution of estimators of the mean and variance in a simple location-scale model under the null that no impulses matter. A Monte Carlo simulation confirms the null distribution, and shows power against an alternative of interest.

  19. Independent contrasts and PGLS regression estimators are equivalent.

    Science.gov (United States)

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
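
The PIC/GLS equivalence rests on the general fact that GLS equals ordinary least squares on whitened data; independent contrasts implement one particular whitening derived from the tree. A minimal numpy check of that whitening identity, using an arbitrary (non-phylogenetic) covariance matrix rather than a real tree:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)      # an arbitrary SPD stand-in covariance

x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# GLS: beta = (X' V^-1 X)^-1 X' V^-1 y
Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Whitening: with V = L L', plain OLS on L^-1-transformed data
# reproduces the GLS estimate exactly.
L = np.linalg.cholesky(V)
beta_ols = np.linalg.lstsq(np.linalg.solve(L, X),
                           np.linalg.solve(L, y), rcond=None)[0]

print(np.allclose(beta_gls, beta_ols))
```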

  20. Logistic regression and multiple classification analyses to explore risk factors of under-5 mortality in bangladesh

    International Nuclear Information System (INIS)

    Bhowmik, K.R.; Islam, S.

    2016-01-01

    Logistic regression (LR) analysis is the most common statistical methodology to find out the determinants of childhood mortality. However, the significant predictors cannot be ranked according to their influence on the response variable. Multiple classification (MC) analysis can be applied to identify the significant predictors with a priority index which helps to rank the predictors. The main objective of the study is to find the socio-demographic determinants of childhood mortality at the neonatal, post-neonatal, and post-infant periods by fitting an LR model, as well as to rank those through MC analysis. The study is conducted using the data of the Bangladesh Demographic and Health Survey 2007, where birth and death information of children was collected from their mothers. Three dichotomous response variables are constructed from children's age at death to fit the LR and MC models. Socio-economic and demographic variables significantly associated with the response variables separately are considered in the LR and MC analyses. Both the LR and MC models identified the same significant predictors for specific childhood mortality. For both neonatal and child mortality, biological factors of children, regional settings, and parents' socio-economic status are found to be the 1st, 2nd, and 3rd significant groups of predictors, respectively. Mother's education and household environment are detected as major significant predictors of post-neonatal mortality. This study shows that MC analysis, with or without LR analysis, can be applied to detect determinants with rank, which helps policy makers take initiatives on a priority basis. (author)

  1. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  2. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution functions...

  3. Intermediate and advanced topics in multilevel logistic regression analysis.

    Science.gov (United States)

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
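
Two of the measures named here have closed forms that are easy to compute. Assuming a between-cluster variance σ² on the log-odds scale, the median odds ratio is exp(√(2σ²)·Φ⁻¹(0.75)) and the latent-scale variance partition coefficient is σ²/(σ² + π²/3); a stdlib sketch:

```python
import math
from statistics import NormalDist

def median_odds_ratio(sigma2):
    """MOR = exp(sqrt(2*sigma^2) * Phi^-1(0.75)) for between-cluster variance sigma^2."""
    return math.exp(math.sqrt(2 * sigma2) * NormalDist().inv_cdf(0.75))

def variance_partition_coefficient(sigma2):
    """Latent-scale VPC: sigma^2 / (sigma^2 + pi^2 / 3)."""
    return sigma2 / (sigma2 + math.pi ** 2 / 3)

print(round(median_odds_ratio(0.5), 3))
print(round(variance_partition_coefficient(0.5), 3))
```

With no between-cluster variance the MOR is exactly 1, i.e. clusters are exchangeable.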

  4. The use of bootstrap methods for analysing health-related quality of life outcomes (particularly the SF-36

    Directory of Open Access Journals (Sweden)

    Campbell Michael J

    2004-12-01

    Full Text Available Abstract Health-Related Quality of Life (HRQoL) measures are becoming increasingly used in clinical trials as primary outcome measures. Investigators are now asking statisticians for advice on how to analyse studies that have used HRQoL outcomes. HRQoL outcomes, like the SF-36, are usually measured on an ordinal scale. However, most investigators assume that there exists an underlying continuous latent variable that measures HRQoL, and that the actual measured outcomes (the ordered categories) reflect contiguous intervals along this continuum. The ordinal scaling of HRQoL measures means they tend to generate data that have discrete, bounded and skewed distributions. Thus, standard methods of analysis such as the t-test and linear regression that assume Normality and constant variance may not be appropriate. For this reason, conventional statistical advice would suggest that non-parametric methods be used to analyse HRQoL data. The bootstrap is one such computer-intensive non-parametric method for analysing data. We used the bootstrap for hypothesis testing and the estimation of standard errors and confidence intervals for parameters, in four datasets (which illustrate the different aspects of study design). We then compared and contrasted the bootstrap with standard methods of analysing HRQoL outcomes. The standard methods included t-tests, linear regression, summary measures and General Linear Models. Overall, in the datasets we studied, using the SF-36 outcome, bootstrap methods produce results similar to conventional statistical methods. This is likely because the t-test and linear regression are robust to the violations of assumptions that HRQoL data are likely to cause (i.e. non-Normality). While particular to our datasets, these findings are likely to generalise to other HRQoL outcomes, which have discrete, bounded and skewed distributions. Future research with other HRQoL outcome measures, interventions and populations, is required to
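
A percentile-bootstrap confidence interval of the kind the abstract discusses can be sketched in a few lines; the skewed, bounded scores below are simulated, not SF-36 data:

```python
import numpy as np

rng = np.random.default_rng(9)
# Simulated skewed, bounded scores standing in for a 0-100 HRQoL scale.
scores = np.clip(rng.exponential(scale=20, size=80), 0, 100)

# Nonparametric bootstrap: resample with replacement, recompute the statistic.
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(5000)
])

# Percentile 95% confidence interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(round(lo, 1), round(scores.mean(), 1), round(hi, 1))
```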

  5. JT-60 configuration parameters for feedback control determined by regression analysis

    Energy Technology Data Exchange (ETDEWEB)

    Matsukawa, Makoto; Hosogane, Nobuyuki; Ninomiya, Hiromasa (Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment)

    1991-12-01

    The stepwise regression procedure was applied to obtain measurement formulas for equilibrium parameters used in the feedback control of JT-60. This procedure automatically selects variables necessary for the measurements, and selects a set of variables which are not likely to be picked up by physical considerations. Regression equations with stable and small multicollinearity were obtained and it was experimentally confirmed that the measurement formulas obtained through this procedure were accurate enough to be applicable to the feedback control of plasma configurations in JT-60. (author).
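
The stepwise procedure itself is generic; a toy forward-selection sketch on simulated data (not the JT-60 measurement formulas) shows the idea of letting the data pick the regressors:

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedy forward selection: at each step add the predictor that most reduces RSS."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 6))
y = 3.0 * X[:, 2] + 1.5 * X[:, 4] + rng.normal(size=300)
print(forward_stepwise(X, y, 2))   # picks the two informative columns
```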

  7. Regression periods in infancy: a case study from Catalonia.

    Science.gov (United States)

    Sadurní, Marta; Rostan, Carlos

    2002-05-01

    Based on Rijt-Plooij and Plooij's (1992) research on emergence of regression periods in the first two years of life, the presence of such periods in a group of 18 babies (10 boys and 8 girls, aged between 3 weeks and 14 months) from a Catalonian population was analyzed. The measurements were a questionnaire filled in by the infants' mothers, a semi-structured weekly tape-recorded interview, and observations in their homes. The procedure and the instruments used in the project follow those proposed by Rijt-Plooij and Plooij. Our results confirm the existence of the regression periods in the first year of children's life. Inter-coder agreement for trained coders was 78.2% and within-coder agreement was 90.1%. In the discussion, the possible meaning and relevance of regression periods in order to understand development from a psychobiological and social framework is commented upon.

  8. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

    Science.gov (United States)

    Wu, Dane W.

    2002-01-01

    The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly, due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

  9. Weight change in control group participants in behavioural weight loss interventions: a systematic review and meta-regression study

    Directory of Open Access Journals (Sweden)

    Waters Lauren

    2012-08-01

    Full Text Available Abstract Background Unanticipated control group improvements have been observed in intervention trials targeting various health behaviours. This phenomenon has not been studied in the context of behavioural weight loss intervention trials. The purpose of this study is to conduct a systematic review and meta-regression of behavioural weight loss interventions to quantify control group weight change, and relate the size of this effect to specific trial and sample characteristics. Methods Database searches identified reports of intervention trials meeting the inclusion criteria. Data on control group weight change and possible explanatory factors were abstracted and analysed descriptively and quantitatively. Results 85 trials were reviewed and 72 were included in the meta-regression. While there was no change in control group weight, control groups receiving usual care lost 1 kg more than control groups that received no intervention, beyond measurement. Conclusions There are several possible explanations why control group changes occur in intervention trials targeting other behaviours, but not for weight loss. Control group participation may prevent weight gain, although more research is needed to confirm this hypothesis.

  10. Predictors of success of external cephalic version and cephalic presentation at birth among 1253 women with non-cephalic presentation using logistic regression and classification tree analyses.

    Science.gov (United States)

    Hutton, Eileen K; Simioni, Julia C; Thabane, Lehana

    2017-08-01

    Among women with a fetus with a non-cephalic presentation, external cephalic version (ECV) has been shown to reduce the rate of breech presentation at birth and cesarean birth. Compared with ECV at term, beginning ECV prior to 37 weeks' gestation decreases the number of infants in a non-cephalic presentation at birth. The purpose of this secondary analysis was to investigate factors associated with a successful ECV procedure and to present this in a clinically useful format. Data were collected as part of the Early ECV Pilot and Early ECV2 Trials, which randomized 1776 women with a fetus in breech presentation to either early ECV (34-36 weeks' gestation) or delayed ECV (at or after 37 weeks). The outcome of interest was successful ECV, defined as the fetus being in a cephalic presentation immediately following the procedure, as well as at the time of birth. The importance of several factors in predicting successful ECV was investigated using two statistical methods: logistic regression and classification and regression tree (CART) analyses. Among nulliparas, non-engagement of the presenting part and an easily palpable fetal head were independently associated with success. Among multiparas, non-engagement of the presenting part, gestation less than 37 weeks and an easily palpable fetal head were found to be independent predictors of success. These findings were consistent with results of the CART analyses. Regardless of parity, descent of the presenting part was the most discriminating factor in predicting successful ECV and cephalic presentation at birth. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  11. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    International Nuclear Information System (INIS)

    Verdoolaege, Geert

    2015-01-01

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices

  12. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Full Text Available Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
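
The per-subject slope extraction this tutorial covers for SPSS, R and Excel can also be sketched in Python; the repeated-measures numbers below are made up:

```python
import numpy as np

# Made-up repeated-measures data: columns are subject id, predictor x, response y.
data = np.array([
    [1, 0.0, 1.0], [1, 1.0, 3.1], [1, 2.0, 5.0],
    [2, 0.0, 0.5], [2, 1.0, 1.4], [2, 2.0, 2.6],
])

slopes = {}
for subject in np.unique(data[:, 0]):
    rows = data[data[:, 0] == subject]
    # Degree-1 polyfit returns [slope, intercept] for this subject's data.
    slope, _intercept = np.polyfit(rows[:, 1], rows[:, 2], 1)
    slopes[int(subject)] = slope

print({k: round(v, 3) for k, v in slopes.items()})
```

The dictionary of individual slopes can then feed a second-stage analysis, as the tutorial describes.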

  13. The analysis of nonstationary time series using regression, correlation and cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren

    2012-01-01

    There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from US on interest rates as an illustration of the methods.

  14. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  15. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models because only slight differences between the errors (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed rank test, where linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
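
A weighted fit of the kind compared in this study (here 1/x² weighting against an unweighted fit, on a simulated calibration curve with proportional error) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated calibration data over a wide range with proportional error.
x = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
y = (0.02 * x + 0.1) * (1 + rng.normal(scale=0.03, size=x.size))

def weighted_fit(x, y, w):
    """Weighted least squares: solve (X'WX) b = X'Wy for [intercept, slope]."""
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

b_unweighted = weighted_fit(x, y, np.ones_like(x))
b_wx2 = weighted_fit(x, y, 1.0 / x**2)   # 1/x^2 weighting

print(np.round(b_unweighted, 4), np.round(b_wx2, 4))
```

Comparing relative residuals from each candidate weighting, as the abstract describes, then guides the choice of model.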

  16. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares) either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available and the R syntax used to run the example analyses is included in the Appendix.
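
Poisson regression, the first of the count models listed, can be fit by iteratively reweighted least squares; a numpy sketch on simulated counts (the tutorial itself uses R):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a log-linear Poisson regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)           # current fitted mean
        z = X @ beta + (y - mu) / mu    # working response
        XtW = X.T * mu                  # Poisson IRLS weights W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # true intercept 0.5, slope 0.8
beta = poisson_irls(np.column_stack([np.ones(n), x]), y)
print(np.round(beta, 2))
```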

  17. Use of probabilistic weights to enhance linear regression myoelectric control.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-12-01

    Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks relative to linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
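
The weighting scheme can be sketched abstractly: fit a Gaussian model per movement class, then scale the regression output by the probability that movement was intended. All numbers below (class means, variance, regression slope) are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D feature for one DOF with three movement classes:
# extension (-1), no movement (0), flexion (+1).
CLASS_MEANS = {-1: -2.0, 0: 0.0, 1: 2.0}
VAR = 1.0

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def weighted_output(feature, slope=0.5):
    raw = slope * feature                      # plain linear-regression output
    probs = {c: gaussian_pdf(feature, m, VAR) for c, m in CLASS_MEANS.items()}
    total = sum(probs.values())
    p_move = (probs[-1] + probs[1]) / total    # probability user intends movement
    return raw * p_move                        # attenuate unintended activity

print(weighted_output(0.1))   # near rest: heavily attenuated
print(weighted_output(2.5))   # clear movement: mostly passes through
```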

  18. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
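
The method of least squares described here reduces to two textbook formulas, b1 = Sxy/Sxx and b0 = ybar - b1*xbar; a small worked example on invented data:

```python
# Least-squares slope and intercept for simple linear regression.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.2, 5.9, 8.1, 9.9]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)                       # Sxx
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))   # Sxy
slope = sxy / sxx
intercept = ybar - slope * xbar
print(round(slope, 3), round(intercept, 3))
```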

  19. The Analysis of Nonstationary Time Series Using Regression, Correlation and Cointegration

    Directory of Open Access Journals (Sweden)

    Søren Johansen

    2012-06-01

    Full Text Available There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference using the cointegrated vector autoregressive model. Finally we analyse some monthly data from US on interest rates as an illustration of the methods.
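
The classic hazard with nonstationary series is spurious regression: two independent random walks typically show strong level correlation even though their increments are unrelated. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x = np.cumsum(rng.normal(size=n))   # independent random walk 1
y = np.cumsum(rng.normal(size=n))   # independent random walk 2

r_levels = np.corrcoef(x, y)[0, 1]                    # often spuriously large
r_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]   # near zero

print(round(r_levels, 3), round(r_diffs, 3))
```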

  20. Tools to support interpreting multiple regression in the face of multicollinearity.

    Science.gov (United States)

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
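
Several of the indices listed (beta weights, structure coefficients, R²) are simple to compute side by side; a numpy sketch on simulated collinear predictors:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # deliberately collinear with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

# Standardize so the regression weights are beta weights.
X = np.column_stack([x1, x2])
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
yhat = Xs @ beta
r2 = 1 - np.sum((ys - yhat) ** 2) / np.sum(ys ** 2)
# Structure coefficients: correlation of each predictor with yhat.
structure = np.array([np.corrcoef(Xs[:, j], yhat)[0, 1] for j in range(2)])

print(np.round(beta, 2), round(r2, 2), np.round(structure, 2))
```

Under collinearity the beta weights split credit between the predictors while both structure coefficients stay large, which is exactly the contrast the article recommends examining.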

  1. Testing and Modeling Fuel Regression Rate in a Miniature Hybrid Burner

    Directory of Open Access Journals (Sweden)

    Luciano Fanton

    2012-01-01

    Full Text Available Ballistic characterization of an extended group of innovative HTPB-based solid fuel formulations for hybrid rocket propulsion was performed in a lab-scale burner. An optical time-resolved technique was used to assess the quasi-steady regression history of single-perforation, cylindrical samples. The effects of metalized additives and radiant heat transfer on the regression rate of such formulations were assessed. Under the investigated operating conditions and based on phenomenological models from the literature, analyses of the collected experimental data show an appreciable influence of the radiant heat flux from burnt gases and soot for both unloaded and loaded fuel formulations. Pure HTPB regression rate data are satisfactorily reproduced, while the impressive initial regression rates of metalized formulations require further assessment.
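
Hybrid fuel ballistics are commonly summarized by a power law r = a·G^n, which becomes a linear regression in log-log space; a sketch on synthetic, noise-free data (the constants 0.1 and 0.6 are invented, not the paper's results):

```python
import numpy as np

# Synthetic regression-rate data: oxidizer mass flux G vs regression rate r,
# generated exactly from r = a * G**n with invented a = 0.1, n = 0.6.
G = np.array([50.0, 100.0, 150.0, 200.0, 300.0])
r = 0.1 * G ** 0.6

# In log-log space the power law is linear: log r = log a + n log G.
n_exp, log_a = np.polyfit(np.log(G), np.log(r), 1)
print(round(n_exp, 3), round(np.exp(log_a), 3))
```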

  2. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application.

  3. Does the Magnitude of the Link between Unemployment and Crime Depend on the Crime Level? A Quantile Regression Approach

    Directory of Open Access Journals (Sweden)

    Horst Entorf

    2015-07-01

    Full Text Available Two alternative hypotheses – referred to as opportunity- and stigma-based behavior – suggest that the magnitude of the link between unemployment and crime also depends on preexisting local crime levels. In order to analyze conjectured nonlinearities between both variables, we use quantile regressions applied to German district panel data. While both conventional OLS and quantile regressions confirm the positive link between unemployment and crime for property crimes, results for assault differ with respect to the method of estimation. Whereas conventional mean regressions do not show any significant effect (which would confirm the usual result found for violent crimes in the literature), quantile regression reveals that the size and importance of the relationship are conditional on the crime rate. The partial effect is significantly positive for moderately low and median quantiles of local assault rates.
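
Quantile regression generalizes the pinball (check) loss, whose minimizer over a constant is the τ-quantile; a small numerical check on simulated data:

```python
import numpy as np

def pinball_loss(tau, y, q):
    """Average check-function loss for a candidate constant quantile q."""
    u = y - q
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

rng = np.random.default_rng(6)
y = rng.exponential(size=2000)

tau = 0.75
grid = np.linspace(0.0, 5.0, 501)
best = grid[int(np.argmin([pinball_loss(tau, y, q) for q in grid]))]

# The loss minimizer coincides with the empirical tau-quantile.
print(round(float(best), 2), round(float(np.quantile(y, tau)), 2))
```

Replacing the constant q with a linear function of covariates and minimizing the same loss gives conditional quantile regression, the estimator used in this paper.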

  4. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced?

    Science.gov (United States)

    Murphy, Kevin; Birn, Rasmus M; Handwerker, Daniel A; Jones, Tyler B; Bandettini, Peter A

    2009-02-01

    Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combination breath holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step.
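
Global signal regression itself is just a per-voxel projection; a minimal sketch (no intercept or other nuisance terms) showing that the residuals are, by construction, exactly orthogonal to the global signal:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vox, n_t = 50, 200
data = rng.normal(size=(n_vox, n_t))   # toy voxel-by-time matrix

g = data.mean(axis=0)                  # global signal (mean over voxels)
g_unit = g / np.linalg.norm(g)

# Regress the global signal out of every voxel time series.
betas = data @ g_unit
residuals = data - np.outer(betas, g_unit)

# Each residual time series is orthogonal to the global signal.
print(np.allclose(residuals @ g_unit, 0.0))
```

It is this forced orthogonality that constrains the residual correlation structure and, as the paper argues, can introduce apparent anti-correlations.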

  5. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

    DEFF Research Database (Denmark)

    Merlo, Juan; Wagner, Philippe; Ghith, Nermin

    2016-01-01

    BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes...

  6. Repository performance confirmation

    International Nuclear Information System (INIS)

    Hansen, Francis D.

    2011-01-01

    Repository performance confirmation links the technical bases of repository science and societal acceptance. This paper explores the myriad aspects of what has been labeled performance confirmation in U.S. programs, which involves monitoring as a collection of distinct activities combining technical and social significance in radioactive waste management. This paper is divided into four parts: (1) A distinction is drawn between performance confirmation monitoring and other testing and monitoring objectives; (2) A case study illustrates confirmation activities integrated within a long-term testing and monitoring strategy for Yucca Mountain; (3) A case study reviews compliance monitoring developed and implemented for the Waste Isolation Pilot Plant; and (4) An approach for developing, evaluating and implementing the next generation of performance confirmation monitoring is presented. International interest in repository monitoring is exhibited by the European Commission Seventh Framework Programme 'Monitoring Developments for Safe Repository Operation and Staged Closure' (MoDeRn) Project. The MoDeRn partners are considering the role of monitoring in a phased approach to the geological disposal of radioactive waste. As repository plans advance in different countries, the need to consider monitoring strategies within a controlled framework has become more apparent. The MoDeRn project pulls together technical and societal experts to assimilate a common understanding of a process that could be followed to develop a monitoring program. A fundamental consideration is the differentiation of confirmation monitoring from the many other testing and monitoring activities. Recently, the license application for Yucca Mountain provided a case study including a technical process for meeting regulatory requirements to confirm repository performance as well as considerations related to the preservation of retrievability. The performance confirmation plan developed as part of the

  7. [Milk yield and environmental factors: Multiple regression analysis of the association between milk yield and udder health, fertility data and replacement rate].

    Science.gov (United States)

    Fölsche, C; Staufenbiel, R

    2014-01-01

    The relationship between milk yield and both fertility and general animal health in dairy herds is discussed from opposing viewpoints. Hypothesis (1), that raising the herd milk yield would worsen fertility results, increase the milk cell count as an indicator of udder health and the replacement rate as a global indicator of animal health, and increase the occurrence of specific diseases as herd problems, was compared with the opposing hypotheses that there is no relationship (2) or that the relationship is differentiated and changing (3). A total of 743 herd examinations, considered independent, were performed in 489 herds between 1995 and 2010. The milk yield, fertility rate, milk cell count, replacement rate, categorized herd problems and management information were recorded. The relationship between the milk yield and both the fertility data and animal health was evaluated using simple and multiple regression analyses. The period between calving and the first service displayed no significant relationship to the herd milk yield. Simple regression analysis showed that the period between calving and gestation, the calving interval and the insemination number were significantly positively associated with the herd milk yield. This positive correlation was lost in multiple regression analysis. In both the simple and multiple regression analyses, the milk cell count and the replacement rate displayed a significant negative relationship to the milk yield. The alternative hypothesis (3) was confirmed. A higher milk yield has no negative influence on the milk cell count and the replacement rate in terms of the udder and general health. When parameterizing fertility, the herd milk yield should be considered. Extending the resting time may increase the milk yield while preventing a decline in the insemination index.

  8. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
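    A minimal numerical sketch of the three estimators (illustrative data; the slope and error variances are assumptions, not values from the study). With measurement error in both variables, ordinary least squares attenuates the slope, while orthogonal regression with equal error variances recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 500, 2.0
x_true = rng.normal(0.0, 1.0, n)
x = x_true + rng.normal(0.0, 0.5, n)              # measurement error in x too
y = true_slope * x_true + rng.normal(0.0, 0.5, n)
xc, yc = x - x.mean(), y - y.mean()

# Ordinary least squares: errors assumed confined to y (attenuated here).
b_ols = (xc @ yc) / (xc @ xc)

# Geometric mean regression: sign(r) * sd(y) / sd(x).
r = np.corrcoef(x, y)[0, 1]
b_gmr = np.sign(r) * yc.std() / xc.std()

# Orthogonal regression: major axis of the sample covariance matrix
# (appropriate when the two error variances are equal, as assumed here).
evals, evecs = np.linalg.eigh(np.cov(x, y))
major = evecs[:, np.argmax(evals)]
b_orth = major[1] / major[0]
```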

  9. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Science.gov (United States)

    Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

    2012-01-01

    Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

  10. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
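    A sketch of the exceedance-probability idea (hypothetical rain-rate data and coefficients; a plain Newton-Raphson fit stands in for the partial-likelihood machinery described above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
rain = rng.gamma(2.0, 2.0, n)                       # hypothetical area-averaged rain rates
p_true = 1 / (1 + np.exp(-(0.8 * rain - 3.0)))      # assumed true exceedance probability
exceeds = (rng.uniform(size=n) < p_true).astype(float)

def fit_logistic(X, y, iters=25):
    """Logistic regression via Newton-Raphson (equivalently, IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta = beta + np.linalg.solve(hess, grad)
    return beta

X = np.column_stack([np.ones(n), rain])
beta = fit_logistic(X, exceeds)
# Estimated probability of exceeding the threshold at an area-averaged rate of 5:
p_at_5 = 1 / (1 + np.exp(-(beta[0] + beta[1] * 5.0)))
```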

  11. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. In contrast, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
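    The coverage argument can be checked with a small simulation (illustrative parameter choices; heavily skewed but mean-zero errors). Even with clearly non-normal errors, the 95% confidence interval for the OLS slope covers the true value close to 95% of the time at moderate sample sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_slope, n_sims, hits = 200, 0.5, 2000, 0
for _ in range(n_sims):
    x = rng.normal(size=n)
    err = rng.exponential(1.0, n) - 1.0           # heavily skewed, mean-zero errors
    y = 1.0 + true_slope * x + err
    xc = x - x.mean()
    b = (xc @ y) / (xc @ xc)                      # OLS slope
    resid = y - y.mean() - b * xc
    se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))
    hits += abs(b - true_slope) < 1.96 * se       # did the 95% CI cover?
coverage = hits / n_sims
```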

  12. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate the same (unknown) optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
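    Under a correct Gaussian assumption, the comparison can be sketched directly using the closed-form CRPS of a normal distribution (synthetic observations, not the study's forecast data): minimum-CRPS and maximum-likelihood estimates of the distribution parameters come out nearly identical.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.5, 2000)        # synthetic "observations", truly Gaussian

def mean_crps(params):
    """Mean CRPS of a N(mu, sigma) forecast; closed form for the Gaussian."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

res = minimize(mean_crps, x0=np.array([0.0, 0.0]))
mu_crps, sigma_crps = res.x[0], float(np.exp(res.x[1]))
mu_ml, sigma_ml = y.mean(), y.std()   # Gaussian maximum-likelihood estimates
```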

  13. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and derives their parameters with the ordinary least squares estimate. Then a significance test method for the polynomial regression function is derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accordance with the authors' real work. (authors)
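    A hedged sketch of the approach on synthetic decay-heat data (the coefficients and noise level are made up for illustration): fit a polynomial by ordinary least squares, then test the overall regression with the usual F statistic, exactly as one would for a multivariable linear model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 60)                       # e.g. decay time in years
power = 5.0 - 1.2 * t + 0.08 * t**2 + rng.normal(0, 0.3, t.size)  # synthetic heating power

deg = 2
X = np.vander(t, deg + 1)                        # columns: t^2, t, 1
coef, rss_arr, *_ = np.linalg.lstsq(X, power, rcond=None)
rss = float(rss_arr[0])                          # residual sum of squares
tss = float(((power - power.mean()) ** 2).sum())

# Overall F-test of the regression: H0 says all non-constant coefficients are zero.
k, n = deg, t.size
F = ((tss - rss) / k) / (rss / (n - k - 1))
p_value = float(stats.f.sf(F, k, n - k - 1))
```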

  14. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses.
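    The PVR recipe, distances to principal coordinates to eigenvectors-as-regressors, can be sketched as follows (random distances and a made-up trait; a toy illustration, not the simulation design of the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
# Hypothetical symmetric matrix of pairwise phylogenetic distances.
A = rng.uniform(1.0, 10.0, (n, n))
D = np.triu(A, 1) + np.triu(A, 1).T

# Principal Coordinate Analysis: double-centre -D^2/2, then eigendecompose.
J = np.eye(n) - np.ones((n, n)) / n
G = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(G)
order = np.argsort(evals)[::-1]
vectors = evecs[:, order[:5]] * np.sqrt(np.abs(evals[order[:5]]))  # leading 5 axes

# PVR: regress a species trait on the leading eigenvectors.
trait = 2.0 * vectors[:, 0] + rng.normal(0, 0.1, n)   # trait tied to the first axis
X = np.column_stack([np.ones(n), vectors])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
resid = trait - X @ beta
r2 = 1 - (resid @ resid) / ((trait - trait.mean()) ** 2).sum()
```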

  15. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  16. Prediction of radiation levels in residences: A methodological comparison of CART [Classification and Regression Tree Analysis] and conventional regression

    International Nuclear Information System (INIS)

    Janssen, I.; Stebbings, J.H.

    1990-01-01

    In environmental epidemiology, trace and toxic substance concentrations frequently have very highly skewed distributions ranging over one or more orders of magnitude, and prediction by conventional regression is often poor. Classification and Regression Tree Analysis (CART) is an alternative in such contexts. To compare the techniques, two Pennsylvania data sets and three independent variables are used: house radon progeny (RnD) and gamma levels as predicted by construction characteristics in 1330 houses; and ∼200 house radon (Rn) measurements as predicted by topographic parameters. CART may identify structural variables of interest not identified by conventional regression, and vice versa, but in general the regression models are similar. CART has major advantages in dealing with other common characteristics of environmental data sets, such as missing values, continuous variables requiring transformations, and large sets of potential independent variables. CART is most useful in the identification and screening of independent variables, greatly reducing the need for cross-tabulations and nested breakdown analyses. There is no need to discard cases with missing values for the independent variables because surrogate variables are intrinsic to CART. The tree-structured approach is also independent of the scale on which the independent variables are measured, so that transformations are unnecessary. CART identifies important interactions as well as main effects. The major advantages of CART appear to be in exploring data. Once the important variables are identified, conventional regressions seem to lead to results similar but more interpretable by most audiences. 12 refs., 8 figs., 10 tabs
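    The contrast between CART and conventional regression on interaction-driven data can be sketched with scikit-learn (synthetic radon-like data; the construction variables and effect sizes are invented for illustration). The tree picks up the interaction without it being specified, while the plain linear model cannot:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 400
basement = rng.integers(0, 2, n).astype(float)   # construction characteristic (0/1)
age = rng.uniform(0, 80, n)                      # house age, years
# Radon-like response driven by an interaction that a plain linear model misses.
radon = 2.0 + 3.0 * basement * (age > 40) + rng.normal(0, 0.5, n)

X = np.column_stack([basement, age])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, radon)
lin = LinearRegression().fit(X, radon)
r2_tree, r2_lin = tree.score(X, radon), lin.score(X, radon)
```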

  17. Logistic Regression Analysis of Operational Errors and Routine Operations Using Sector Characteristics

    National Research Council Canada - National Science Library

    Pfleiderer, Elaine M; Scroggins, Cheryl L; Manning, Carol A

    2009-01-01

    Two separate logistic regression analyses were conducted for low- and high-altitude sectors to determine whether a set of dynamic sector characteristics variables could reliably discriminate between operational error (OE...

  18. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...

  19. [Epidemiological investigation on confirmed cases of schistosomiasis in Hubei Province].

    Science.gov (United States)

    Yan-Yan, Chen; Shun-Xiang, Cai; Guo, Li; Ying, Xiao; Xiao-Wei, Shan; Juan, Zhang; Jian-Bing, Liu

    2016-05-10

    To grasp the distribution and epidemiology of confirmed cases of schistosomiasis in Hubei Province, so as to provide the evidence for promoting the prevention and control work. The confirmed cases of schistosomiasis in Hubei Province from 2010 to 2014 were epidemiologically investigated, and the prevalence characteristics and main influencing factors were analyzed. A total of 10 102 confirmed cases from 2010 to 2014 were surveyed. There were 1 062 locally infected patients, accounting for 10.51% and including 354 repeated infections and 17 newly infected. There were 290 foreign-infected patients, accounting for 2.87%, with 206 repeated infection cases and 84 newly infected. There were 8 750 historical patients, including 2 229 patients missed by former schistosomiasis investigations, accounting for 22.06%; 570 patients who missed treatment, accounting for 5.64%; 3 640 patients treated with non-standard therapy, accounting for 36.03%; and 2 311 patients treated with poor medication efficacy, accounting for 22.88%. Multivariate unconditional logistic regression, applied to the confirmed cases in 2014, showed that, for missed investigation, the potential risk factors included age, educational level, and latrine renovation (b > 0, OR > 1), while the protective factors were the number of previous treatments, cattle feeding in the villager team, and Oncomelania hupensis snails in the surroundings (b < 0, OR < 1); for missed treatment, age, educational level and snails in the surroundings of the residence were the influential factors (b < 0, OR < 1); for substandard treatment, the risk factors included occupation and snails in the surroundings of the residence (b > 0, OR > 1), while educational level and snails in one's own field were protective factors (b < 0, OR < 1). Epidemiological investigation of the confirmed cases of schistosomiasis can capture the epidemic factors, so as to improve management and carry out scientific control.

  20. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
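    The two-sample setup can be emulated in a few lines (assumed group means 0.3 and 0.5 and a common dispersion of 10, not the paper's exact design): with equal dispersions, the linear-regression slope on a group dummy is an unbiased estimate of the mean difference, consistent with the simulation findings above.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000
phi, mu0, mu1 = 10.0, 0.3, 0.5        # common dispersion and group means (assumed)
y0 = rng.beta(mu0 * phi, (1 - mu0) * phi, n)   # bounded (0,1) responses, group 0
y1 = rng.beta(mu1 * phi, (1 - mu1) * phi, n)   # group 1

# Linear regression on a group dummy: the slope is the difference in means.
y = np.concatenate([y0, y1])
g = np.concatenate([np.zeros(n), np.ones(n)])
gc = g - g.mean()
slope = (gc @ y) / (gc @ gc)          # estimates mu1 - mu0 = 0.2
```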

  1. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  2. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis.

    Science.gov (United States)

    Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O

    2017-03-05

    The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in the pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for PAH (anthracene) and chiral-PAH analogue derivative (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) analyses is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-squares (PLS) regression analysis of the emission spectra of Me-β-CD guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD guest-host inclusion complex samples. The calculated Kb values and negative ΔG suggest the thermodynamic favorability of the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity behavior and thermodynamic properties. The PLS regression analysis resulted in squared correlation coefficients of 0.997530 or better and low LODs of 3.81×10^-7 M for anthracene and 3.48×10^-8 M for TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with low average errors of 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of the method's high sensitivity and accuracy for analysis of PAH and chiral-PAH analogue derivatives without the need for an expensive chiral column, enantiomeric resolution, or use of a polarized

  3. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
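    The role of censoring can be sketched with the exponential model (hypothetical fire cycle and record length, not the study's values): the censored-exponential MLE of the fire cycle is the total exposure time divided by the number of observed fires, whereas ignoring the censored intervals biases the estimate low.

```python
import numpy as np

rng = np.random.default_rng(10)
true_cycle, record_span, n = 150.0, 300.0, 500   # years; hypothetical values
intervals = rng.exponential(true_cycle, n)       # fire return intervals
observed = np.minimum(intervals, record_span)    # what the tree-ring record shows
event = intervals <= record_span                 # False = right-censored interval

naive_cycle = observed.mean()                    # ignores censoring, biased low
cycle_hat = observed.sum() / event.sum()         # censored-exponential MLE
```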

  4. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
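    A fixed-knot, continuity-constrained piecewise linear fit can be sketched with the truncated-power parameterization (fixed effects only, on synthetic data; the mixed-model random effects of the paper are omitted here). Continuity at the knot is implicit in the basis, so an ordinary least-squares fit suffices:

```python
import numpy as np

rng = np.random.default_rng(11)
t = np.sort(rng.uniform(0, 10, 200))             # e.g. time on study
knot = 5.0                                       # fixed knot location
y = np.where(t < knot, 2 + 1.5 * t, 2 + 1.5 * knot - 0.8 * (t - knot))
y = y + rng.normal(0, 0.4, t.size)

# Truncated-power basis: the (t - knot)+ term lets the slope change at the
# knot while the fitted curve stays continuous there by construction.
X = np.column_stack([np.ones_like(t), t, np.clip(t - knot, 0.0, None)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_before, slope_after = beta[1], beta[1] + beta[2]
```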

  5. Confidential donation confirmation as an alternative to confidential unit exclusion: 15 months experience of the HEMOMINAS foundation

    Directory of Open Access Journals (Sweden)

    Flávia Cristine Martineli Loureiro

    2011-01-01

    BACKGROUND: Confidential unit exclusion remains a controversial strategy to reduce the residual risk of transfusion-transmitted infections. OBJECTIVE: This study aimed to analyze confidential unit exclusion, from its development in a large institution, in light of confidential donation confirmation. METHODS: Data of individuals who donated from October 1, 2008 to December 31, 2009 were analyzed in a case-control study. The serological results and sociodemographic characteristics of donors who did not confirm their donations were compared to those who did. Variables with p-values < 0.20 in univariate analysis were included in a logistic multivariate analysis. RESULTS: In the univariate analysis there was a statistically significant association between positive serological results and a response of "No" to confidential donation confirmation. Donation type (first-time or return donor; OR 1.69, CI 1.37-2.09), gender (OR 1.66, CI 1.35-2.04), education level (OR 2.82, CI 2.30-3.47) and ethnic background (OR 0.67, CI 0.55-0.82) were included in the final logistic regression model. In all logistic regression models analyzed, serological suitability and confidential donation confirmation were not found to be statistically associated. The adoption of new measures of clinical classification, such as audiovisual touch-screen computer-assisted self-administered interviews, might be more effective than confidential unit exclusion in the identification of donor risk behavior. The requirement that transfusion services continue to use confidential unit exclusion needs to be debated in countries where more specific and sensitive clinical and serological screening methods are available. CONCLUSION: Our findings suggest that there are not enough benefits to justify continued use of confidential donation confirmation in the analyzed institution.

  6. The study of logistic regression of risk factor on the death cause of uranium miners

    International Nuclear Information System (INIS)

    Wen Jinai; Yuan Liyun; Jiang Ruyi

    1999-01-01

    The logistic regression model has been widely used in the field of medicine. Computer software implementing this model is popular, but it is worth discussing how to use the model correctly. Using SPSS (Statistical Package for the Social Sciences) software, the unconditional logistic regression method was adopted to carry out multi-factor analyses on the causes of total death, cancer death and lung cancer death of uranium miners. The data are from the radioepidemiological database of one uranium mine. The results show that attained age is a risk factor in the logistic regression analyses of total death, cancer death and lung cancer death. In the logistic regression analysis of cancer death, there is a negative correlation between age at exposure and cancer death. This shows that the younger the age at exposure, the greater the risk of cancer death. In the logistic regression analysis of lung cancer death, there is a positive correlation between cumulated exposure and lung cancer death. This shows that cumulated exposure is the most important risk factor for lung cancer death among uranium miners. Many foreign reports have documented that the lung cancer death rate is higher in uranium miners.

  7. Spatial Regression and Prediction of Water Quality in a Watershed with Complex Pollution Sources.

    Science.gov (United States)

    Yang, Xiaoying; Liu, Qun; Luo, Xingzhang; Zheng, Zheng

    2017-08-16

    Fast economic development, burgeoning population growth, and rapid urbanization have led to complex pollution sources contributing to water quality deterioration simultaneously in many developing countries including China. This paper explored the use of spatial regression to evaluate the impacts of watershed characteristics on ambient total nitrogen (TN) concentration in a heavily polluted watershed and make predictions across the region. Regression results have confirmed the substantial impact on TN concentration by a variety of point and non-point pollution sources. In addition, spatial regression has yielded better performance than ordinary regression in predicting TN concentrations. Due to its best performance in cross-validation, the river distance based spatial regression model was used to predict TN concentrations across the watershed. The prediction results have revealed a distinct pattern in the spatial distribution of TN concentrations and identified three critical sub-regions in priority for reducing TN loads. Our study results have indicated that spatial regression could potentially serve as an effective tool to facilitate water pollution control in watersheds under diverse physical and socio-economical conditions.

  8. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.

  9. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established, yet unstable, approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
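
    A minimal sketch of the classical maximum likelihood beta regression fit that the boosting approach builds on: a logit-linked mean model with a common precision parameter, fit by direct optimization of the beta log-likelihood. The data, the true coefficients and the optimizer choice are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
n = 300

# Simulate a percentage response in (0,1) with a logit-linked mean.
x = rng.normal(0, 1, n)
mu = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
phi = 20.0                                  # precision parameter
y = rng.beta(mu * phi, (1 - mu) * phi)

def negloglik(theta):
    """Negative beta log-likelihood with a logit mean link."""
    b0, b1, log_phi = theta
    m = 1 / (1 + np.exp(-(b0 + b1 * x)))
    p = np.exp(log_phi)
    a, b = m * p, (1 - m) * p
    return -np.sum(gammaln(p) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000})
print(np.round(fit.x[:2], 2))  # estimated intercept and slope of the mean model
```

    Boosted beta regression replaces this joint optimization with componentwise gradient boosting, which is what lets it do variable selection and model the precision at the same time.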

  10. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...... models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  11. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements...... of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study......-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  12. Automatic selection of indicators in a fully saturated regression

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren; Santos, Carlos

    2008-01-01

    We consider selecting a regression model, using a variant of Gets, when there are more variables than observations, in the special case that the variables are impulse dummies (indicators) for every observation. We show that the setting is unproblematic if tackled appropriately, and obtain the finite-sample distribution of estimators of the mean and variance in a simple location-scale model under the null that no impulses matter. A Monte Carlo simulation confirms the null distribution, and shows power against an alternative of interest.

  13. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact, with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods (an overdispersed Poisson model, a robust estimator, and negative binomial regression) were used to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested for overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimates to those given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. The interpretation of estimates from two variables (using the Internet to seek health information, and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
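
    The overdispersion diagnostic described above (the ratio of the sum of squared Pearson residuals to the residual degrees of freedom) can be illustrated on an intercept-only model with simulated counts. This is a hedged numpy sketch, not the study's data: the counts are drawn from a negative binomial so that Var[Y] exceeds E[Y] by design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate overdispersed counts (negative binomial: mean 6, variance 24),
# mimicking consultation counts where Var[Y] > E[Y].
y = rng.negative_binomial(2, 0.25, size=500)

# Intercept-only Poisson fit: the MLE of the mean is the sample mean.
mu = y.mean()

# Pearson dispersion statistic: sum of squared Pearson residuals over the
# residual degrees of freedom; close to 1 under a correctly specified Poisson.
df = len(y) - 1
dispersion = np.sum((y - mu) ** 2 / mu) / df
print(round(float(dispersion), 2))
```

    A ratio well above 1, as here, is the signal that one of the corrections discussed above (overdispersed Poisson, robust standard errors, or negative binomial regression) should be preferred over a plain Poisson fit.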

  14. Longitudinal strain predicts left ventricular mass regression after aortic valve replacement for severe aortic stenosis and preserved left ventricular function.

    Science.gov (United States)

    Gelsomino, Sandro; Lucà, Fabiana; Parise, Orlando; Lorusso, Roberto; Rao, Carmelo Massimiliano; Vizzardi, Enrico; Gensini, Gian Franco; Maessen, Jos G

    2013-11-01

    We explored the influence of global longitudinal strain (GLS) measured with two-dimensional speckle-tracking echocardiography on left ventricular mass regression (LVMR) in patients with pure aortic stenosis (AS) and normal left ventricular function undergoing aortic valve replacement (AVR). The study population included 83 patients with severe AS (aortic valve area regression (all P regression in patients with pure AS undergoing AVR. Our findings must be confirmed by further larger studies.

  15. Information fusion via constrained principal component regression for robust quantification with incomplete calibrations

    International Nuclear Information System (INIS)

    Vogt, Frank

    2013-01-01

    Graphical abstract: Analysis task: determine the albumin (= protein) concentration in microalgae cells as a function of the cells' nutrient availability. Left panel: the albumin concentrations predicted by conventional principal component regression feature low reproducibility and are partially higher than the concentrations of the algae in which the albumin is contained. Right panel: augmenting an incomplete PCR calibration with additional expert information derives reasonable albumin concentrations which now reveal a significant dependency on the algae's nutrient situation.
    Highlights:
    • Make quantitative analyses of compounds embedded in largely unknown chemical matrices robust.
    • Improved concentration prediction with originally insufficient calibration models.
    • Chemometric approach for incorporating expertise from other fields and/or researchers.
    • Ensure chemical, biological, or medicinal meaningfulness of quantitative analyses.
    Abstract: Incomplete calibrations are encountered in many applications and hamper chemometric data analyses. Such situations arise when target analytes are embedded in a chemically complex matrix from which calibration concentrations cannot be determined with reasonable effort. In other cases, the samples' chemical composition may fluctuate in an unpredictable way and thus cannot be comprehensively covered by calibration samples. The reason for a calibration model to fail is the regression principle itself, which seeks to explain measured data optimally in terms of the (potentially incomplete) calibration model but does not consider chemical meaningfulness. This study presents a novel chemometric approach which is based on experimentally feasible calibrations, i.e. concentration series of the target analytes outside the chemical matrix ('ex situ calibration'). The inherent lack of information is then compensated by incorporating additional knowledge in the form of regression constraints. Any outside knowledge can be
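
    The idea of compensating an incomplete calibration with regression constraints can be sketched as a bounded least-squares fit. The spectra, pure-analyte profiles and concentration bounds below are invented purely for illustration and only demonstrate the mechanics of constraining a calibration, not the paper's actual chemometric method.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)

# Hypothetical spectra: 50 channels, two ex-situ-calibrated analytes plus
# an unmodeled matrix background absent from the calibration.
wl = np.linspace(0, 1, 50)
pure = np.vstack([np.exp(-((wl - c) ** 2) / 0.01) for c in (0.3, 0.7)])
background = 0.4 * np.exp(-((wl - 0.5) ** 2) / 0.05)

true_conc = np.array([2.0, 1.0])
spectrum = true_conc @ pure + background + rng.normal(0, 0.01, 50)

# Unconstrained least squares against the incomplete calibration.
naive, *_ = np.linalg.lstsq(pure.T, spectrum, rcond=None)

# Constrained fit: concentrations must be nonnegative and below a
# physically plausible upper bound supplied as outside knowledge.
constrained = lsq_linear(pure.T, spectrum, bounds=(0.0, 3.0)).x

print(np.round(naive, 2), np.round(constrained, 2))
```

    The bounds play the role of the "outside knowledge" in the abstract: they keep the predicted concentrations chemically meaningful even though the calibration does not model the matrix background.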

  16. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  17. Temporal Synchronization Analysis for Improving Regression Modeling of Fecal Indicator Bacteria Levels

    Science.gov (United States)

    Multiple linear regression models are often used to predict levels of fecal indicator bacteria (FIB) in recreational swimming waters based on independent variables (IVs) such as meteorologic, hydrodynamic, and water-quality measures. The IVs used for these analyses are traditiona...

  18. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  19. Regressão múltipla stepwise e hierárquica em Psicologia Organizacional: aplicações, problemas e soluções Stepwise and hierarchical multiple regression in organizational psychology: Applications, problems and solutions

    Directory of Open Access Journals (Sweden)

    Gardênia Abbad

    2002-01-01

    Full Text Available This article discusses applications of stepwise and hierarchical multiple regression analyses to research in organizational psychology. Strategies for identifying Type I and II errors, and solutions to potential problems that may arise from such errors, are proposed. In addition, phenomena such as suppression, complementarity, and redundancy are reviewed. The article presents examples of research where these patterns of association between variables occurred, and describes the strategies researchers used to interpret them. Some applications of multiple regression analyses to studies involving between-variable interactions are presented, along with tests used to assess the linearity of relationships among variables. Finally, some suggestions are provided for dealing with the limitations implicit in multiple regression analyses (stepwise and hierarchical).

  20. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    Science.gov (United States)

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparative applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation during the first week of admission and again six months later. All data were primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.

  1. A female Viking warrior confirmed by genomics.

    Science.gov (United States)

    Hedenstierna-Jonson, Charlotte; Kjellström, Anna; Zachrisson, Torun; Krzewińska, Maja; Sobrado, Veronica; Price, Neil; Günther, Torsten; Jakobsson, Mattias; Götherström, Anders; Storå, Jan

    2017-12-01

    The objective of this study has been to confirm the sex and the affinity of an individual buried in a well-furnished warrior grave (Bj 581) in the Viking Age town of Birka, Sweden. Previously, based on the material and historical records, the male sex has been associated with the gender of the warrior and such was the case with Bj 581. An earlier osteological classification of the individual as female was considered controversial in a historical and archaeological context. A genomic confirmation of the biological sex of the individual was considered necessary to solve the issue. Genome-wide sequence data was generated in order to confirm the biological sex, to support skeletal integrity, and to investigate the genetic relationship of the individual to ancient individuals as well as modern-day groups. Additionally, a strontium isotope analysis was conducted to highlight the mobility of the individual. The genomic results revealed the lack of a Y-chromosome and thus a female biological sex, and the mtDNA analyses support a single-individual origin of sampled elements. The genetic affinity is close to present-day North Europeans, and within Sweden to the southern and south-central region. Nevertheless, the Sr values are not conclusive as to whether she was of local or nonlocal origin. The identification of a female Viking warrior provides a unique insight into the Viking society, social constructions, and exceptions to the norm in the Viking time-period. The results call for caution against generalizations regarding social orders in past societies. © 2017 The Authors American Journal of Physical Anthropology Published by Wiley Periodicals, Inc.

  2. Use of probabilistic weights to enhance linear regression myoelectric control

    Science.gov (United States)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
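
    The probability weighting scheme (Gaussian class models whose posterior movement probability scales the regression output) can be sketched for a single DOF with a one-dimensional feature. All class parameters and the regression weight below are hypothetical stand-ins for values fit from training data, not the study's actual models.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Hypothetical 1-D EMG amplitude feature for one DOF; class means and
# variances stand in for Gaussian models fit to training data.
classes = {"rest": (0.1, 0.01), "flex": (1.0, 0.04), "ext": (-1.0, 0.04)}

w = 0.8  # pre-fit linear regression weight: velocity = w * feature

def weighted_output(feature):
    # Posterior probability of each class under equal priors.
    lik = {c: gauss_pdf(feature, m, v) for c, (m, v) in classes.items()}
    p_move = (lik["flex"] + lik["ext"]) / sum(lik.values())
    # Scale the regression output by the probability of intended movement.
    return p_move * (w * feature)

print(round(weighted_output(0.12), 3))  # near rest: output suppressed
print(round(weighted_output(0.95), 3))  # clear flexion: output passes through
```

    Near the rest class the posterior movement probability is close to zero, so small regression outputs are suppressed; for a clear movement the weight is close to one and the regression output passes through, which is the mechanism behind the reduced extraneous movement reported above.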

  3. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control), an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.

  4. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

    1994-01-01

    Multiple regression modeling of monitored building energy use data is often faulted as a reliable means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected, and synthetic data sequences representative of daily energy use in large institutional buildings were generated for each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. The MRA and PCA approaches were then applied to these data sets and their relative performances compared. Conditions under which PCA seems to perform better than MRA were identified and preliminary recommendations on the use of either modeling approach formulated. (orig.)
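
    A small numpy sketch of the comparison, assuming the same three regressors as the study (outdoor temperature, humidity, solar radiation) with humidity made deliberately collinear with temperature; the coefficients, noise levels and choice to keep two of three principal components are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365

# Synthetic daily weather drivers; humidity is collinear with temperature.
temp = 20 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
humid = 0.5 * temp + rng.normal(0, 0.5, n)
solar = 200 + 50 * rng.random(n)

X = np.column_stack([temp, humid, solar])
y = 100 + 3.0 * temp + 2.0 * humid + 0.1 * solar + rng.normal(0, 5, n)

# Multiple regression analysis (MRA) via ordinary least squares.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
beta_mra, *_ = np.linalg.lstsq(Xc, yc, rcond=None)

# Principal component regression (PCR), keeping 2 of 3 components.
std = Xc.std(axis=0)
Xs = Xc / std
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
scores = Xs @ Vt[:k].T                       # project onto leading PCs
gamma, *_ = np.linalg.lstsq(scores, yc, rcond=None)
beta_pcr = (Vt[:k].T @ gamma) / std          # back to the original scale

for name, b in (("MRA", beta_mra), ("PCR", beta_pcr)):
    rmse = np.sqrt(np.mean((yc - Xc @ b) ** 2))
    print(name, np.round(rmse, 2))
```

    Dropping the trailing component discards the noisy temperature-minus-humidity direction, which is how PCR trades a small bias in the coefficients for stability under multicollinearity.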

  5. The Application of Classical and Neural Regression Models for the Valuation of Residential Real Estate

    Directory of Open Access Journals (Sweden)

    Mach Łukasz

    2017-06-01

    Full Text Available The research process aimed at building regression models that help to value residential real estate is presented in the following article. Two widely used computational tools, i.e. classical multiple regression and artificial neural network regression models, were used to build the models. The aim of the research is to determine the practical usefulness of the above-mentioned tools and to compare them. The data used for the analyses refer to the secondary transactional residential real estate market.

  6. On macroeconomic values investigation using fuzzy linear regression analysis

    Directory of Open Access Journals (Sweden)

    Richard Pospíšil

    2017-06-01

    Full Text Available The theoretical background for the abstract formalization of the vague phenomena of complex systems is fuzzy set theory. In the paper, vague data are defined as specialized fuzzy sets (fuzzy numbers), and a fuzzy linear regression model is described as a fuzzy function with fuzzy numbers as vague parameters. To identify the fuzzy coefficients of the model, a genetic algorithm is used. The linear approximation of the vague function, together with its possibility area, is expressed analytically and graphically. A suitable application is presented for time series fuzzy regression analysis. The time trend and seasonal cycles, including their possibility areas, are calculated and expressed. The examples come from the field of economics, namely the development over time of unemployment, agricultural production and construction, respectively, between 2009 and 2011 in the Czech Republic. The results are given in the form of fuzzy regression models of the time series variables. For the period 2009-2011, the analysis assumptions about the seasonal behaviour of the variables and the relationship between them were confirmed; in 2010, the system behaved more fuzzily and the relationships between the variables were vaguer, which has many causes, from the different elasticity of demand, through state interventions, to globalization and transnational impacts.

  7. Complete regression of myocardial involvement associated with lymphoma following chemotherapy.

    Science.gov (United States)

    Vinicki, Juan Pablo; Cianciulli, Tomás F; Farace, Gustavo A; Saccheri, María C; Lax, Jorge A; Kazelian, Lucía R; Wachs, Adolfo

    2013-09-26

    Cardiac involvement as an initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who had initially been diagnosed with myocardial infiltration on an echocardiogram, presenting with a testicular mass and unilateral peripheral facial paralysis. On admission, electrocardiograms (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion, infiltrative thickening of the aortic walls, both atria and the interatrial septum, and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of the T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. Follow-up two-dimensional echocardiography confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with a testicular mass, unilateral peripheral facial paralysis and myocardial involvement, and demonstrates that regression of infiltration can be achieved by intensive chemotherapy treatment. To our knowledge, there are no reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis, with complete regression of myocardial involvement.

  8. Tax System in Poland – Progressive or Regressive?

    Directory of Open Access Journals (Sweden)

    Jacek Tomkiewicz

    2016-03-01

    Full Text Available Purpose: To analyse the impact of the Polish fiscal regime on the general revenue of the country, and specifically to establish whether the cumulative tax burden borne by Polish households is progressive or regressive. Methodology: On the basis of Eurostat and OECD data, the author has analysed fiscal regimes in EU Member States and in OECD countries. The tax burden of households within different income groups has also been examined pursuant to applicable fiscal laws and data pertaining to the revenue and expenditure of households published by the Central Statistical Office (CSO). Conclusions: The fiscal regime in Poland is regressive; that is, the relative fiscal burden decreases as the taxpayer’s income increases. Research Implications: The article contributes to the on-going discussion on social cohesion, in particular with respect to economic policy instruments aimed at the redistribution of income within the economy. Originality: The author presents an analysis of data pertaining to fiscal policies in EU Member States and OECD countries and assesses the impact of the legal environment (fiscal regime and social security system) in Poland on income distribution within the economy. The impact of the total tax burden (direct and indirect taxes, social security contributions) on the economic situation of households from different income groups has been calculated using an original formula.

  9. The non-condition logistic regression analysis of the reason of hypothyroidism after hyperthyroidism with 131I treatment

    International Nuclear Information System (INIS)

    Dang Yaping; Hu Guoying; Meng Xianwen

    1994-01-01

    There are many opinions on the causes of hypothyroidism after 131I treatment of hyperthyroidism, but few scientific analyses and reports in this respect. Non-conditional logistic regression addresses this problem successfully, with high scientific value and confidence in risk factor analysis. Data from 748 follow-up patients were analysed by non-conditional logistic regression. The results showed that the half-life and the 131I dose were the main causes of the incidence of hypothyroidism. The degree of confidence is 92.4%.

  10. Spatial regression analysis on 32 years of total column ozone data

    NARCIS (Netherlands)

    Knibbe, J.S.; van der A, J.R.; de Laat, A.T.J.

    2014-01-01

    Multiple-regression analyses have been performed on 32 years of total ozone column data that was spatially gridded with a 1 × 1.5° resolution. The total ozone data consist of the MSR (Multi Sensor Reanalysis; 1979-2008) and 2 years of assimilated SCIAMACHY (SCanning Imaging Absorption spectroMeter

  11. Economic Analyses of Ware Yam Production in Orlu Agricultural ...

    African Journals Online (AJOL)

    Economic Analyses of Ware Yam Production in Orlu Agricultural Zone of Imo State. ... International Journal of Agriculture and Rural Development ... statistics, gross margin analysis, marginal analysis and multiple regression analysis. Results ...

  12. Is Posidonia oceanica regression a general feature in the Mediterranean Sea?

    Directory of Open Access Journals (Sweden)

    M. BONACORSI

    2013-03-01

    Full Text Available Over the last few years, a widespread regression of Posidonia oceanica meadows has been noticed in the Mediterranean Sea. However, the magnitude of this decline is still debated. The objectives of this study are (i) to assess the spatio-temporal evolution of Posidonia oceanica around Cap Corse (Corsica) over time, comparing available ancient maps (from 1960) with a new (2011) detailed map produced by combining different techniques (aerial photographs, SSS, ROV, scuba diving); (ii) to evaluate the reliability of the ancient maps; and (iii) to discuss the observed regression of the meadows in relation to human pressure along the 110 km of coast. The comparison with previous data shows that, apart from sites clearly identified with actual evolution, the surfaces occupied by the seagrass Posidonia oceanica are relatively stable. The recorded differences seem related more to changes in mapping techniques. These results confirm that in areas characterized by a moderate anthropogenic impact, the Posidonia oceanica meadow shows no significant regression, and that the changes due to the evolution of mapping techniques are not negligible. However, other facts should be taken into account (e.g. the surfaces actually mapped) before extrapolating to the Mediterranean Sea and assessing the amplitude of the actual regression.

  13. Stereotype confirmation concerns predict dropout from cognitive behavioral therapy for social anxiety disorder.

    Science.gov (United States)

    Johnson, Suzanne; Price, Matthew; Mehta, Natasha; Anderson, Page L

    2014-08-19

    There are high attrition rates observed in efficacy studies for social anxiety disorder, and research has not identified consistent or theoretically meaningful predictors of dropout. Pre-treatment symptom severity and demographic factors, such as age and gender, are sometimes predictive of dropout. The current study examines a theoretically meaningful predictor of attrition based on experiences associated with social group membership rather than differences between social group categories--fear of confirming stereotypes. This is a secondary data analysis of a randomized controlled trial comparing two cognitive behavioral treatments for social anxiety disorder: virtual reality exposure therapy and exposure group therapy. Participants (N = 74) with a primary diagnosis of social anxiety disorder who were eligible to participate in the parent study and who self-identified as either "African American" (n = 31) or "Caucasian" (n = 43) completed standardized self-report measures of stereotype confirmation concerns (SCC) and social anxiety symptoms as part of a pre-treatment assessment battery. Hierarchical logistic regression showed that greater stereotype confirmation concerns were associated with higher dropout from therapy--race, age, gender, and pre-treatment symptom severity were not. Group treatment also was associated with higher dropout. These findings urge further research on theoretically meaningful predictors of attrition and highlight the importance of addressing cultural variables, such as the experience of stereotype confirmation concerns, during treatment of social anxiety to minimize dropout from therapy.

  14. Quantile regression for the statistical analysis of immunological data with many non-detects.

    Science.gov (United States)

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
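    As a rough illustration of the idea (not the authors' implementation), the sketch below fits a 0.75 quantile line to synthetic data in which all values below an invented detection limit are recorded as that limit. The check (pinball) loss is minimized approximately with iteratively reweighted least squares; all data and settings are invented.

```python
import numpy as np

def quantile_regression(X, y, tau=0.5, n_iter=50, eps=1e-6):
    """Approximate quantile regression: IRLS on the check (pinball) loss."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        # check-loss weights: tau for positive residuals, 1-tau for negative
        w = np.where(r >= 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        W = w[:, None] * X
        beta = np.linalg.solve(X.T @ W, (w * y) @ X)
    return beta

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 3, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # hypothetical immune marker
dl = 2.0                                     # hypothetical detection limit
y_cens = np.maximum(y, dl)                   # non-detects recorded at the limit
X = np.column_stack([np.ones(n), x])
b75 = quantile_regression(X, y_cens, tau=0.75)
print(np.round(b75, 2))  # slope should stay near 2 despite the non-detects
```

The 0.75 quantile line is recovered because, over most of the predictor range, fewer than 25% of the observations are censored, which is exactly the tolerance the abstract describes.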

  15. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: A Quantile Regression Approach

    Science.gov (United States)

    Shafiq, M. Najeeb

    2013-01-01

    Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…

  16. Application of geographically-weighted regression analysis to assess risk factors for malaria hotspots in Keur Soce health and demographic surveillance site.

    Science.gov (United States)

    Ndiath, Mansour M; Cisse, Badara; Ndiaye, Jean Louis; Gomis, Jules F; Bathiery, Ousmane; Dia, Anta Tal; Gaye, Oumar; Faye, Babacar

    2015-11-18

    In Senegal, considerable efforts have been made to reduce malaria morbidity and mortality during the last decade. This resulted in a marked decrease of malaria cases. With the decline of malaria cases, transmission has become sparse in most Senegalese health districts. This study investigated malaria hotspots in the Keur Soce sites by using geographically-weighted regression. Because of the occurrence of hotspots, spatial modelling of malaria cases could have a considerable effect on disease surveillance. This study explored and analysed the spatial relationships between malaria occurrence and socio-economic and environmental factors in small communities in Keur Soce, Senegal, using 6 months of passive surveillance. Geographically-weighted regression was used to explore the spatial variability of relationships between malaria incidence or persistence and the selected socio-economic and human predictors. A comparison between ordinary least squares and geographically-weighted regression models was also explored. A vector dataset (spatial) of the study area at village level and statistical data (non-spatial) on confirmed malaria cases, socio-economic status (bed net use), population data (size of the household) and environmental factors (temperature, rainfall) were used in this exploratory analysis. ArcMap 10.2 and Stata 11 were used to perform the malaria hotspot analysis. From June to December, a total of 408 confirmed malaria cases were notified. The explanatory variables (household size, housing materials, sleeping rooms, sheep, and distance to breeding site) returned t values of -0.25, 2.3, 4.39, 1.25 and 2.36, respectively. The OLS global model explained about 70 % (adjusted R(2) = 0.70) of the variation in malaria occurrence, with AIC = 756.23. The geographically-weighted regression of malaria hotspots resulted in coefficient intercepts ranging from 1.89 to 6.22, with a median of 3.5. Large positive values are distributed mainly in the southeast.
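    The contrast between a single global OLS fit and geographically-weighted regression can be sketched on synthetic data. Everything below (locations, predictor, bandwidth, coefficients) is invented for illustration and is not the study's dataset; each local fit is a weighted least squares with Gaussian distance-decay weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
u = rng.uniform(0, 10, n)            # village locations along one axis (invented)
x = rng.normal(1, 0.3, n)            # predictor, e.g. household size (invented)
beta_true = 1.0 + 0.3 * u            # effect strengthens toward one end
y = beta_true * x + rng.normal(0, 0.2, n)

def local_fit(u0, bandwidth=1.5):
    """Weighted least squares at location u0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)
    X = np.column_stack([np.ones(n), x])
    W = w[:, None] * X
    return np.linalg.solve(X.T @ W, (w * y) @ X)   # [intercept, slope]

# Global OLS gives one averaged slope; GWR reveals the spatial gradient.
X = np.column_stack([np.ones(n), x])
global_slope = np.linalg.lstsq(X, y, rcond=None)[0][1]
slopes = [local_fit(u0)[1] for u0 in np.linspace(0.5, 9.5, 10)]
print(round(global_slope, 2), [round(s, 2) for s in slopes])
```

The global model hides the spatial variation that the local coefficients expose, which is the motivation for comparing OLS and GWR in the abstract.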

  17. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. Area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed by applying the regression equations to published Cmax data. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference); the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy using Cmax (i.e. end of the 30-min infusion) is amenable as a prospective tool for predicting AUCinf of dalbavancin in patients.
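    The single-time-point strategy can be sketched with invented pharmacokinetic data (the Cmax/AUC values and the 1.05 exponent below are fabricated for illustration, not the study's 21 subjects): fit a linear and a power model, then report fold differences and RMSE%, as in the abstract's assessment.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical dalbavancin-like data: 21 subjects' Cmax (mg/L) and AUCinf
cmax = rng.uniform(200, 400, 21)
auc = 60.0 * cmax ** 1.05 * rng.lognormal(0, 0.05, 21)   # invented relationship

# Linear model: AUC = a + b * Cmax
b_lin, a_lin = np.polyfit(cmax, auc, 1)
# Power model: AUC = a * Cmax**b, fitted on the log-log scale
b_pow, log_a = np.polyfit(np.log(cmax), np.log(auc), 1)
a_pow = np.exp(log_a)

pred = a_pow * cmax ** b_pow
fold = auc / pred                       # observed/predicted fold difference
rmse_pct = 100 * np.sqrt(np.mean((auc - pred) ** 2)) / np.mean(auc)
print(round(b_pow, 2), round(fold.min(), 2), round(fold.max(), 2), round(rmse_pct, 1))
```

With multiplicative noise, the power model's fold differences stay in a narrow band around 1, mirroring the kind of confinement the abstract reports.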

  18. Logistic Regression in the Identification of Hazards in Construction

    Science.gov (United States)

    Drozd, Wojciech

    2017-10-01

    The construction site and its elements create circumstances that are conducive to the formation of risks to safety during the execution of works. Analysis indicates the critical importance of these factors in the set of characteristics that describe the causes of accidents in the construction industry. This article attempts to analyse the characteristics related to the construction site, in order to indicate their importance in defining the circumstances of accidents at work. The study includes sites inspected in 2014 - 2016 by the employees of the District Labour Inspectorate in Krakow (Poland). The analysed set of detailed (disaggregated) data includes both quantitative and qualitative characteristics. The substantive task focused on classification modelling to identify hazards in construction and to indicate which of the analysed characteristics are important in an accident. In terms of methodology, the data were analysed using statistical classifiers in the form of logistic regression.
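    A minimal sketch of logistic regression as a classifier on mixed site characteristics, fitted by Newton-Raphson in plain numpy. The features, effect sizes, and accident probabilities below are invented for illustration and do not come from the inspectorate data.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Logistic regression via Newton-Raphson (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (W[:, None] * X)          # observed information (Hessian)
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(42)
n = 1000
scaffold_ok = rng.integers(0, 2, n)          # qualitative: compliant scaffolding
height = rng.normal(0, 1, n)                 # quantitative: standardized work height
X = np.column_stack([np.ones(n), scaffold_ok, height])
true_beta = np.array([-1.0, -1.2, 0.8])      # invented effects on accident odds
p = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p).astype(float)
beta = fit_logistic(X, y)
print(np.round(beta, 2))  # compliant scaffolding lowers odds, height raises them
```

The fitted coefficients recover the sign and rough magnitude of the planted effects, which is the kind of characteristic-importance reading the article draws from its model.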

  19. A comparison of Cox and logistic regression for use in genome-wide association studies of cohort and case-cohort design.

    Science.gov (United States)

    Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M

    2017-06-01

    Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work does not completely cover the GWAS setting nor extend to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given that logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies: first, analyse all SNPs with logistic regression to identify associated variants below a pre-defined P-value threshold; second, fit Cox regression (appropriately weighted in case-cohort studies) to the identified SNPs to ensure accurate estimation of the association with disease.

  20. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term singleton births from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations lower (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.
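    An odds ratio with a 95% confidence interval, of the kind quoted above, can be computed from a 2x2 table with the standard Wald interval on the log odds ratio. The counts below are invented for illustration; they are not the Scottish cohort data.

```python
import math

# Invented 2x2 table: smoking in pregnancy vs low birthweight (LBW)
#                 LBW    not LBW
a, b = 150, 850   # smokers
c, d = 50, 950    # non-smokers

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR ≈ 3.35
```

In the paper the odds ratios are adjusted via logistic regression rather than taken from a raw table, but the interval construction on the log scale is the same idea.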

  1. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
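    A minimal numpy sketch of kernel ridge regression with a Gaussian kernel and a small tuning grid, in the spirit of the paper's guidelines but not its implementation: the target function, grid values, and validation split are all invented.

```python
import numpy as np

def gauss_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between 1-D sample vectors A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit_predict(x_tr, y_tr, x_te, sigma, lam):
    """Closed-form kernel ridge: alpha = (K + lam*I)^-1 y."""
    K = gauss_kernel(x_tr, x_tr, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    return gauss_kernel(x_te, x_tr, sigma) @ alpha

rng = np.random.default_rng(3)
x = rng.uniform(-8, 8, 200)
y = np.sinc(x / np.pi) + rng.normal(0, 0.1, 200)   # noisy sin(x)/x target
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

# Small grid over bandwidth and penalty, selected on a validation split
best = min(((sigma, lam) for sigma in (0.3, 1.0, 3.0) for lam in (1e-3, 1e-1, 1.0)),
           key=lambda p: np.mean((y_va - krr_fit_predict(x_tr, y_tr, x_va, *p)) ** 2))
mse = np.mean((y_va - krr_fit_predict(x_tr, y_tr, x_va, *best)) ** 2)
print(best, round(mse, 4))
```

A single holdout split stands in here for the paper's cross-validation over small grids; the point is that bandwidth and penalty jointly control the smoothness of the fit.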

  2. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.

  3. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk.

    Science.gov (United States)

    Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W

    2015-10-01

    Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis, and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the considered example data, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Moreover, for some previously inconclusive meta-analyses, a study update might yield a statistically significant kidney injury risk increase associated with higher statin exposure. The illustrated contour approach should become a standard tool for the assessment of the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated stature and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements taken on the left side in each subject were included in the study: hand length, hand breadth, foot length and foot breadth. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in stature estimation is smaller for the regression analysis method than for the multiplication factor method, confirming that regression analysis is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
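    The two methods can be contrasted on synthetic data (the foot-length/stature relationship below is invented, not the North Indian sample). A multiplication factor forces the line through the origin, while regression also fits an intercept, so its in-sample error can never be larger.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 246
foot_len = rng.normal(24.5, 1.5, n)                    # cm, invented
stature = 60.0 + 4.0 * foot_len + rng.normal(0, 3, n)  # cm, invented relationship

# Multiplication factor method: stature ≈ MF * foot length
mf = np.mean(stature / foot_len)
err_mf = stature - mf * foot_len

# Regression method: stature ≈ a + b * foot length
b, a = np.polyfit(foot_len, stature, 1)
err_reg = stature - (a + b * foot_len)

print(round(np.std(err_mf), 2), round(np.std(err_reg), 2))  # regression error is smaller
```

Whenever the true relationship has a non-zero intercept, as stature-limb relationships typically do, the multiplication factor absorbs the intercept into the slope and pays for it with a wider error range, which matches the study's conclusion.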

  5. Spatial-Temporal Variations of Turbidity and Ocean Current Velocity of the Ariake Sea Area, Kyushu, Japan Through Regression Analysis with Remote Sensing Satellite Data

    OpenAIRE

    Yuichi Sarusawa; Kohei Arai

    2013-01-01

    A regression-analysis-based method for estimating turbidity and ocean current velocity with remote sensing satellite data is proposed. Through regression analysis of MODIS data against measured turbidity and ocean current velocity data, a regression equation that allows estimation of turbidity and ocean current velocity is obtained. With this regression equation and long-term MODIS data, turbidity and ocean current velocity trends in the Ariake Sea area are clarified. It is also confirmed tha...

  6. Econometric analysis of realised covariation: high frequency covariance, regression and correlation in financial economics

    OpenAIRE

    Ole E. Barndorff-Nielsen; Neil Shephard

    2002-01-01

    This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...

  7. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  8. Performance confirmation data acquisition system

    International Nuclear Information System (INIS)

    McAffee, D.A.; Raczka, N.T.

    1997-01-01

    As part of the Viability Assessment (VA) work, this QAP-3-9 document presents and evaluates a comprehensive set of viable concepts for collecting Performance Confirmation (PC) related data. The concepts include: monitoring subsurface repository air temperatures, humidity levels and gaseous emissions via the subsurface ventilation systems, and monitoring the repository geo-technical parameters and rock mass from bore-holes located along the perimeter main drifts and throughout a series of human-rated Observation Drifts to be located in a plane 25 meters above the plane of the emplacement drifts. A key element of this document is the development and analysis of a proposed multi-purpose Remote Inspection Gantry that would provide direct, real-time visual, thermal, and radiological monitoring of conditions inside operational emplacement drifts and close-up observations of in-situ Waste Packages. Preliminary finite-element analyses are presented that indicate the technological feasibility of operating an inspection gantry inside the operational emplacement drifts for short inspection missions lasting 2--3 hours. Overall reliability, availability, and maintainability of the PC data collection concepts are discussed. Preliminary concepts for the PC data collection network are also provided.

  9. Performance confirmation data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    McAffee, D.A.; Raczka, N.T. [Yucca Mountain Project, Las Vegas, NV (United States)

    1997-12-31

    As part of the Viability Assessment (VA) work, this QAP-3-9 document presents and evaluates a comprehensive set of viable concepts for collecting Performance Confirmation (PC) related data. The concepts include: monitoring subsurface repository air temperatures, humidity levels and gaseous emissions via the subsurface ventilation systems, and monitoring the repository geo-technical parameters and rock mass from bore-holes located along the perimeter main drifts and throughout a series of human-rated Observation Drifts to be located in a plane 25 meters above the plane of the emplacement drifts. A key element of this document is the development and analysis of a proposed multi-purpose Remote Inspection Gantry that would provide direct, real-time visual, thermal, and radiological monitoring of conditions inside operational emplacement drifts and close-up observations of in-situ Waste Packages. Preliminary finite-element analyses are presented that indicate the technological feasibility of operating an inspection gantry inside the operational emplacement drifts for short inspection missions lasting 2--3 hours. Overall reliability, availability, and maintainability of the PC data collection concepts are discussed. Preliminary concepts for the PC data collection network are also provided.

  10. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  11. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Gascón Adrià

    2017-10-01

    Full Text Available We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao’s garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.’s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
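    The plain (non-secure, floating-point) core of the approach is conjugate gradient descent on the least-squares normal equations; the paper's contribution of running it under fixed-point secure computation is beyond a sketch. All data below are synthetic.

```python
import numpy as np

def cgd_linreg(X, y, n_iter=None, tol=1e-10):
    """Solve the normal equations A b = c, with A = X^T X and c = X^T y,
    by conjugate gradient descent."""
    A, c = X.T @ X, X.T @ y
    b = np.zeros(A.shape[0])
    r = c - A @ b                        # residual of the normal equations
    p = r.copy()                         # search direction
    for _ in range(n_iter or 2 * len(c)):
        if np.linalg.norm(r) < tol * np.linalg.norm(c):
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        b += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return b

rng = np.random.default_rng(11)
X = rng.normal(size=(500, 8))
beta = rng.normal(size=8)
y = X @ beta + rng.normal(0, 0.1, 500)
b = cgd_linreg(X, y)
print(np.round(b, 3))   # matches the ordinary least-squares solution
```

In exact arithmetic CG converges in at most as many iterations as there are features, which is what makes it attractive for a protocol where every arithmetic round is expensive.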

  12. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
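    The variance reduction that motivates ridge regression under multicollinearity can be shown with a small simulation; the collinearity level, penalty, and simulation sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, n_sims = 100, 1.0, 200
ols_b1, ridge_b1 = [], []
for _ in range(n_sims):
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(0, 0.01, n)       # nearly collinear second predictor
    X = np.column_stack([x1, x2])
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(0, 1, n)
    ols = np.linalg.lstsq(X, y, rcond=None)[0]
    ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)  # (X'X + lam*I)^-1 X'y
    ols_b1.append(ols[0])
    ridge_b1.append(ridge[0])
# Across repeated samples, the OLS coefficient swings wildly; ridge is stable.
print(round(np.std(ols_b1), 2), round(np.std(ridge_b1), 2))
```

The stabilization comes at the cost of bias, which is the trade-off the abstract's "not a problem-free technique" refers to.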

  13. Quantile regression for the statistical analysis of immunological data with many non-detects

    NARCIS (Netherlands)

    Eilers, P.H.C.; Roder, E.; Savelkoul, H.F.J.; Wijk, van R.G.

    2012-01-01

    Background Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical

  14. Quantile regression for the statistical analysis of immunological data with many non-detects

    NARCIS (Netherlands)

    P.H.C. Eilers (Paul); E. Röder (Esther); H.F.J. Savelkoul (Huub); R. Gerth van Wijk (Roy)

    2012-01-01

    textabstractBackground: Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced

  15. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    Science.gov (United States)

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
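    The family-level binomial test can be written down exactly with stdlib tools: under the null, the number of medicinal species in a family of n species is Binomial(n, p), where p is the flora-wide medicinal proportion. The counts below are invented, not the Shuar data.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Invented example: a family contributes 12 medicinal species out of 20,
# while 30% of the flora overall is medicinal.
p_value = binom_sf(12, 20, 0.30)
print(p_value)   # a small p suggests over-representation of this family

# Sanity check against a hand-computable case: (45 + 10 + 1) / 1024
print(binom_sf(8, 10, 0.5))
```

Running one such test per family, with a correction for the 115 comparisons, is the structure behind the finding that only 10 families depart from the null.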

  16. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
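    The core idea, planar vectors encoded as complex numbers and coefficients solved as complex (i.e. vector) quantities, can be sketched directly with complex least squares; numpy's solver handles the complex case, so the isomorphism to a real system happens internally. The data and coefficients below are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
# Independent vector variables encoded as complex numbers (x + iy)
x1 = rng.normal(size=n) + 1j * rng.normal(size=n)
x2 = rng.normal(size=n) + 1j * rng.normal(size=n)
# True coefficients are themselves complex: each applies a rotation + scaling
c1, c2 = 0.8 + 0.6j, -0.5 + 1.0j
noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = c1 * x1 + c2 * x2 + noise        # dependent vector variable

X = np.column_stack([x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # estimates of the complex (vector) coefficients
```

A scalar-coefficient multivariate regression could only rescale each component; the complex coefficients recovered here also rotate the input vectors, which is the property the abstract highlights.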

  17. A note on the use of multiple linear regression in molecular ecology.

    Science.gov (United States)

    Frasier, Timothy R

    2016-03-01

    Multiple linear regression analyses (also often referred to as generalized linear models--GLMs, or generalized linear mixed models--GLMMs) are widely used in the analysis of data in molecular ecology, often to assess the relative effects of genetic characteristics on individual fitness or traits, or how environmental characteristics influence patterns of genetic differentiation. However, the coefficients resulting from multiple regression analyses are sometimes misinterpreted, which can lead to incorrect interpretations and conclusions within individual studies, and can propagate to wider-spread errors in the general understanding of a topic. The primary issue revolves around the interpretation of coefficients for independent variables when interaction terms are also included in the analyses. In this scenario, the coefficients associated with each independent variable are often interpreted as the independent effect of each predictor variable on the predicted variable. However, this interpretation is incorrect. The correct interpretation is that these coefficients represent the effect of each predictor variable on the predicted variable when all other predictor variables are zero. This difference may sound subtle, but the ramifications cannot be overstated. Here, my goals are to raise awareness of this issue, to demonstrate and emphasize the problems that can result and to provide alternative approaches for obtaining the desired information. © 2015 John Wiley & Sons Ltd.
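
    The coefficient-interpretation point can be verified numerically: in a model with an x1*x2 interaction, the fitted coefficient on x1 is its slope when x2 equals zero, and centering x2 shifts that coefficient to the slope at the mean of x2. A minimal simulation with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(loc=2.0, size=n)            # x2 is centred far from zero
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.normal(scale=0.1, size=n)

def fit(u, v, y):
    """OLS fit of y on [1, u, v, u*v]; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(u), u, v, u * v])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b = fit(x1, x2, y)                       # b[1]: slope of x1 *when x2 == 0*
b_centred = fit(x1, x2 - x2.mean(), y)   # b[1]: slope of x1 at the mean of x2
```

    The uncentred coefficient stays near the generating value 0.5 (the x2 = 0 slope), while the centred one moves to roughly 0.5 + 0.4 * mean(x2), the "typical" effect most readers have in mind.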

  18. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  19. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Directory of Open Access Journals (Sweden)

    Minh Vu Trieu

    2017-03-01

    Full Text Available This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R²) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  20. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    Science.gov (United States)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R²) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.

  1. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    Science.gov (United States)

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
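
    A regression-based norm of this kind reduces to an algebraic equation plus a standardized residual. The sketch below uses made-up coefficients purely to show the mechanics, not values from the study:

```python
# Illustrative regression-based norm; every coefficient below is
# hypothetical, not taken from the cited study.
def expected_score(age, years_edu, est_iq,
                   b0=50.0, b_age=-0.2, b_edu=0.8, b_iq=0.3):
    """Predicted test score from demographics plus estimated premorbid IQ."""
    return b0 + b_age * age + b_edu * years_edu + b_iq * (est_iq - 100)

def rbn_z(observed, age, years_edu, est_iq, resid_sd=10.0):
    """Standardized deviation of an observed score from its RBN expectation."""
    return (observed - expected_score(age, years_edu, est_iq)) / resid_sd

z = rbn_z(observed=45, age=70, years_edu=12, est_iq=110)   # ≈ -0.36
```

    Adding the IQ term changes the expectation against which the observed score is benchmarked, which is how the incremental variance described above translates into different z-scores.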

  2. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    Full Text Available The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples of SOCR Analyses in the non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.

  3. Can Confirmation Measures Reflect Statistically Sound Dependencies in Data? The Concordance-based Assessment

    Directory of Open Access Journals (Sweden)

    Susmaga Robert

    2018-03-01

    Full Text Available The paper considers particular interestingness measures, called confirmation measures (also known as Bayesian confirmation measures, used for the evaluation of “if evidence, then hypothesis” rules. The agreement of such measures with a statistically sound (significant dependency between the evidence and the hypothesis in data is thoroughly investigated. The popular confirmation measures were not defined to possess such form of agreement. However, in error-prone environments, potential lack of agreement may lead to undesired effects, e.g. when a measure indicates either strong confirmation or strong disconfirmation, while in fact there is only weak dependency between the evidence and the hypothesis. In order to detect and prevent such situations, the paper employs a coefficient allowing to assess the level of dependency between the evidence and the hypothesis in data, and introduces a method of quantifying the level of agreement (referred to as a concordance between this coefficient and the measure being analysed. The concordance is characterized and visualised using specialized histograms, scatter-plots, etc. Moreover, risk-related interpretations of the concordance are introduced. Using a set of 12 confirmation measures, the paper presents experiments designed to establish the actual concordance as well as other useful characteristics of the measures.

  4. Synchronized multiple regression of diagnostic radiation-induced rather than spontaneous: disseminated primary intracranial germinoma in a woman: a case report

    Directory of Open Access Journals (Sweden)

    Natsumeda Manabu

    2011-01-01

    Full Text Available Abstract Introduction Examples of the spontaneous regression of primary intracranial germinomas can be found in the literature. We present the case of a patient with disseminated lesions of primary intracranial germinoma which synchronously shrunk following diagnostic irradiation. We will discuss whether this regression was spontaneous or radiation-induced. Case presentation A 43-year-old Japanese woman presented to our hospital complaining of memory problems over a period of one year and blurred vision over a period of three months. Following magnetic resonance imaging, she was found to have a massive lesion in the third ventricle and small lesions in the pineal region, fourth ventricle, and in the anterior horn of the left lateral ventricle. Prior to an open biopsy to confirm the pathology of the lesions, she underwent a single cranial computed tomography scan and a single cranial digital subtraction angiography for a transcranial biopsy. Fourteen days after the first magnetic resonance image - 12 and eight days after the computed tomography scan and digital subtraction angiography, respectively - a pre-operative magnetic resonance image was taken, which showed a notable synchronous shrinkage of the third ventricle tumor, as well as shrinkage of the lesions in the pineal region and in the fourth ventricle. She did not undergo steroid administration until after a biopsy that confirmed the pathological diagnosis of pure germinoma. She then underwent whole craniospinal irradiation and went into a complete remission. Conclusions In our case report, we state that diagnostic radiation can induce the regression of germinomas; this is the most reasonable explanation for the synchronous multiple regression observed in this case of germinoma. 
Clinicians should keep this non-spontaneous regression in mind and monitor germinoma lesions with minimal exposure to diagnostic radiation before diagnostic confirmation, and also before radiation treatment with or

  5. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
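
    Poisson regression with a log link can be fit by iteratively reweighted least squares in a few lines; the sketch below recovers known coefficients from simulated counts (it omits the overdispersion checks the article discusses):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression (log link) fitted by iteratively reweighted
    least squares, which is Newton-Raphson for this model."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                       # fitted means
        z = X @ beta + (y - mu) / mu                # working response
        W = mu                                      # Poisson weights: Var = mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated count outcome with a known log-linear covariate effect
rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.7 * x))
X = np.column_stack([np.ones(n), x])
beta_hat = poisson_irls(X, y)
```

    When the variance of the counts clearly exceeds the mean, the same design matrix would instead be passed to a quasi-Poisson or negative binomial fit, as the abstract recommends.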

  6. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  7. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
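
    The two proposed modifications, summing past cases as an immune-population proxy and including the logarithm of lagged counts, can be sketched as a design-matrix construction. The epidemic below is simulated, and plain OLS on log counts stands in for the quasi-Poisson model the authors would actually use:

```python
import numpy as np

def build_design(cases, weather, lag=1, window=8):
    """Rows: intercept, same-day weather, log lagged count (contagion
    control) and sum of recent past cases (immune-population proxy)."""
    X, z = [], []
    for t in range(window, len(cases)):
        X.append([1.0,
                  weather[t],
                  np.log(cases[t - lag] + 1),
                  cases[t - window:t].sum()])
        z.append(np.log(cases[t] + 1))
    return np.array(X), np.array(z)

# Simulated epidemic: counts depend on weather and on yesterday's counts
rng = np.random.default_rng(3)
T = 300
weather = rng.normal(size=T)
cases = np.zeros(T)
cases[0] = 10
for t in range(1, T):
    lam = np.exp(1.0 + 0.3 * weather[t] + 0.5 * np.log(cases[t - 1] + 1))
    cases[t] = rng.poisson(lam)

X, z = build_design(cases, weather)
beta = np.linalg.lstsq(X, z, rcond=None)[0]   # beta[1]: weather effect
```

    In a real analysis the lag structure would be explored with distributed lag non-linear models rather than a single fixed lag.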

  8. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction and insight characterisation. Building the TAT indicator multiple linear regression predictor and clustering techniques were used for improving corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to such model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
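
    Using the reported coefficients, the fitted model can be written as a one-line predictor. The intercept b0 and the units of the inputs are not given in the abstract, so both are assumptions here:

```python
# Sketch of the reported TAT predictor. The published coefficients are
# CE response time 0.415, stock response time 0.734, priority 0.21 and
# service time 0.06; the intercept b0 and the time units are assumptions.
def predict_tat(ce_rt, stock_rt, priority, service_time, b0=0.0):
    return (b0 + 0.415 * ce_rt + 0.734 * stock_rt
            + 0.21 * priority + 0.06 * service_time)

tat = predict_tat(ce_rt=2.0, stock_rt=5.0, priority=1, service_time=3.0)
```

    The relative coefficient sizes make the abstract's point directly: stock and CE response times dominate the predicted turnaround time.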

  9. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    Science.gov (United States)

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
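
    For comparison with the single-model EMC framework, the classical difference-of-coefficients decomposition for one mediator can be sketched as two regressions (this is the textbook Baron-Kenny-style computation, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                         # exposure
m = 0.6 * x + rng.normal(size=n)               # mediator (a-path = 0.6)
y = 0.4 * x + 0.5 * m + rng.normal(size=n)     # outcome (direct = 0.4, b = 0.5)

def ols_slope(cols, y, which):
    """OLS with intercept; returns the coefficient at position `which`."""
    X1 = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X1, y, rcond=None)[0][which]

c_total = ols_slope([x], y, 1)        # total effect of x on y
c_direct = ols_slope([x, m], y, 1)    # direct effect, mediator held fixed
indirect = c_total - c_direct         # mediated (indirect) effect ≈ 0.6 * 0.5
```

    The appeal of the single-model framework described above is that it delivers these same quantities, plus analytical variances, from one fit instead of a system of equations.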

  10. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

    Science.gov (United States)

    Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

    2017-03-28

    This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥ 12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥ 40 years, mean BMI ≥ 30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤6 months) and long term (≥12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour predicted the effect at short and long term. The total number of BCTs in both intervention arms and using the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change; and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy supportive and person-centred methods as in Motivational Interviewing, the BCTs goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between study variations in effects at long term. There are similarities, but also differences in effective BCTs promoting change in healthy eating and physical activity and
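
    Pooled effect sizes and I² values of the kind quoted above come from standard meta-analytic formulas; a minimal fixed-effect version (with invented study data, and without the meta-regression step) looks like this:

```python
import numpy as np

def pool_and_i2(es, var):
    """Inverse-variance (fixed-effect) pooled effect size and Higgins' I²."""
    w = 1.0 / var
    pooled = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - pooled) ** 2)          # Cochran's Q
    df = len(es) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, i2

# Hypothetical study-level results, not the review's actual data
es = np.array([0.10, 0.35, 0.22, 0.50, 0.05])
var = np.array([0.010, 0.020, 0.015, 0.030, 0.012])
pooled, i2 = pool_and_i2(es, var)
```

    A meta-regression then regresses the study-level effect sizes on moderators such as the number of BCTs, weighting each study by the inverse of its variance.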

  11. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  12. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    Science.gov (United States)

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
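
    The cross-sectional versus longitudinal contrast is easy to reproduce by simulation: give older cohorts lower starting values than the within-subject decline alone implies, and the pooled cross-sectional slope overstates the true per-year decline, as in the study. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, true_slope = 300, -0.95          # within-subject decline per year

age0 = rng.uniform(30, 80, n_subj)       # baseline ages
# Cohort effect: older cohorts start lower than the within-subject
# decline alone would predict, which biases the cross-sectional slope.
icept = 160.0 - 1.5 * age0 + rng.normal(scale=5, size=n_subj)

rows = []
for i in range(n_subj):
    for t in np.linspace(0.0, 6.0, 4):   # four visits over six years
        crcl = icept[i] + true_slope * t + rng.normal(scale=2)
        rows.append([age0[i] + t, crcl, i, t])
data = np.array(rows)

# Cross-sectional analysis: pool everything, regress CrCl on attained age
A = np.column_stack([np.ones(len(data)), data[:, 0]])
cs_slope = np.linalg.lstsq(A, data[:, 1], rcond=None)[0][1]

# Longitudinal analysis: average the per-subject slopes over follow-up
lt_slope = np.mean([np.polyfit(data[data[:, 2] == i, 3],
                               data[data[:, 2] == i, 1], 1)[0]
                    for i in range(n_subj)])
```

    Averaging per-subject slopes is a simplified stand-in for the random coefficient model the study used, but it shows the same qualitative gap between the two estimates.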

  13. Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.

    Science.gov (United States)

    Prunier, J G; Colyn, M; Legendre, X; Nimon, K F; Flamand, M C

    2015-01-01

    Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA allows identifying the location and magnitude of multicollinearity, revealing spurious correlations and thus thoroughly improving the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses. © 2014 John Wiley & Sons Ltd.
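
    Commonality analysis for two predictors reduces to differences of R² values from the full and one-predictor models; the simulation below builds in a shared driver so the common component dominates (a two-predictor sketch only; real commonality analyses enumerate all 2^k - 1 components):

```python
import numpy as np

def r2(X, y):
    """R² of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(6)
n = 2000
driver = rng.normal(size=n)                 # latent cause of collinearity
x1 = driver + 0.5 * rng.normal(size=n)      # two collinear predictors
x2 = driver + 0.5 * rng.normal(size=n)
y = x1 + x2 + rng.normal(size=n)

r2_full = r2(np.column_stack([x1, x2]), y)
r2_x1, r2_x2 = r2(x1[:, None], y), r2(x2[:, None], y)

unique_x1 = r2_full - r2_x2                 # variance only x1 explains
unique_x2 = r2_full - r2_x1
common = r2_full - unique_x1 - unique_x2    # shared (common) component
```

    A large common component with small unique components is exactly the multicollinearity signature the abstract warns can be misread as two independent effects.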

  14. Regression analysis of growth responses to water depth in three wetland plant species

    DEFF Research Database (Denmark)

    Sorrell, Brian K; Tanner, Chris C; Brix, Hans

    2012-01-01

    depths from 0 – 0.5 m. Morphological and growth responses to depth were followed for 54 days before harvest, and then analysed by repeated measures analysis of covariance, and non-linear and quantile regression analysis (QRA), to compare flooding tolerances. Principal results Growth responses to depth...

  15. Time-trend of melanoma screening practice by primary care physicians: A meta-regression analysis

    OpenAIRE

    Valachis, Antonis; Mauri, Davide; Karampoiki, Vassiliki; Polyzos, Nikolaos P; Cortinovis, Ivan; Koukourakis, Georgios; Zacharias, Georgios; Xilomenos, Apostolos; Tsappi, Maria; Casazza, Giovanni

    2009-01-01

    Objective To assess whether the proportion of primary care physicians implementing full body skin examination (FBSE) to screen for melanoma changed over time. Methods Meta-regression analyses of available data. Data Sources: MEDLINE, ISI, Cochrane Central Register of Controlled Trials. Results Fifteen studies surveying 10,336 physicians were included in the analyses. Overall, 15%–82% of them reported to perform FBSE to screen for melanoma. The proportion of physicians using FBSE screening ten...

  16. Spontaneous regression of primary cutaneous diffuse large B-cell lymphoma, leg type with significant T-cell immune response

    Directory of Open Access Journals (Sweden)

    Paul M. Graham, DO

    2018-05-01

    Full Text Available We report a case of histologically confirmed primary cutaneous diffuse large B-cell lymphoma, leg type (PCDLBCL-LT that subsequently underwent spontaneous regression in the absence of systemic treatment. The case showed an atypical lymphoid infiltrate that was CD20+ and MUM-1+ and CD10–. A subsequent biopsy of the spontaneously regressed lesion showed fibrosis associated with a lymphocytic infiltrate comprising reactive T cells. PCDLBCL-LT is a cutaneous B-cell lymphoma with a poor prognosis, which is usually treated with chemotherapy. We describe a case of clinical and histologic spontaneous regression in a patient with PCDLBCL-LT who had a negative systemic workup but a recurrence over a year after his initial presentation. Key words: B cell, lymphoma, primary cutaneous diffuse large B-cell lymphoma, leg type, regression

  17. Multivariate differential analyses of adolescents' experiences of aggression in families

    Directory of Open Access Journals (Sweden)

    Chris Myburgh

    2011-01-01

    Full Text Available Aggression is part of South African society and has implications for the mental health of persons living in South Africa. If parents are aggressive, adolescents are also likely to be aggressive, and that will impact negatively on their mental health. In this article the nature and extent of adolescents' experiences of aggression and aggressive behaviour in the family are investigated. A deductive explorative quantitative approach was followed. Aggression is reasoned to be dependent on aspects such as self-concept, moral reasoning, communication, frustration tolerance and family relationships. To analyse the data from questionnaires of 101 families (95 adolescents, 95 mothers and 91 fathers), Cronbach Alpha, various consecutive first and second order factor analyses, correlations, multiple regression, MANOVA, ANOVA and Scheffé/Dunnett tests were used. It was found that aggression correlated negatively with the independent variables; and the correlations between adolescents and their parents were significant. Regression analyses indicated that different predictors predicted aggression. Furthermore, differences between adolescents and their parents indicated that the experienced levels of aggression between adolescents and their parents were small. Implications for education are given.

  18. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed

  19. A random regression model in analysis of litter size in pigs | Luković ...

    African Journals Online (AJOL)

    Dispersion parameters for number of piglets born alive (NBA) were estimated using a random regression model (RRM). Two data sets of litter records from the Nemščak farm in Slovenia were used for analyses. The first dataset (DS1) included records from the first to the sixth parity. The second dataset (DS2) was extended ...

  20. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by stratified and cluster sampling; relevant variables were measured and, for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models were used in SPSS software version 21. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
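
    Both link functions can be fit with the same Fisher-scoring loop; the sketch below simulates logistic data and recovers the familiar result that logit slopes exceed probit slopes by a factor of roughly 1.6 (standard-library erf stands in for SciPy's normal CDF):

```python
import numpy as np
from math import erf

def ncdf(z):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + np.array([erf(v / 2 ** 0.5) for v in z]))

def npdf(z):
    return np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi)

def fit_binary(X, y, link="logit", n_iter=25):
    """Fisher scoring for a binary-response GLM with logit or probit link."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        if link == "logit":
            p = 1.0 / (1.0 + np.exp(-eta))
            w = p * (1 - p)                     # IRLS weights
            grad = X.T @ (y - p)
        else:                                   # probit
            p = np.clip(ncdf(eta), 1e-10, 1 - 1e-10)
            d = npdf(eta)
            w = d * d / (p * (1 - p))
            grad = X.T @ ((y - p) * d / (p * (1 - p)))
        beta = beta + np.linalg.solve(X.T @ (w[:, None] * X), grad)
    return beta

rng = np.random.default_rng(7)
n = 4000
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(0.5 - 1.2 * x)))   # logistic truth
X = np.column_stack([np.ones(n), x])
b_logit = fit_binary(X, y, "logit")
b_probit = fit_binary(X, y, "probit")
```

    The two fits typically produce near-identical predicted probabilities and ROC curves, which is why, as the abstract concludes, interpretability (odds ratios) is the usual reason to prefer the logistic model.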

  1. Analytical approach for confirming the achievement of LMFBR reliability goals

    International Nuclear Information System (INIS)

    Ingram, G.E.; Elerath, J.G.; Wood, A.P.

    1981-01-01

    The approach, recommended by GE-ARSD, for confirming the achievement of LMFBR reliability goals relies upon a comprehensive understanding of the physical and operational characteristics of the system and the environments to which the system will be subjected during its operational life. This kind of understanding is required for an approach based on system hardware testing or analyses, as recommended in this report. However, for a system as complex and expensive as the LMFBR, an approach which relies primarily on system hardware testing would be prohibitive both in cost and time to obtain the required system reliability test information. By using an analytical approach, results of tests (reliability and functional) at a low level within the specific system of interest, as well as results from other similar systems can be used to form the data base for confirming the achievement of the system reliability goals. This data, along with information relating to the design characteristics and operating environments of the specific system, will be used in the assessment of the system's reliability

  2. Plutonian Moon confirmed

    Science.gov (United States)

    In late February, two separate observations confirmed the 1978 discovery by U.S. Naval Observatory scientist James W. Christy of a moon orbiting the planet Pluto. According to the U.S. Naval Observatory, these two observations were needed before the International Astronomical Society (IAS) would officially recognize the discovery.Two types of observations of the moon, which was named Charon after the ferryman in Greek mythology who carried the dead to Pluto's realm, were needed for confirmation: a transit, in which the moon passes in front of Pluto, and an occultation, in which the moon passes behind the planet. These two phenomena occur only during an 8-year period every 124 years that had been calculated to take place during 1984-1985. Both events were observed in late February.

  3. Cystatin C: a novel predictor of outcome in suspected or confirmed non-ST-elevation acute coronary syndrome.

    Science.gov (United States)

    Jernberg, Tomas; Lindahl, Bertil; James, Stefan; Larsson, Anders; Hansson, Lars-Olof; Wallentin, Lars

    2004-10-19

    Patients with suspected or confirmed non-ST-elevation acute coronary syndrome (ACS) constitute a large and heterogeneous group. Measurements of renal function such as serum creatinine and estimation of creatinine clearance carry independent prognostic information in this population. Cystatin C is a new and better marker of renal function than creatinine. The aim was therefore to evaluate the prognostic value of cystatin C in this population. Cystatin C was analyzed on admission in 726 patients admitted because of symptoms suggestive of an acute coronary syndrome and no ST-segment elevations. Patients were followed up with regard to death and myocardial infarction for a median of 40 and 6 months, respectively. The median cystatin C level was 1.00 mg/L (25th to 75th percentile, 0.83 to 1.24 mg/L). The risk of death during follow-up increased with increasing levels of cystatin C. In the group with non-ST-elevation ACS, patients in the second, third, and fourth quartiles had a relative risk of subsequent death of 1.8 (95% CI, 0.6 to 5.3), 3.2 (95% CI, 1.2 to 8.5), and 11.7 (95% CI, 4.7 to 29.3) compared with the lowest quartile. In Cox regression models including well-known predictors of outcome, cystatin C level was independently associated with mortality but not with the risk of subsequent myocardial infarction. In a comparison of the markers of renal function in receiver-operating curve analyses, cystatin C had the best ability to discriminate between survivors and nonsurvivors. A single measurement of cystatin C will substantially improve the early risk stratification of patients with suspected or confirmed non-ST-elevation ACS.

  4. Association between response rates and survival outcomes in patients with newly diagnosed multiple myeloma. A systematic review and meta-regression analysis.

    Science.gov (United States)

    Mainou, Maria; Madenidou, Anastasia-Vasiliki; Liakos, Aris; Paschos, Paschalis; Karagiannis, Thomas; Bekiari, Eleni; Vlachaki, Efthymia; Wang, Zhen; Murad, Mohammad Hassan; Kumar, Shaji; Tsapas, Apostolos

    2017-06-01

    We performed a systematic review and meta-regression analysis of randomized control trials to investigate the association between response to initial treatment and survival outcomes in patients with newly diagnosed multiple myeloma (MM). Response outcomes included complete response (CR) and the combined outcome of CR or very good partial response (VGPR), while survival outcomes were overall survival (OS) and progression-free survival (PFS). We used random-effect meta-regression models and conducted sensitivity analyses based on definition of CR and study quality. Seventy-two trials were included in the systematic review, 63 of which contributed data in meta-regression analyses. There was no association between OS and CR in patients without autologous stem cell transplant (ASCT) (regression coefficient: .02, 95% confidence interval [CI] -0.06, 0.10), in patients undergoing ASCT (-.11, 95% CI -0.44, 0.22) and in trials comparing ASCT with non-ASCT patients (.04, 95% CI -0.29, 0.38). Similarly, OS did not correlate with the combined metric of CR or VGPR, and no association was evident between response outcomes and PFS. Sensitivity analyses yielded similar results. This meta-regression analysis suggests that there is no association between conventional response outcomes and survival in patients with newly diagnosed MM. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
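    The meta-regression step, regressing a trial-level survival effect on a response outcome across trials, can be sketched as inverse-variance weighted least squares. All numbers below are synthetic (a true slope of zero mimics the paper's null finding; nothing here is the paper's data, and the paper itself uses random-effects models):

    ```python
    import random

    random.seed(1)

    # Hypothetical per-trial data: x = difference in CR rate between arms,
    # y = observed log hazard ratio for OS, se = its standard error.
    trials = []
    for _ in range(20):
        x = random.uniform(0.0, 0.3)
        se = random.uniform(0.05, 0.2)
        y = 0.0 * x + random.gauss(0.0, se)   # true slope 0: response does not predict OS
        trials.append((x, y, se))

    # Weighted least squares with inverse-variance weights w = 1/se^2.
    w = [1 / se**2 for _, _, se in trials]
    sw = sum(w)
    xbar = sum(wi * x for wi, (x, _, _) in zip(w, trials)) / sw
    ybar = sum(wi * y for wi, (_, y, _) in zip(w, trials)) / sw
    sxx = sum(wi * (x - xbar) ** 2 for wi, (x, _, _) in zip(w, trials))
    sxy = sum(wi * (x - xbar) * (y - ybar) for wi, (x, y, _) in zip(w, trials))
    slope = sxy / sxx
    se_slope = sxx ** -0.5    # standard error of the WLS slope
    print(f"slope = {slope:.3f} +/- {1.96 * se_slope:.3f}")
    ```

    The reported regression coefficients with confidence intervals (e.g., .02, 95% CI -0.06 to 0.10) are exactly this kind of slope-with-interval summary.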

  5. Application of support vector regression (SVR) for stream flow prediction on the Amazon basin

    CSIR Research Space (South Africa)

    Du Toit, Melise

    2016-10-01

    Full Text Available A support vector regression technique is used in this study to analyse historical stream flow occurrences and predict stream flow values for the Amazon basin. Up to twelve-month predictions are made and the coefficient of determination and root-mean-square error are used...

  6. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five-year data (1998-2002) of average humidity, rainfall, and maximum and minimum temperatures. Regression analysis time series (RATS) relationships were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) correlations were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test applied to MLR and MLRATS indicated homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year rainfall and humidity data, which showed that the variances in the rainfall data were not homogeneous while those in the humidity data were. Our results on regression and regression analysis time series show the best fit to prediction modeling on climatic data of Quetta, Pakistan. (author)
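    The core of such a trend analysis, fitting a least-squares line to a climate series and reporting the coefficient of determination, can be sketched with Python's stdlib statistics module. The temperature values below are hypothetical, not the Quetta data:

    ```python
    import statistics

    # Hypothetical monthly mean-temperature series: an upward trend plus
    # a small deterministic periodic disturbance (period 5, zero mean).
    months = list(range(60))                        # five years of monthly data
    temps = [15 + 0.1 * m + ((m * 7) % 5 - 2) for m in months]

    # Least-squares linear trend and coefficient of determination R^2.
    fit = statistics.linear_regression(months, temps)
    r2 = statistics.correlation(months, temps) ** 2
    print(f"trend = {fit.slope:.3f} deg/month, R^2 = {r2:.3f}")
    ```

    Polynomial and multiple-regression variants (the PRATS and MLRATS of the abstract) extend this by adding powers of time or further predictors before the same goodness-of-fit summary is computed.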

  7. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
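    The core search in logic regression, finding a Boolean combination of binary predictors that best explains the outcome, can be illustrated at toy scale. This sketch exhausts only two-variable AND/OR terms on synthetic SNP-style data, rather than the full adaptive search of the method:

    ```python
    import itertools
    import random

    random.seed(2)

    # Synthetic binary predictors (e.g., SNP indicators); the outcome depends
    # on X0 AND X2, with 10% of labels replaced by coin flips as noise.
    n, p = 300, 5
    X = [[random.randint(0, 1) for _ in range(p)] for _ in range(n)]
    y = [int(row[0] and row[2]) if random.random() < 0.9 else random.randint(0, 1)
         for row in X]

    # Pick the Boolean expression that minimises misclassification error.
    best = None
    for i, j in itertools.combinations(range(p), 2):
        for name, op in (("AND", lambda a, b: a and b), ("OR", lambda a, b: a or b)):
            err = sum(int(op(r[i], r[j])) != yi for r, yi in zip(X, y)) / n
            if best is None or err < best[0]:
                best = (err, f"X{i} {name} X{j}")
    print(best)   # expected to recover the planted term X0 AND X2
    ```

    In full logic regression the expression space is searched adaptively (e.g., by simulated annealing) and the logic terms are embedded in a generalized linear model; the exhaustive two-variable scan above only illustrates the objective.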

  8. Practical Aspects of Log-ratio Coordinate Representations in Regression with Compositional Response

    Directory of Open Access Journals (Sweden)

    Fišerová Eva

    2016-10-01

    Full Text Available Regression analysis with compositional response, observations carrying relative information, is an appropriate tool for statistical modelling in many scientific areas (e.g. medicine, geochemistry, geology, economics). Even though this technique has been recently intensively studied, there are still some practical aspects that deserve to be further analysed. Here we discuss the issue related to the coordinate representation of compositional data. It is shown that the linear relation between particular orthonormal coordinates and centred log-ratio coordinates can be utilized to simplify the computations concerning regression parameter estimation and hypothesis testing. To enhance interpretation of regression parameters, the orthogonal coordinates and their relation with orthonormal and centred log-ratio coordinates are presented. Further we discuss the quality of prediction in different coordinate systems. It is shown that the mean squared error (MSE) for orthonormal coordinates is less than or equal to the MSE for log-transformed data. Finally, an illustrative real-world example from geology is presented.
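    The centred log-ratio (clr) coordinates discussed above are straightforward to compute; a minimal sketch with a made-up three-part composition:

    ```python
    import math

    def clr(composition):
        """Centred log-ratio coordinates of a composition (parts of a whole)."""
        logs = [math.log(x) for x in composition]
        g = sum(logs) / len(logs)          # log of the geometric mean
        return [l - g for l in logs]

    # A hypothetical three-part geochemical composition (percentages summing to 100).
    sample = [60.0, 30.0, 10.0]
    coords = clr(sample)
    print([round(c, 3) for c in coords])   # clr coordinates sum to zero by construction
    ```

    Orthonormal log-ratio coordinates are linear transformations of these clr coordinates, which is the relation the paper exploits to simplify estimation and testing.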

  9. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  10. Analysis of Palm Oil Production, Export, and Government Consumption to Gross Domestic Product of Five Districts in West Kalimantan by Panel Regression

    Science.gov (United States)

    Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.

    2017-04-01

    Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, consisting of cross-section and time series components, and panel regression is a tool that can be utilised to analyse such data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM); the model is chosen based on the results of the Chow test, the Hausman test and the Lagrange multiplier test. This research uses panel regression to analyse the effect of palm oil production, export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang. Based on the results of the analyses, it is concluded that the REM, whose adjusted coefficient of determination is 0.823, is the best model in this case. Also, according to the results, only export and government consumption influence the GDP of the districts.
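    The Fixed Effect Model mentioned above can be estimated with the standard "within" transformation. A stdlib-only sketch on synthetic panel data (hypothetical districts and a made-up true slope of 2.0, not the paper's dataset):

    ```python
    import random

    random.seed(3)

    # Hypothetical panel: 5 districts ("entities") observed over 10 years.
    # gdp = district effect + 2.0 * production + noise
    entities, years = 5, 10
    alpha = [random.uniform(-5, 5) for _ in range(entities)]
    data = []
    for e in range(entities):
        for t in range(years):
            x = random.uniform(0, 10)
            y = alpha[e] + 2.0 * x + random.gauss(0, 0.5)
            data.append((e, x, y))

    # Fixed Effect Model via the "within" transformation: demean x and y
    # within each entity, then run pooled OLS on the demeaned data.
    def entity_means(rows, idx):
        sums, counts = {}, {}
        for row in rows:
            sums[row[0]] = sums.get(row[0], 0.0) + row[idx]
            counts[row[0]] = counts.get(row[0], 0) + 1
        return {e: sums[e] / counts[e] for e in sums}

    mx, my = entity_means(data, 1), entity_means(data, 2)
    num = sum((x - mx[e]) * (y - my[e]) for e, x, y in data)
    den = sum((x - mx[e]) ** 2 for e, x, y in data)
    print(f"FEM slope = {num / den:.3f}")   # close to the true slope 2.0
    ```

    The Chow, Hausman and Lagrange multiplier tests then compare this FEM fit against the CEM and REM alternatives.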

  11. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.

  12. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface...... for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by...... functionals. The software presented here is implemented in the riskRegression package....

  13. Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2004-01-01

    This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing...... the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities....

  14. Influence of Malnutrition on Adverse Outcome in Children with Confirmed or Probable Viral Encephalitis: A Prospective Observational Study

    Directory of Open Access Journals (Sweden)

    Priyanka Singh

    2015-01-01

    Full Text Available A prospective observational study was conducted in a tertiary care teaching hospital from August 2008 to August 2009 to explore the independent predictors of adverse outcome in patients with confirmed/probable viral encephalitis. The primary outcome variable was the incidence of adverse outcomes, defined as death or severe neurological deficit such as loss of speech, motor deficits, behavioural problems, blindness, and cognitive impairment. Patients with confirmed or probable viral encephalitis were classified into two groups based on their Z-score of weight-for-age as per WHO growth charts. Group I. Patients with confirmed or probable viral encephalitis with weight-for-age (W/A) Z-scores below −2SD were classified as undernourished. Group II. Patients with confirmed or probable viral encephalitis were classified as having normal nutritional status (weight-for-age Z-score > −2SD). A total of 114 patients were classified as confirmed or probable viral encephalitis based on detailed investigations. On multivariate logistic regression, undernutrition (adjusted OR: 5.05; 95% CI: 1.92 to 13.44) and requirement of ventilation (adjusted OR: 6.75; 95% CI: 3.63 to 77.34) were independent predictors of adverse outcomes in these patients. Thus, the results from our study highlight that the association between undernutrition and adverse outcome could be extended to patients with confirmed/probable viral encephalitis.

  15. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsitivity, but often entails both. This paper critically reviews the phenomena of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  16. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  17. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...

  18. Nonparametric regression using the concept of minimum energy

    International Nuclear Information System (INIS)

    Williams, Mike

    2011-01-01

    It has recently been shown that an unbinned distance-based statistic, the energy, can be used to construct an extremely powerful nonparametric multivariate two sample goodness-of-fit test. An extension to this method that makes it possible to perform nonparametric regression using multiple multivariate data sets is presented in this paper. The technique, which is based on the concept of minimizing the energy of the system, permits determination of parameters of interest without the need for parametric expressions of the parent distributions of the data sets. The application and performance of this new method is discussed in the context of some simple example analyses.
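    The energy statistic underlying this method can be sketched for the two-sample case; roughly speaking, the regression extension varies the parameters of interest until this quantity between data sets is minimized. A minimal 1-D illustration (V-statistic form, synthetic samples):

    ```python
    import random

    random.seed(4)

    def energy_distance(a, b):
        """Szekely's energy distance between two 1-D samples (near zero when
        the samples come from the same distribution)."""
        mean = lambda pairs: sum(abs(x - y) for x, y in pairs) / len(pairs)
        ab = mean([(x, y) for x in a for y in b])
        aa = mean([(x, y) for x in a for y in a])
        bb = mean([(x, y) for x in b for y in b])
        return 2 * ab - aa - bb

    same = [random.gauss(0, 1) for _ in range(200)]
    same2 = [random.gauss(0, 1) for _ in range(200)]
    shifted = [random.gauss(1.5, 1) for _ in range(200)]
    print(energy_distance(same, same2), energy_distance(same, shifted))
    ```

    The first value is close to zero (same parent distribution), the second is clearly positive, which is what makes the statistic usable as a goodness-of-fit objective.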

  19. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
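    Item (1), an unweighted regression line with bootstrap resampling of the slope error, can be sketched directly. The data below are synthetic stand-ins for a cosmic distance scale relation, not any of the paper's samples:

    ```python
    import random
    import statistics

    random.seed(5)

    # Hypothetical sample: e.g. log line width vs. log luminosity with scatter.
    x = [random.uniform(0, 2) for _ in range(80)]
    y = [1.2 + 0.7 * xi + random.gauss(0, 0.15) for xi in x]

    def slope(xs, ys):
        return statistics.linear_regression(xs, ys).slope

    # Bootstrap: resample (x, y) pairs with replacement and refit many times;
    # the spread of the refitted slopes estimates the slope uncertainty.
    boot = []
    for _ in range(1000):
        idx = [random.randrange(len(x)) for _ in range(len(x))]
        boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

    print(f"slope = {slope(x, y):.3f} +/- {statistics.stdev(boot):.3f}")
    ```

    The jackknife variant simply leaves out one point at a time instead of resampling with replacement.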

  20. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available The stepwise linear regression is a multi-variable regression method for identifying statistically significant variables in the linear regression equation. In the present study, we present a Matlab program for stepwise regression.

  1. Calibration and Confirmation in Geophysical Models

    Science.gov (United States)

    Werndl, Charlotte

    2016-04-01

    For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e., that data can confirm models only if they have not already been used before). This talk investigates the issue of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.

  2. Pathological Confirmation of Optic Neuropathy in Familial Dysautonomia.

    Science.gov (United States)

    Mendoza-Santiesteban, Carlos E; Palma, Jose-Alberto; Hedges, Thomas R; Laver, Nora V; Farhat, Nada; Norcliffe-Kaufmann, Lucy; Kaufmann, Horacio

    2017-03-01

    Clinical data suggest that optic neuropathy and retinal ganglion cell loss are the main cause of visual decline in patients with familial dysautonomia, but this has not previously been confirmed by pathological analyses. We studied retinas and optic nerves in 6 eyes from 3 affected patients obtained at autopsy. Analyses included routine neurohistology and immunohistochemistry for neurofilaments, cytochrome c oxidase (COX), and melanopsin-containing ganglion cells. We observed profound axon loss in the temporal portions of optic nerves with relative preservation in the nasal portions; this correlated with clinical and optical coherence tomography findings in 1 patient. Retinal ganglion cell layers were markedly reduced in the central retina, whereas melanopsin-containing ganglion cells were relatively spared. COX staining was reduced in the temporal portions of the optic nerve indicating reduced mitochondrial density. Axonal swelling with degenerating lysosomes and mitochondria were observed by electron microscopy. These findings support the concept that there is a specific optic neuropathy and retinopathy in patients with familial dysautonomia similar to that seen in other optic neuropathies with mitochondrial dysfunction. This raises the possibility that defective expression of the IkB kinase complex-associated protein (IKAP) resulting from mutations in IKBKAP affects mitochondrial function in the metabolism-dependent retinal parvocellular ganglion cells in this condition. © 2017 American Association of Neuropathologists, Inc. All rights reserved.

  3. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and
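    Quantile regression fits a conditional quantile rather than the conditional mean, by minimizing the pinball (check) loss instead of squared error. A stdlib-only sketch using subgradient descent on synthetic heteroscedastic data (this is an illustrative solver, not code from the book; production implementations use linear programming):

    ```python
    import random

    random.seed(6)

    # Synthetic data whose noise spread grows with x, so quantile lines fan out.
    x = [random.uniform(0, 10) for _ in range(300)]
    y = [1.0 + 0.5 * xi + random.gauss(0, 0.2 + 0.1 * xi) for xi in x]

    def quantile_fit(xs, ys, tau, steps=5000, lr=0.01):
        """Fit y = a + b*x for quantile tau by subgradient descent on the
        pinball loss rho_tau(u) = u * (tau - 1[u < 0])."""
        a, b = 0.0, 0.0
        n = len(xs)
        for _ in range(steps):
            ga = gb = 0.0
            for xi, yi in zip(xs, ys):
                grad = -tau if yi - (a + b * xi) > 0 else (1 - tau)
                ga += grad
                gb += grad * xi
            a -= lr * ga / n
            b -= lr * gb / n
        return a, b

    lo, hi = quantile_fit(x, y, 0.1), quantile_fit(x, y, 0.9)
    print(lo, hi)   # the 0.9 line sits above the 0.1 line, with a steeper slope
    ```

    The widening gap between the fitted 0.1 and 0.9 lines is exactly the kind of distributional information that mean regression cannot show.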

  4. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    Science.gov (United States)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence and were included in the logistic regression equation. Wald statistic values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.

  6. Discussion on Regression Methods Based on Ensemble Learning and Applicability Domains of Linear Submodels.

    Science.gov (United States)

    Kaneko, Hiromasa

    2018-02-26

    To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.
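    The AD-based ensemble prediction described above can be sketched with deliberately simple ingredients: univariate linear submodels whose AD is the training range of their descriptor, averaging only the submodels whose AD covers the query. The paper's actual submodels and AD definitions are richer; everything below is a hypothetical stand-in:

    ```python
    import random
    import statistics

    random.seed(7)

    # Hypothetical training set with two descriptors and a linear target.
    train = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]
    target = [2.0 * a + 3.0 * b for a, b in train]

    class Submodel:
        def __init__(self, idx, X, y):
            self.idx = idx                       # which descriptor this submodel uses
            xs = [row[idx] for row in X]
            fit = statistics.linear_regression(xs, y)
            self.slope, self.intercept = fit.slope, fit.intercept
            self.lo, self.hi = min(xs), max(xs)  # AD = training range of the descriptor

        def in_domain(self, row):
            return self.lo <= row[self.idx] <= self.hi

        def predict(self, row):
            return self.intercept + self.slope * row[self.idx]

    models = [Submodel(i, train, target) for i in (0, 1)]

    def ensemble_predict(row):
        # Average only over submodels whose AD covers the query sample.
        preds = [m.predict(row) for m in models if m.in_domain(row)]
        return sum(preds) / len(preds) if preds else None

    print(ensemble_predict((0.5, 0.5)))   # both ADs cover the query
    print(ensemble_predict((5.0, 0.5)))   # the first submodel's AD excludes the query
    ```

    The union of the submodels' ADs defines the overall AD; a query outside every submodel's AD (returning None above) is flagged rather than extrapolated.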

  7. Clinical role of pathological downgrading after radical prostatectomy in patients with biopsy confirmed Gleason score 3 + 4 prostate cancer.

    Science.gov (United States)

    Gondo, Tatsuo; Poon, Bing Ying; Matsumoto, Kazuhiro; Bernstein, Melanie; Sjoberg, Daniel D; Eastham, James A

    2015-01-01

    To identify preoperative factors predicting Gleason score downgrading after radical prostatectomy (RP) in patients with biopsy Gleason score 3+4 prostate cancer and to determine if prediction of downgrading can identify potential candidates for active surveillance (AS). We identified 1317 patients with biopsy Gleason score 3+4 prostate cancers who underwent RP at the Memorial Sloan-Kettering Cancer Center between 2005 and 2013. Several preoperative and biopsy characteristics were evaluated by forward selection regression, and selected predictors of downgrading were analysed by multivariable logistic regression. Decision curve analysis was used to evaluate the clinical utility of the multivariate model. Gleason score was downgraded after RP in 115 patients (9%). We developed a multivariable model using age, prostate-specific antigen density, percentage of positive cores with Gleason pattern 4 cancer out of all cores taken, and maximum percentage of cancer involvement within a positive core with Gleason pattern 4 cancer. The area under the curve for this model was 0.75 after 10-fold cross validation. However, decision curve analysis revealed that the model was not clinically helpful in identifying patients who will downgrade at RP for the purpose of reassigning them to AS. While patients with pathological Gleason score 3 + 3 with tertiary Gleason pattern ≤4 at RP in patients with biopsy Gleason score 3 + 4 prostate cancer may be potential candidates for AS, decision curve analysis showed limited utility of our model to identify such men. Future study is needed to identify new predictors to help identify potential candidates for AS among patients with biopsy confirmed Gleason score 3 + 4 prostate cancer. © 2014 The Authors. BJU International © 2014 BJU International.

  8. Preliminary Analysis of Remote Monitoring and Robotic Concepts for Performance Confirmation

    International Nuclear Information System (INIS)

    McAffee, D.A.

    1997-01-01

    As defined in 10 CFR Part 60.2, Performance Confirmation is the "program of tests, experiments and analyses which is conducted to evaluate the accuracy and adequacy of the information used to determine with reasonable assurance that the performance objectives for the period after permanent closure will be met". The overall Performance Confirmation program begins during site characterization and continues up to repository closure. The main purpose of this document is to develop, explore and analyze initial concepts for using remotely operated and robotic systems in gathering repository performance information during Performance Confirmation. This analysis focuses primarily on possible Performance Confirmation related applications within the emplacement drifts after waste packages have been emplaced (post-emplacement) and before permanent closure of the repository (preclosure). This will be a period of time lasting approximately 100 years and basically coincides with the Caretaker phase of the project. This analysis also examines, to a lesser extent, some applications related to Caretaker operations. A previous report examined remote handling and robotic technologies that could be employed during the waste package emplacement phase of the project (Reference 5.1). This analysis is being prepared to provide an early investigation of possible design concepts and technical challenges associated with developing remote systems for monitoring and inspecting activities during Performance Confirmation. The writing of this analysis preceded formal development of Performance Confirmation functional requirements and program plans and therefore examines, in part, the fundamental Performance Confirmation monitoring needs and operating conditions. The scope and primary objectives of this analysis are to: (1) Describe the operating environment and conditions expected in the emplacement drifts during the preclosure period. (Presented in Section 7.2). (2) Identify and discuss the

  9. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
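
    The article works through these ideas with dedicated Bayesian software and real EEG data. As a minimal self-contained sketch of the same machinery, the following Python/numpy example computes the exact conjugate posterior for a linear regression of a simulated ERN-like outcome on a simulated anxiety score; the variable names, prior precisions and coefficients are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: trait anxiety (x) predicting ERN amplitude (y).
# The true intercept/slope and noise level are illustrative only.
n = 100
x = rng.normal(0.0, 1.0, n)
y = 1.5 - 0.8 * x + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), x])       # design matrix with intercept
alpha = 2.0                                # prior precision on the weights
beta = 1.0 / 0.25                          # noise precision (1 / sigma^2)

# Conjugate Gaussian posterior over the weights: N(m, S)
S_inv = alpha * np.eye(2) + beta * X.T @ X # posterior precision matrix
S = np.linalg.inv(S_inv)                   # posterior covariance
m = beta * S @ X.T @ y                     # posterior mean

# 95% credible interval for the slope from its marginal posterior
slope_mean = m[1]
slope_sd = np.sqrt(S[1, 1])
ci = (slope_mean - 1.96 * slope_sd, slope_mean + 1.96 * slope_sd)
print(f"posterior slope: {slope_mean:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

    With a weak prior and 100 observations the posterior mean lands close to the ordinary least-squares estimate; the interval here is a credible interval, interpreted directly as posterior probability, which is one of the practical differences from frequentist intervals that the article emphasizes.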

  10. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
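
    The incremental logic described above, entering predictor blocks in sequence and examining the change in explained variance, can be sketched in a few lines of Python/numpy. The predictor names and effect sizes below are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical predictors: two effectuation dimensions entered first, then
# theory-of-planned-behavior (TPB) variables; correlated on purpose.
effect1 = rng.normal(size=n)
effect2 = rng.normal(size=n)
attitude = 0.5 * effect1 + rng.normal(size=n)
norms = rng.normal(size=n)
intention = 0.2 * effect1 + 0.6 * attitude + 0.4 * norms + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Step 1: effectuation block only; step 2: add the TPB block.
r2_block1 = r_squared(np.column_stack([effect1, effect2]), intention)
r2_block2 = r_squared(np.column_stack([effect1, effect2, attitude, norms]),
                      intention)
delta_r2 = r2_block2 - r2_block1
print(f"R2 step 1: {r2_block1:.3f}, step 2: {r2_block2:.3f}, dR2: {delta_r2:.3f}")
```

    The quantity of interest is the R² change at each step: how much variance the newly entered block explains over and above the blocks already in the model, which is exactly what the order-of-entry caveat in the abstract is about.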

  11. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and a method for determining the 'best' equation. The paper uses an example to describe how to perform principal component regression analysis with SPSS 10.0, including the full calculation process of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
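
    Although the paper demonstrates the procedure in SPSS, the core of principal component regression is software-independent: standardize the predictors, project them onto their leading principal components, regress the response on the component scores, and map the coefficients back. A minimal numpy sketch on synthetic collinear data (the component count and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Two nearly collinear predictors plus one independent predictor.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # almost a copy of x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 1.0 * x3 + rng.normal(scale=0.5, size=n)

# 1. standardize, 2. SVD for principal components, 3. regress on scores,
# 4. map coefficients back to the standardized predictors.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 2                                      # drop the near-zero-variance component
scores = Z @ Vt[:k].T                      # principal component scores
gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), scores]), y,
                            rcond=None)
beta_pcr = Vt[:k].T @ gamma[1:]            # coefficients on standardized predictors
print("PCR coefficients (standardized):", np.round(beta_pcr, 2))
```

    A characteristic effect of PCR, visible here, is that the two collinear predictors receive nearly equal coefficients instead of the wildly offsetting estimates OLS would produce.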

  12. Complete genome of a European hepatitis C virus subtype 1g isolate: phylogenetic and genetic analyses.

    Science.gov (United States)

    Bracho, Maria A; Saludes, Verónica; Martró, Elisa; Bargalló, Ana; González-Candelas, Fernando; Ausina, Vicent

    2008-06-05

    Hepatitis C virus isolates have been classified into six main genotypes and a variable number of subtypes within each genotype, mainly based on phylogenetic analysis. Analyses of the genetic relationship among genotypes and subtypes are more reliable when complete genome sequences (or at least the full coding region) are used; however, so far 31 of 80 confirmed or proposed subtypes have at least one complete genome available. Of these, 20 correspond to confirmed subtypes of epidemic interest. We present and analyse the first complete genome sequence of a HCV subtype 1g isolate. Phylogenetic and genetic distance analyses reveal that HCV-1g is the most divergent subtype among the HCV-1 confirmed subtypes. Potential genomic recombination events between genotypes or subtype 1 genomes were ruled out. We demonstrate phylogenetic congruence of previously deposited partial sequences of HCV-1g with respect to our sequence. In light of this, we propose changing the current status of its subtype-specific designation from provisional to confirmed.

  13. Estimating the Performance of Random Forest versus Multiple Regression for Predicting Prices of the Apartments

    Directory of Open Access Journals (Sweden)

    Marjan Čeh

    2018-05-01

    Full Text Available The goal of this study is to analyse the predictive performance of the random forest machine learning technique in comparison to commonly used hedonic models based on multiple regression for the prediction of apartment prices. A data set that includes 7407 records of apartment transactions referring to real estate sales from 2008–2013 in the city of Ljubljana, the capital of Slovenia, was used in order to test and compare the predictive performances of both models. Apparent challenges faced during modelling included (1) the non-linear nature of the prediction assignment task; (2) input data being based on transactions occurring over a period of great price changes in Ljubljana, whereby a 28% decline was noted in six consecutive testing years; and (3) the complex urban form of the case study area. Available explanatory variables, organised as a Geographic Information Systems (GIS)-ready dataset, including the structural and age characteristics of the apartments as well as environmental and neighbourhood information, were considered in the modelling procedure. All performance measures (R2 values, sales ratios, mean average percentage error (MAPE) and coefficient of dispersion (COD)) revealed significantly better results for predictions obtained by the random forest method, which confirms the potential of this machine learning technique for apartment price prediction.

  14. Length bias correction in gene ontology enrichment analysis using logistic regression.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
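
    The confounding argument above can be illustrated with simulated data: when differential expression (DE) is driven by transcript length alone, a naive enrichment model overstates the GO effect, and adding log length as a covariate removes most of it. This is a sketch of the idea with hypothetical data and a hand-rolled fitter, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes = 5000

# Hypothetical gene-level data: longer transcripts are both more likely to
# belong to the GO category and more likely to be called DE, so transcript
# length confounds the naive enrichment test.
log_len = rng.normal(3.0, 0.5, n_genes)
p_member = 1 / (1 + np.exp(-2.0 * (log_len - 3.0)))
in_category = (rng.random(n_genes) < p_member).astype(float)
p_de = 1 / (1 + np.exp(-(-1.0 + 1.5 * (log_len - 3.0))))
is_de = (rng.random(n_genes) < p_de).astype(float)  # DE driven by length only

def fit_logistic(X, y, n_iter=25):
    """Logistic regression fitted by Newton-Raphson (IRLS)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# Naive model: DE ~ GO membership (length bias inflates the coefficient).
b_naive = fit_logistic(in_category[:, None], is_de)
# Adjusted model: DE ~ GO membership + log transcript length.
b_adj = fit_logistic(np.column_stack([in_category, log_len]), is_de)
print(f"GO log-odds, naive: {b_naive[1]:.2f}, length-adjusted: {b_adj[1]:.2f}")
```

    Since DE status was simulated from length alone, the length-adjusted GO coefficient should sit near zero while the naive one is clearly positive, mirroring the paper's point that conditioning on transcript length removes the spurious enrichment.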

  15. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT and then quantile regression was employed to illustrate the dynamic relationships between AIT and selected variables at distinct quantiles. A total of 1,060 patients were analyzed. The first- and second-year residents (R1-R2) required longer AIT than the third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≧ III, arterial, central venous and epidural catheterization, and use of bronchoscopy. The presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
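
    As a rough illustration of the second step, estimating a covariate's effect at distinct quantiles rather than only at the mean, the following sketch fits quantile regression lines by subgradient descent on the pinball (check) loss. The data, the single predictor, and the optimizer are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Hypothetical induction-time data: one standardized predictor and an
# outcome in minutes; coefficients and noise level are illustrative.
x = rng.normal(size=n)
y = 10.0 + 2.0 * x + rng.normal(scale=2.0, size=n)
X = np.column_stack([np.ones(n), x])

def quantile_fit(X, y, tau, lr=0.05, n_iter=4000):
    """Quantile regression by subgradient descent on the pinball loss,
    warm-started at the least-squares solution."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ (tau - (r < 0)) / len(y)  # pinball-loss subgradient
        beta -= lr * grad
    return beta

fits = {tau: quantile_fit(X, y, tau) for tau in (0.1, 0.5, 0.9)}
for tau, (b0, b1) in fits.items():
    print(f"tau={tau}: intercept={b0:.2f}, slope={b1:.2f}")
```

    The fitted intercepts fan out across quantiles while the slope stays near its true value, which is the kind of "extra time at the upper quantiles" summary the abstract describes; production analyses would normally use a packaged solver (e.g. linear programming) rather than this didactic gradient scheme.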

  16. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … . -Annette J. Dobson, Biometric...

  17. Institutions and deforestation in the Brazilian amazon: a geographic regression discontinuity analysis

    OpenAIRE

    Bogetvedt, Ingvild Engen; Hauge, Mari Johnsrud

    2017-01-01

    This study explores the impact of institutional quality at the municipal level on deforestation in the Legal Amazon. We add to this insufficiently understood topic by implementing a geographic regression discontinuity design. By taking advantage of high-resolution spatial data on deforestation combined with an objective measure of corruption used as a proxy for institutional quality, we analyse 138 Brazilian municipalities in the period of 2002-2004. Our empirical findings show...

  18. Item parameters dissociate between expectation formats: A regression analysis of time-frequency decomposed EEG data

    Directory of Open Access Journals (Sweden)

    Irene Fernández Monsalve

    2014-08-01

    Full Text Available During language comprehension, semantic contextual information is used to generate expectations about upcoming items. This has been commonly studied through the N400 event-related potential (ERP), as a measure of facilitated lexical retrieval. However, the associative relationships in multi-word expressions (MWE) may enable the generation of a categorical expectation, leading to lexical retrieval before target word onset. Processing of the target word would thus reflect a target-identification mechanism, possibly indexed by a P3 ERP component. However, given their time overlap (200-500 ms post-stimulus onset), differentiating between N400/P3 ERP responses (averaged over multiple linguistically variable trials) is problematic. In the present study, we analyzed EEG data from a previous experiment, which compared ERP responses to highly expected words that were placed either in a MWE or a regular non-fixed compositional context, and to low-predictability controls. We focused on oscillatory dynamics and regression analyses, in order to dissociate between the two contexts by modeling the electrophysiological response as a function of item-level parameters. A significant interaction between word position and condition was found in the regression model for power in a theta range (~7-9 Hz), providing evidence for the presence of qualitative differences between conditions. Power levels within this band were lower for MWE than compositional contexts when the target word appeared later on in the sentence, confirming that in the former case lexical retrieval would have taken place before word onset. On the other hand, gamma-band power (~50-70 Hz) was also modulated by the predictability of the item in all conditions, which is interpreted as an index of a similar 'matching' sub-step for both types of contexts, binding an expected representation and the external input.

  19. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

    Science.gov (United States)

    Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

    2015-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.

  20. Interpret with caution: multicollinearity in multiple regression of cognitive data.

    Science.gov (United States)

    Morrison, Catriona M

    2003-08-01

    Shibihara and Kondo in 2002 reported a reanalysis of the 1997 Kanji picture-naming data of Yamazaki, Ellis, Morrison, and Lambon-Ralph in which independent variables were highly correlated. Their addition of the variable visual familiarity altered the previously reported pattern of results, indicating that visual familiarity, but not age of acquisition, was important in predicting Kanji naming speed. The present paper argues that caution should be taken when drawing conclusions from multiple regression analyses in which the independent variables are so highly correlated, as such multicollinearity can lead to unreliable output.
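
    The degree of multicollinearity the author warns about can be quantified with variance inflation factors (VIFs), computed by regressing each predictor on all the others. A small numpy sketch with hypothetical psycholinguistic predictors (the variable names and correlation structure are illustrative, not the Kanji-naming data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Illustrative predictors: frequency independent of the rest, while
# familiarity and age of acquisition (AoA) are strongly correlated.
frequency = rng.normal(size=n)
aoa = rng.normal(size=n)
familiarity = -0.95 * aoa + 0.1 * rng.normal(size=n)

def vif(X, j):
    """Variance inflation factor: 1 / (1 - R^2) from regressing
    column j on the remaining columns."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1.0 - resid @ resid / ((X[:, j] - X[:, j].mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

X = np.column_stack([frequency, familiarity, aoa])
for name, j in (("frequency", 0), ("familiarity", 1), ("AoA", 2)):
    print(f"VIF({name}) = {vif(X, j):.1f}")
```

    A common rule of thumb treats VIF values above about 10 as a warning sign; in this simulation the familiarity and AoA factors blow up while frequency stays near 1, exactly the situation in which individual regression coefficients become unreliable.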

  1. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.

  2. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question.

  3. [TECOS: confirmation of the cardiovascular safety of sitagliptin].

    Science.gov (United States)

    Scheen, A J; Paquot, N

    2015-10-01

    The cardiovascular safety of sitagliptin has been evaluated in TECOS ("Trial Evaluating Cardiovascular Outcomes with Sitagliptin"). TECOS recruited patients with type 2 diabetes and a history of cardiovascular disease who received, as add-on to their usual therapy, either sitagliptin (n = 7,257) or placebo (n = 7,266), with a median follow-up of 3 years. The primary cardiovascular outcome was a composite of cardiovascular death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for unstable angina. Sitagliptin was noninferior to placebo for the primary composite cardiovascular outcome (hazard ratio, 0.98; 95% confidence interval, 0.88 to 1.09; P<0.001). Rates of hospitalization for heart failure did not differ between the two groups (hazard ratio, 1.00; 95% CI, 0.83 to 1.20; P=0.98). The cardiovascular safety of sitagliptin, already shown in meta-analyses of phase II-III randomised controlled trials and in observational cohort studies in real life, is thus confirmed in the landmark prospective cardiovascular outcome study TECOS.

  4. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  5. Evaluating the Performance of Polynomial Regression Method with Different Parameters during Color Characterization

    Directory of Open Access Journals (Sweden)

    Bangyong Sun

    2014-01-01

    Full Text Available The polynomial regression method is employed to calculate the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of the polynomial coefficients is analysed. Then, different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK inputs, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several related parameters during color conversion, including the number of polynomial terms, the degree of the polynomial, the selection of CIE visual spaces, and the linearization.
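
    The core of this characterization procedure, expanding device values into polynomial terms and solving for the coefficient matrix by least squares, can be sketched in numpy. The 11-term second-order expansion below is one common choice, and the training data are synthetic rather than measured colorimetric data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300

# Synthetic device-to-CIE training pairs: RGB triplets in [0, 1] and XYZ
# responses generated from a known second-order mapping (illustrative only).
rgb = rng.random((n, 3))

def poly_terms(rgb):
    """Second-order polynomial expansion of R, G, B (11 terms)."""
    r, g, b = rgb.T
    return np.column_stack([np.ones(len(rgb)), r, g, b,
                            r * g, r * b, g * b,
                            r**2, g**2, b**2, r * g * b])

true_M = rng.normal(size=(11, 3))
xyz = poly_terms(rgb) @ true_M + rng.normal(scale=1e-3, size=(n, 3))

# Least-squares estimate of the 11x3 polynomial coefficient matrix.
M, *_ = np.linalg.lstsq(poly_terms(rgb), xyz, rcond=None)
pred = poly_terms(rgb) @ M
rmse = np.sqrt(((pred - xyz) ** 2).mean())
print(f"training RMSE: {rmse:.5f}")
```

    In a real characterization workflow the fit would be evaluated with a perceptual color-difference metric on held-out patches rather than raw RMSE, and the number and degree of terms would be tuned as the abstract describes.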

  6. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
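
    The recipe can be sketched numerically: form two equally sized groups whose log odds differ by slope × 2 × SD of the covariate, then apply the standard normal-approximation sample-size formula for comparing the two groups. The function below is a rough sketch of that idea, not the authors' exact formulas (in particular the variance treatment and the handling of the expected-events constraint are simplified), and the inputs are illustrative:

```python
from math import ceil, exp, log
from statistics import NormalDist

def sample_size_logistic(slope, sd_x, p_overall, alpha=0.05, power=0.80):
    """Approximate total sample size for testing a logistic-regression
    slope via an equivalent two-sample comparison whose log odds differ
    by slope * 2 * sd_x (sketch of the Vaeth & Skovlund idea)."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    delta = slope * 2 * sd_x                      # log-odds difference
    logit_p = log(p_overall / (1 - p_overall))
    p1 = 1 / (1 + exp(-(logit_p - delta / 2)))    # the two equal-size groups
    p2 = 1 / (1 + exp(-(logit_p + delta / 2)))
    # Variance of the log odds ratio contributed per subject in each group.
    var = 1 / (p1 * (1 - p1)) + 1 / (p2 * (1 - p2))
    n_per_group = (za + zb) ** 2 * var / delta ** 2
    return ceil(n_per_group) * 2                  # total over both groups

n_total = sample_size_logistic(slope=0.4, sd_x=1.0, p_overall=0.3)
print(f"approximate total sample size: {n_total}")
```

    As expected from the paper's simulations, the required sample size drops quickly as the slope (and hence the induced two-group difference) grows.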

  7. Detection of increase in corneal irregularity due to pterygium using Fourier series harmonic analyses with multiple diameters.

    Science.gov (United States)

    Minami, Keiichiro; Miyata, Kazunori; Otani, Atsushi; Tokunaga, Tadatoshi; Tokuda, Shouta; Amano, Shiro

    2018-05-01

    To determine the steep increase in corneal irregularity induced by advancement of pterygium. A total of 456 eyes from 456 consecutive patients with primary pterygia were examined for corneal topography and advancement of pterygium with respect to the corneal diameter. Corneal irregularity induced by the pterygium advancement was evaluated by Fourier harmonic analyses of the topographic data that were modified for a series of analysis diameters from 1 mm to 6 mm. Incidences of steep increases in the asymmetry or higher-order irregularity components (inflection points) were determined by using segmented regression analysis for each analysis diameter. The pterygium advancement ranged from 2% to 57%, with a mean of 22.0%. Both components showed steep increases from the inflection points. The inflection points in the higher-order irregularity component varied with the analysis diameter (14.0%-30.6%), while there was no such variation in the asymmetry components (35.5%-36.8%). For the former component, the values at the inflection points were obtained in a range of 0.16 to 0.25 D. The Fourier harmonic analyses for a series of analysis diameters revealed that the higher-order irregularity component increased with the pterygium advancement. The analysis results confirmed the precedence of corneal irregularity due to pterygium advancement.
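
    Segmented regression of the kind used here to locate an inflection point can be sketched as a grid search over candidate breakpoints, fitting a hinge model at each and keeping the best fit. The data below are simulated and the single-hinge model form is an illustrative assumption, not the study's exact specification:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Hypothetical data: pterygium advancement (% of corneal diameter) versus a
# higher-order irregularity component (D), flat below the inflection point
# and rising linearly above it.
advance = rng.uniform(2, 57, n)
true_break = 25.0
irregularity = (0.2 + 0.03 * np.maximum(advance - true_break, 0)
                + rng.normal(scale=0.05, size=n))

def segmented_fit(x, y, candidates):
    """Grid search over breakpoints: at each candidate c, fit
    y = b0 + b1 * max(x - c, 0) by least squares; keep the best c."""
    best = (np.inf, None, None)
    for c in candidates:
        X = np.column_stack([np.ones(len(x)), np.maximum(x - c, 0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ coef) ** 2).sum()
        if sse < best[0]:
            best = (sse, c, coef)
    return best[1], best[2]

breakpoint_est, coef = segmented_fit(advance, irregularity,
                                     np.linspace(5, 50, 181))
print(f"estimated inflection point: {breakpoint_est:.1f}% advancement")
```

    The estimated breakpoint recovers the simulated inflection point, which is the quantity the study reports per analysis diameter (e.g. the 14.0%-30.6% range for the higher-order component).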

  8. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    Science.gov (United States)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique, called super-ensemble. This technique consists of a multiple linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE), evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.

  9. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
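
    The simplest of the schemes compared above, OLS post-processing of a deterministic forecast, amounts to regressing observations on forecasts over a training period and applying the fitted correction afterwards. A sketch on synthetic data (the bias and damping parameters of the "model" are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000

# Illustrative setup: raw forecasts carry a constant bias and a damped
# response to the truth, both of which OLS against observations can remove.
truth = rng.normal(20.0, 3.0, n)                      # e.g. observed SST (deg C)
forecast = 1.5 + 0.8 * truth + rng.normal(0, 1.0, n)  # biased, damped model output

# Fit observation = a + b * forecast on a training split, apply to the rest.
train, test = slice(0, 700), slice(700, None)
A = np.column_stack([np.ones(700), forecast[train]])
(a, b), *_ = np.linalg.lstsq(A, truth[train], rcond=None)
corrected = a + b * forecast[test]

rmse_raw = np.sqrt(((forecast[test] - truth[test]) ** 2).mean())
rmse_corr = np.sqrt(((corrected - truth[test]) ** 2).mean())
print(f"RMSE raw: {rmse_raw:.2f}, corrected: {rmse_corr:.2f}")
```

    This removes systematic bias and rescales the forecast, but, as the study's criteria emphasize, plain OLS also shrinks the corrected forecast's variability toward the mean, which is precisely what the alternative schemes (GM, EVMOS, TDTR) are designed to avoid.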

  10. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    International Nuclear Information System (INIS)

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution including two Poisson distributions of distinct mean values. This first approximation led to a search for a method which could give the weights of several Poisson distributions of distinct mean values. Three methods are briefly presented: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. The use of these methods gives an approximation of the frequency function which represents the dispersion of the punctual mean gain around the whole first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution allows the statement that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: (1) Does the frequency function represent the dynode structure and the interdynode collection process? (2) Is the model (the multiplication process of all dynodes but the first one is Poissonian) valid whatever the photomultiplier and the utilization conditions? (Auth.)

  11. CONFIRMATION OF THE BASIC PSYCHOLOGICAL NEEDS IN EXERCISE SCALE (BPNES WITH A SAMPLE OF PEOPLE WHO DO HEALTHY EXERCISE

    Directory of Open Access Journals (Sweden)

    Juan A. Moreno-Murcia

    2012-03-01

    Full Text Available This study aimed to cross-validate the psychometric properties of the Basic Psychological Needs in Exercise Scale (BPNES; Vlachopoulos and Michailidou, 2006) in a Spanish context. Two studies were conducted. Confirmatory factor analysis results confirmed the hypothesized three-factor solution. In addition, we documented evidence of reliability, analysed as internal consistency and temporal stability. Future studies should analyse the scale's validity and reliability with different populations and check its experimental effect.

  12. Quantitative morphological descriptors confirm traditionally ...

    African Journals Online (AJOL)

    SARAH

    2015-09-30

    Sep 30, 2015 ... from Guinea, Sierra-Leone, Côte d'Ivoire, Togo, ... Atacora, Southern Borgou and Northern Borgou. Table .... sedimentary rocks. Savanna raised and shrubby with forests gallery, ... define the model of regression according to:.

  13. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  14. Complete genome of a European hepatitis C virus subtype 1g isolate: phylogenetic and genetic analyses

    Directory of Open Access Journals (Sweden)

    Bargalló Ana

    2008-06-01

    Full Text Available Abstract Background Hepatitis C virus isolates have been classified into six main genotypes and a variable number of subtypes within each genotype, mainly based on phylogenetic analysis. Analyses of the genetic relationship among genotypes and subtypes are more reliable when complete genome sequences (or at least the full coding region are used; however, so far 31 of 80 confirmed or proposed subtypes have at least one complete genome available. Of these, 20 correspond to confirmed subtypes of epidemic interest. Results We present and analyse the first complete genome sequence of a HCV subtype 1g isolate. Phylogenetic and genetic distance analyses reveal that HCV-1g is the most divergent subtype among the HCV-1 confirmed subtypes. Potential genomic recombination events between genotypes or subtype 1 genomes were ruled out. We demonstrate phylogenetic congruence of previously deposited partial sequences of HCV-1g with respect to our sequence. Conclusion In light of this, we propose changing the current status of its subtype-specific designation from provisional to confirmed.

  15. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  16. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...

  17. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby, judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
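The "proportion of correctly allocated pixels" metric compared in this study can be illustrated with a minimal multinomial (softmax) logistic regression fitted by gradient descent. The three-class data below are synthetic stand-ins for land-use classes driven by two causal factors, not the Jiangxi data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic driving factors for three land-use classes (illustrative):
# each class clusters around a different centre in feature space
n, d, C = 300, 2, 3
centres = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = np.repeat(np.arange(C), n // C)
X = centres[y] + rng.normal(size=(n, d))
Xb = np.hstack([np.ones((n, 1)), X])          # add intercept column

# multinomial (softmax) logistic regression fitted by batch gradient descent
W = np.zeros((d + 1, C))
Y = np.eye(C)[y]                              # one-hot class targets
for _ in range(2000):
    Z = Xb @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # class probabilities per pixel
    W -= 0.1 * Xb.T @ (P - Y) / n             # gradient of the multinomial NLL

pred = (Xb @ W).argmax(axis=1)                # allocate each pixel to a class
accuracy = (pred == y).mean()                 # proportion correctly allocated
```

Unlike a set of separate binary logistic regressions, the softmax model allocates each pixel to exactly one class from a single joint probability vector, which is the direct-allocation property the abstract credits for the higher accuracy.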

  18. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
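The stabilized inverse probability weighting idea compared above can be sketched in a few lines: weight each subject by the marginal treatment probability divided by the conditional (propensity) probability, so that the confounder's influence on treatment assignment is removed. For brevity the propensity model is taken as known rather than estimated, and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# confounder raises both treatment probability and the outcome (illustrative)
c = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.8 * c)))        # true propensity score
t = rng.binomial(1, p_treat)
y = 1.0 * t + 2.0 * c + rng.normal(size=n)    # true treatment effect = 1.0

# stabilized weights: P(T = t) / P(T = t | C)
marginal = t.mean()
sw = np.where(t == 1, marginal / p_treat, (1 - marginal) / (1 - p_treat))

crude = y[t == 1].mean() - y[t == 0].mean()              # confounded estimate
ipw = (np.average(y[t == 1], weights=sw[t == 1])
       - np.average(y[t == 0], weights=sw[t == 0]))      # weighted contrast
```

The crude contrast absorbs the confounder's effect, while the weighted contrast recovers the true effect of 1.0; the simulation study's point is that this breaks down when events per coefficient are very low and weights become unstable.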

  19. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  20. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  1. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  2. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  3. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS method, a new time-dependent Tikhonov regularization (TDTR method, the total least-square method, a new geometric-mean regression (GM, a recently introduced error-in-variables (EVMOS method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
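The simplest of the schemes compared above, OLS post-processing, regresses past observations on past forecasts and applies the fitted line to correct future forecasts. The sketch below uses synthetic forecast/observation pairs with an invented bias and scale error; the TDTR, GM, and EVMOS variants are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic forecast/observation pairs: the raw forecast has a bias and a
# scale error (coefficients are illustrative)
obs = rng.normal(10.0, 2.0, size=1000)
fcst = 1.3 * obs - 2.0 + rng.normal(0.0, 0.5, size=1000)

# ordinary least-squares post-processing: regress observations on forecasts
A = np.column_stack([np.ones_like(fcst), fcst])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
corrected = A @ coef                           # bias- and scale-corrected

raw_rmse = np.sqrt(np.mean((fcst - obs) ** 2))
corrected_rmse = np.sqrt(np.mean((corrected - obs) ** 2))
```

One known drawback, which motivates the paper's alternatives, is that OLS correction shrinks the corrected forecast's variability below that of the observations.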

  4. A comparison of random forest regression and multiple linear regression for prediction in neuroscience.

    Science.gov (United States)

    Smith, Paul F; Ganesh, Siva; Liu, Ping

    2013-10-30

    Regression is a common statistical tool for prediction in neuroscience. However, linear regression is by far the most common form of regression used, with regression trees receiving comparatively little attention. In this study, the results of conventional multiple linear regression (MLR) were compared with those of random forest regression (RFR), in the prediction of the concentrations of 9 neurochemicals in the vestibular nucleus complex and cerebellum that are part of the l-arginine biochemical pathway (agmatine, putrescine, spermidine, spermine, l-arginine, l-ornithine, l-citrulline, glutamate and γ-aminobutyric acid (GABA)). The R² values for the MLRs were higher than the proportion of variance explained values for the RFRs: 6/9 of them were ≥ 0.70 compared to 4/9 for RFRs. Even the variables that had the lowest R² values for the MLRs, e.g. ornithine (0.50) and glutamate (0.61), had much lower proportion of variance explained values for the RFRs (0.27 and 0.49, respectively). The RSE values for the MLRs were lower than those for the RFRs in all but two cases. In general, MLRs seemed to be superior to the RFRs in terms of predictive value and error. In the case of this data set, MLR appeared to be superior to RFR in terms of its explanatory value and error. This result suggests that MLR may have advantages over RFR for prediction in neuroscience with this kind of data set, but that RFR can still have good predictive value in some cases. Copyright © 2013 Elsevier B.V. All rights reserved.
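The two comparison metrics the study relies on, R² and residual standard error (RSE), can be computed for an MLR in a few lines. The data below are a synthetic stand-in with 9 predictors (the random forest side of the comparison requires a tree library and is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for the neurochemical data: 9 predictors, linear signal
n, p = 120, 9
X = rng.normal(size=(n, p))
beta = rng.uniform(0.5, 1.5, size=p)
y = X @ beta + rng.normal(0.0, 1.0, size=n)

# multiple linear regression by least squares (intercept included)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# the two metrics used to compare MLR with RFR
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
rse = np.sqrt(resid @ resid / (n - p - 1))    # residual standard error
```

When the true relationship is linear, as simulated here, MLR will dominate a random forest on both metrics, which mirrors the study's conclusion for its data set.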

  5. Logistic regression applied to natural hazards: rare event logistic regression with replications

    OpenAIRE

    Guns, M.; Vanacker, Veerle

    2012-01-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logisti...

  6. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches can be restrictive in the literature. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Isotopic Generation and Confirmation of the PWR Application Model?

    International Nuclear Information System (INIS)

    L.B. Wimmer

    2003-01-01

    The objective of this calculation is to establish an isotopic database to represent commercial spent nuclear fuel (CSNF) from pressurized water reactors (PWRs) in criticality analyses performed for the proposed Monitored Geologic Repository at Yucca Mountain, Nevada. Confirmation of the conservatism with respect to criticality in the isotopic concentration values represented by this isotopic database is performed as described in Section 3.5.3.1.2 of the 'Disposal Criticality Analysis Methodology Topical Report' (YMP 2000). The isotopic database consists of the set of 14 actinides and 15 fission products presented in Section 3.5.2.1.1 of YMP 2000 for use in CSNF burnup credit. This set of 29 isotopes is referred to as the principal isotopes. The oxygen isotope from the UO2 fuel is also included in the database. The isotopic database covers enrichments of 235U ranging from 1.5 to 5.5 weight percent (wt%) and burnups ranging from approximately zero to 75 GWd per metric ton of uranium (mtU). The choice of fuel assembly and operating history values used in generating the isotopic database is provided in Section 5. Tables of isotopic concentrations for the 29 principal isotopes (plus oxygen) as a function of enrichment and burnup are provided in Section 6.1. Results of the confirmation of the conservatism with respect to criticality in the isotopic concentration values are provided in Section 6.2

  8. CERN confirms LHC schedule

    CERN Document Server

    2003-01-01

    The CERN Council held its 125th session on 20 June. Highlights of the meeting included confirmation that the LHC is on schedule for a 2007 start-up, and the announcement of a new organizational structure in 2004.

  9. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  10. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
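A standard recursive least-squares (RLS) update illustrates the idea of avoiding duplicative calculations: the coefficient estimate is refined one sample at a time instead of re-solving the full normal equations. This is a generic RLS sketch with an invented second-order model, not the specific recursion of the tech brief:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic data for a second-order model y = 2 + 3x - 0.5x^2 + noise
x = rng.uniform(-2, 2, size=200)
y = 2 + 3 * x - 0.5 * x**2 + rng.normal(0, 0.1, size=200)
A = np.column_stack([np.ones_like(x), x, x**2])

# recursive least squares: process one (row, target) pair per step
theta = np.zeros(3)
P = np.eye(3) * 1e6                   # large initial covariance (diffuse prior)
for a_i, y_i in zip(A, y):
    k = P @ a_i / (1.0 + a_i @ P @ a_i)        # gain vector
    theta = theta + k * (y_i - a_i @ theta)    # update with the innovation
    P = P - np.outer(k, a_i @ P)               # covariance downdate

batch, *_ = np.linalg.lstsq(A, y, rcond=None)  # one-shot solution to compare
```

The recursive estimate converges to the batch least-squares solution, and re-running with an extra column would let one compare model orders without refitting from scratch.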

  11. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  12. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  13. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    Science.gov (United States)

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to the increase of injury frequency. 2917 primary and secondary school students were selected from Hefei by cluster random sampling method and surveyed by questionnaire. The data on count-based injury events were used to fit modified Poisson regression and negative binomial regression models. The risk factors incurring the increase of unintentional injury frequency for juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion; the modified Poisson regression and negative binomial regression models fitted better. Both showed that male gender, younger age, father working outside of the hometown, the level of the guardian being above junior high school, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and could give a more accurate interpretation of relevant factors affecting the frequency of injury.
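The over-dispersion that motivates negative binomial or modified Poisson models can be diagnosed with the Pearson dispersion statistic. The sketch below simulates overdispersed injury-like counts, fits a plain Poisson regression by Newton-Raphson, and shows the statistic exceeding 1; all coefficients and sample sizes are illustrative, not the Hefei survey data:

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate overdispersed counts: negative binomial draws whose mean follows
# a log-linear model in one covariate (all coefficients illustrative)
n = 2000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.4 * x)
r = 2.0                                   # NB size parameter
y = rng.negative_binomial(r, r / (r + mu))   # mean mu, variance mu + mu^2/r

# Poisson regression fitted by Newton-Raphson (IRLS)
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    m = np.exp(X @ beta)
    grad = X.T @ (y - m)                  # score of the Poisson log-likelihood
    H = X.T @ (X * m[:, None])            # Fisher information
    beta += np.linalg.solve(H, grad)

# Pearson dispersion statistic: ~1 under Poisson, > 1 when overdispersed
m = np.exp(X @ beta)
dispersion = np.sum((y - m) ** 2 / m) / (n - 2)
```

A dispersion well above 1, as here, is the signal that the Poisson standard errors are too small and that a negative binomial or robust (modified Poisson) model should be used instead.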

  14. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  15. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online-trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  17. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period (2008-2010). The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
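The setting the article studies can be reproduced in miniature: when two predictors are nearly collinear, the design matrix's condition number explodes and ordinary least squares becomes unstable, while a ridge penalty shrinks the estimate back toward a stable solution. The data and penalty value below are illustrative, not the gas-filling company data:

```python
import numpy as np

rng = np.random.default_rng(6)

# near-collinear predictors: x2 is almost a copy of x1, which inflates
# the condition number of the design matrix
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, size=n)
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.5, size=n)

cond = np.linalg.cond(X)                       # very large here

def ridge(X, y, lam):
    """Closed-form ridge estimator (X'X + lam I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ols = ridge(X, y, 0.0)        # ordinary least squares (lambda = 0)
rr = ridge(X, y, 1.0)         # ridge shrinks the unstable coefficients
```

The coefficient norm of the ridge solution is never larger than the OLS norm, and the well-determined quantity (the coefficient sum) is preserved, which is what a condition-number-based choice of the penalty exploits.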

  18. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis it is expected to have correlation between the response and the predictor(s), but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, among which are historical data and experience. In the end, the selection of the most important predictors is a subjective choice made by the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may no longer be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
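The standard diagnostic for the problem described above is the variance inflation factor (VIF): for each predictor, regress it on the remaining predictors and compute 1/(1 - R²). A minimal numpy sketch on invented near-collinear data:

```python
import numpy as np

rng = np.random.default_rng(7)

# three predictors; x3 is nearly a linear combination of x1 and x2
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.7 * x1 + 0.7 * x2 + rng.normal(0, 0.05, size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """Variance inflation factor: 1 / (1 - R^2) from regressing
    predictor j on the remaining predictors."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1 - resid @ resid / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

A common rule of thumb flags VIF values above 5 or 10 as evidence of harmful multicollinearity; here the constructed predictors push well past that threshold.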

  19. Exploring reasons for the observed inconsistent trial reports on intra-articular injections with hyaluronic acid in the treatment of osteoarthritis: Meta-regression analyses of randomized trials.

    Science.gov (United States)

    Johansen, Mette; Bahrt, Henriette; Altman, Roy D; Bartels, Else M; Juhl, Carsten B; Bliddal, Henning; Lund, Hans; Christensen, Robin

    2016-08-01

    The aim was to identify factors explaining inconsistent observations concerning the efficacy of intra-articular hyaluronic acid compared to intra-articular sham/control, or non-intervention control, in patients with symptomatic osteoarthritis, based on randomized clinical trials (RCTs). A systematic review and meta-regression analyses of available randomized trials were conducted. The outcome, pain, was assessed according to a pre-specified hierarchy of potentially available outcomes. Hedges's standardized mean difference [SMD (95% CI)] served as effect size. Restricted maximum likelihood (REML) mixed-effects models were used to combine study results, and heterogeneity was calculated and interpreted as tau-squared and I-squared, respectively. Overall, 99 studies (14,804 patients) met the inclusion criteria: of these, only 71 studies (72%), including 85 comparisons (11,216 patients), had adequate data available for inclusion in the primary meta-analysis. Overall, compared with placebo, intra-articular hyaluronic acid reduced pain with an effect size of -0.39 [-0.47 to -0.31; P hyaluronic acid. Based on available trial data, intra-articular hyaluronic acid showed a better effect than intra-articular saline on pain reduction in osteoarthritis. Publication bias and the risk of selective outcome reporting suggest only a small clinical effect compared to saline. Copyright © 2016 Elsevier Inc. All rights reserved.
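The pooling machinery behind this kind of meta-analysis can be sketched with a random-effects model on per-trial SMDs. The numbers below are made up for illustration (not the review's data), and the simpler DerSimonian-Laird moment estimator of tau-squared is used in place of the REML estimator the paper applied:

```python
import numpy as np

# illustrative per-trial summary data: standardized mean differences (SMD)
# and their sampling variances (invented, not the review's trials)
smd = np.array([-0.55, -0.30, -0.45, -0.10, -0.60])
var = np.array([0.010, 0.020, 0.015, 0.030, 0.025])

# fixed-effect weights and Cochran's Q heterogeneity statistic
w = 1.0 / var
fixed = np.sum(w * smd) / np.sum(w)
q = np.sum(w * (smd - fixed) ** 2)
df = len(smd) - 1

# DerSimonian-Laird tau^2 (moment estimator; the paper used REML)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# random-effects pooled SMD and I^2 heterogeneity percentage
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * smd) / np.sum(w_re)
i2 = max(0.0, (q - df) / q) * 100.0
```

Meta-regression then goes one step further, replacing the pooled intercept with a regression of the trial SMDs on trial-level covariates (e.g. blinding, trial size) using the same inverse-variance-plus-tau-squared weights.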

  20. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  1. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► Seasonal item in the data series is verified by the Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by the quartile value comparison. ► Paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economy, accurate load demand forecasting is crucial. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted using information from days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified by the Kendall τ correlation test. Then, on the premise that forecasting the seasonal item and the trend item separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for the seasonal and trend items, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique significantly improves accuracy even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
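
    As a hedged sketch of the separate-forecasting idea (not the authors' SEAM implementation), the toy example below removes a simple multiplicative weekly seasonal item, fits a linear regression to the remaining trend item, and recombines the two for the forecast:

```python
import numpy as np

def seasonal_factors(y, period=7):
    # average ratio of each weekday to the overall mean (multiplicative seasonality)
    idx = np.arange(len(y)) % period
    overall = y.mean()
    return np.array([y[idx == k].mean() / overall for k in range(period)])

def forecast(y, horizon=7, period=7):
    f = seasonal_factors(y, period)
    deseason = y / f[np.arange(len(y)) % period]   # seasonal item removed
    t = np.arange(len(y))
    b, a = np.polyfit(t, deseason, 1)              # linear regression on the trend item
    t_future = np.arange(len(y), len(y) + horizon)
    trend = a + b * t_future
    return trend * f[t_future % period]            # re-apply seasonal factors

# synthetic daily load: upward trend plus a weekly pattern
t = np.arange(140)
y = (100 + 0.5 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 7))
print(forecast(y, horizon=7).round(1))
```

The same skeleton accepts any trend model in place of `polyfit`, which mirrors the paper's use of several different regression models for the trend item.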

  2. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....

  3. Comparing lagged linear correlation, lagged regression, Granger causality, and vector autoregression for uncovering associations in EHR data.

    Science.gov (United States)

    Levine, Matthew E; Albers, David J; Hripcsak, George

    2016-01-01

    Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
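
    The univariate-versus-multivariate contrast described above can be sketched on synthetic data; the drug/lab series and coefficients below are made up for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
drug = (rng.random(n) < 0.1).astype(float)       # daily drug-order indicator
lab = np.zeros(n)
for t in range(2, n):
    # lab value persists (autoregressive term) and rises two days after a drug order
    lab[t] = 0.6 * lab[t - 1] + 1.5 * drug[t - 2] + rng.normal(scale=0.5)

def lagged_ols(y, x_lags):
    # ordinary least squares with an intercept and user-supplied lagged predictors
    X = np.column_stack([np.ones(len(y))] + x_lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# multivariate lagged regression: drug lags 1..3 plus an autoregressive lab term
t0 = 3
y = lab[t0:]
coefs = lagged_ols(y, [drug[t0 - k:n - k] for k in (1, 2, 3)] + [lab[t0 - 1:n - 1]])
print(coefs.round(2))  # intercept, drug lags 1-3, AR(1) term
```

With the autoregressive term included, the lag-2 drug coefficient is recovered near its true value while the other lags stay near zero, the kind of sharper signal the abstract attributes to the multivariate models.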

  4. Incidental Risk of Type 2 Diabetes Mellitus among Patients with Confirmed and Unconfirmed Prediabetes.

    Directory of Open Access Journals (Sweden)

    Kimberly D Brunisholz

    Full Text Available To determine the risk of type 2 diabetes (T2DM) diagnosis among patients with confirmed and unconfirmed prediabetes (preDM) relative to an at-risk group receiving care from primary care physicians over a 5-year period. Utilizing data from the Intermountain Healthcare (IH) Enterprise Data Warehouse (EDW) from 2006-2013, we performed a prospective analysis using discrete survival analysis to estimate the time to diagnosis of T2DM among groups. Adult patients who had at least one outpatient visit with a primary care physician during 2006-2008 at an IH clinic and subsequent visits through 2013. Patients were included for the study if they were (a) at-risk for diabetes (BMI ≥ 25 kg/m2 and one additional risk factor: high risk ethnicity, first degree relative with diabetes, elevated triglycerides or blood pressure, low HDL, diagnosis of gestational diabetes or polycystic ovarian syndrome, or birth of a baby weighing >9 lbs); or (b) confirmed preDM (HbA1c ≥ 5.7-6.49% or fasting blood glucose 100-125 mg/dL); or (c) unconfirmed preDM (documented fasting lipid panel and glucose 100-125 mg/dL on the same day). Of the 33,838 patients who were eligible for the study, 57.0% were considered at-risk, 38.4% had unconfirmed preDM, and 4.6% had confirmed preDM. Those with unconfirmed and confirmed preDM tended to be Caucasian, and a greater proportion were obese compared to those at-risk for disease. Patients with unconfirmed and confirmed preDM tended to have more prevalent high blood pressure and depression as compared to the at-risk group. Based on the discrete survival analyses, patients with unconfirmed preDM and confirmed preDM were more likely to develop T2DM when compared to at-risk patients. Unconfirmed and confirmed preDM are strongly associated with the development of T2DM as compared to patients with only risk factors for disease.

  5. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest...

  6. [From clinical judgment to linear regression model.

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we assume these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
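
    The regression line and R² described above can be computed directly from the least-squares formulas; a minimal sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # e.g. dose
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])        # e.g. response

# slope b = Sxy / Sxx, intercept a = mean(y) - b * mean(x)
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()                     # regression line: Y = a + b*x

y_hat = a + b * x
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(a, 3), round(b, 3), round(r2, 3))   # prints: 0.15 1.95 0.998
```

Here b = 1.95 means the response rises by 1.95 units for each one-unit increase in x, and R² near 1 indicates the predictor explains nearly all of the outcome's variation.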

  7. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

    Science.gov (United States)

    Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

    2013-04-01

    Economic evaluation analyses can be enhanced by employing regression methods, which allow for the identification of important sub-groups, adjustment for imperfect randomisation in clinical trials, and analysis of non-randomised data. To explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed, and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. Results: The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the threshold used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need to incorporate econometric methods into cost-effectiveness analyses using the NB approach.

  8. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  9. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
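
    The MUR estimator itself is derived in the paper; as background, a hedged sketch of the ordinary ridge regression (ORR) baseline it builds on, beta = (X'X + kI)^(-1) X'y, applied to a deliberately near-collinear design:

```python
import numpy as np

def ridge(X, y, k):
    # ordinary ridge regression (ORR): beta = (X'X + kI)^(-1) X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])  # near-collinear design
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)

print(ridge(X, y, k=0.0).round(2))   # OLS (k = 0): unstable under collinearity
print(ridge(X, y, k=1.0).round(2))   # shrunken, more stable coefficients
```

With k = 0 the individual coefficients are poorly determined (only their sum is), while a positive ridge constant k pulls them toward a stable, nearly equal split, the bias-for-variance trade that ridge-type estimators such as MUR refine.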

  10. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to its weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
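
    The ENLR models above are specific to the paper; as a generic illustration of an elastic-net penalty (L1 plus L2) on least squares, the sketch below solves the problem by proximal gradient descent on synthetic data (hyperparameters and data are made up):

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, lr=0.01, iters=5000):
    """Minimize 0.5/n * ||y - Xb||^2 + alpha * (l1_ratio * ||b||_1
    + 0.5 * (1 - l1_ratio) * ||b||_2^2) by proximal gradient descent."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        # gradient of the smooth part (squared loss + L2 penalty)
        grad = X.T @ (X @ b - y) / n + alpha * (1 - l1_ratio) * b
        b = b - lr * grad
        thr = lr * alpha * l1_ratio                  # soft-threshold (L1 prox)
        b = np.sign(b) * np.maximum(np.abs(b) - thr, 0.0)
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0]) + 0.1 * rng.normal(size=200)
print(elastic_net(X, y).round(2))  # sparse: near [2, 0, 0, -1.5, 0] with shrinkage
```

The L1 part of the penalty zeroes out the irrelevant coefficients while the L2 part shrinks and stabilizes the active ones, the combination the ENLR framework exploits on projection matrices.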

  11. Identification of Sexually Abused Female Adolescents at Risk for Suicidal Ideations: A Classification and Regression Tree Analysis

    Science.gov (United States)

    Brabant, Marie-Eve; Hebert, Martine; Chagnon, Francois

    2013-01-01

    This study explored the clinical profiles of 77 female teenager survivors of sexual abuse and examined the association of abuse-related and personal variables with suicidal ideations. Analyses revealed that 64% of participants experienced suicidal ideations. Findings from classification and regression tree analysis indicated that depression,…

  12. Diagnostic Algorithm to Reflect Regressive Changes of Human Papilloma Virus in Tissue Biopsies

    Science.gov (United States)

    Lhee, Min Jin; Cha, Youn Jin; Bae, Jong Man; Kim, Young Tae

    2014-01-01

    Purpose Landmark indicators have yet to be developed to detect the regression of cervical intraepithelial neoplasia (CIN). We propose that quantitative viral load and indicative histological criteria can be used to differentiate between atypical squamous cells of undetermined significance (ASCUS) and a CIN of grade 1. Materials and Methods We collected 115 tissue biopsies from women who tested positive for the human papilloma virus (HPV). Nine morphological parameters including nuclear size, perinuclear halo, hyperchromasia, typical koilocyte (TK), abortive koilocyte (AK), bi-/multi-nucleation, keratohyaline granules, inflammation, and dyskeratosis were examined for each case. Correlation analyses, cumulative logistic regression, and binary logistic regression were used to determine optimal cut-off values of HPV copy numbers. The parameters TK, perinuclear halo, multi-nucleation, and nuclear size were significantly and quantitatively correlated with HPV copy number. Results An HPV loading number of 58.9 and AK number of 20 were optimal to discriminate between negative and subtle findings in biopsies. An HPV loading number of 271.49 and AK of 20 were optimal for discriminating between equivocal changes and obvious koilocytosis. Conclusion We propose that a squamous epithelial lesion with AK of >20 and quantitative HPV copy number between 58.9-271.49 represents a new spectrum of subtle pathological findings, characterized by AK in ASCUS. This can be described as a distinct entity and called "regressing koilocytosis". PMID:24532500

  13. Appropriate assessment of neighborhood effects on individual health: integrating random and fixed effects in multilevel logistic regression

    DEFF Research Database (Denmark)

    Larsen, Klaus; Merlo, Juan

    2005-01-01

    The logistic regression model is frequently used in epidemiologic studies, yielding odds ratio or relative risk interpretations. Inspired by the theory of linear normal models, the logistic regression model has been extended to allow for correlated responses by introducing random effects. However, the model does not inherit the interpretational features of the normal model. In this paper, the authors argue that the existing measures are unsatisfactory (and some of them are even improper) when quantifying results from multilevel logistic regression analyses. The authors suggest a measure of heterogeneity, the median odds ratio, that quantifies cluster heterogeneity and facilitates a direct comparison between covariate effects and the magnitude of heterogeneity in terms of well-known odds ratios. Quantifying cluster-level covariates in a meaningful way is a challenge in multilevel logistic...
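
    The median odds ratio proposed above has a closed form in terms of the cluster-level random-intercept variance σ² of the multilevel logistic model, MOR = exp(√(2σ²) · Φ⁻¹(0.75)); a minimal standard-library sketch:

```python
from statistics import NormalDist
import math

def median_odds_ratio(sigma2):
    """Median odds ratio (MOR) from the cluster-level random-intercept
    variance sigma2 of a multilevel logistic regression model:
    MOR = exp( sqrt(2 * sigma2) * Phi^-1(0.75) )."""
    z75 = NormalDist().inv_cdf(0.75)           # ≈ 0.6745
    return math.exp(math.sqrt(2 * sigma2) * z75)

print(round(median_odds_ratio(0.0), 2))   # 1.0 -> no cluster heterogeneity
print(round(median_odds_ratio(0.5), 2))   # prints 1.96
```

A MOR of 1 means clusters are homogeneous; a MOR of, say, 1.96 means that for two randomly picked clusters, the median increase in odds from moving to the higher-risk cluster is comparable to an odds ratio of 1.96, directly comparable to covariate odds ratios.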

  14. The occurrence of Toxocara malaysiensis in cats in China, confirmed by sequence-based analyses of ribosomal DNA.

    Science.gov (United States)

    Li, Ming-Wei; Zhu, Xing-Quan; Gasser, Robin B; Lin, Rui-Qing; Sani, Rehana A; Lun, Zhao-Rong; Jacobs, Dennis E

    2006-10-01

    Non-isotopic polymerase chain reaction (PCR)-based single-strand conformation polymorphism and sequence analyses of the second internal transcribed spacer (ITS-2) of nuclear ribosomal DNA (rDNA) were utilized to genetically characterise ascaridoids from dogs and cats from China by comparison with those from other countries. The study showed that Toxocara canis, Toxocara cati, and Toxascaris leonina from China were genetically the same as those from other geographical origins. Specimens from cats from Guangzhou, China, which were morphologically consistent with Toxocara malaysiensis, were the same genetically as those from Malaysia, with the exception of a polymorphism in the ITS-2 but no unequivocal sequence difference. This is the first report of T. malaysiensis in cats outside of Malaysia (from where it was originally described), supporting the proposal that this species has a broader geographical distribution. The molecular approach employed provides a powerful tool for elucidating the biology, epidemiology, and zoonotic significance of T. malaysiensis.

  15. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  16. Predictors of postoperative outcomes of cubital tunnel syndrome treatments using multiple logistic regression analysis.

    Science.gov (United States)

    Suzuki, Taku; Iwamoto, Takuji; Shizu, Kanae; Suzuki, Katsuji; Yamada, Harumoto; Sato, Kazuki

    2017-05-01

    This retrospective study was designed to investigate prognostic factors for postoperative outcomes for cubital tunnel syndrome (CubTS) using multiple logistic regression analysis with a large number of patients. Eighty-three patients with CubTS who underwent surgeries were enrolled. The following potential prognostic factors for disease severity were selected according to previous reports: sex, age, type of surgery, disease duration, body mass index, cervical lesion, presence of diabetes mellitus, Workers' Compensation status, preoperative severity, and preoperative electrodiagnostic testing. Postoperative severity of disease was assessed 2 years after surgery by Messina's criteria which is an outcome measure specifically for CubTS. Bivariate analysis was performed to select candidate prognostic factors for multiple linear regression analyses. Multiple logistic regression analysis was conducted to identify the association between postoperative severity and selected prognostic factors. Both bivariate and multiple linear regression analysis revealed only preoperative severity as an independent risk factor for poor prognosis, while other factors did not show any significant association. Although conflicting results exist regarding prognosis of CubTS, this study supports evidence from previous studies and concludes early surgical intervention portends the most favorable prognosis. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  17. Application of Robust Regression and Bootstrap in Poductivity Analysis of GERD Variable in EU27

    Directory of Open Access Journals (Sweden)

    Dagmar Blatná

    2014-06-01

    Full Text Available The GERD is one of the Europe 2020 headline indicators being tracked within the Europe 2020 strategy. The headline indicator is the 3% target for the GERD to be reached within the EU by 2020. Eurostat defines “GERD” as total gross domestic expenditure on research and experimental development as a percentage of GDP. GERD depends on numerous factors of a general economic background, namely of employment, innovation and research, science and technology. The values of these indicators vary among the European countries, and consequently the occurrence of outliers can be anticipated in corresponding analyses. In such a case, a classical statistical approach – the least squares method – can be highly unreliable, the robust regression methods representing an acceptable and useful tool. The aim of the present paper is to demonstrate the advantages of robust regression and the applicability of the bootstrap approach in regression based on both classical and robust methods.

  18. Prediction of hearing outcomes by multiple regression analysis in patients with idiopathic sudden sensorineural hearing loss.

    Science.gov (United States)

    Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki

    2014-12-01

    This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable with 70% probability for a 40-dB-wide prediction interval. Prediction of HLpost by the multiple regression model may be useful to estimate the hearing prognosis of ISSNHL. © The Author(s) 2014.

  19. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

  20. Finite element analyses of a linear-accelerator electron gun

    Science.gov (United States)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  1. Finite element analyses of a linear-accelerator electron gun

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  2. Finite element analyses of a linear-accelerator electron gun

    International Nuclear Information System (INIS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator

  3. EMD-regression for modelling multi-scale relationships, and application to weather-related cardiovascular mortality

    Science.gov (United States)

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.

    2018-01-01

    In a number of environmental studies, relationships between natural processes are often assessed through regression analyses, using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists in applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity of the data series. Second, this approach acts as a scan for the relationship between a response variable and the predictors at different time scales, providing new insights about this relationship. To illustrate the proposed methodology, it is applied to study the relationship between weather and cardiovascular mortality in Montreal, Canada. The results provide new knowledge concerning the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, which is a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more details than classical models concerning the relationship.
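
    EMD itself requires an iterative sifting algorithm (dedicated packages exist for it); as a stand-in to illustrate the regression-on-components idea, the sketch below splits a synthetic "weather" series into a slow and a fast component with a simple moving average (explicitly not EMD, whose components are intrinsic mode functions) and enters the labeled components as separate predictors:

```python
import numpy as np

def two_scale(x, window=30):
    # crude stand-in for EMD: slow component = centered moving average,
    # fast component = residual (a real analysis would use EMD's IMFs)
    kernel = np.ones(window) / window
    slow = np.convolve(x, kernel, mode="same")
    return slow, x - slow

rng = np.random.default_rng(3)
t = np.arange(1000)
temp = 10 * np.sin(2 * np.pi * t / 365) + rng.normal(size=1000)  # synthetic series
slow, fast = two_scale(temp)
# response depends on the slow scale only (coefficient 0.5), not the fast one
y = 0.5 * slow + rng.normal(scale=0.5, size=1000)

X = np.column_stack([np.ones(1000), slow, fast])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # slow-scale effect recovered, fast-scale near zero
```

Regressing on the raw series would blur the two scales together; decomposing first lets the fit attribute the effect to the scale where it actually operates, which is the insight behind the humidity-at-the-monthly-scale finding above.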

  4. Bacteriologically confirmed tuberculosis in children

    International Nuclear Information System (INIS)

    Ozere, I.; Sele, A.; Ozolina, A.

    2005-01-01

    Tuberculosis in children and adults is a growing problem with important public health implications. In Latvia the incidence of tuberculosis (TB) in children up to age 14 has increased from 7.1 per 100,000 in 1992 to 28.8 per 100,000 in 2003. The diagnosis of TB is confirmed by isolation and identification of M. tuberculosis (MT) from a clinical specimen. Confirmation of the disease, however, is difficult in children due to poor bacilli excretion, and even under the best circumstances only about 30-40% of pediatric TB cases are proved bacteriologically. Of the 370 pediatric TB cases diagnosed between January 1, 2001 and December 1, 2003 in Latvia, 53 (14.3%) were confirmed bacteriologically. The clinical, radiological, immunological and anamnestic features of confirmed TB can serve as cornerstones for diagnosing TB when culture is not available. Objective: To evaluate the sensitivity of the diagnostic criteria of TB, the clinical and radiological manifestation of TB, and the drug susceptibility of the MT isolates. Methods: All consecutive children (53 in total) up to age 14 diagnosed with bacteriologically confirmed TB during 01.01.2001-01.12.2003 were prospectively evaluated for the reasons mentioned above. Results: Of the 53 children identified, all but one had respiratory tract TB. 17 (32.1%) children were under 4 years of age, 9 (17%) were 5-9 years old, and 27 (50.9%) were 10-14 years old. During evaluation, data on a TB source case were additionally found in 13 children, and the total TB contact history was positive in 37 (69.8%) patients. All clinical and radiographical forms of respiratory tract TB were diagnosed. The most commonly encountered forms were intrathoracic adenopathy in 10 (18.9%) cases and TB pneumonia in 6 (11.3%) cases in children aged 10-14 years. Intrathoracic adenopathy associated with a segmental parenchymal lesion was the most common form in children under 4 years of age, with 7 (13.2%) cases. Conclusions: 1. The clinical and radiological

  5. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  6. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
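
    The classification idea sketched above can be illustrated with a minimal binary logistic regression fitted by gradient descent. The data (a single "hours of study" predictor with pass/fail group membership) is hypothetical and purely illustrative:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = sigmoid(b0 + b1*x) by batch gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)          # gradient of the log-loss w.r.t. b0
            g1 += (p - y) * x      # gradient w.r.t. b1
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical hours of study vs. membership in the "pass" group.
xs = [0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 4.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]
b0, b1 = fit_logistic(xs, ys)

def classify(x):
    """Assign the dichotomous group with predicted probability >= 0.5."""
    p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
    return 1 if p >= 0.5 else 0

print([classify(x) for x in [1.0, 4.0]])
```

    The fitted model assigns each individual to the group whose predicted probability exceeds 0.5, which is exactly the dichotomous classification the abstract describes.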

  7. Predictors of course in obsessive-compulsive disorder: logistic regression versus Cox regression for recurrent events.

    Science.gov (United States)

    Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M

    2007-09-01

    Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.

  8. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predictive performance of multiple-response regression relative to separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  9. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2014-01-01

    Improving the predictive performance of multiple-response regression relative to separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  10. PERFORMANCE CONFIRMATION IN-SITU INSTRUMENTATION

    International Nuclear Information System (INIS)

    N.T. Raczka

    2000-01-01

    The purpose of this document is to identify and analyze the types of in-situ instruments and methods that could be used in support of the data acquisition portion of the Performance Confirmation (PC) program at the potential nuclear waste repository at Yucca Mountain. The PC program will require geomechanical, geophysical, thermal, and hydrologic instrumentation of several kinds. This analysis is being prepared to document the technical issues associated with each type of measurement during the PC period. This analysis utilizes the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a) as its starting point. The scope of this analysis is primarily on the period after the start of waste package emplacement and before permanent closure of the repository, a period lasting between 15 and 300 years after last package emplacement (Stroupe 2000, Attachment 1, p. 1). The primary objectives of this analysis are to: (1) Review the design criteria as presented in the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a). The scope of this analysis will be limited to the instrumentation related to parameters that require continuous monitoring of the conditions underground. (2) Preliminary identification and listing of the data requirements and parameters as related to the current repository layout in support of PC monitoring. (3) Preliminary identification of methods and instrumentation for the acquisition of the required data. Although the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a) defines a broad range of data that must be obtained from a variety of methods, the focus of this analysis is on instrumentation related to the performance of the rock mass and the formation of water in the repository environment that is obtainable from in-situ observation, testing, and monitoring.

  11. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis in the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map land cover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, land cover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database, and the logistic regression coefficient of each factor was computed. The landslide hazard was then analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also from the logistic regression coefficients calculated for each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases applying the logistic regression coefficients within the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients to the other two areas, the case of Selangor based on the logistic coefficients of Cameron showed the highest (90%) prediction accuracy, whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross

  12. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, as the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. A diagnostic method based on a regression-type diagnostic data base and on regression spectral diagnostics is described, as is the diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)
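
    As a minimal illustration of the autoregression idea, the sketch below fits an AR(2) model to a synthetic, noiseless signal by ordinary least squares. The signal and its coefficients are hypothetical, chosen so the recursion is stable (a damped oscillation), not taken from any reactor data:

```python
def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] (no intercept).
    The 2x2 normal equations are solved by Cramer's rule."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        u, v, y = x[t - 1], x[t - 2], x[t]
        s11 += u * u; s12 += u * v; s22 += v * v
        b1 += u * y;  b2 += v * y
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# Synthetic "diagnostic signal": a noiseless, stable AR(2) recursion.
x = [1.0, 0.9]
for t in range(2, 50):
    x.append(1.6 * x[-1] - 0.8 * x[-2])

a1, a2 = fit_ar2(x)
print(round(a1, 3), round(a2, 3))  # recovers ~1.6 and ~-0.8
```

    Because the synthetic signal satisfies the recursion exactly, least squares recovers the generating coefficients; with real noisy signals the same normal equations yield the best-fitting AR coefficients instead.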

  13. Performance Confirmation Data Acquisition System

    International Nuclear Information System (INIS)

    D.W. Markman

    2000-01-01

    The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M&O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and for in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts for a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application.

  14. A New Way to Confirm Planet Candidates

    Science.gov (United States)

    Kohler, Susanna

    2016-05-01

    What was the big deal behind the Kepler news conference yesterday? It's not just that the number of confirmed planets found by Kepler has more than doubled (though that's certainly exciting news!). What's especially interesting is the way in which these new planets were confirmed. Number of planet discoveries by year since 1995, including previous non-Kepler discoveries (blue), previous Kepler discoveries (light blue) and the newly validated Kepler planets (orange). [NASA Ames/W. Stenzel; Princeton University/T. Morton] No Need for Follow-Up. Before Kepler, the way we confirmed planet candidates was with follow-up observations. The candidate could be validated either by direct imaging (which is rare) or by obtaining a large number of radial-velocity measurements of the wobble of the planet's host star due to the planet's orbit. But once Kepler started producing planet candidates, these approaches to validation became less feasible. A lot of Kepler candidates are small and orbit faint stars, making follow-up observations difficult or impossible. This problem is what inspired the development of what's known as probabilistic validation, an analysis technique that involves assessing the likelihood that the candidate's signal is caused by various false-positive scenarios. Using this technique allows astronomers to estimate the likelihood of a candidate signal being a true planet detection; if that likelihood is high enough, the planet candidate can be confirmed without the need for follow-up observations. A breakdown of the catalog of Kepler Objects of Interest: just over half had previously been identified as false positives or confirmed as candidates, 1284 are newly validated, and another 455 have FPP of 10-90%. [Morton et al. 2016] Probabilistic validation has been used in the past to confirm individual planet candidates in Kepler data, but now Timothy Morton (Princeton University) and collaborators have taken this to a new level: they developed the first code that's designed to do fully

  15. Investigating the Determinants of Toxoplasma gondii Prevalence in Meat: A Systematic Review and Meta-Regression.

    Directory of Open Access Journals (Sweden)

    Simone Belluco

    Full Text Available Toxoplasma gondii is one of the most widespread parasites in humans and can cause severe illness in immunocompromised individuals. However, its role in healthy people is probably under-appreciated. The complex epidemiology of this protozoan recognizes several infection routes, but consumption of contaminated food is likely to be the predominant one. Among foods, consumption of raw and undercooked meat is a relevant route of transmission, but the role of different meat-producing animal species and the meats thereof is controversial. The aim of the present work is to summarize and analyse literature data reporting prevalence estimates of T. gondii in meat animals/meats. We searched Medline, Web of Science, and Science Direct (last update 31/03/2015). Relevant papers should report data from primary studies dealing with the prevalence of T. gondii in meat from livestock species as obtained through direct detection methods. Meta-analysis and meta-regression were performed. Of 1915 papers screened, 69 papers were included, dealing mainly with cattle, pigs and sheep. Pooled prevalences, based on random-effect models, were 2.6% (CI95 [0.5-5.8]) for cattle, 12.3% (CI95 [7.6-17.8]) for pigs and 14.7% (CI95 [8.9-21.5]) for sheep. Due to the high heterogeneity observed, univariable and multivariable meta-regression models were fitted, showing that the geographic area for cattle (p = 0.032), the farming type for pigs (p = 0.0004) and the sample composition for sheep (p = 0.03) had significant effects on the prevalences of Toxoplasma detected/estimated. Moreover, the role of different animal species was dependent on the geographic location of the animals' origin. Limitations were due mainly to a possible publication bias. The present work confirms the role of meat, including beef, as a T. gondii source, and highlights the need for a control system for this parasite to be implemented along the meat production chain. Moreover, consumer knowledge should be strengthened in order to reduce
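
    The pooled prevalences quoted above come from random-effects meta-analysis; a minimal sketch of DerSimonian-Laird pooling is shown below. The study counts are hypothetical illustrations, not figures from the review:

```python
def dl_pooled_prevalence(studies):
    """DerSimonian-Laird random-effects pooling of study prevalences.
    `studies` is a list of (cases, sample_size) pairs."""
    p, v = [], []
    for cases, n in studies:
        pi = cases / n
        p.append(pi)
        v.append(pi * (1 - pi) / n)               # within-study variance
    w = [1 / vi for vi in v]                      # fixed-effect weights
    sw = sum(w)
    p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sw
    q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))   # Cochran's Q
    k = len(studies)
    # Method-of-moments estimate of the between-study variance tau^2.
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    ws = [1 / (vi + tau2) for vi in v]            # random-effects weights
    return sum(wi * pi for wi, pi in zip(ws, p)) / sum(ws)

# Hypothetical prevalence studies: (positive samples, sample size).
pooled = dl_pooled_prevalence([(12, 100), (30, 200), (5, 150)])
print(round(pooled * 100, 1))  # pooled prevalence, in percent
```

    The between-study variance inflates every study's weight denominator, so heterogeneous studies are down-weighted less aggressively than under a fixed-effect model, which is why the review's heterogeneous data call for the random-effect pooling reported.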

  16. Regression and Sparse Regression Methods for Viscosity Estimation of Acid Milk From it’s Sls Features

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Skytte, Jacob Lercke; Nielsen, Otto Højager Attermann

    2012-01-01

    Statistical solutions find widespread use in food and medicine quality control. We investigate the effect of different regression and sparse regression methods on a viscosity estimation problem using the spectro-temporal features from a new Sub-Surface Laser Scattering (SLS) vision system. From...... with sparse LAR, lasso and Elastic Net (EN) sparse regression methods. Due to the inconsistent measurement condition, Locally Weighted Scatter plot Smoothing (Loess) has been employed to alleviate the undesired variation in the estimated viscosity. The experimental results of applying different methods show...

  17. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  18. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  19. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar.

    Science.gov (United States)

    Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald

    2006-11-01

    We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.
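
    The uniform-DIF criterion described above (checking whether adding the group term markedly shifts the ability coefficient) can be sketched with two small logistic regressions fitted by gradient descent. The item responses below are hypothetical, constructed so that group membership is correlated with ability and also shifts item difficulty:

```python
import math

def fit_logit(X, y, lr=0.05, epochs=4000):
    """Logistic regression by gradient descent; each row of X starts with 1."""
    k = len(X[0])
    beta = [0.0] * k
    n = len(X)
    for _ in range(epochs):
        grad = [0.0] * k
        for row, yi in zip(X, y):
            z = sum(b * x for b, x in zip(beta, row))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(k):
                grad[j] += (p - yi) * row[j]
        for j in range(k):
            beta[j] -= lr * grad[j] / n
    return beta

# Hypothetical item responses: group 1 has higher ability overall AND finds
# the item harder, so omitting the group term distorts the ability slope.
ability = [-3, -2, -1, 0, 1, 2,  1, 2, 3, 4, 5, 6]
group   = [ 0,  0,  0, 0, 0, 0,  1, 1, 1, 1, 1, 1]
item    = [ 0,  0,  0, 1, 1, 1,  0, 0, 0, 1, 1, 1]

b_without = fit_logit([[1, a] for a in ability], item)
b_with    = fit_logit([[1, a, g] for a, g in zip(ability, group)], item)
print(round(b_without[1], 2), round(b_with[1], 2))
```

    In this constructed example the ability coefficient changes noticeably once the group indicator enters the model, which under the authors' criterion would flag the item for uniform DIF.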

  20. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.
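
    For the q = 1 case (the LASSO), the nonsmoothness of the absolute value is routinely handled with the soft-thresholding proximal operator. The sketch below, on hypothetical toy data, minimises the L1-penalized least-squares objective by iterative soft-thresholding (ISTA); this is a different route from the space-transformation method proposed in the paper, shown only to make the objective concrete:

```python
def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward zero by t."""
    if z > t:  return z - t
    if z < -t: return z + t
    return 0.0

def lasso_ista(X, y, lam, step=0.01, iters=5000):
    """Minimise (1/2)*||y - X b||^2 + lam*||b||_1 by proximal gradient
    descent (ISTA); `step` must be below 1/L for the Lipschitz constant L."""
    n, k = len(X), len(X[0])
    b = [0.0] * k
    for _ in range(iters):
        r = [sum(X[i][j] * b[j] for j in range(k)) - y[i] for i in range(n)]
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(k)]
        b = [soft_threshold(b[j] - step * g[j], step * lam) for j in range(k)]
    return b

# y depends only on the first feature (roughly y = 2*x1); the second
# feature is irrelevant, so the penalty should zero out its coefficient.
X = [[1.0, 0.2], [2.0, -0.1], [3.0, 0.3], [4.0, -0.2], [5.0, 0.1]]
y = [2.1, 3.9, 6.2, 7.8, 10.0]
b = lasso_ista(X, y, lam=1.0)
print([round(v, 2) for v in b])
```

    The shrinkage step sets the irrelevant coefficient exactly to zero, illustrating how an L1 penalty achieves both accuracy and parsimony at once.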

  1. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structural and pharmacological diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  2. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others: GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  3. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals that show whether a customer will continue to utilise a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given the ways in which passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, we believe that this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted in order to collect the opinions of the passengers about the existing transit service, and to obtain a measure of the aspects that can influence the intentions of the users to continue using the transit service in the future. (Author)

  4. Spontaneous regression of a congenital melanocytic nevus

    Directory of Open Access Journals (Sweden)

    Amiya Kumar Nath

    2011-01-01

    Full Text Available A congenital melanocytic nevus (CMN) may rarely regress, and the regression may be associated with a halo or vitiligo. We describe a 10-year-old girl who presented with a CMN on the left leg since birth, which recently started to regress spontaneously with associated depigmentation in the lesion and at a distant site. Dermoscopy performed at different sites of the regressing lesion demonstrated loss of epidermal pigments first, followed by loss of dermal pigments. Histopathology and Masson-Fontana staining demonstrated lymphocytic infiltration and loss of pigment production in the regressing area. Immunohistochemistry staining (S100 and HMB-45), however, showed that nevus cells were present in the regressing areas.

  5. DNBR Prediction Using a Support Vector Regression

    International Nuclear Information System (INIS)

    Yang, Heon Young; Na, Man Gyun

    2008-01-01

    PWRs (Pressurized Water Reactors) generally operate in the nucleate boiling state. However, the conversion of nucleate boiling into film boiling, with conspicuously reduced heat transfer, induces a boiling crisis that may cause the fuel clad to melt in the long run. This type of boiling crisis is called the Departure from Nucleate Boiling (DNB) phenomenon. Because the prediction of the minimum DNBR in a reactor core is very important to prevent a boiling crisis such as clad melting, a lot of research has been conducted to predict DNBR values. The object of this research is to predict the minimum DNBR by applying support vector regression (SVR) to the measured signals of a reactor coolant system (RCS). SVR has been extensively and successfully applied to nonlinear function approximation, like the proposed problem of estimating DNBR values as a function of various input variables such as reactor power, reactor pressure, core mass flowrate, control rod positions and so on. The minimum DNBR in a reactor core is predicted using these various operating condition data as the inputs to the SVR. The minimum DNBR values predicted by the SVR confirm its correctness compared with COLSS values
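
    As a hedged sketch of the regression machinery involved: the paper's SVR would normally use kernels and a quadratic-programming solver, but the defining epsilon-insensitive loss can be illustrated with a linear model trained by subgradient descent on hypothetical one-dimensional data (nothing here reflects actual DNBR signals):

```python
def svr_fit(xs, ys, C=10.0, eps=0.1, lr=0.005, epochs=20000):
    """Linear epsilon-insensitive SVR, f(x) = w*x + b, by subgradient
    descent on  0.5*w^2 + C * sum(max(0, |f(x)-y| - eps))."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw, gb = w, 0.0                 # gradient of the 0.5*w^2 term
        for x, y in zip(xs, ys):
            err = w * x + b - y
            if err > eps:               # above the tube: push prediction down
                gw += C * x; gb += C
            elif err < -eps:            # below the tube: push prediction up
                gw -= C * x; gb -= C
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical noisy samples of roughly y = x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.05, 1.1, 1.95, 3.0, 4.02]
w, b = svr_fit(xs, ys)
print(round(w, 2), round(b, 2))
```

    Points inside the epsilon tube contribute no loss, so the fit is driven by the few "support" points at the tube boundary, the property that makes SVR robust for nonlinear function approximation when combined with kernels.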

  6. Apoptosis in the transplanted canine transmissible venereal tumor during growth and regression phases Apoptose no tumor venéreo transmissível canino durante as fases de crescimento e regressão

    Directory of Open Access Journals (Sweden)

    F.G.A. Santos

    2008-06-01

    Full Text Available Twelve male, mongrel, adult dogs were subcutaneously transplanted with cells originating from two canine transmissible venereal tumors (TVT). The aim was to demonstrate and quantify the occurrence of apoptosis during TVT regression. Six months after transplantation, a tumor sample was obtained from each dog, six dogs having TVT in the growth phase and six in the regression phase, as verified by daily measurements. Samples were processed for histological and ultrastructural examination as well as for DNA extraction. Sections of 4 µm were stained with HE, Shorr, methyl green pyronine and Van Gieson, subjected to the TUNEL reaction, and immunostained for P53. Morphometry of the Shorr-stained sections demonstrated an increase in apoptotic cells per field in the regressive tumors. This was also confirmed by transmission electron microscopy, which showed cells with the typical morphology of apoptosis, and by the TUNEL reaction, which detected in situ 3'OH nick end labeling mainly in the regressive tumors. The regressive TVTs also showed intensified immunostaining for P53 as well as more intense genomic DNA fragmentation detected by agarose gel electrophoresis. In conclusion, apoptosis has an important role in the regression of the experimental TVT, in a way that is P53-dependent.

  7. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  8. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
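
    A minimal sketch of the core computation in Gaussian process regression, the posterior mean under a squared-exponential kernel prior, is shown below on toy data (the book covers far more, including functional covariates and asymptotic theory; the kernel parameters here are arbitrary choices):

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Posterior mean of a GP regression with a squared-exponential kernel."""
    def k(a, b):
        # Pairwise kernel matrix for 1-D inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)      # K^{-1} y, solved stably
    return k(X_test, X_train) @ alpha        # k(x*, X) K^{-1} y

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
mean = gp_posterior_mean(X, y, np.array([1.5]))
print(float(mean[0]))  # lies close to sin(1.5)
```

    With near-zero observation noise the posterior mean interpolates the training points and gives a smooth nonparametric prediction between them, which is the basic mechanism the functional regression methods in the book build upon.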

  9. Is past life regression therapy ethical?

    Science.gov (United States)

    Andrade, Gabriel

    2017-01-01

    Past life regression therapy is used by some physicians in cases of certain mental disorders. Anxiety disorders, mood disorders, and gender dysphoria have all been treated using past life regression therapy by some doctors, on the assumption that they reflect problems in past lives. Although it is not supported by psychiatric associations, few medical associations have actually condemned it as unethical. In this article, I argue that past life regression therapy is unethical for two basic reasons. First, it is not evidence-based. Past life regression is based on the reincarnation hypothesis, but this hypothesis is not supported by evidence, and in fact, it faces some insurmountable conceptual problems. If patients are not fully informed about these problems, they cannot provide informed consent, and hence, the principle of autonomy is violated. Second, past life regression therapy carries a great risk of implanting false memories in patients, and thus, of causing significant harm. This violates the principle of non-maleficence, which is surely the most important principle in medical ethics.

  10. Scrapping of student bursaries confirmed.

    Science.gov (United States)

    Longhurst, Chris

    2016-07-27

    Student bursaries for nurses will be scrapped from next year, the government has confirmed. Undergraduate nursing and midwifery students in England will now face tuition fees and student loans from August 2017.

  11. Regression Models for Market-Shares

    DEFF Research Database (Denmark)

    Birch, Kristina; Olsen, Jørgen Kai; Tjur, Tue

    2005-01-01

    On the background of a data set of weekly sales and prices for three brands of coffee, this paper discusses various regression models and their relation to the multiplicative competitive-interaction model (the MCI model, see Cooper 1988, 1993) for market-shares. Emphasis is put on the interpretation of the parameters in relation to models for the total sales based on discrete choice models. Key words and phrases: MCI model, discrete choice model, market-shares, price elasticity, regression model.
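
    The MCI model becomes an ordinary linear regression after the standard log-centering transformation. The sketch below (hypothetical prices, and equal brand attractions for simplicity, an assumption the paper does not make) generates shares from the model and recovers the price exponent exactly:

```python
import math

def mci_shares(prices, beta, attract):
    """Market shares under the multiplicative competitive-interaction model:
    s_i = a_i * p_i^beta / sum_j a_j * p_j^beta."""
    u = [a * p ** beta for a, p in zip(attract, prices)]
    tot = sum(u)
    return [ui / tot for ui in u]

def estimate_beta(prices, shares):
    """Recover the price exponent via log-centering: with equal attractions,
    log(s_i / geo-mean(s)) = beta * (log p_i - mean log p)."""
    ls = [math.log(s) for s in shares]
    lp = [math.log(p) for p in prices]
    ls_c = [v - sum(ls) / len(ls) for v in ls]
    lp_c = [v - sum(lp) / len(lp) for v in lp]
    # Least-squares slope through the origin on the centered logs.
    return sum(a * b for a, b in zip(lp_c, ls_c)) / sum(v * v for v in lp_c)

prices = [1.0, 1.2, 1.5]
shares = mci_shares(prices, beta=-2.0, attract=[1.0, 1.0, 1.0])
print(round(estimate_beta(prices, shares), 3))  # recovers -2.0
```

    The normalizing denominator cancels under centering, which is why the MCI model, despite being nonlinear in shares, can be estimated with standard regression machinery.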

  12. The Analysis of Nonstationary Time Series Using Regression, Correlation and Cointegration with an Application to Annual Mean Temperature and Sea Level

    DEFF Research Database (Denmark)

    Johansen, Søren

    There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model-based inference. Finally, we analyse some data on annual mean temperature and sea level by applying the cointegrated vector autoregressive model, which explicitly takes into account the nonstationarity of the variables.

  13. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

    Science.gov (United States)

    Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

    2017-01-01

    Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: In the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group-split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R² from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R² improvement detected is only marginal. According to the sample-level performance measures used, omitting a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
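
    The two-step idea above can be reduced to a minimal numpy sketch: a hand-specified interaction stands in for the tree-detected morbidity-group split (all data and variable names here are invented for illustration), and adding the interaction term to the least-squares equation raises R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# two binary morbidity-group indicators (hypothetical)
m1 = rng.integers(0, 2, n)
m2 = rng.integers(0, 2, n)
# annual cost: additive effects plus an extra cost only when both morbidities co-occur
cost = 100 + 50 * m1 + 80 * m2 + 120 * (m1 & m2) + rng.normal(0, 30, n)

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X_main = np.column_stack([np.ones(n), m1, m2])            # main effects only
X_int  = np.column_stack([np.ones(n), m1, m2, m1 * m2])   # plus the detected interaction
r2_main, r2_int = r2(X_main, cost), r2(X_int, cost)
print(round(r2_main, 3), round(r2_int, 3))  # the interaction term raises R^2
```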

  14. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to the actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
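
    A minimal numpy sketch of the combined idea, on invented data (the paper's actual model and data are more elaborate): a linear regression of charging time on charged amount, with an AR(1) model of the residuals used to calibrate one-step-ahead predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600
kwh = rng.uniform(5, 40, n)                      # energy charged per session (hypothetical)
# latent AR(1) disturbance, e.g. slowly varying charger/battery conditions
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(0, 3)
minutes = 10 + 2.5 * kwh + e                     # session charging time

# fit the regression on the first 400 sessions
X = np.column_stack([np.ones(n), kwh])
beta, *_ = np.linalg.lstsq(X[:400], minutes[:400], rcond=None)
resid = minutes[:400] - X[:400] @ beta
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # AR(1) coefficient of the residuals

# one-step-ahead forecasts on the hold-out: regression alone vs. AR(1)-calibrated
plain = X[400:] @ beta
prev_resid = minutes[399:-1] - X[399:-1] @ beta
calibrated = plain + rho * prev_resid
rmse = lambda err: float(np.sqrt(np.mean(err ** 2)))
print(rmse(minutes[400:] - plain), rmse(minutes[400:] - calibrated))
```

The calibrated forecast has a smaller hold-out error because the time-series step absorbs the serially correlated part of the residual that the static regression cannot explain.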

  15. Recursive and non-linear logistic regression: moving on from the original EuroSCORE and EuroSCORE II methodologies.

    Science.gov (United States)

    Poullis, Michael

    2014-11-01

    EuroSCORE II, despite improving on the original EuroSCORE system, has not solved all the calibration and predictability issues. Recursive, non-linear and mixed recursive and non-linear regression analysis were assessed with regard to sensitivity, specificity and predictability of the original EuroSCORE and EuroSCORE II systems. The original logistic EuroSCORE, EuroSCORE II and recursive, non-linear and mixed recursive and non-linear regression analyses of these risk models were assessed via receiver operator characteristic curves (ROC) and Hosmer-Lemeshow statistic analysis with regard to the accuracy of predicting in-hospital mortality. Analysis was performed for isolated coronary artery bypass grafts (CABGs) (n = 2913), aortic valve replacement (AVR) (n = 814), mitral valve surgery (n = 340), combined AVR and CABG (n = 517), aortic (n = 350), miscellaneous cases (n = 642), and combinations of the above cases (n = 5576). The original EuroSCORE had an ROC below 0.7 for isolated AVR and combined AVR and CABG. None of the methods described increased the ROC above 0.7. The EuroSCORE II risk model had an ROC below 0.7 for isolated AVR only. Recursive regression, non-linear regression, and mixed recursive and non-linear regression all increased the ROC above 0.7 for isolated AVR. The original EuroSCORE had a Hosmer-Lemeshow statistic that was above 0.05 for all patients and the subgroups analysed. All of the techniques markedly increased the Hosmer-Lemeshow statistic. The EuroSCORE II risk model had a Hosmer-Lemeshow statistic that was significant for all patients (P linear regression failed to improve on the original Hosmer-Lemeshow statistic. The mixed recursive and non-linear regression using the EuroSCORE II risk model was the only model that produced an ROC of 0.7 or above for all patients and procedures and had a Hosmer-Lemeshow statistic that was highly non-significant. The original EuroSCORE and the EuroSCORE II risk models do not have adequate ROC and Hosmer

  16. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  17. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
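
    A toy numpy illustration of why a logic term can beat additive main effects when the phenotype is driven by a Boolean combination of genotypes (simulated data, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
g1 = rng.integers(0, 2, n)   # binary genotype indicators at two QTLs (hypothetical)
g2 = rng.integers(0, 2, n)
# trait elevated whenever at least one risk allele is present: a Boolean OR
y = 5.0 * (g1 | g2) + rng.normal(0, 1, n)

def r2(cols):
    X = np.column_stack([np.ones(n), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_additive = r2([g1, g2])       # Cockerham-style additive main effects
r2_logic    = r2([g1 | g2])      # a single logic term: g1 OR g2
print(round(r2_additive, 3), round(r2_logic, 3))
```

Here the single logic predictor matches the generating mechanism exactly, so it explains more variance with fewer parameters than the additive model.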

  18. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010) using Poisson regression.

    Science.gov (United States)

    Al-Hashimi, Muzahem Mohammed Yahya; Wang, Xiangjun

    2013-12-01

    Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of her sons were soldiers in the Iraqi army and participated in the wars. The data were derived from the Directorate of Health in Ninawa and divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses are performed using Poisson regressions. The response variable is the cancer incidence number. Cancer cases, age, sex, and years were considered as the explanatory variables. The logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence in Ninawa over these three periods. There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while lung, trachea, and bronchus cancers remained the second most common despite decreasing dramatically. There were modest increases in the incidence of prostate, penis, and other male genital cancers over the study period and stability in the incidence of colon cancer in the second and third periods. There were modest increases in the incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period but not to the first period. The cancer IRR in men was more than 33% higher than that of females in the first period, more than 39% in the second period, and regressed to 9.56% in the third
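
    The modelling setup described above (Poisson regression of incidence counts on period indicators, with the log of the population as an offset) can be sketched in numpy via IRLS. The counts and populations below are invented, not the Ninawa data; for a saturated categorical model the fitted IRRs equal the ratios of crude rates.

```python
import numpy as np

# illustrative (made-up) incidence counts and populations at risk for three periods
cases  = np.array([1200.0, 1500.0, 1100.0])
pop    = np.array([1.0e6, 1.4e6, 1.6e6])
period = np.array([0, 1, 2])

# design: intercept plus dummies for the second and third periods;
# log-population enters as an offset, so coefficients are log rate ratios
X = np.column_stack([np.ones(3), period == 1, period == 2]).astype(float)
offset = np.log(pop)

beta = np.zeros(3)
for _ in range(50):                     # IRLS (Newton) iterations for the Poisson GLM
    mu = np.exp(X @ beta + offset)
    z = X @ beta + (cases - mu) / mu    # working response
    W = mu                              # working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

irr = np.exp(beta[1:])                  # IRRs of periods 2 and 3 vs. period 1
print(np.round(irr, 4))
```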

  19. Comparing the cancer in Ninawa during three periods (1980-1990, 1991-2000, 2001-2010 using Poisson regression

    Directory of Open Access Journals (Sweden)

    Muzahem Mohammed Yahya AL-Hashimi

    2013-01-01

    Background: Iraq fought three wars in three consecutive decades: the Iran-Iraq war (1980-1988), the Persian Gulf War in 1991, and the Iraq war in 2003. From the nineties of the last century up to the present time, there have been anecdotal reports of an increase in cancer in Ninawa, as in all provinces of Iraq, possibly as a result of exposure to depleted uranium used by American troops in the last two wars. This paper deals with cancer incidence in Ninawa, the most important province in Iraq, where many of her sons were soldiers in the Iraqi army and participated in the wars. Materials and Methods: The data were derived from the Directorate of Health in Ninawa and divided into three sub-periods: 1980-1990, 1991-2000, and 2001-2010. The analyses are performed using Poisson regressions. The response variable is the cancer incidence number. Cancer cases, age, sex, and years were considered as the explanatory variables. The logarithm of the population of Ninawa is used as an offset. The aim of this paper is to model the cancer incidence data and estimate the cancer incidence rate ratio (IRR) to illustrate the changes in cancer incidence in Ninawa over these three periods. Results: There is evidence of a reduction in the cancer IRR in Ninawa in the third period as well as in the second period. Our analyses found that breast cancer remained the most common cancer, while lung, trachea, and bronchus cancers remained the second most common despite decreasing dramatically. There were modest increases in the incidence of prostate, penis, and other male genital cancers over the study period and stability in the incidence of colon cancer in the second and third periods. There were modest increases in the incidence of placenta and metastatic tumors, while the highest increase was in leukemia in the third period relative to the second period but not to the first period. The cancer IRR in men was more than 33% higher than that of females in the first period, more than 39

  20. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.

  1. Modelling the risk of Pb and PAH intervention value exceedance in allotment soils by robust logistic regression

    International Nuclear Information System (INIS)

    Papritz, A.; Reichard, P.U.

    2009-01-01

    Soils of allotments are often contaminated by heavy metals and persistent organic pollutants. In particular, lead (Pb) and polycyclic aromatic hydrocarbons (PAHs) frequently exceed legal intervention values (IVs). Allotments are popular in European countries; cities may own and let several thousand allotment plots. Assessing soil contamination for all the plots would be very costly. Soil contamination in allotments is often linked to gardening practice and historic land use. Hence, we predict the risk of IV exceedance from attributes that characterize the history and management of allotment areas (age, nearby presence of pollutant sources, prior land use). Robust logistic regression analyses of data of Swiss allotments demonstrate that the risk of IV exceedance can be predicted quite precisely without costly soil analyses. Thus, the new method allows screening many allotments at small costs, and it helps to deploy the resources available for soil contamination surveying more efficiently. - The contamination of allotment soils, expressed as frequency of intervention value exceedance, depends on the age and further attributes of the allotments and can be predicted by logistic regression.

  2. High-throughput DNA methylation analysis in anorexia nervosa confirms TNXB hypermethylation.

    Science.gov (United States)

    Kesselmeier, Miriam; Pütter, Carolin; Volckmar, Anna-Lena; Baurecht, Hansjörg; Grallert, Harald; Illig, Thomas; Ismail, Khadeeja; Ollikainen, Miina; Silén, Yasmina; Keski-Rahkonen, Anna; Bulik, Cynthia M; Collier, David A; Zeggini, Eleftheria; Hebebrand, Johannes; Scherag, André; Hinney, Anke

    2018-04-01

    Patients with anorexia nervosa (AN) are ideally suited to identify differentially methylated genes in response to starvation. We examined high-throughput DNA methylation derived from whole blood of 47 females with AN, 47 lean females without AN and 100 population-based females to compare AN with both control groups. To account for different cell type compositions, we applied two reference-free methods (FastLMM-EWASher, RefFreeEWAS) and searched for consensus CpG sites identified by both methods. We used a validation sample of five monozygotic AN-discordant twin pairs. Fifty-one consensus sites were identified in the AN vs. lean comparison and 81 in the AN vs. population-based comparison. These sites have not previously been reported in AN methylation analyses, but for the latter comparison 54/81 sites showed directionally consistent differential methylation effects in the AN-discordant twins. For a single nucleotide polymorphism, rs923768 in CSGALNACT1, a nearby site was nominally associated with AN. At the gene level, we confirmed hypermethylated sites at TNXB. We found support for a locus at NR1H3 in the AN vs. lean control comparison, but the methylation direction was opposite to the one previously reported. We confirm genes like TNXB previously described to comprise differentially methylated sites, and highlight further sites that might be specifically involved in AN starvation processes.

  3. Selective enrichment of volatiles confirmed

    Science.gov (United States)

    de Pater, Imke

    2018-05-01

    Hydrogen sulfide gas is detected above Uranus's main cloud deck, confirming the prevalence of H2S ice particles as the main cloud component and a strongly unbalanced nitrogen/sulfur ratio in the planet's deep atmosphere.

  4. Regression Analysis

    CERN Document Server

    Freund, Rudolf J; Sa, Ping

    2006-01-01

    The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow them to determine, at least to some degree, the correct type of statistical analysis to be performed in a given situation, and to give them some appreciation of what constitutes good experimental design.

  5. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
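
    A minimal two-component Poisson mixture fitted by EM in numpy, just to illustrate the clustering-plus-rate-estimation idea (intercept-only, no concomitant variables, simulated counts; the paper's models are richer):

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(4)
# synthetic event counts: a low-risk and a high-risk subpopulation
y = np.concatenate([rng.poisson(2.0, 700), rng.poisson(9.0, 300)])
lg = np.array([lgamma(k + 1.0) for k in y])     # log(y!)

def log_pois(lam):
    return y * np.log(lam) - lam - lg

# EM for a two-component Poisson mixture
pi, lam = 0.5, np.array([1.0, 5.0])             # initial mixing weight and rates
for _ in range(300):
    # E-step: posterior probability that each count came from the high-rate component
    l1 = np.log(1 - pi) + log_pois(lam[0])
    l2 = np.log(pi) + log_pois(lam[1])
    r = 1.0 / (1.0 + np.exp(l1 - l2))
    # M-step: update the mixing weight and the component rates
    pi = r.mean()
    lam = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                    np.sum(r * y) / np.sum(r)])
print(round(pi, 2), np.round(lam, 1))           # near 0.3 and rates near (2, 9)
```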

  6. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  7. Hydrologic and Hydraulic Analyses of Selected Streams in Lorain County, Ohio, 2003

    Science.gov (United States)

    Jackson, K. Scott; Ostheimer, Chad J.; Whitehead, Matthew T.

    2003-01-01

    Hydrologic and hydraulic analyses were done for selected reaches of nine streams in Lorain County, Ohio. To assess the alternatives for flood-damage mitigation, the Lorain County Engineer and the U.S. Geological Survey (USGS) initiated a cooperative study to investigate aspects of the hydrology and hydraulics of the nine streams. Historical streamflow data and regional regression equations were used to estimate instantaneous peak discharges for floods having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Explanatory variables used in the regression equations were drainage area, main-channel slope, and storage area. Drainage areas of the nine stream reaches studied ranged from 1.80 to 19.3 square miles. The step-backwater model HEC-RAS was used to determine water-surface-elevation profiles for the 10-year-recurrence-interval (10-year) flood along a selected reach of each stream. The water-surface-profile information was then used to generate digital mapping of flood-plain boundaries. The analyses indicate that at the 10-year flood elevation, road overflow occurs at numerous hydraulic structures along the nine streams.
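
    Regional regression equations of this kind are typically fit in log space, so that peak discharge is a power law in the basin characteristics. A numpy sketch with invented calibration basins (not the Lorain County data or the USGS equations):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical calibration basins: drainage area (mi^2) and main-channel slope (ft/mi)
area  = rng.uniform(1, 50, 60)
slope = rng.uniform(5, 80, 60)
# synthetic 100-year peak discharges following a power-law relation plus noise
q100 = 120 * area ** 0.8 * slope ** 0.3 * np.exp(rng.normal(0, 0.1, 60))

# fit in log space: log Q = b0 + b1*log(A) + b2*log(S)
X = np.column_stack([np.ones(60), np.log(area), np.log(slope)])
b, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)

# apply the fitted equation to an ungauged basin: A = 12 mi^2, S = 30 ft/mi
est = np.exp(b[0]) * 12.0 ** b[1] * 30.0 ** b[2]
print(np.round(b[1:], 2), round(est))   # exponents near (0.8, 0.3)
```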

  8. Assessing the health status of farmed mussels (Mytilus galloprovincialis) through histological, microbiological and biomarker analyses.

    Science.gov (United States)

    Matozzo, Valerio; Ercolini, Carlo; Serracca, Laura; Battistini, Roberta; Rossini, Irene; Granato, Giulia; Quaglieri, Elisabetta; Perolo, Alberto; Finos, Livio; Arcangeli, Giuseppe; Bertotto, Daniela; Radaelli, Giuseppe; Chollet, Bruno; Arzul, Isabelle; Quaglio, Francesco

    2018-03-01

    confirmed by in situ hybridization. Cells morphologically similar to Perkinsus sp. trophozoites were observed in 0.7% of the mussels analysed; however, infection with Perkinsus spp. could neither be confirmed by ISH nor by PCR. Different pathological aspects, such as host defence responses and regressive/progressive changes were detected in the gills, digestive glands, gonads and mantle. Only one single case of disseminated neoplasia (0.2%) was observed. As for the biomarker evaluation, the MANOVA analysis revealed the statistically significant effect that the variable "sampling site" had on the biological parameter measured, thus suggesting that the multibiomarker approach was able to differentiate the rearing sites. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Polygenic scores via penalized regression on summary statistics.

    Science.gov (United States)

    Mak, Timothy Shin Heng; Porsch, Robert Milan; Choi, Shing Wan; Zhou, Xueya; Sham, Pak Chung

    2017-09-01

    Polygenic scores (PGS) summarize the genetic contribution of a person's genotype to a disease or phenotype. They can be used to group participants into different risk categories for diseases, and are also used as covariates in epidemiological analyses. A number of possible ways of calculating PGS have been proposed, and recently there is much interest in methods that incorporate information available in published summary statistics. As there is no inherent information on linkage disequilibrium (LD) in summary statistics, a pertinent question is how we can use LD information available elsewhere to supplement such analyses. To answer this question, we propose a method for constructing PGS using summary statistics and a reference panel in a penalized regression framework, which we call lassosum. We also propose a general method for choosing the value of the tuning parameter in the absence of validation data. In our simulations, we showed that pseudovalidation often resulted in prediction accuracy that is comparable to using a dataset with validation phenotype and was clearly superior to the conservative option of setting the tuning parameter of lassosum to its lowest value. We also showed that lassosum achieved better prediction accuracy than simple clumping and P-value thresholding in almost all scenarios. It was also substantially faster and more accurate than the recently proposed LDpred. © 2017 WILEY PERIODICALS, INC.
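
    The core of a lassosum-style computation can be sketched in a few lines of numpy: an L1-penalized regression solved by coordinate descent using only marginal correlations ("summary statistics") and an LD matrix from a reference panel. Everything below is simulated and simplified (the actual method also includes a shrinkage parameter s and block-wise LD):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 8
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)                  # standardized "genotypes"
beta_true = np.array([0.5, 0, 0, -0.4, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 1, n)
y = (y - y.mean()) / y.std()

r = X.T @ y / n     # marginal correlations, as published in summary statistics
R = X.T @ X / n     # LD matrix, as computed from a reference panel

def lasso_summary(r, R, lam, n_iter=200):
    """Coordinate descent on 0.5*b'Rb - r'b + lam*||b||_1."""
    b = np.zeros_like(r)
    for _ in range(n_iter):
        for j in range(len(b)):
            rj = r[j] - R[j] @ b + R[j, j] * b[j]        # partial residual correlation
            b[j] = np.sign(rj) * max(abs(rj) - lam, 0.0) / R[j, j]
    return b

b = lasso_summary(r, R, lam=0.1)
print(np.round(b, 2))     # nonzero essentially only at the two causal positions
```

Individual-level data never enter the solver: only r and R are needed, which is what makes the summary-statistic formulation practical.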

  10. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    International Nuclear Information System (INIS)

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Buchhave, Lars A.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
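
    The significance of correlated TTVs can be illustrated with a simple permutation test in numpy (synthetic series; the paper's non-parametric technique handles timing uncertainties and multiple candidates more carefully):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
# synthetic TTV series for two planets: a shared dynamical signal plus timing noise
signal = np.sin(np.linspace(0, 6 * np.pi, n))
ttv1 = 5 * signal + rng.normal(0, 1, n)      # minutes
ttv2 = -4 * signal + rng.normal(0, 1, n)     # anti-correlated, as for interacting pairs

obs = abs(np.corrcoef(ttv1, ttv2)[0, 1])
# permutation null: shuffling one series destroys any physical association
perm = np.array([abs(np.corrcoef(ttv1, rng.permutation(ttv2))[0, 1])
                 for _ in range(5000)])
p_value = (1 + np.sum(perm >= obs)) / (1 + len(perm))
print(round(obs, 2), round(p_value, 4))      # strong correlation, small p-value
```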

  11. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C. [Astronomy Department, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32611 (United States); Fabrycky, Daniel C. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Steffen, Jason H. [Fermilab Center for Particle Astrophysics, P.O. Box 500, MS 127, Batavia, IL 60510 (United States); Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Welsh, William F. [Astronomy Department, San Diego State University, San Diego, CA 92182-1221 (United States); Allen, Christopher [Orbital Sciences Corporation/NASA Ames Research Center, Moffett Field, CA 94035 (United States); Batalha, Natalie M. [Department of Physics and Astronomy, San Jose State University, San Jose, CA 95192 (United States); Buchhave, Lars A., E-mail: eford@astro.ufl.edu [Niels Bohr Institute, Copenhagen University, DK-2100 Copenhagen (Denmark); Collaboration: Kepler Science Team; and others

    2012-05-10

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  12. Transit Timing Observations from Kepler: II. Confirmation of Two Multiplanet Systems via a Non-parametric Correlation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Eric B.; /Florida U.; Fabrycky, Daniel C.; /Lick Observ.; Steffen, Jason H.; /Fermilab; Carter, Joshua A.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Holman, Matthew J.; /Harvard-Smithsonian Ctr. Astrophys.; Lissauer, Jack J.; /NASA, Ames; Moorhead, Althea V.; /Florida U.; Morehead, Robert C.; /Florida U.; Ragozzine, Darin; /Harvard-Smithsonian Ctr. Astrophys.; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  13. Regression analysis using dependent Polya trees.

    Science.gov (United States)

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  14. A hydrologic regression sediment-yield model for two ungaged watershed outlet stations in Africa

    International Nuclear Information System (INIS)

    Moussa, O.M.; Smith, S.E.; Shrestha, R.L.

    1991-01-01

    A hydrologic regression sediment-yield model was established to determine the relationship between water discharge and suspended sediment discharge at the Blue Nile and the Atbara River outlet stations during the flood season. The model consisted of two main submodels: (1) a suspended sediment discharge model, which was used to determine suspended sediment discharge for each basin outlet; and (2) a sediment rating model, which related water discharge and suspended sediment discharge for each outlet station. Due to the absence of suspended sediment concentration measurements at or near the outlet stations, a minimum norm solution, which is based on the minimization of the unknowns rather than the residuals, was used to determine the suspended sediment discharges at the stations. In addition, the sediment rating submodel was regressed by using an observation equations procedure. Verification analyses on the model were carried out and the mean percentage errors were found to be +12.59 and -12.39, respectively, for the Blue Nile and Atbara. The hydrologic regression model was found to be most sensitive to the relative weight matrix, moderately sensitive to the mean water discharge ratio, and slightly sensitive to the concentration variation along the River Nile's course
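
    The sediment rating submodel relates water discharge to suspended sediment discharge. A common functional form for such a rating relation is the power law Qs = a * Qw**b, which can be fitted by ordinary regression in log-log space; the sketch below illustrates that generic approach (the paper's observation-equations and minimum-norm machinery is more involved, and the data here are hypothetical):

```python
import numpy as np

def fit_rating_curve(q_water, q_sediment):
    """Fit a sediment rating curve Qs = a * Qw**b by linear
    regression in log-log space: log Qs = log a + b * log Qw.
    """
    b, log_a = np.polyfit(np.log(q_water), np.log(q_sediment), 1)
    return np.exp(log_a), b
```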

  15. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    Science.gov (United States)

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  16. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression analysis…

  17. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. We also present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
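
    The essence of Wiener-filter noise regression is to find filter coefficients that best predict the target channel from lagged samples of a witness channel, then subtract the prediction. Below is a minimal single-channel, time-domain sketch (the LIGO pipeline uses banks of Wiener filters in the time-frequency domain over many PEM channels; names and data here are illustrative):

```python
import numpy as np

def wiener_fir(witness, target, order=8):
    """Least-squares FIR filter predicting `target` from lagged
    samples of `witness`; subtracting the prediction removes the
    environmentally induced component from the target channel.
    """
    # Row t of X holds [w[t], w[t-1], ..., w[t-order+1]]
    X = np.column_stack([np.roll(witness, k) for k in range(order)])
    X, y = X[order:], np.asarray(target)[order:]
    taps, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ taps  # the "cleaned" channel
    return taps, residual
```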

  18. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  19. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  20. Temporal trends in sperm count: a systematic review and meta-regression analysis.

    Science.gov (United States)

    Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson; Mendiola, Jaime; Weksler-Derri, Dan; Mindlis, Irina; Pinotti, Rachel; Swan, Shanna H

    2017-11-01

    Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. To provide a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. PubMed/MEDLINE and EMBASE were searched for English-language studies of human SC published in 1981-2013. Following a predefined protocol, 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years of sample collection and covariates [fertility group ('Unselected by fertility' versus 'Fertile'), geographic group ('Western', including North America, Europe, Australia and New Zealand, versus 'Other', including South America, Asia and Africa), age, ejaculation abstinence time, semen collection method, method of measuring SC and semen volume, exclusion criteria and indicators of completeness of covariate data]. The slopes of SC and TSC were estimated as functions of sample collection year using both simple linear regression and weighted meta-regression models, and the latter were adjusted for pre-determined covariates and modification by fertility and geographic group. Assumptions were examined using multiple sensitivity analyses and nonlinear models. SC declined significantly between 1973 and 2011 (slope in unadjusted simple regression models: -0.70 million/ml/year; 95% CI: -0.72 to -0.69). The meta-regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand.
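
    The central computation — a linear trend of sperm concentration against year of sample collection, with studies weighted by their precision — can be illustrated with a minimal inverse-variance weighted fit. The published models additionally adjust for covariates; the data below are synthetic:

```python
import numpy as np

def weighted_trend(years, values, std_errors):
    """Inverse-variance weighted linear trend: a bare-bones analogue
    of a simple weighted meta-regression slope estimate.
    """
    # np.polyfit expects weights of 1/sigma (not 1/sigma**2)
    # for Gaussian uncertainties
    slope, intercept = np.polyfit(years, values, 1,
                                  w=1.0 / np.asarray(std_errors))
    return slope, intercept
```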

  1. Factor analysis and bissegmented regression for studies about environmental stratification and maize adaptability

    Directory of Open Access Journals (Sweden)

    Deoclécio Domingos Garbuglio

    2007-02-01

    Full Text Available The objective of this work was to verify possible divergences among results obtained on adaptability evaluations of 27 maize genotypes (Zea mays L.) and on stratification of 22 environments in Paraná State, Brazil, through techniques based on factor analysis and bissegmented regression. The environmental stratifications were made through the traditional methodology and by factor analysis, allied to the percentage of the simple portion of the GxE interaction (PS%). Adaptability analyses were carried out through bissegmented regression and factor analysis. By bissegmented regression analysis, the studied genotypes presented high productive performance; however, no genotype could be considered ideal. The adaptability of the genotypes, analyzed through graphs, presented different answers when compared to bissegmented regression. Factor analysis was efficient in the processes of environmental stratification and in assessing the adaptability of the maize genotypes.

  2. Consumer satisfaction and confirmation of habits of comprehension

    DEFF Research Database (Denmark)

    Sørensen, Bent; Andersen, Christian; Andersen, Morten Purup

    2014-01-01

    The purpose of this article is twofold: First, within a Peircean framework it shall be demonstrated how there is a relation between the compositional structure of certain types of print advertisements and their bringing about inductive comprehension, and how the consumer can be understood as a bundle of habits. It is the assumption that advertising that supports an inductive effect particularly appeals to the cognitive tendency of habit formation in the consumer. Second, it is asked whether advertisements that predominantly invite inductive processes of comprehension also influence the formation of consumer satisfaction; the perspective is that of the confirmation paradigm within advertisement research. Inductive advertisements support cognitive habit formation through confirmation, and the confirmation paradigm explains exactly consumer satisfaction with reference to confirmation.

  3. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  4. The analysis of nonstationary time series using regression, correlation and cointegration – with an application to annual mean temperature and sea level

    DEFF Research Database (Denmark)

    Johansen, Søren

    There are simple well-known conditions for the validity of regression and correlation as statistical tools. We analyse by examples the effect of nonstationarity on inference using these methods and compare them to model based inference. Finally we analyse some data on annual mean temperature and sea level, by applying the cointegrated vector autoregressive model, which explicitly takes into account the nonstationarity of the variables.

  5. Application of nonlinear regression analysis for ammonium exchange by natural (Bigadic) clinoptilolite

    International Nuclear Information System (INIS)

    Gunay, Ahmet

    2007-01-01

    The experimental data of ammonium exchange by natural Bigadic clinoptilolite were evaluated using nonlinear regression analysis. Three two-parameter isotherm models (Langmuir, Freundlich and Temkin) and three three-parameter isotherm models (Redlich-Peterson, Sips and Khan) were used to analyse the equilibrium data. The fit of each isotherm model was assessed using the standard normalization error procedure (SNE) and the coefficient of determination (R²). The HYBRID error function provided the lowest sum of normalized errors, and the Khan model performed best in modeling the equilibrium data. Thermodynamic investigation indicated that ammonium removal by clinoptilolite was favorable at lower temperatures and exothermic in nature.
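
    Nonlinear isotherm fitting of this kind can be sketched with SciPy's curve_fit applied to the Langmuir model; the equilibrium data and parameter values below are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * Ce / (1 + K_L * Ce)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical equilibrium data (Ce in mg/L, q in mg/g)
c_eq = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
q_obs = langmuir(c_eq, 18.0, 0.05)

# Nonlinear least-squares fit of (q_max, K_L) from a rough guess
params, _ = curve_fit(langmuir, c_eq, q_obs, p0=[10.0, 0.01])
```

    The same pattern applies to the three-parameter models by swapping in, e.g., a Redlich-Peterson function with three fitted parameters.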

  6. Estimating the causes of traffic accidents using logistic regression and discriminant analysis.

    Science.gov (United States)

    Karacasu, Murat; Ergül, Barış; Altin Yavuz, Arzu

    2014-01-01

    Factors that affect traffic accidents have been analysed in various ways. In this study, we use the methods of logistic regression and discriminant analysis to determine the damages due to injury and non-injury accidents in the Eskisehir Province. Data were obtained from the accident reports of the General Directorate of Security in Eskisehir; 2552 traffic accidents between January and December 2009 were investigated regarding whether they resulted in injury. According to the results, the effects of traffic accidents were reflected in the variables. These results provide a wealth of information that may aid future measures toward the prevention of undesired results.
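
    Logistic regression of a binary injury/non-injury outcome can be fitted by iteratively reweighted least squares (Newton's method). The sketch below is generic, with a hypothetical single feature rather than the study's accident-report variables:

```python
import numpy as np

def logistic_irls(x, y, n_iter=25):
    """Logistic regression (intercept + one feature) fitted by
    iteratively reweighted least squares / Newton's method.
    """
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return beta
```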

  7. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operability of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three such conventional methods, the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm-intelligence-algorithm optimization, namely Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only outperform the representative forecast models in modelling measures and accuracy rate, but also achieve better predictive performance when dealing with abnormal taxi-out time states.

  8. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model.

    Science.gov (United States)

    Islam, Mohammad Mafijul; Alam, Morshed; Tariquzaman, Md; Kabir, Mohammad Alamgir; Pervin, Rokhsona; Begum, Munni; Khan, Md Mobarak Hossain

    2013-01-08

    Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition (outcome) variable is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find predictors of this outcome variable. The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance less than the mean). Significant predictors of the outcome variable include mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. The consistency of our findings in light of many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh.
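
    The diagnostic that motivates the generalized Poisson model is the dispersion of the count outcome: standard Poisson regression assumes the variance equals the mean, whereas counts of malnourished children per family are under-dispersed. A quick check of the variance-to-mean ratio can be sketched as follows (the counts shown are hypothetical, not the BDHS data):

```python
import numpy as np

def dispersion_index(counts):
    """Sample variance-to-mean ratio; values below 1 signal
    under-dispersion, where the generalized Poisson model is
    preferable to the standard Poisson (which forces the
    variance to equal the mean).
    """
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical counts of malnourished under-five children per family
family_counts = [1, 1, 1, 0, 1, 2, 1, 1, 0, 1]
```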

  9. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK and geographically weighted regression Kriging (GWRK methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM, normalized difference vegetation index (NDVI, solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.

  10. Confirmation of Essure placement using transvaginal ultrasound.

    Science.gov (United States)

    Veersema, Sebastiaan; Vleugels, Michel; Koks, Caroline; Thurkow, Andreas; van der Vaart, Huub; Brölmann, Hans

    2011-01-01

    To evaluate the protocol for confirmation of satisfactory Essure placement using transvaginal ultrasound. Prospective multicenter cohort study (Canadian Task Force classification II-2). Outpatient departments of 4 teaching hospitals in the Netherlands. Eleven hundred forty-five women who underwent hysteroscopic sterilization using the Essure device between March 2005 and December 2007. Transvaginal ultrasound examination 12 weeks after uncomplicated successful bilateral placement or as indicated according to the transvaginal ultrasound protocol after 4 weeks, and hysterosalpingography (HSG) at 12 weeks to confirm correct placement of the device after 3 months. The rate of successful placement was 88.4% initially. In 164 women (15%), successful placement was confirmed at HSG according the protocol. In 9 patients (0.84%), incorrect position of the device was observed at HSG. The cumulative pregnancy rate after 18 months was 3.85 per thousand women. Transvaginal ultrasound should be the first diagnostic test used to confirm the adequacy of hysteroscopic Essure sterilization because it is minimally invasive, averts ionizing radiation, and does not decrease the effectiveness of the Essure procedure. Copyright © 2011 AAGL. Published by Elsevier Inc. All rights reserved.

  11. Non-proportional odds multivariate logistic regression of ordinal family data.

    Science.gov (United States)

    Zaloumis, Sophie G; Scurrah, Katrina J; Harrap, Stephen B; Ellis, Justine A; Gurrin, Lyle C

    2015-03-01

    Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non-proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non-proportional odds multivariate logistic regression model and take a simulation-based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Gibrat’s law and quantile regressions

    DEFF Research Database (Denmark)

    Distante, Roberta; Petrella, Ivan; Santoro, Emiliano

    2017-01-01

    The nexus between firm growth, size and age in U.S. manufacturing is examined through the lens of quantile regression models. This methodology allows us to overcome serious shortcomings entailed by linear regression models employed by much of the existing literature, unveiling a number of important...
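
    A quantile regression line for a chosen quantile tau can be obtained by minimizing the pinball (check) loss instead of squared error; the toy sketch below uses generic variables, not the firm-level data of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def quantile_fit(x, y, tau):
    """Fit y ~ b0 + b1*x at quantile `tau` by minimizing the
    pinball (check) loss with a derivative-free optimizer.
    """
    def pinball(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.mean(np.where(r >= 0.0, tau * r, (tau - 1.0) * r))

    beta0 = np.polyfit(x, y, 1)[::-1]  # start from the least-squares fit
    res = minimize(pinball, x0=beta0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
    return res.x
```

    Fitting several values of tau (e.g. 0.1, 0.5, 0.9) yields different slopes across the conditional distribution, which is what lets quantile regression detect growth patterns that a single mean-regression slope would hide.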

  13. ON REGRESSION REPRESENTATIONS OF STOCHASTIC-PROCESSES

    NARCIS (Netherlands)

    RUSCHENDORF, L; DEVALK

    We construct a.s. nonlinear regression representations of general stochastic processes (X(n))n is-an-element-of N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

  14. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

  15. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

  16. Opinion dynamics with confirmation bias.

    Directory of Open Access Journals (Sweden)

    Armen E Allahverdyan

    Full Text Available Confirmation bias is the tendency to acquire or evaluate new information in a way that is consistent with one's preexisting beliefs. It is omnipresent in psychology, economics, and even scientific practice. Prior theoretical research on this phenomenon has mainly focused on its economic implications, possibly missing its potential connections with broader notions of cognitive science. We formulate a (non-Bayesian) model for revising the subjective probabilistic opinion of a confirmationally-biased agent in the light of a persuasive opinion. The revision rule ensures that the agent does not react to persuasion that is either far from his current opinion or coincides with it. We demonstrate that the model accounts for the basic phenomenology of social judgment theory, and allows us to study various phenomena such as cognitive dissonance and the boomerang effect. The model also displays the order-of-presentation effect: when consecutively exposed to two opinions, preference is given to the last opinion (recency) or the first opinion (primacy). It relates recency to confirmation bias. Finally, we study the model in the case of repeated persuasion and analyze its convergence properties. The standard Bayesian approach to probabilistic opinion revision is inadequate for describing the observed phenomenology of the persuasion process. The simple non-Bayesian model proposed here does agree with this phenomenology and is capable of reproducing a spectrum of effects observed in psychology: the primacy-recency phenomenon, the boomerang effect and cognitive dissonance. We point out several limitations of the model that should motivate its future development.

  17. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired. What becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...

  18. Mixture of Regression Models with Single-Index

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2016-01-01

    In this article, we propose a class of semiparametric mixture regression models with single-index. We argue that many recently proposed semiparametric/nonparametric mixture regression models can be considered special cases of the proposed model. However, unlike existing semiparametric mixture regression models, the new proposed model can easily incorporate multivariate predictors into the nonparametric components. Backfitting estimates and the corresponding algorithms have been proposed for...

  19. Local bilinear multiple-output quantile/depth regression

    Czech Academy of Sciences Publication Activity Database

    Hallin, M.; Lu, Z.; Paindaveine, D.; Šiman, Miroslav

    2015-01-01

    Roč. 21, č. 3 (2015), s. 1435-1466 ISSN 1350-7265 R&D Projects: GA MŠk(CZ) 1M06047 Institutional support: RVO:67985556 Keywords : conditional depth * growth chart * halfspace depth * local bilinear regression * multivariate quantile * quantile regression * regression depth Subject RIV: BA - General Mathematics Impact factor: 1.372, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/siman-0446857.pdf

  20. Do clinical and translational science graduate students understand linear regression? Development and early validation of the REGRESS quiz.

    Science.gov (United States)

    Enders, Felicity

    2013-12-01

    Although regression is widely used for reading and publishing in the medical literature, no instruments were previously available to assess students' understanding. The goal of this study was to design and assess such an instrument for graduate students in Clinical and Translational Science and Public Health. A 27-item REsearch on Global Regression Expectations in StatisticS (REGRESS) quiz was developed through an iterative process. Consenting students taking a course on linear regression in a Clinical and Translational Science program completed the quiz pre- and postcourse. Student results were compared to practicing statisticians with a master's or doctoral degree in statistics or a closely related field. Fifty-two students responded precourse, 59 postcourse, and 22 practicing statisticians completed the quiz. The mean (SD) score was 9.3 (4.3) for students precourse and 19.0 (3.5) postcourse. The REGRESS quiz was internally reliable (Cronbach's alpha 0.89). The initial validation is quite promising with statistically significant and meaningful differences across time and study populations. Further work is needed to validate the quiz across multiple institutions. © 2013 Wiley Periodicals, Inc.

  1. The MIDAS Touch: Mixed Data Sampling Regression Models

    OpenAIRE

    Ghysels, Eric; Santa-Clara, Pedro; Valkanov, Rossen

    2004-01-01

    We introduce Mixed Data Sampling (henceforth MIDAS) regression models. The regressions involve time series data sampled at different frequencies. Technically speaking MIDAS models specify conditional expectations as a distributed lag of regressors recorded at some higher sampling frequencies. We examine the asymptotic properties of MIDAS regression estimation and compare it with traditional distributed lag models. MIDAS regressions have wide applicability in macroeconomics and finance.

  2. Suppression Situations in Multiple Linear Regression

    Science.gov (United States)

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  3. Support vector regression model for the estimation of γ-ray buildup factors for multi-layer shields

    International Nuclear Information System (INIS)

    Trontl, Kresimir; Smuc, Tomislav; Pevec, Dubravko

    2007-01-01

    The accuracy of the point-kernel method, which is a widely used practical tool for γ-ray shielding calculations, strongly depends on the quality and accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields comprised of a single material are well known, the calculation of buildup factors for stratified shields, with each layer comprised of a different material or a combination of materials, represents a complex physical problem. Recently, a new compact mathematical model for multi-layer shield buildup factor representation has been suggested for embedding into point-kernel codes, thus replacing traditionally generated complex mathematical expressions. The new regression model is based on the support vector machines learning technique, which is an extension of Statistical Learning Theory. The paper gives a complete description of the novel methodology with results pertaining to realistic engineering multi-layer shielding geometries. The results based on support vector regression machine learning confirm that this approach provides a framework for a general, accurate and computationally acceptable multi-layer buildup factor model

  4. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
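A minimal sketch of the idea behind such a test: fit ridge regression at a fixed shrinkage parameter, approximate the covariance of the estimates with the standard sandwich-style formula, and trace the p-values as the penalty grows. The data are simulated and this illustrates the general approach only, not the authors' exact test or the permutation comparison.

```python
import numpy as np
from scipy import stats

def ridge_pvalues(X, y, lam):
    """Ridge fit with approximate Wald p-values for each coefficient, using
    Var(beta) = sigma^2 * W X'X W with W = (X'X + lam*I)^(-1)."""
    n, p = X.shape
    W = np.linalg.inv(X.T @ X + lam * np.eye(p))
    beta = W @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)              # rough residual variance
    se = np.sqrt(np.diag(sigma2 * W @ X.T @ X @ W))
    return beta, 2 * stats.norm.sf(np.abs(beta / se))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(size=200)          # only predictor 0 is active
# p-value trace: -log10 p-values as the shrinkage parameter increases
for lam in (0.1, 1.0, 10.0):
    _, pv = ridge_pvalues(X, y, lam)
    print(lam, np.round(-np.log10(pv), 1))
```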

  5. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
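The standard regression calibration step described above (substituting an estimated conditional expectation of the true covariate into the logistic regression) can be sketched on simulated data. The error variances, sample size, and the shortcut of fitting the calibration equation against the simulated truth are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(size=n)                                # true exposure (unobserved)
w = np.column_stack([x + 0.8 * rng.normal(size=n),
                     x + 0.8 * rng.normal(size=n)])   # two noisy surrogates
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))            # true logistic slope on x is 1.0
ybin = (rng.uniform(size=n) < p).astype(int)

# naive analysis: plug a single surrogate straight in -> attenuated slope
naive = LogisticRegression().fit(w[:, :1], ybin).coef_[0, 0]

# regression calibration: replace x by its best linear predictor given (w1, w2)
# (fitted here against the simulated truth purely for illustration; in practice
# this step uses a calibration substudy)
D = np.column_stack([np.ones(n), w])
xhat = D @ np.linalg.lstsq(D, x, rcond=None)[0]
calibrated = LogisticRegression().fit(xhat.reshape(-1, 1), ybin).coef_[0, 0]
print(round(naive, 2), round(calibrated, 2))
```

The calibrated slope lands much closer to the true value of 1.0 than the naive, attenuated one.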

  6. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

    2012-01-01

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

  7. Few crystal balls are crystal clear : eyeballing regression

    International Nuclear Information System (INIS)

    Wittebrood, R.T.

    1998-01-01

    The theory of regression and statistical analysis as it applies to reservoir analysis was discussed. It was argued that regression lines are not always the final truth. It was suggested that regression lines and eyeballed lines are often equally accurate. The many conditions that must be fulfilled to calculate a proper regression were discussed. Mentioned among these conditions were the distribution of the data, hidden variables, knowledge of how the data was obtained, the need for causal correlation of the variables, and knowledge of the manner in which the regression results are going to be used. 1 tab., 13 figs

  8. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

    Directory of Open Access Journals (Sweden)

    Fernando Navarro-Mateu

    Full Text Available OBJECTIVE: To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. METHODS DATA SOURCES: Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK, PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network up until December 2011. STUDY SELECTION: Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. DATA EXTRACTION: Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. STATISTICAL ANALYSIS: A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L' and the recessive model (SS/L+ and S'S'/L'+, was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots and Cochran's Q-Statistic and I(2 index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. RESULTS: 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. CONCLUSIONS: Current evidence does not support a direct effect of 5-HTTLPR

  9. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

    Science.gov (United States)

    Navarro-Mateu, Fernando; Escámez, Teresa; Koenen, Karestan C; Alonso, Jordi; Sánchez-Meca, Julio

    2013-01-01

    To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) up until December 2011. Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots and Cochran's Q-Statistic and I(2) index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. Current evidence does not support a direct effect of 5-HTTLPR polymorphisms on PTSD. 
Further analyses of gene-environment interactions, epigenetic modulation and new studies with large samples

  10. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  11. Should metacognition be measured by logistic regression?

    Science.gov (United States)

    Rausch, Manuel; Zehetleitner, Michael

    2017-03-01

    Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent of rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as a measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
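For concreteness, the quantity under discussion (the slope of a logistic regression of task accuracy on confidence ratings) can be computed as follows; the simulated rating model here is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
correct = rng.integers(0, 2, size=n)              # primary-task accuracy
# confidence is driven by evidence that is higher on correct trials,
# discretized onto a 1-6 rating scale
evidence = 0.8 * correct + rng.normal(size=n)
rating = np.digitize(evidence, [-1.0, -0.4, 0.2, 0.8, 1.4]) + 1

slope = LogisticRegression().fit(rating.reshape(-1, 1), correct).coef_[0, 0]
print(round(slope, 2))   # positive: ratings discriminate correct from incorrect
```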

  12. A confirmative clinimetric analysis of the 36-item Family Assessment Device.

    Science.gov (United States)

    Timmerby, Nina; Cosci, Fiammetta; Watson, Maggie; Csillag, Claudio; Schmitt, Florence; Steck, Barbara; Bech, Per; Thastum, Mikael

    2018-02-07

    The Family Assessment Device (FAD) is a 60-item questionnaire widely used to evaluate self-reported family functioning. However, the factor structure as well as the number of items has been questioned. A shorter and more user-friendly version of the original FAD-scale, the 36-item FAD, has therefore previously been proposed, based on findings in a nonclinical population of adults. We aimed in this study to evaluate the brief 36-item version of the FAD in a clinical population. Data from a European multinational study, examining factors associated with levels of family functioning in adult cancer patients' families, were used. Both healthy and ill parents completed the 60-item version FAD. The psychometric analyses conducted were Principal Component Analysis and Mokken-analysis. A total of 564 participants were included. Based on the psychometric analysis we confirmed that the 36-item version of the FAD has robust psychometric properties and can be used in clinical populations. The present analysis confirmed that the 36-item version of the FAD (18 items assessing 'well-being' and 18 items assessing 'dysfunctional' family function) is a brief scale where the summed total score is a valid measure of the dimensions of family functioning. This shorter version of the FAD is, in accordance with the concept of 'measurement-based care', an easy to use scale that could be considered when the aim is to evaluate self-reported family functioning.

  13. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. We discuss how the smallest mean square error is obtained by choosing the optimum power transformation λj, j = 1, 2, ..., k, of Y. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time scale concept.
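A brief sketch of the single-response case: estimate the power λ and regress the transformed Y on an explanatory variable. The simulated data are illustrative, and `scipy.stats.boxcox` chooses λ by maximizing the log-likelihood rather than by the mean-square-error criterion discussed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=1.5, size=300)
y = np.exp(0.3 * x + rng.normal(scale=0.1, size=300))   # skewed, log-linear in x

y_t, lam_hat = stats.boxcox(y)     # lambda chosen by maximum likelihood
slope, intercept, r, p, se = stats.linregress(x, y_t)
print(f"lambda ~ {lam_hat:.2f}, R^2 on transformed scale = {r**2:.3f}")
```

Since the data were generated log-linearly, the estimated λ should land near 0 (the log transform) and the fit on the transformed scale should be close to linear.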

  14. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address this issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss how to estimate the parameters and hyper-parameters and how to predict as well. Furthermore, simulation studies will be explained in the last section.
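A minimal sketch of GP-based modeling for spatial binary data: sklearn's `GaussianProcessClassifier` uses a logistic link with a Laplace approximation, which is close in spirit (though not identical) to the GPR-with-logit-link model described here. The coordinates and outcomes below are simulated.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
coords = rng.uniform(0.0, 1.0, size=(150, 2))               # spatial locations
p_true = 1.0 / (1.0 + np.exp(-6.0 * (coords[:, 0] - 0.5)))  # risk rises eastward
y = (rng.uniform(size=150) < p_true).astype(int)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.3))
gpc.fit(coords, y)
new_sites = np.array([[0.1, 0.5], [0.9, 0.5]])
probs = gpc.predict_proba(new_sites)[:, 1]
print(np.round(probs, 2))   # low predicted risk in the west, high in the east
```

Prediction at unobserved sites falls out directly from the fitted process, which is the advantage over neighbourhood-based approaches noted in the abstract.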

  15. Regression Analysis and the Sociological Imagination

    Science.gov (United States)

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  16. Regression toward the mean – a detection method for unknown population mean based on Mee and Chua's algorithm

    Directory of Open Access Journals (Sweden)

    Lüdtke Rainer

    2008-08-01

    Full Text Available Abstract Background Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the basic population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua which assumes a known population mean μ. We extend this approach to a situation where μ is unknown and suggest varying it systematically over a range of reasonable values. Using differential calculus we provide formulas to estimate the range of μ where treatment effects are likely to occur when RTM is present. Results We successfully applied our method to three real world examples denoting situations when (a) no treatment effect can be confirmed regardless of which μ is true, (b) a treatment effect must be assumed independent of the true μ and (c) in the appraisal of results of uncontrolled studies. Conclusion Our method can be used to separate the wheat from the chaff in situations when one has to interpret the results of uncontrolled studies. In meta-analysis, health-technology reports or systematic reviews this approach may be helpful to clarify the evidence given from uncontrolled observational studies.
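The core quantity behind this approach can be sketched as follows: for subjects selected at an extreme baseline value, the change expected from RTM alone is (ρ − 1)(baseline − μ), so the apparent treatment effect depends directly on the assumed population mean μ. The numbers below are illustrative, and this is a simplification of Mee and Chua's actual algorithm.

```python
def rtm_expected_change(baseline, mu, rho):
    """Expected follow-up change due to regression to the mean alone, for
    subjects selected at a given baseline; mu is the population mean and
    rho the test-retest correlation."""
    return (rho - 1.0) * (baseline - mu)

baseline = 160.0           # e.g. mean blood pressure of the selected patients
observed_change = -12.0    # mean change actually observed at follow-up
rho = 0.7

# mu is unknown, so vary it over a plausible range: the part of the observed
# change not explained by RTM (a candidate "treatment effect") depends on mu
for mu in (120.0, 130.0, 140.0):
    rtm = rtm_expected_change(baseline, mu, rho)
    print(mu, rtm, observed_change - rtm)
```

With μ = 120 the entire observed change is explained by RTM; with μ = 140 half of it remains unexplained, which is exactly why the authors scan μ over a range of reasonable values.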

  17. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects

  18. Model confirmation in climate economics

    Science.gov (United States)

    Millner, Antony; McDermott, Thomas K. J.

    2016-01-01

    Benefit–cost integrated assessment models (BC-IAMs) inform climate policy debates by quantifying the trade-offs between alternative greenhouse gas abatement options. They achieve this by coupling simplified models of the climate system to models of the global economy and the costs and benefits of climate policy. Although these models have provided valuable qualitative insights into the sensitivity of policy trade-offs to different ethical and empirical assumptions, they are increasingly being used to inform the selection of policies in the real world. To the extent that BC-IAMs are used as inputs to policy selection, our confidence in their quantitative outputs must depend on the empirical validity of their modeling assumptions. We have a degree of confidence in climate models both because they have been tested on historical data in hindcasting experiments and because the physical principles they are based on have been empirically confirmed in closely related applications. By contrast, the economic components of BC-IAMs often rely on untestable scenarios, or on structural models that are comparatively untested on relevant time scales. Where possible, an approach to model confirmation similar to that used in climate science could help to build confidence in the economic components of BC-IAMs, or focus attention on which components might need refinement for policy applications. We illustrate the potential benefits of model confirmation exercises by performing a long-run hindcasting experiment with one of the leading BC-IAMs. We show that its model of long-run economic growth—one of its most important economic components—had questionable predictive power over the 20th century. PMID:27432964

  19. Wheat flour dough Alveograph characteristics predicted by Mixolab regression models.

    Science.gov (United States)

    Codină, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel; Popa, Ciprian N; Tamba-Berehoiu, Radiana

    2012-02-01

    In Romania, the Alveograph is the most used device to evaluate the rheological properties of wheat flour dough, but lately the Mixolab device has begun to play an important role in the breadmaking industry. These two instruments are based on different principles but there are some correlations that can be found between the parameters determined by the Mixolab and the rheological properties of wheat dough measured with the Alveograph. Statistical analysis on 80 wheat flour samples using the backward stepwise multiple regression method showed that Mixolab values using the ‘Chopin S’ protocol (40 samples) and ‘Chopin+’ protocol (40 samples) can be used to elaborate predictive models for estimating the value of the rheological properties of wheat dough: baking strength (W), dough tenacity (P) and extensibility (L). The correlation analysis confirmed significant findings, with R²(adjusted) > 0.70 for P, R²(adjusted) > 0.70 for W and R²(adjusted) > 0.38 for L, at a 95% confidence interval. Copyright © 2011 Society of Chemical Industry.

  20. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    Directory of Open Access Journals (Sweden)

    Zhongyi Hu

    2013-01-01

    Full Text Available Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR depends strongly on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature.
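The tuning problem itself, searching the (C, ε, γ) space of an RBF support vector regression to minimize cross-validated forecast error, can be sketched with a plain random search standing in for the FA-MA metaheuristic. The data and search ranges are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)
load = np.sin(t) + 0.1 * rng.normal(size=t.size)   # toy "load" curve
X, y = t.reshape(-1, 1), load

cv = KFold(n_splits=5, shuffle=True, random_state=0)
best_score, best_params = -np.inf, None
for _ in range(20):                                # random search stands in for FA-MA
    params = dict(C=10 ** rng.uniform(-1, 2),
                  epsilon=10 ** rng.uniform(-3, -1),
                  gamma=10 ** rng.uniform(-2, 1))
    score = cross_val_score(SVR(kernel="rbf", **params), X, y, cv=cv,
                            scoring="neg_mean_squared_error").mean()
    if score > best_score:
        best_score, best_params = score, params
print(best_params, round(-best_score, 3))
```

The FA-MA of the paper replaces the random draws with firefly moves plus a pattern-search local refinement, but the objective being optimized is the same kind of cross-validated error.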

  1. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  2. Confirmation of Maslow’s Hypothesis of Synergy: Developing an Acceptance of Selfishness at the Workplace Scale

    Directory of Open Access Journals (Sweden)

    Jiro Takaki

    2016-04-01

    Full Text Available This study aimed to develop a new Acceptance of Selfishness at the Workplace Scale (ASWS) and to confirm Maslow’s hypothesis of synergy: if both a sense of contribution and acceptance of selfishness at the workplace are high, workers are psychologically healthy. In a cross-sectional study with employees of three Japanese companies, 656 workers completed a self-administered paper questionnaire (response rate = 66.8%). Each questionnaire was submitted to us in a sealed envelope and analyzed. The ASWS indicated high internal consistency (Cronbach’s alpha = 0.86). Significant (p < 0.001) positive moderate correlations between ASWS scores and job control scores support the ASWS’s convergent and discriminant validity. Significant (p < 0.001) associations of ASWS scores with psychological distress and work engagement supported the ASWS’s criterion validity. In short, the ASWS was a psychometrically satisfactory measure. Significant (p < 0.05) interactions between a sense of contribution and acceptance of selfishness at the workplace in linear regression models showed that when those two factors are low, psychological distress becomes high. However, when a sense of contribution and acceptance of selfishness are high, work engagement also becomes high. Thus, Maslow’s hypothesis of synergy was confirmed.

  3. Outbreak bias in illness reporting and case confirmation in ciguatera fish poisoning surveillance in south Florida.

    Science.gov (United States)

    Begier, Elizabeth M; Backer, Lorraine C; Weisman, Richard S; Hammond, Roberta M; Fleming, Lora E; Blythe, Donna

    2006-01-01

    Ciguatera fish poisoning is a potentially life-threatening disease caused by eating coral reef fish contaminated with ciguatoxins and is the most common marine poisoning. However, existing surveillance systems capture few cases. To improve regional ciguatera surveillance in South Florida, this study compared ciguatera illnesses in the Florida Poison Information Center-Miami (FPICM) call database to ciguatera cases in the Florida Department of Health (FDOH) disease surveillance systems. Univariate and multivariate logistic regression were used to identify predictors of when FPICM reported ciguatera illnesses to FDOH and whether FDOH confirmed reported ciguatera cases. FPICM staff preferentially reported ciguatera illnesses that were of shorter duration (adjusted odds ratio [AOR] = 0.84 per additional illness day; 95% confidence interval [CI] 0.74, 0.97); outbreak-associated (AOR = 7.0; 95% CI 2.5, 19.5); and clinically more severe (AOR = 21.6; 95% CI 2.3, 198.5). Among ciguatera illnesses reported to FDOH, outbreak-associated illnesses were more likely than single, sporadic illnesses to become confirmed surveillance cases (crude OR = 11.1; 95% CI 2.0, 62.5). The over-representation of outbreak-associated ciguatera cases underestimates the true contribution of sporadic illnesses to ciguatera disease burden. This bias should be considered when evaluating surveillance systems that include both outbreak-associated and sporadic illness reports.

  4. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of the exhaustive ES-K computation, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
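
    The brute-force core of the ES-K idea can be sketched in a few lines: enumerate every K-subset of predictors, fit ordinary least squares on each, and keep the subset with the lowest residual sum of squares. This is a minimal illustration on synthetic data, not the authors' implementation (which adds the density-of-states summary and replica-exchange Monte Carlo for large problems):

```python
import itertools

import numpy as np

def exhaustive_k_sparse(X, y, k):
    """Brute-force ES-K sketch: fit OLS on every k-subset of columns of X
    and return the subset with the lowest residual sum of squares.
    Only feasible for small numbers of predictors (C(p, k) fits)."""
    best_rss, best_subset = np.inf, None
    for subset in itertools.combinations(range(X.shape[1]), k):
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta) ** 2))
        if rss < best_rss:
            best_rss, best_subset = rss, subset
    return best_subset, best_rss

# Toy data in which y truly depends on columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=100)

subset, rss = exhaustive_k_sparse(X, y, k=2)
print(subset)  # the true support (0, 2) is recovered
```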

  5. Regression tools for CO2 inversions: application of a shrinkage estimator to process attribution

    International Nuclear Information System (INIS)

    Shaby, Benjamin A.; Field, Christopher B.

    2006-01-01

    In this study we perform an atmospheric inversion based on a shrinkage estimator. This method is used to estimate surface fluxes of CO2, first partitioned according to constituent geographic regions, and then according to constituent processes that are responsible for the total flux. Our approach differs from previous approaches in two important ways. The first is that the technique of linear Bayesian inversion is recast as a regression problem. Seen as such, standard regression tools are employed to analyse and reduce errors in the resultant estimates. A shrinkage estimator, which combines standard ridge regression with the linear 'Bayesian inversion' model, is introduced. This method introduces additional bias into the model with the aim of reducing variance such that errors are decreased overall. Compared with standard linear Bayesian inversion, the ridge technique seems to reduce both flux estimation errors and prediction errors. The second divergence from previous studies is that instead of dividing the world into geographically distinct regions and estimating the CO2 flux in each region, the flux space is divided conceptually into processes that contribute to the total global flux. Formulating the problem in this manner adds to the interpretability of the resultant estimates and attempts to shed light on the problem of attributing sources and sinks to their underlying mechanisms.
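
    The bias-variance trade-off behind the shrinkage estimator can be illustrated with plain ridge regression on a nearly collinear design. This is a sketch of the general idea, not the paper's Bayesian inversion setup; the data and the penalty value lam are invented for illustration:

```python
import numpy as np

def ridge(X, y, lam):
    """Shrinkage (ridge) estimator: beta = (X'X + lam*I)^{-1} X'y.
    lam = 0 recovers ordinary least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
# Two nearly collinear predictors, as when two flux regions overlap strongly.
X = np.column_stack([x1, x1 + 0.001 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=200)

beta_ols = ridge(X, y, lam=0.0)
beta_ridge = ridge(X, y, lam=10.0)

# Ridge introduces a little bias but shrinks the unstable OLS coefficients;
# the well-determined sum of the two coefficients stays close to 2.
print(beta_ols, beta_ridge, beta_ridge.sum())
```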

  6. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Science.gov (United States)

    Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility through spatial analysis, applying statistical methods within a GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility maps with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction maps. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest weight value (1.330), and `aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
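
    As a toy illustration of the logistic-regression half of such a susceptibility analysis, the model can be fitted by plain gradient descent on synthetic "factor" data. The variable names and coefficients below are invented and have no relation to the Inje dataset:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Logistic regression fitted by plain gradient descent (toy sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

# Synthetic stand-ins for susceptibility factors (intercept, 'slope', 'aspect').
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_w = np.array([-0.5, 2.0, 0.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ true_w)))).astype(float)

w = fit_logistic(X, y)
accuracy = ((X @ w > 0).astype(float) == y).mean()
print(f"accuracy = {accuracy:.2%}")
```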

  7. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Directory of Open Access Journals (Sweden)

    Saro Lee

    2016-02-01

    Full Text Available The aim of this study is to predict landslide susceptibility through spatial analysis, applying statistical methods within a GIS. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database covering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility maps with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction maps. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

  8. A new approach to analyse longitudinal epidemiological data with an excess of zeros.

    Science.gov (United States)

    Spriensma, Alette S; Hajos, Tibor R S; de Boer, Michiel R; Heymans, Martijn W; Twisk, Jos W R

    2013-02-20

    Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model would be better suited to analysing outcome variables with an excess of zeros. The objective of this paper was therefore to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches. Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with STATA using the GLLAMM procedure. Regarding the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit. This paper showed that a two-part joint mixed model is a more appropriate method to analyse longitudinal data with an excess of zeros than a linear mixed model or a Poisson mixed model. However, in a model with random slopes for time, a Poisson mixed model also performed remarkably well.
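
    The zero-excess problem itself is easy to demonstrate cross-sectionally (the longitudinal random-effects machinery of the paper is beyond a short sketch): a single Poisson fit badly underpredicts the zero fraction, while a two-part fit (a binary part for zero vs. positive plus a count part for the positives) matches it by construction. The mixing proportion and rate below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# Zero-inflated counts: structural zero with probability 0.6, else Poisson(3).
structural_zero = rng.uniform(size=n) < 0.6
y = np.where(structural_zero, 0, rng.poisson(3.0, size=n))

# One-part fit: a single Poisson with lambda = sample mean.
lam = y.mean()
pred_zeros_poisson = np.exp(-lam)   # P(Y = 0) under the plain Poisson fit

# Two-part fit: a binary part for zero vs. positive, a count part for positives.
p_zero = (y == 0).mean()            # binomial part
lam_pos = y[y > 0].mean()           # rate for the positive part
pred_zeros_twopart = p_zero         # zeros are matched by construction

obs_zeros = (y == 0).mean()
print(round(pred_zeros_poisson, 3), round(obs_zeros, 3))  # Poisson underpredicts zeros
```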

  9. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    Science.gov (United States)

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework, quantile regression has typically been carried out by exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach to quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend the approach to discrete responses, where there is no one-to-one relationship between quantiles and the distribution's parameters, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.
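
    The working-likelihood connection the abstract refers to comes from the fact that maximising an Asymmetric Laplace likelihood is equivalent to minimising the pinball (check) loss, whose minimiser is the quantile of interest. A quick numerical sketch with invented data (grid search over candidate values, not the paper's R-INLA approach):

```python
import numpy as np

def pinball_loss(u, residuals):
    """Check (pinball) loss at quantile level u: u*r if r >= 0, (u-1)*r if r < 0."""
    return float(np.mean(np.maximum(u * residuals, (u - 1.0) * residuals)))

rng = np.random.default_rng(4)
y = rng.exponential(scale=2.0, size=5000)

u = 0.9
candidates = np.linspace(0.0, 15.0, 3001)
losses = [pinball_loss(u, y - c) for c in candidates]
q_hat = candidates[int(np.argmin(losses))]

# Minimising the check loss recovers the empirical 90% quantile.
print(q_hat, np.quantile(y, 0.9))
```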

  11. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface. ... As a by-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical ...

  12. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is a prime field in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present important features to use when predicting housing prices with good accuracy. We describe regression models using various features chosen to lower the residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers of the features) is used to improve the model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper is thus directed at the best application of regression models, in addition to other techniques, to optimize the result.
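
    A minimal sketch of the polynomial-features-plus-ridge idea described above, with invented numbers standing in for a housing feature (this is not the paper's dataset or code):

```python
import numpy as np

def poly_design(x, degree):
    """Polynomial feature expansion: columns 1, x, x^2, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(X, y, lam):
    """Ridge regression, used here to curb overfitting of the polynomial."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(5)
area = rng.uniform(50, 200, size=60)                 # hypothetical floor area
price = 1000.0 + 15.0 * area + rng.normal(0, 200, size=60)

z = (area - area.mean()) / area.std()                # standardise before expanding
X = poly_design(z, 5)
beta_ols = np.linalg.lstsq(X, price, rcond=None)[0]
beta_ridge = ridge_fit(X, price, lam=1.0)

# OLS always fits the training data at least as tightly, but the ridge
# coefficients are smaller, which is what protects against overfitting.
rss_ols = np.sum((price - X @ beta_ols) ** 2)
rss_ridge = np.sum((price - X @ beta_ridge) ** 2)
print(rss_ols <= rss_ridge, np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))
```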

  13. Drug treatment rates with beta-blockers and ACE-inhibitors/angiotensin receptor blockers and recurrences in takotsubo cardiomyopathy: A meta-regression analysis.

    Science.gov (United States)

    Brunetti, Natale Daniele; Santoro, Francesco; De Gennaro, Luisa; Correale, Michele; Gaglione, Antonio; Di Biase, Matteo

    2016-07-01

    In a recent paper, Singh et al. analyzed the effect of drug treatment on recurrence of takotsubo cardiomyopathy (TTC) in a comprehensive meta-analysis. The study found that recurrence rates were independent of the rate of BB prescription, but inversely correlated with ACEi/ARB prescription; the authors therefore concluded that ACEi/ARB, rather than BB, may reduce the risk of recurrence. We aimed to re-analyze the data reported in the study, now weighted for population size, in a meta-regression analysis. After multiple meta-regression analysis, we found a significant regression between rates of prescription of ACEi and rates of recurrence of TTC; the regression was not statistically significant for BBs. On the basis of our re-analysis, we confirm that rates of recurrence of TTC are lower in populations of patients with higher rates of treatment with ACEi/ARB. That does not necessarily imply that ACEi prevents recurrence of TTC, but merely that, for example, rates of recurrence are lower in cohorts more compliant with therapy, or more often prescribed ACEi because they are more carefully followed. Randomized prospective studies are surely warranted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
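
    Weighting a meta-regression by cohort size is, in its simplest form, ordinary weighted least squares. A self-contained sketch with made-up study-level numbers (these are not the actual rates from Singh et al.):

```python
import numpy as np

def weighted_ls(X, y, w):
    """Weighted least squares: beta = (X'WX)^{-1} X'Wy, W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical study-level data: ACEi/ARB prescription rate vs. TTC recurrence
# rate, weighted by cohort size (invented for illustration).
acei_rate = np.array([0.2, 0.35, 0.5, 0.6, 0.8])
recurrence = np.array([0.09, 0.07, 0.06, 0.04, 0.02])
n_patients = np.array([40.0, 120.0, 60.0, 200.0, 90.0])

X = np.column_stack([np.ones_like(acei_rate), acei_rate])
beta = weighted_ls(X, recurrence, n_patients)
print(beta[1])  # negative slope: higher ACEi rates go with lower recurrence
```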

  14. Computing multiple-output regression quantile regions

    Czech Academy of Sciences Publication Activity Database

    Paindaveine, D.; Šiman, Miroslav

    2012-01-01

    Roč. 56, č. 4 (2012), s. 840-853 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M06047 Institutional research plan: CEZ:AV0Z10750506 Keywords : halfspace depth * multiple-output regression * parametric linear programming * quantile regression Subject RIV: BA - General Mathematics Impact factor: 1.304, year: 2012 http://library.utia.cas.cz/separaty/2012/SI/siman-0376413.pdf

  15. Preface to Berk's "Regression Analysis: A Constructive Critique"

    OpenAIRE

    de Leeuw, Jan

    2003-01-01

    It is a pleasure to write a preface for the book ”Regression Analysis” by my fellow series editor Dick Berk. And it is a pleasure in particular because the book is about regression analysis, the most popular and most fundamental technique in applied statistics. And because it is critical of the way regression analysis is used in the sciences, in particular in the social and behavioral sciences. Although the book can be read as an introduction to regression analysis, it can also be read as a...

  16. Risk Factors for Fatal Hyperglycaemia Confirmed by Forensic Postmortem Examination - A Nationwide Cohort in Sweden

    Science.gov (United States)

    Walz, Lotta; Jönsson, Anna K.; Zilg, Brita; Östgren, Carl Johan; Druid, Henrik

    2016-01-01

    Aims/Hypothesis The aim of this study was to identify risk factors associated with confirmed fatal hyperglycaemia, which could predispose to potentially preventable deaths in individuals on glucose lowering drugs. Methods A retrospective register-based case-control study was conducted on a nationwide cohort of individuals who died due to hyperglycaemia, as determined by forensic postmortem examination, in Sweden from August 2006 to December 2012. Vitreous glucose was used to diagnose hyperglycaemia postmortem. The forensic findings stored in the National Forensic Medicine Database were linked to nationwide registers. Cases who died due to confirmed hyperglycaemia while dispensed glucose lowering drugs were identified, and living controls with dispensed glucose lowering drugs were randomly selected from the Swedish prescribed drug register and matched on age and sex. Information on comorbidities, dispensed pharmaceuticals, clinical data and socioeconomic factors was obtained for cases and controls. Adjusted multiple logistic regression models were used to identify risk factors associated with fatal hyperglycaemia. Results During the study period, 322 individuals, mostly males (79%) with a mean age of 53.9 years (SD ± 14), died due to confirmed hyperglycaemia. Risk factors for fatal hyperglycaemia included: insulin treatment (OR = 4.40; 95%CI 1.96, 9.85), poor glycaemic control (OR = 2.00; 95%CI 1.23, 3.27), inadequate refill-adherence before death (OR = 3.87; 95%CI 1.99, 7.53), microvascular disease (OR = 3.26; 95%CI 1.84, 5.79), psychiatric illness (OR = 2.30; 95%CI 1.32, 4.01), substance abuse (OR = 8.85; 95%CI 2.34, 35.0) and/or living alone (OR = 2.25; 95%CI 1.21, 4.18). Conclusions/Interpretation Our results demonstrate the importance of clinical attention to poor glycaemic control in subjects with psychosocial problems, since it may indicate serious non-adherence, which consequently could lead to fatal hyperglycaemia. PMID:27768720

  17. Experience with confirmation measurement at Los Alamos

    International Nuclear Information System (INIS)

    Marshall, R.S.; Wagner, R.P.; Hsue, F.

    1985-01-01

    Confirmation measurements are used at Los Alamos in support of incoming and outgoing shipment accountability and in support of both 235U and Pu inventories. Statistical data are presented to show the consistency of measurements on items of identical composition and on items measured at two facilities using similar instruments. A description of confirmation measurement techniques used in support of 235U and Pu inventories and a discussion of the ability of the measurements to identify items with misstated SNM are given.

  18. Experience with confirmation measurement at Los Alamos

    International Nuclear Information System (INIS)

    Marshall, R.S.; Wagner, R.P.

    1985-01-01

    Confirmation measurements are used at Los Alamos in support of incoming and outgoing shipment accountability and in support of both 235U and Pu inventories. Statistical data are presented to show the consistency of measurements on items of identical composition and on items measured at two facilities using similar instruments. A description of confirmation measurement techniques used in support of 235U and Pu inventories and a discussion of the ability of the measurements to identify items with misstated SNM are given.

  19. Mobile Genome Express (MGE): A comprehensive automatic genetic analyses pipeline with a mobile device.

    Directory of Open Access Journals (Sweden)

    Jun-Hee Yoon

    Full Text Available The development of next-generation sequencing (NGS) technology makes it possible to sequence whole exomes or genomes. However, data analysis is still the biggest bottleneck for its wide implementation. Most laboratories still depend on manual procedures for data handling and analyses, which translates into delays and decreased efficiency in the delivery of NGS results to doctors and patients. Thus, there is high demand for an automatic and easy-to-use NGS data analysis system. We developed a comprehensive, automatic genetic analysis controller named Mobile Genome Express (MGE) that works on smartphones or other mobile devices. MGE can handle all the steps for genetic analyses, such as sample information submission, sequencing run quality checks from the sequencer, secured data transfer and results review. We sequenced an Actrometrix control DNA containing multiple proven human mutations using a targeted sequencing panel, and the whole analysis was managed by MGE and its data reviewing program, ELECTRO. All steps were processed automatically except for the final sequencing review procedure with ELECTRO to confirm mutations. The data analysis process was completed within several hours. We confirmed that the mutations we identified were consistent with our previous results obtained using multi-step, manual pipelines.

  20. Mobile Genome Express (MGE): A comprehensive automatic genetic analyses pipeline with a mobile device.

    Science.gov (United States)

    Yoon, Jun-Hee; Kim, Thomas W; Mendez, Pedro; Jablons, David M; Kim, Il-Jin

    2017-01-01

    The development of next-generation sequencing (NGS) technology makes it possible to sequence whole exomes or genomes. However, data analysis is still the biggest bottleneck for its wide implementation. Most laboratories still depend on manual procedures for data handling and analyses, which translates into delays and decreased efficiency in the delivery of NGS results to doctors and patients. Thus, there is high demand for an automatic and easy-to-use NGS data analysis system. We developed a comprehensive, automatic genetic analysis controller named Mobile Genome Express (MGE) that works on smartphones or other mobile devices. MGE can handle all the steps for genetic analyses, such as sample information submission, sequencing run quality checks from the sequencer, secured data transfer and results review. We sequenced an Actrometrix control DNA containing multiple proven human mutations using a targeted sequencing panel, and the whole analysis was managed by MGE and its data reviewing program, ELECTRO. All steps were processed automatically except for the final sequencing review procedure with ELECTRO to confirm mutations. The data analysis process was completed within several hours. We confirmed that the mutations we identified were consistent with our previous results obtained using multi-step, manual pipelines.

  1. Troubleshooting Requests e-mail Confirmation

    CERN Multimedia

    TS Department

    2004-01-01

    In an ongoing effort to improve quality of the repair requests, a new e-mail confirmation automatic system will be implemented starting from the 21st October. All repair requests transmitted to the TCR (72201) or the FM Helpdesk (77777) will be confirmed in an e-mail to the requestor, provided that the latter has a valid e-mail address in the HR database. The e-mail will contain a reference number, a brief description of the problem, the location and a contact where more information can be obtained. A second e-mail will be sent when the processing of the repair request is finished. We hope that this initiative will improve the transparency and quality of our service. Helpdesk Troubleshooting Requests (reminder) We remind you that all the repair requests and other communication concerning the CERN machine buildings have to be transmitted to the TCR via 72201, whereas the ones concerning tertiary buildings are handled directly by the FM helpdesk under the phone number 77777, i.e. problems on systems and equ...

  2. Five cases of caudal regression with an aberrant abdominal umbilical artery: Further support for a caudal regression-sirenomelia spectrum.

    Science.gov (United States)

    Duesterhoeft, Sara M; Ernst, Linda M; Siebert, Joseph R; Kapur, Raj P

    2007-12-15

    Sirenomelia and caudal regression have sparked centuries of interest and recent debate regarding their classification and pathogenetic relationship. Specific anomalies are common to both conditions, but aside from fusion of the lower extremities, an aberrant abdominal umbilical artery ("persistent vitelline artery") has been invoked as the chief anatomic finding that distinguishes sirenomelia from caudal regression. This observation is important from a pathogenetic viewpoint, in that diversion of blood away from the caudal portion of the embryo through the abdominal umbilical artery ("vascular steal") has been proposed as the primary mechanism leading to sirenomelia. In contrast, caudal regression is hypothesized to arise from primary deficiency of caudal mesoderm. We present five cases of caudal regression that exhibit an aberrant abdominal umbilical artery similar to that typically associated with sirenomelia. Review of the literature identified four similar cases. Collectively, the series lends support for a caudal regression-sirenomelia spectrum with a common pathogenetic basis and suggests that abnormal umbilical arterial anatomy may be the consequence, rather than the cause, of deficient caudal mesoderm. (c) 2007 Wiley-Liss, Inc.

  3. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia; Rue, Haavard

    2018-01-01

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite

  4. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  5. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
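
    In its simplest form the MMR model described above is just least squares with a product term added to the design matrix. A synthetic sketch (the variable names are illustrative, and the paper's two-level NML estimation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)   # predictor (e.g. years of education)
m = rng.normal(size=n)   # moderator (continuous here for simplicity)
# The slope of x depends on the moderator: slope = 1 + 0.5 * m.
y = 2.0 + (1.0 + 0.5 * m) * x + 0.3 * m + rng.normal(size=n)

# MMR design matrix: intercept, predictor, moderator, and their product term.
X = np.column_stack([np.ones(n), x, m, x * m])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta)  # beta[3] estimates the moderation effect (true value 0.5)
```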

  6. Transcriptome analysis of spermatogenically regressed, recrudescent and active phase testis of seasonally breeding wall lizards Hemidactylus flaviviridis.

    Directory of Open Access Journals (Sweden)

    Mukesh Gautam

    Full Text Available Reptiles are phylogenically important group of organisms as mammals have evolved from them. Wall lizard testis exhibits clearly distinct morphology during various phases of a reproductive cycle making them an interesting model to study regulation of spermatogenesis. Studies on reptile spermatogenesis are negligible hence this study will prove to be an important resource.Histological analyses show complete regression of seminiferous tubules during regressed phase with retracted Sertoli cells and spermatognia. In the recrudescent phase, regressed testis regain cellular activity showing presence of normal Sertoli cells and developing germ cells. In the active phase, testis reaches up to its maximum size with enlarged seminiferous tubules and presence of sperm in seminiferous lumen. Total RNA extracted from whole testis of regressed, recrudescent and active phase of wall lizard was hybridized on Mouse Whole Genome 8×60 K format gene chip. Microarray data from regressed phase was deemed as control group. Microarray data were validated by assessing the expression of some selected genes using Quantitative Real-Time PCR. The genes prominently expressed in recrudescent and active phase testis are cytoskeleton organization GO 0005856, cell growth GO 0045927, GTpase regulator activity GO: 0030695, transcription GO: 0006352, apoptosis GO: 0006915 and many other biological processes. The genes showing higher expression in regressed phase belonged to functional categories such as negative regulation of macromolecule metabolic process GO: 0010605, negative regulation of gene expression GO: 0010629 and maintenance of stem cell niche GO: 0045165.This is the first exploratory study profiling transcriptome of three drastically different conditions of any reptilian testis. The genes expressed in the testis during regressed, recrudescent and active phase of reproductive cycle are in concordance with the testis morphology during these phases. This study will pave

  7. A method to determine the necessity for global signal regression in resting-state fMRI studies.

    Science.gov (United States)

    Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang

    2012-12-01

    In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and is commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses. Therefore, the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether or not global signal regression should be applied. We introduce a method to quantify global noise levels and show that a criterion for global signal regression can be derived from it. Using this criterion, one can determine whether to include or exclude global signal regression so as to minimize errors in functional connectivity measures. Copyright © 2012 Wiley Periodicals, Inc.
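
    Operationally, global signal regression just projects each voxel's time course onto the orthogonal complement of the global average. A small numpy sketch on synthetic data (not an fMRI preprocessing pipeline, and not the paper's noise-quantification method):

```python
import numpy as np

def global_signal_regression(ts):
    """Regress the global mean time course out of every voxel time series.

    ts has shape (n_timepoints, n_voxels); returns the residual time series.
    """
    g = ts.mean(axis=1, keepdims=True)      # global signal, shape (T, 1)
    g = g - g.mean()                        # centre it
    ts_c = ts - ts.mean(axis=0)             # centre each voxel
    beta = (g.T @ ts_c) / (g.T @ g)         # per-voxel regression weight
    return ts_c - g @ beta

rng = np.random.default_rng(7)
T, V = 200, 50
shared = rng.normal(size=(T, 1))            # a common 'global' fluctuation
ts = shared + 0.5 * rng.normal(size=(T, V)) # each 'voxel' carries it plus noise

cleaned = global_signal_regression(ts)
g = ts.mean(axis=1) - ts.mean()
print(np.abs(g @ cleaned).max())            # ~0: residuals orthogonal to the global signal
```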

  8. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    Science.gov (United States)

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

  9. Demonstration of a Fiber Optic Regression Probe

    Science.gov (United States)

    Korman, Valentin; Polzin, Kurt A.

    2010-01-01

    The capability to provide localized, real-time monitoring of material regression rates in various applications has the potential to provide a new stream of data for development testing of various components and systems, as well as serving as a monitoring tool in flight applications. These applications include, but are not limited to, the regression of a combusting solid fuel surface, the ablation of the throat in a chemical rocket or the heat shield of an aeroshell, and the monitoring of erosion in long-life plasma thrusters. The rate of regression in the first application is very fast, while the second and third are increasingly slower. A recent fundamental sensor development effort has led to a novel regression, erosion, and ablation sensor technology (REAST). The REAST sensor allows for measurement of real-time surface erosion rates at a discrete surface location. The sensor is optical, using two different, co-located fiber-optics to perform the regression measurement. The disparate optical transmission properties of the two fiber-optics make it possible to measure the regression rate by monitoring the relative light attenuation through the fibers. As the fibers regress along with the parent material in which they are embedded, the relative light intensities through the two fibers change, providing a measure of the regression rate. The optical nature of the system makes it relatively easy to use in a variety of harsh, high-temperature environments, and it is also unaffected by the presence of electric and magnetic fields. In addition, the sensor could be used to perform optical spectroscopy on the light emitted by a process and collected by the fibers, giving localized measurements of various properties. The capability to perform an in-situ measurement of material regression rates is useful in addressing a variety of physical issues in various applications. An in-situ measurement allows for real-time data regarding the erosion rates, providing a quick method for

  10. Caudal regression syndrome : a case report

    International Nuclear Information System (INIS)

    Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun

    1998-01-01

    Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging.

  11. Caudal regression syndrome : a case report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun [Chungang Gil Hospital, Incheon (Korea, Republic of)

    1998-07-01

    Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging.

  12. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
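
    The two coefficients and the simple linear fit discussed in the tutorial can be computed from first principles. The data below are illustrative, not the CT-guided interventional data from the article:

```python
# Pearson r, Spearman rho, and simple linear regression from first
# principles on a small illustrative data set (roughly y = 2x).

def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    xb, yb = mean(x), mean(y)
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(xs):
    """Ranks 1..n; no tie handling in this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))       # Pearson on the ranks

def linear_fit(x, y):
    xb, yb = mean(x), mean(y)
    slope = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / \
            sum((a - xb) ** 2 for a in x)
    return yb - slope * xb, slope            # intercept, slope

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(pearson(x, y), spearman(x, y), linear_fit(x, y))
```

    On this monotone data Spearman's rho is exactly 1, while Pearson's r stays just below 1 because the relationship is not perfectly linear.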

  13. bayesQR: A Bayesian Approach to Quantile Regression

    Directory of Open Access Journals (Sweden)

    Dries F. Benoit

    2017-01-01

    Full Text Available After its introduction by Koenker and Bassett (1978), quantile regression has become an important and popular tool to investigate the conditional response distribution in regression. The R package bayesQR contains a number of routines to estimate quantile regression parameters using a Bayesian approach based on the asymmetric Laplace distribution. The package contains functions for the typical quantile regression with continuous dependent variable, but also supports quantile regression for binary dependent variables. For both types of dependent variables, an approach to variable selection using the adaptive lasso approach is provided. For the binary quantile regression model, the package also contains a routine that calculates the fitted probabilities for each vector of predictors. In addition, functions for summarizing the results, creating trace plots and posterior histograms, and drawing quantile plots are included. This paper starts with a brief overview of the theoretical background of the models used in the bayesQR package. The main part of this paper discusses the computational problems that arise in the implementation of the procedure and illustrates the usefulness of the package through selected examples.
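
    bayesQR is an R package, but the check (pinball) loss at the core of quantile regression, whose exponentiated negative gives the asymmetric Laplace likelihood the package builds on, is easy to illustrate in a few lines of Python. The sketch below shows that, for a constant fit, the empirical τ-quantile minimizes the loss:

```python
# The Koenker-Bassett check (pinball) loss for a constant fit c at
# quantile level tau; the empirical tau-quantile minimizes it.

def check_loss(y, c, tau):
    return sum(tau * (v - c) if v >= c else (1 - tau) * (c - v) for v in y)

y = list(range(1, 11))        # 1 .. 10
tau = 0.25
best = min(y, key=lambda c: check_loss(y, c, tau))
print(best)                   # → 3, the empirical 0.25-quantile
```

    Replacing the constant c with a linear predictor x'β and minimizing the same loss gives quantile regression; the Bayesian machinery in bayesQR samples from a posterior built on this loss.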

  14. Confirmation and lucidity during conversations with a woman with severe dementia.

    Science.gov (United States)

    Normann, Hans Ketil; Norberg, Astrid; Asplund, Kenneth

    2002-08-01

    Patients with severe dementia sometimes surprise the care providers because they seem to be much more aware of their situation and function much more adequately than usual. Such episodes are labelled 'episodes of lucidity' (ELs). The aim of this study was to describe the characteristics of the particular conversations with a woman with severe dementia in which ELs occurred, as compared with conversations when she was not lucid. A woman with a probable diagnosis of Alzheimer's disease (AD) was selected. Her Mini-Mental State Examination (MMSE) score was estimated as 3. The first author met the woman for 4 hours five times over a period of 2 weeks. RESEARCH METHODS: The conversations were tape-recorded and transcribed verbatim. The text was divided into 278 content units and analysed. Lucidity is promoted by supporting the patient in various ways, that is, sharing the patient's view, repeating and reformulating the patient's utterances, reinforcing the patient by using positive utterances, not emphasizing errors, supporting the patient's language in various ways, and avoiding making demands. The relation between the patient and her conversation partner during ELs is characterized by confirmation and communion. This case study indicates that a supportive attitude in conversation with a patient with severe dementia promotes lucidity. A supportive attitude includes avoiding, on the part of the conversational partner, making demands on the patient, confirming the patient as an important, unique and valuable person, and creating communion. The connection between supporting and avoiding demands and lucidity/nonlucidity during conversation needs further study.

  15. Multivariate Linear Regression and CART Regression Analysis of TBM Performance at Abu Hamour Phase-I Tunnel

    Science.gov (United States)

    Jakubowski, J.; Stypulkowski, J. B.; Bernardeau, F. G.

    2017-12-01

    The first phase of the Abu Hamour drainage and storm tunnel was completed in early 2017. The 9.5 km long, 3.7 m diameter tunnel was excavated with two Earth Pressure Balance (EPB) Tunnel Boring Machines from Herrenknecht. TBM operation processes were monitored and recorded by Data Acquisition and Evaluation System. The authors coupled collected TBM drive data with available information on rock mass properties, cleansed, completed with secondary variables and aggregated by weeks and shifts. Correlations and descriptive statistics charts were examined. Multivariate Linear Regression and CART regression tree models linking TBM penetration rate (PR), penetration per revolution (PPR) and field penetration index (FPI) with TBM operational and geotechnical characteristics were performed for the conditions of the weak/soft rock of Doha. Both regression methods are interpretable and the data were screened with different computational approaches allowing enriched insight. The primary goal of the analysis was to investigate empirical relations between multiple explanatory and responding variables, to search for best subsets of explanatory variables and to evaluate the strength of linear and non-linear relations. For each of the penetration indices, a predictive model coupling both regression methods was built and validated. The resultant models appeared to be stronger than constituent ones and indicated an opportunity for more accurate and robust TBM performance predictions.

  16. Estimated prevalence of halitosis: a systematic review and meta-regression analysis.

    Science.gov (United States)

    Silva, Manuela F; Leite, Fábio R M; Ferreira, Larissa B; Pola, Natália M; Scannapieco, Frank A; Demarco, Flávio F; Nascimento, Gustavo G

    2018-01-01

    This study aims to conduct a systematic review to determine the prevalence of halitosis in adolescents and adults. Electronic searches were performed using four different databases without restrictions: PubMed, Scopus, Web of Science, and SciELO. Population-based observational studies that provided data about the prevalence of halitosis in adolescents and adults were included. Additionally, meta-analyses, meta-regression, and sensitivity analyses were conducted to synthesize the evidence. A total of 584 articles were initially found and considered for title and abstract evaluation. Thirteen articles met inclusion criteria. The combined prevalence of halitosis was found to be 31.8% (95% CI 24.6-39.0%). Methodological aspects such as the year of publication and the socioeconomic status of the country where the study was conducted seemed to influence the prevalence of halitosis. Our results demonstrated that the estimated prevalence of halitosis was 31.8%, with high heterogeneity between studies. The results suggest a worldwide trend towards a rise in halitosis prevalence. Given the high prevalence of halitosis and its complex etiology, dental professionals should be aware of their roles in halitosis prevention and treatment.
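
    As a minimal illustration of the pooling step behind such a meta-analysis, the sketch below combines hypothetical study prevalences by inverse-variance weighting (a fixed-effect simplification; the authors' analysis additionally models between-study heterogeneity):

```python
# Inverse-variance pooling of prevalence estimates, the basic building
# block of a prevalence meta-analysis. Study data below are made up for
# demonstration only.

studies = [          # (cases, sample size) -- hypothetical
    (120, 400),
    (90, 250),
    (300, 1000),
]

weights, estimates = [], []
for cases, n in studies:
    p = cases / n
    var = p * (1 - p) / n          # binomial variance of a proportion
    weights.append(1 / var)
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se = (1 / sum(weights)) ** 0.5
print(f"pooled prevalence = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```

    A meta-regression then regresses the study-level estimates on moderators (e.g., year of publication, country income level) using these same weights.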

  17. Mercury biomagnification in a contaminated estuary food web: Effects of age and trophic position using stable isotope analyses

    International Nuclear Information System (INIS)

    Coelho, J.P.; Mieiro, C.L.; Pereira, E.; Duarte, A.C.; Pardal, M.A.

    2013-01-01

    Highlights: ► High trophic magnification potential of mercury in a temperate contaminated estuary. ► The use of age-adjusted data provided better fitting to linear regression curves. ► Similar TMFs in other studies suggest stable magnification regardless of latitude. -- Abstract: The main aim of this study was to ascertain the biomagnification processes in a mercury-contaminated estuary, by clarifying the trophic web structure through stable isotope ratios. For this purpose, primary producers (seagrasses and macroalgae), invertebrates (detritivores and benthic predators) and fish were analysed for total and organic mercury and for stable carbon and nitrogen isotopic signatures. Trophic structure was accurately described by δ15N, while δ13C reflected the carbon source for each species. An increase of mercury levels was observed with trophic level, particularly for organic mercury. Results confirm mercury biomagnification to occur in this estuarine food web, especially in the organic form, both in absolute concentrations and fraction of total mercury load. Age can be considered an important variable in mercury biomagnification studies, and data adjustments to account for the different exposure periods may be necessary for a correct assessment of trophic magnification rates and ecological risk.
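
    The quantities in such studies can be sketched concretely. Below, trophic level is derived from δ15N with the commonly used 3.4‰ per-level enrichment, and a trophic magnification factor (TMF) is taken as 10 to the power of the slope of log10(Hg) on trophic level. The baseline and species values are hypothetical, not the study's data:

```python
# Sketch of how biomagnification is quantified: trophic level (TL) from
# delta-15N, then TMF = 10**slope of log10(Hg) regressed on TL.
# All numbers below are illustrative.

import math

baseline_d15n = 8.0                     # primary-consumer baseline (assumed)
species = [                             # (delta15N, Hg ug/g) -- hypothetical
    (8.0, 0.02), (10.5, 0.06), (13.0, 0.20), (15.5, 0.65),
]

tl = [2 + (d - baseline_d15n) / 3.4 for d, _ in species]
log_hg = [math.log10(h) for _, h in species]

mx, my = sum(tl) / len(tl), sum(log_hg) / len(log_hg)
slope = sum((a - mx) * (b - my) for a, b in zip(tl, log_hg)) / \
        sum((a - mx) ** 2 for a in tl)
tmf = 10 ** slope
print(f"slope = {slope:.3f}, TMF = {tmf:.2f}")   # TMF > 1 => biomagnification
```

    The age adjustment highlighted in the abstract would enter here as a correction to the mercury concentrations before the regression is fitted.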

  18. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
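
    The core trick, profiling the stratum intercepts out of the Poisson likelihood so that only the exposure effect is estimated, can be sketched for the simplest case of a binary exposure. The person-time data below are hypothetical; in this case the profile score coincides with the conditional likelihood score:

```python
# Sketch of background-stratified Poisson regression for a binary
# exposure: stratum intercepts are profiled out, leaving a score
# equation in the log rate ratio beta alone, solved here by bisection.
# Person-time data are hypothetical.

import math

# per stratum: (pt_unexposed, cases_unexposed, pt_exposed, cases_exposed)
strata = [
    (1000.0, 10, 500.0, 12),
    (2000.0, 30, 800.0, 20),
    (1500.0, 12, 900.0, 15),
]

def profile_score(beta):
    """Derivative of the profile log-likelihood with respect to beta."""
    u = 0.0
    for pt0, d0, pt1, d1 in strata:
        total = d0 + d1
        # expected exposed cases once each stratum intercept is profiled out
        expected = total * pt1 * math.exp(beta) / (pt0 + pt1 * math.exp(beta))
        u += d1 - expected
    return u

lo, hi = -5.0, 5.0                  # bracket the root of the score
for _ in range(100):                # score is decreasing in beta
    mid = (lo + hi) / 2
    if profile_score(mid) > 0:
        lo = mid
    else:
        hi = mid
beta_hat = (lo + hi) / 2
print("log rate ratio:", round(beta_hat, 4),
      "RR:", round(math.exp(beta_hat), 4))
```

    No stratum-specific coefficient is ever computed, which is the point of the approach when the number of strata is large.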

  19. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  20. Regression: The Apple Does Not Fall Far From the Tree.

    Science.gov (United States)

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.

  1. Quasi-experimental evidence on tobacco tax regressivity.

    Science.gov (United States)

    Koch, Steven F

    2018-01-01

    Tobacco taxes are known to reduce tobacco consumption and to be regressive, such that tobacco control policy may have the perverse effect of further harming the poor. However, if tobacco consumption falls faster amongst the poor than the rich, tobacco control policy can actually be progressive. We take advantage of persistent and committed tobacco control activities in South Africa to examine the household tobacco expenditure burden. For the analysis, we make use of two South African Income and Expenditure Surveys (2005/06 and 2010/11) that span a series of such tax increases and have been matched across the years, yielding 7806 matched pairs of tobacco consuming households and 4909 matched pairs of cigarette consuming households. By matching households across the surveys, we are able to examine both the regressivity of the household tobacco burden, and any change in that regressivity, and since tobacco taxes have been a consistent component of tobacco prices, our results also relate to the regressivity of tobacco taxes. Like previous research into cigarette and tobacco expenditures, we find that the tobacco burden is regressive; thus, so are tobacco taxes. However, we find that over the five-year period considered, the tobacco burden has decreased, and, most importantly, falls less heavily on the poor. Thus, the tobacco burden and the tobacco tax is less regressive in 2010/11 than in 2005/06. Thus, increased tobacco taxes can, in at least some circumstances, reduce the financial burden that tobacco places on households. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data 'obtained' from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented, which is realized using standard software for regression analysis.

  3. Time-trend of melanoma screening practice by primary care physicians: a meta-regression analysis.

    Science.gov (United States)

    Valachis, Antonis; Mauri, Davide; Karampoiki, Vassiliki; Polyzos, Nikolaos P; Cortinovis, Ivan; Koukourakis, Georgios; Zacharias, Georgios; Xilomenos, Apostolos; Tsappi, Maria; Casazza, Giovanni

    2009-01-01

    To assess whether the proportion of primary care physicians implementing full body skin examination (FBSE) to screen for melanoma changed over time. Meta-regression analyses of available data. MEDLINE, ISI, Cochrane Central Register of Controlled Trials. Fifteen studies surveying 10,336 physicians were included in the analyses. Overall, 15%-82% of them reported performing FBSE to screen for melanoma. The proportion of physicians using FBSE screening tended to decrease by 1.72% per year (P = 0.086). Corresponding annual changes in European, North American, and Australian settings were -0.68% (P = 0.494), -2.02% (P = 0.044), and +2.59% (P = 0.010), respectively. Changes were not influenced by national guidelines. Considering the increasing incidence of melanoma and other skin malignancies, as well as their potential consequences, the time-trend in FBSE implementation we retrieved should be considered a worrisome phenomenon.

  4. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights perturbation scheme, responses perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme are explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variance to obtain a local influence measure, which has the advantage of being able to compare the influence magnitude of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Antibiotic Resistances in Livestock: A Comparative Approach to Identify an Appropriate Regression Model for Count Data

    Directory of Open Access Journals (Sweden)

    Anke Hüls

    2017-05-01

    Full Text Available Antimicrobial resistance in livestock is a matter of general concern. To develop hygiene measures and methods for resistance prevention and control, epidemiological studies on a population level are needed to detect factors associated with antimicrobial resistance in livestock holdings. In general, regression models are used to describe these relationships between environmental factors and resistance outcomes. Besides the study design, the correlation structures of the different outcomes of antibiotic resistance, as well as structural zero measurements on both the resistance outcome and the exposure side, are challenges for the epidemiological model-building process. The use of appropriate regression models that acknowledge these complexities is essential to assure valid epidemiological interpretations. The aims of this paper are (i) to explain the model-building process by comparing several competing models for count data (negative binomial, quasi-Poisson, zero-inflated, and hurdle models, and (ii) to compare these models using data from a cross-sectional study on antibiotic resistance in animal husbandry. These goals are essential to evaluate which model is most suitable to identify potential prevention measures. The dataset used as an example in our analyses was generated initially to study the prevalence and associated factors for the appearance of cefotaxime-resistant Escherichia coli in 48 German fattening pig farms. For each farm, the outcome was the count of samples with resistant bacteria. There was almost no overdispersion and only moderate evidence of excess zeros in the data. Our analyses show that it is essential to evaluate regression models in studies analyzing the relationship between environmental factors and antibiotic resistances in livestock. After model comparison based on evaluation of model predictions, Akaike information criterion, and Pearson residuals, here the hurdle model was judged to be the most appropriate.
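
    Before fitting any of the competing count models, two quick diagnostics indicate which complexities matter: the variance-to-mean ratio (overdispersion) and the observed versus Poisson-expected zero fraction (excess zeros). A stdlib-only sketch on hypothetical farm-level counts:

```python
# Quick diagnostics that motivate the choice among count models:
# overdispersion (variance/mean ratio) and excess zeros (observed vs
# Poisson-expected zero fraction). Counts below are hypothetical.

import math

counts = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 5, 8]   # resistant-sample counts

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)

dispersion = var / mean                 # ~1 for Poisson, >1 overdispersed
p0_obs = counts.count(0) / n            # observed zero fraction
p0_poisson = math.exp(-mean)            # zero fraction a Poisson implies

print(f"dispersion ratio : {dispersion:.2f}")
print(f"observed zeros   : {p0_obs:.2f}")
print(f"Poisson-expected : {p0_poisson:.2f}")
# dispersion >> 1 suggests negative binomial / quasi-Poisson;
# p0_obs >> p0_poisson suggests zero-inflated or hurdle models.
```

    Formal model comparison, as in the paper, then proceeds via AIC, predictive checks, and Pearson residuals on the fitted candidates.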

  6. Extralobar pulmonary sequestration in neonates: The natural course and predictive factors associated with spontaneous regression

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hee Mang; Jung, Ah Young; Cho, Young Ah; Yoon, Chong Hyun; Lee, Jin Seong [Asan Medical Center Children's Hospital, University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Songpa-gu, Seoul (Korea, Republic of); Kim, Ellen Ai-Rhan [University of Ulsan College of Medicine, Division of Neonatology, Asan Medical Center Children's Hospital, Seoul (Korea, Republic of); Chung, Sung-Hoon [Kyung Hee University School of Medicine, Department of Pediatrics, Seoul (Korea, Republic of); Kim, Seon-Ok [Asan Medical Center, Department of Clinical Epidemiology and Biostatistics, Seoul (Korea, Republic of)

    2017-06-15

    To describe the natural course of extralobar pulmonary sequestration (EPS) and identify factors associated with spontaneous regression of EPS. We retrospectively searched for patients who were diagnosed with EPS on an initial contrast CT scan within 1 month after birth and who had a follow-up CT scan without treatment. Spontaneous regression of EPS was assessed by percentage decrease in volume (PDV) and percentage decrease in sum of the diameters of systemic feeding arteries (PDD) by comparing initial and follow-up CT scans. Clinical and CT features were analysed to determine factors associated with PDV and PDD rates. Fifty-one neonates were included. The cumulative proportions of patients reaching PDV > 50 % and PDD > 50 % were 93.0 % and 73.3 % at 4 years, respectively. Tissue attenuation was significantly associated with PDV rate (B = -21.78, P <.001). Tissue attenuation (B = -22.62, P =.001) and the diameter of the largest systemic feeding arteries (B = -48.31, P =.011) were significant factors associated with PDD rate. The volume and diameter of systemic feeding arteries of EPS spontaneously decreased within 4 years without treatment. EPSs showing low tissue attenuation and a small diameter of the largest systemic feeding arteries on initial contrast-enhanced CT scans were likely to regress spontaneously. (orig.)

  7. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

  8. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

    The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma ray spectrum or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure as a computer programme, which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
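
    The unfolding problem reduces to ordinary least squares once the library spectra are treated as regressors. A noise-free, two-component toy sketch (the paper's stepwise procedure adds variable selection and noise handling on top of this):

```python
# Spectrum unfolding as multiple linear regression: a measured spectrum
# is modeled as a weighted sum of known component spectra and the weights
# are found by least squares (normal equations, two components, toy
# 5-channel spectra).

comp_a = [10.0, 8.0, 4.0, 2.0, 1.0]      # library spectrum of constituent A
comp_b = [1.0, 2.0, 6.0, 9.0, 5.0]       # library spectrum of constituent B
true_w = (0.7, 0.3)
measured = [true_w[0] * a + true_w[1] * b for a, b in zip(comp_a, comp_b)]

# Normal equations (X'X) w = X'y for X = [comp_a, comp_b]
saa = sum(a * a for a in comp_a)
sbb = sum(b * b for b in comp_b)
sab = sum(a * b for a, b in zip(comp_a, comp_b))
say = sum(a * y for a, y in zip(comp_a, measured))
sby = sum(b * y for b, y in zip(comp_b, measured))

det = saa * sbb - sab * sab
w_a = (say * sbb - sab * sby) / det
w_b = (saa * sby - sab * say) / det
print(round(w_a, 6), round(w_b, 6))      # recovers 0.7 and 0.3
```

    With noise added and a larger library, the stepwise procedure of the paper decides which constituent spectra to keep in the regression.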

  9. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
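
    The key observation, that residuals of the correctly specified model stay nearly symmetric while residuals of the reversed model inherit the predictor's skewness, can be checked numerically. The sketch below uses deterministic toy data (an exponential-shaped predictor and symmetric errors), not the simulation design of the paper:

```python
# Direction of dependence via third central moments of residuals:
# with a skewed predictor x and symmetric errors, residuals of y-on-x
# stay nearly symmetric while residuals of x-on-y inherit skewness.

import math

n = 2000
x = [-math.log(1 - (i + 0.5) / n) for i in range(n)]   # right-skewed grid
err_pat = [0.2, -0.6, 1.0, -0.2, 0.6, -1.0]            # symmetric, mean 0
e = [err_pat[i % 6] for i in range(n)]
y = [1.0 + 0.8 * xi + ei for xi, ei in zip(x, e)]

def residuals(dep, ind):
    mi, md = sum(ind) / n, sum(dep) / n
    b = sum((a - mi) * (c - md) for a, c in zip(ind, dep)) / \
        sum((a - mi) ** 2 for a in ind)
    a0 = md - b * mi
    return [c - a0 - b * a for a, c in zip(ind, dep)]

def third_central_moment(r):
    m = sum(r) / len(r)
    return sum((v - m) ** 3 for v in r) / len(r)

m3_forward = third_central_moment(residuals(y, x))   # y on x (true direction)
m3_reverse = third_central_moment(residuals(x, y))   # x on y (reversed)
print(m3_forward, m3_reverse)
# |m3_forward| << |m3_reverse|: the third moment points to y-on-x.
```

    The inferential procedures in the paper (D'Agostino, skewness difference, and bootstrap tests) formalize this comparison.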

  10. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  11. Clinical evaluation of a novel population-based regression analysis for detecting glaucomatous visual field progression.

    Science.gov (United States)

    Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C

    2011-04-01

The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included in this study. They were independently classified by two experienced investigators. The results of such a classification served as a reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). In regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis. Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF

  12. The Consumer Motivation Scale: A detailed review of item generation, exploration, confirmation, and validation procedures

    Directory of Open Access Journals (Sweden)

    I. Barbopoulos

    2017-08-01

    Full Text Available This data article offers a detailed description of analyses pertaining to the development of the Consumer Motivation Scale (CMS, from item generation and the extraction of factors, to confirmation of the factor structure and validation of the emergent dimensions. The established goal structure – consisting of the sub-goals Value for Money, Quality, Safety, Stimulation, Comfort, Ethics, and Social Acceptance – is shown to be related to a variety of consumption behaviors in different contexts and for different products, and should thereby prove useful in standard marketing research, as well as in the development of tailored marketing strategies, and the segmentation of consumer groups, settings, brands, and products.

  13. The Consumer Motivation Scale: A detailed review of item generation, exploration, confirmation, and validation procedures.

    Science.gov (United States)

    Barbopoulos, I; Johansson, L-O

    2017-08-01

    This data article offers a detailed description of analyses pertaining to the development of the Consumer Motivation Scale (CMS), from item generation and the extraction of factors, to confirmation of the factor structure and validation of the emergent dimensions. The established goal structure - consisting of the sub-goals Value for Money, Quality, Safety, Stimulation, Comfort, Ethics, and Social Acceptance - is shown to be related to a variety of consumption behaviors in different contexts and for different products, and should thereby prove useful in standard marketing research, as well as in the development of tailored marketing strategies, and the segmentation of consumer groups, settings, brands, and products.

  14. [Variations in the diagnostic confirmation process between breast cancer mass screening units].

    Science.gov (United States)

    Natal, Carmen; Fernández-Somoano, Ana; Torá-Rocamora, Isabel; Tardón, Adonina; Castells, Xavier

    2016-01-01

To analyse variations in the diagnostic confirmation process between screening units, variations in the outcome of each episode and the relationship between the use of the different diagnostic confirmation tests and the lesion detection rate. Observational study of variability of the standardised use of diagnostic and lesion detection tests in 34 breast cancer mass screening units participating in early-detection programmes in three Spanish regions from 2002-2011. The diagnostic test variation ratio in percentiles 25-75 ranged from 1.68 (further appointments) to 3.39 (fine-needle aspiration). The variation ratios in detection rates of benign lesions, ductal carcinoma in situ and invasive cancer were 2.79, 1.99 and 1.36, respectively. A positive relationship between rates of testing and detection rates was found with fine-needle aspiration-benign lesions (R(2): 0.53), fine-needle aspiration-invasive carcinoma (R(2): 0.28), core biopsy-benign lesions (R(2): 0.64), core biopsy-ductal carcinoma in situ (R(2): 0.61) and core biopsy-invasive carcinoma (R(2): 0.48). Variation in the use of invasive tests between the breast cancer screening units participating in early-detection programmes was found to be significantly higher than variations in lesion detection. Units which conducted more fine-needle aspiration tests had higher benign lesion detection rates, while units that conducted more core biopsies detected more benign lesions and cancer. Copyright © 2016 SESPAS. Published by Elsevier España. All rights reserved.

  15. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  16. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  17. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  18. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus, governments can use the results to manage fire safety at the city scale.

  19. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  20. Regression of uveal malignant melanomas following cobalt-60 plaque. Correlates between acoustic spectrum analysis and tumor regression

    International Nuclear Information System (INIS)

    Coleman, D.J.; Lizzi, F.L.; Silverman, R.H.; Ellsworth, R.M.; Haik, B.G.; Abramson, D.H.; Smith, M.E.; Rondeau, M.J.

    1985-01-01

    Parameters derived from computer analysis of digital radio-frequency (rf) ultrasound scan data of untreated uveal malignant melanomas were examined for correlations with tumor regression following cobalt-60 plaque. Parameters included tumor height, normalized power spectrum and acoustic tissue type (ATT). Acoustic tissue type was based upon discriminant analysis of tumor power spectra, with spectra of tumors of known pathology serving as a model. Results showed ATT to be correlated with tumor regression during the first 18 months following treatment. Tumors with ATT associated with spindle cell malignant melanoma showed over twice the percentage reduction in height as those with ATT associated with mixed/epithelioid melanomas. Pre-treatment height was only weakly correlated with regression. Additionally, significant spectral changes were observed following treatment. Ultrasonic spectrum analysis thus provides a noninvasive tool for classification, prediction and monitoring of tumor response to cobalt-60 plaque

  1. Regression away from the mean: Theory and examples.

    Science.gov (United States)

    Schwarz, Wolf; Reike, Dennis

    2018-02-01

    Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.

  2. On directional multiple-output quantile regression

    Czech Academy of Sciences Publication Activity Database

    Paindaveine, D.; Šiman, Miroslav

    2011-01-01

    Roč. 102, č. 2 (2011), s. 193-212 ISSN 0047-259X R&D Projects: GA MŠk(CZ) 1M06047 Grant - others:Commision EC(BE) Fonds National de la Recherche Scientifique Institutional research plan: CEZ:AV0Z10750506 Keywords : multivariate quantile * quantile regression * multiple-output regression * halfspace depth * portfolio optimization * value-at risk Subject RIV: BA - General Mathematics Impact factor: 0.879, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/siman-0364128.pdf

  3. Determinants of the probability of adopting quality protein maize (QPM) technology in Tanzania: A logistic regression analysis

    OpenAIRE

    Gregory, T.; Sewando, P.

    2013-01-01

    Adoption of technology is an important factor in economic development. The thrust of this study was to establish factors affecting adoption of QPM technology in Northern zone of Tanzania. Primary data was collected from a random sample of 120 smallholder maize farmers in four villages. Data collected were analysed using descriptive and quantitative methods. Logit model was used to determine factors that influence adoption of QPM technology. The regression results indicated that education of t...

  4. Remote-sensing data processing with the multivariate regression analysis method for iron mineral resource potential mapping: a case study in the Sarvian area, central Iran

    Science.gov (United States)

    Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran

    2018-03-01

This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, applying multivariate regression for mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as dependent variables for MPM. According to the results of the probability value (p value), coefficient of determination value (R2) and adjusted determination coefficient (Radj2), the second regression model (which consisted of multiple UIVs) fitted better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).

  5. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  6. Examination of influential observations in penalized spline regression

    Science.gov (United States)

    Türkan, Semra

    2013-10-01

In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. These observations are precisely detected by well-known influence measures, of which Pena's statistic is one. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real data and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance in detecting these observations in large data sets.

  7. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  8. Stepwise versus Hierarchical Regression: Pros and Cons

    Science.gov (United States)

    Lewis, Mitzi

    2007-01-01

    Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

  9. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

  10. An alternative method to achieve metrological confirmation in measurement process

    Science.gov (United States)

    Villeta, M.; Rubio, E. M.; Sanz, A.; Sevilla, L.

    2012-04-01

A metrological confirmation process must be designed and implemented to ensure that the metrological characteristics of the measurement system meet the metrological requirements of the measurement process. The aim of this paper is to present an alternative to the traditional metrological requirements on the relationship between tolerance and measurement uncertainty for developing such confirmation processes. The proposed approach to metrological confirmation considers a given inspection task of the measurement process within the manufacturing system, and is based on the Index of Contamination of the Capability (ICC). The metrological confirmation process is then developed taking into account producer risks and economic considerations on this index. As a consequence, depending on the capability of the manufacturing process, the measurement system will or will not be in an adequate state of metrological confirmation for the measurement process.

  11. Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model versus Logistic Regression. REL 2015-077

    Science.gov (United States)

    Koon, Sharon; Petscher, Yaacov

    2015-01-01

    The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…

  12. On weighted and locally polynomial directional quantile regression

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

Roč. 32, č. 3 (2017), s. 929-946 ISSN 0943-4062 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : Quantile regression * Nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 0.434, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0458380.pdf

  13. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  14. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

    Science.gov (United States)

    Laszlo, Sarah; Federmeier, Kara D.

    2010-01-01

    Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

  15. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  16. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...... with the proposed explicit noise-model extension....

  17. There is No Quantum Regression Theorem

    International Nuclear Information System (INIS)

    Ford, G.W.; OConnell, R.F.

    1996-01-01

    The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. copyright 1996 The American Physical Society

  18. Two Paradoxes in Linear Regression Analysis

    Science.gov (United States)

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  19. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  20. Using Regression Equations Built from Summary Data in the Psychological Assessment of the Individual Case: Extension to Multiple Regression

    Science.gov (United States)

    Crawford, John R.; Garthwaite, Paul H.; Denham, Annie K.; Chelune, Gordon J.

    2012-01-01

    Regression equations have many useful roles in psychological assessment. Moreover, there is a large reservoir of published data that could be used to build regression equations; these equations could then be employed to test a wide variety of hypotheses concerning the functioning of individual cases. This resource is currently underused because…

  1. Thermodynamic Analysis of Simple Gas Turbine Cycle with Multiple Regression Modelling and Optimization

    Directory of Open Access Journals (Sweden)

    Abdul Ghafoor Memon

    2014-03-01

Full Text Available In this study, thermodynamic and statistical analyses were performed on a gas turbine system, to assess the impact of some important operating parameters like CIT (Compressor Inlet Temperature), PR (Pressure Ratio) and TIT (Turbine Inlet Temperature) on its performance characteristics such as net power output, energy efficiency, exergy efficiency and fuel consumption. Each performance characteristic was expressed as a function of operating parameters, followed by a parametric study and optimization. The results showed that the performance characteristics increase with an increase in the TIT and a decrease in the CIT, except fuel consumption, which behaves oppositely. The net power output and efficiencies increase with the PR up to certain initial values and then start to decrease, whereas the fuel consumption always decreases with an increase in the PR. The results of exergy analysis showed the combustion chamber to be a major contributor to the exergy destruction, followed by stack gas. Subsequently, multiple regression models were developed to correlate each of the response variables (performance characteristics) with the predictor variables (operating parameters). The regression model equations showed a significant statistical relationship between the predictor and response variables.

  2. Mixed-effects regression models in linguistics

    CERN Document Server

    Heylen, Kris; Geeraerts, Dirk

    2018-01-01

    When data consist of grouped observations or clusters, and there is a risk that measurements within the same group are not independent, group-specific random effects can be added to a regression model in order to account for such within-group associations. Regression models that contain such group-specific random effects are called mixed-effects regression models, or simply mixed models. Mixed models are a versatile tool that can handle both balanced and unbalanced datasets and that can also be applied when several layers of grouping are present in the data; these layers can either be nested or crossed.  In linguistics, as in many other fields, the use of mixed models has gained ground rapidly over the last decade. This methodological evolution enables us to build more sophisticated and arguably more realistic models, but, due to its technical complexity, also introduces new challenges. This volume brings together a number of promising new evolutions in the use of mixed models in linguistics, but also addres...

  3. Racial athletic stereotype confirmation in college football recruiting.

    Science.gov (United States)

    Thomas, Grant; Good, Jessica J; Gross, Alexi R

    2015-01-01

    The present study tested real-world racial stereotype use in the context of college athletic recruiting. Stereotype confirmation suggests that observers use stereotypes as hypotheses and interpret relevant evidence in a biased way that confirms their stereotypes. Shifting standards suggest that the evaluative standard to which we hold a target changes as a function of their group membership. We examined whether stereotype confirmation and shifting standards effects would be seen in college football coaches during recruiting. College football coaches evaluated a Black or White player on several attributes and made both zero- and non-zero-sum allocations. Results suggested that coaches used the evidence presented to develop biased subjective evaluations of the players based on race while still maintaining equivalent objective evaluations. Coaches also allocated greater overall resources to the Black recruit than the White recruit.

  4. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    Science.gov (United States)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) by means of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears to be more accurate than those from the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.

  5. Principles of Quantile Regression and an Application

    Science.gov (United States)

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
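    QR replaces the squared-error loss of LMR with an asymmetric "check" (pinball) loss whose minimizer is a conditional quantile rather than the mean. A deliberately stripped-down illustration (toy data, constant-only fit) of why the tau = 0.5 fit resists the outliers that distort a mean regression:

    ```python
    def pinball_loss(y, q_hat, tau):
        """Check (pinball) loss: under-predictions cost tau per unit,
        over-predictions cost 1 - tau. Minimized by the tau-th quantile."""
        return sum(tau * (yi - q_hat) if yi >= q_hat else (1 - tau) * (q_hat - yi)
                   for yi in y)

    y = [1, 2, 3, 4, 100]                              # heavy right tail
    best = min(y, key=lambda c: pinball_loss(y, c, tau=0.5))
    print(best)   # 3 -- the median; the outlier drags the mean up to 22
    ```

    Choosing tau = 0.9 instead would target the upper tail of the conditional distribution, which is the flexibility QR offers over LMR.
    
    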

  6. Analysis of animal experiments of radiation dependent tumor regression in relation to different parameters

    International Nuclear Information System (INIS)

    Heinzel, F.; Mueller-Duysing, W.; Blattman, H.; Bacesa, L.; Rao, K.R.; Mindek, G.

    In order to be able to test the therapeutic value of pions in comparison with conventional X-rays, analyses of animal experiments with induced tumors, transplantation tumors, and comparative cellular kinetic studies of tissue cultures will be performed. So that differences in radiation effect and a possible superiority of pion therapy can be objectively acknowledged, the reaction systems to be tested must be as homogeneous as possible. For this purpose, the dependence of radiation-related regression on various parameters, such as sex, age of hosts, environmental factors, radiation conditions (intensity, fractionation, and so on), tumor size, and so on, must be investigated on sterile animals in a sterile environment. The experiments should be conducted under conditions as close as possible to clinical ones. For comparison, the reaction of normal tissue (in vitro and in vivo) and of malignant cells in short-time tissue cultures will be analysed. Cellular kinetics, alteration of chromosomes and metabolic activity of the cells will be studied.

  7. Functional data analysis of generalized regression quantiles

    KAUST Repository

    Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

    2013-01-01

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

  8. Functional data analysis of generalized regression quantiles

    KAUST Repository

    Guo, Mengmeng

    2013-11-05

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.
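    The "iterative least asymmetrically weighted squares" step in this abstract can be illustrated in a deliberately reduced form for a single expectile of a plain sample: observations above the current estimate get weight tau, those below get 1 - tau, and a weighted mean is re-solved until convergence. (The paper applies this inside a penalized spline fit; this sketch only shows the reweighting idea.)

    ```python
    def expectile(y, tau, iters=200):
        """tau-expectile by iterative asymmetrically weighted least squares."""
        mu = sum(y) / len(y)                    # start from the mean (the tau = 0.5 case)
        for _ in range(iters):
            w = [tau if yi > mu else 1 - tau for yi in y]
            mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return mu

    print(expectile([0, 10], 0.5))   # 5.0  (the ordinary mean)
    print(expectile([0, 10], 0.9))   # 9.0  (pulled toward the upper tail)
    ```

    As tau moves toward 1, the estimate tracks the upper tail of the distribution, which is what makes expectiles useful when "the interest lies in the tails".
    
    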

  9. Value of MR histogram analyses for prediction of microvascular invasion of hepatocellular carcinoma.

    Science.gov (United States)

    Huang, Ya-Qin; Liang, He-Yue; Yang, Zhao-Xia; Ding, Ying; Zeng, Meng-Su; Rao, Sheng-Xiang

    2016-06-01

    The objective is to explore the value of preoperative magnetic resonance (MR) histogram analyses in predicting microvascular invasion (MVI) of hepatocellular carcinoma (HCC). Fifty-one patients with histologically confirmed HCC who underwent diffusion-weighted and contrast-enhanced MR imaging were included. Histogram analyses were performed and the mean, variance, skewness, kurtosis, and 1st, 10th, 50th, 90th, and 99th percentiles were derived. Quantitative histogram parameters were compared between HCCs with and without MVI. Receiver operating characteristic (ROC) analyses were generated to compare the diagnostic performance of tumor size, histogram analyses of apparent diffusion coefficient (ADC) maps, and MR enhancement. The mean, 1st, 10th, and 50th percentiles of the ADC maps, and the mean, variance, and 1st, 10th, 50th, 90th, and 99th percentiles of the portal venous phase (PVP) images were significantly different between the groups with and without MVI. MR histogram analyses, in particular the 1st percentile of the PVP images, held promise for the prediction of MVI of HCC.
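    The first-order histogram features used above reduce each tumor's voxel values to a handful of summary numbers. A minimal nearest-rank percentile sketch (synthetic values standing in for a flattened ADC or PVP voxel array, not the study's data):

    ```python
    import math

    def percentile(values, p):
        """Nearest-rank p-th percentile of a flattened voxel array."""
        s = sorted(values)
        k = max(1, math.ceil(p / 100 * len(s)))   # rank, clamped so p = 0 is valid
        return s[k - 1]

    voxels = list(range(1, 101))                  # stand-in for flattened voxel values
    print(percentile(voxels, 1), percentile(voxels, 50), percentile(voxels, 99))
    # 1 50 99
    ```

    Low percentiles such as the 1st isolate the darkest voxels of the lesion, which is why they can carry diagnostic signal that the mean alone misses.
    
    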

  10. Using the angiogenic factors sFlt-1 and PlGF with Doppler ultrasound of the uterine artery for confirming preeclampsia.

    Science.gov (United States)

    Bahlmann, Franz; Al Naimi, Ammar

    2016-11-01

    The aim of this study is to assess the value of the angiogenic factors for diagnosing preeclampsia and predicting the severity of manifestation. A secondary aim is assessing the combination of the uterine artery Doppler with the angiogenic factors for improving the diagnostic power. This is a prospective single-center study in a tertiary referral hospital. This study includes 728 individual patients. Inclusion criteria were singleton pregnancies, a referral to the hospital with suspicion of preeclampsia, and any one or combination of the following symptoms: headache, upper abdominal pain, edema, and hypertension. Patients with complications that would affect the course of the pregnancy, such as placenta praevia, premature preterm rupture of membranes, breech presentation, and fetal chromosomal or structural anomalies, were excluded from the study. Blood sample collection and uterine artery Doppler ultrasound were performed at the time of recruitment. The differences in sFlt-1, PlGF, and their quotient between the normal collective and patients with preeclampsia were analyzed. Doppler ultrasound was performed by one of four highly qualified sonographers. The Wilcoxon-Mann-Whitney U test, Spearman's rank correlation, receiver operating characteristic curves, the Chi-square test, and logistic regression were used in the analysis. A total of 1003 individual samples for the angiogenic factors were included in the analysis. 584 of the 728 recruited patients had follow-up data with delivery information at the study hospital. Patients with preeclampsia show a significant increase in sFlt-1, which directly correlates with the increased severity of manifestation (Spearman's ρ 0.49). The sFlt-1 cut-off value of 5424 pg/ml confirms preeclampsia with 83.7 % sensitivity, 68.1 % specificity, and a 24 % misclassification rate. Preeclampsia patients also show a significant decrease in PlGF, which negatively correlates with the increased severity of manifestation (Spearman's ρ -0.39). A Pl
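    The operating characteristics reported for the 5424 pg/ml sFlt-1 cut-off (sensitivity, specificity, misclassification rate) all follow from a 2x2 confusion table. A sketch on toy values (illustrative numbers, not the study cohort):

    ```python
    def cutoff_metrics(values, labels, cutoff):
        """Sensitivity, specificity and misclassification rate for a
        'value >= cutoff means disease' decision rule."""
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cutoff)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cutoff)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cutoff)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cutoff)
        return tp / (tp + fn), tn / (tn + fp), (fp + fn) / len(values)

    # toy sFlt-1 values (pg/ml) and outcomes (1 = preeclampsia)
    sens, spec, mis = cutoff_metrics([6000, 5500, 5000, 3000], [1, 1, 0, 0], 5424)
    print(sens, spec, mis)   # 1.0 1.0 0.0
    ```

    Sweeping the cutoff over its range and plotting sensitivity against 1 - specificity yields the ROC curves the study uses to compare markers.
    
    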

  11. Regression-Based Norms for the Symbol Digit Modalities Test in the Dutch Population: Improving Detection of Cognitive Impairment in Multiple Sclerosis?

    Science.gov (United States)

    Burggraaff, Jessica; Knol, Dirk L; Uitdehaag, Bernard M J

    2017-01-01

    Appropriate and timely screening instruments that sensitively capture the cognitive functioning of multiple sclerosis (MS) patients are urgently needed. We evaluated newly derived regression-based norms for the Symbol Digit Modalities Test (SDMT) in a Dutch-speaking sample as an indicator of the cognitive state of MS patients. Regression-based norms for the SDMT were created from a healthy control sample (n = 96) and used to convert MS patients' (n = 157) raw scores to demographically adjusted Z-scores, correcting for the effects of age, age squared, gender, and education. Conventional and regression-based norms were compared on their impairment-classification rates and related to other neuropsychological measures. The regression analyses revealed that age was the only significantly influencing demographic in our healthy sample. Regression-based norms for the SDMT more readily detected impairment in MS patients than conventional normalization methods (32 patients instead of 15). Patients changing from an SDMT-preserved to -impaired status (n = 17) were also impaired on other cognitive domains (p < 0.05), except for visuospatial memory (p = 0.34). Regression-based norms for the SDMT more readily detect abnormal performance in MS patients than conventional norms, identifying those patients at highest risk for cognitive impairment, which was supported by a worse performance on other neuropsychological measures. © 2017 S. Karger AG, Basel.
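    A regression-based norm converts a raw test score into a Z-score against what the healthy-control regression predicts for that person's demographics. Since age was the only significant demographic here, a one-predictor sketch suffices; the coefficients below are hypothetical placeholders, not the study's estimates:

    ```python
    def sdmt_z(raw_score, age, intercept=70.0, age_slope=-0.3, resid_sd=8.0):
        """Demographically adjusted Z-score: (observed - age-predicted score)
        divided by the residual SD of the healthy-control regression.
        Default coefficients are illustrative placeholders only."""
        expected = intercept + age_slope * age
        return (raw_score - expected) / resid_sd

    print(sdmt_z(50, 60))   # (50 - 52) / 8 = -0.25
    ```

    The same raw score thus maps to a worse Z-score for a younger patient, which is how these norms detect impairment that a single fixed cut-off misses.
    
    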

  12. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
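    One of the closed-form variance formulae the article draws on is, for a single slope in simple regression, SE(beta_hat) = sigma / (sd_x * sqrt(n)); rearranging it gives the n needed to reach a target standard error. A sketch under that simple-regression assumption, with illustrative values:

    ```python
    import math

    def n_for_target_se(sigma, sd_x, target_se):
        """Smallest n for which SE(beta_hat) = sigma / (sd_x * sqrt(n))
        does not exceed the target standard error."""
        return math.ceil((sigma / (sd_x * target_se)) ** 2)

    # residual SD of 10, predictor SD of 2, target SE of 0.5 for the slope
    print(n_for_target_se(sigma=10.0, sd_x=2.0, target_se=0.5))   # 100
    ```

    The formula makes the article's point concrete: precision for a coefficient depends on n, the residual noise, and the spread of the exposure, not on a fixed subjects-per-variable ratio.
    
    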

  13. Removing Malmquist bias from linear regressions

    Science.gov (United States)

    Verter, Frances

    1993-01-01

    Malmquist bias is present in all astronomical surveys where sources are observed above an apparent brightness threshold. Those sources which can be detected at progressively larger distances are progressively more limited to the intrinsically luminous portion of the true distribution. This bias does not distort any of the measurements, but it distorts the sample composition. We have developed the first treatment to correct for Malmquist bias in linear regressions of astronomical data. We present a demonstration of the corrected linear regression, which is computed in four steps.

  14. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  15. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

    Science.gov (United States)

    This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, the listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

  16. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    Science.gov (United States)

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
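    The Fourier part of such a model enters the Poisson regression as smooth sine/cosine covariates of time, so seasonality is estimated without chopping the year into discrete month categories. A sketch of building those covariates (the number of harmonics is a modeling choice, not taken from the paper):

    ```python
    import math

    def fourier_terms(month, n_harmonics=2, period=12):
        """Sin/cos pairs for time t: smooth covariates that let a Poisson
        regression carry a seasonal component without monthly dummies."""
        row = []
        for k in range(1, n_harmonics + 1):
            row.append(math.sin(2 * math.pi * k * month / period))
            row.append(math.cos(2 * math.pi * k * month / period))
        return row

    # design-matrix row for month 3 of a 12-month cycle
    print(fourier_terms(3))   # approximately [1.0, 0.0, 0.0, -1.0]
    ```

    Letting the amplitudes of these terms shrink linearly with calendar time, or change at a breakpoint, is what permits the formal test of declining seasonality described above.
    
    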

  17. Learning a Nonnegative Sparse Graph for Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung

    2015-09-01

    Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perceptiveness for a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.

  18. Confirming preferences or collecting data? Information search strategies and romantic partner selection.

    Science.gov (United States)

    Hennessy, Michael H; Fishbein, Marty; Curtis, Brenda; Barrett, Daniel

    2008-03-01

    This article investigates two kinds of information search strategies in the context of selecting romantic partners. Confirmatory searching occurs when people ask for more information about a romantic partner in order to validate or confirm their assessment. Balanced searches are characterized by a search for risk information for partners rated as attractive and for attractiveness information about partners rated as risky in order to attain a more complete evaluation. A factorial survey computer program randomly constructed five types of partner descriptions and college-age respondents evaluated nine descriptions in terms of both health risk and romantic attractiveness outcomes. The results show little evidence of balanced search strategies: for all vignette types the respondents searched for attractiveness information. Regression analysis of the search outcomes showed no difference between males and females in the desire for attractiveness or risk information, the amount of additional information desired, or the proportion of descriptions for which more information was desired. However, an attractive physical appearance did increase the amount of additional information desired and the proportion of vignettes for which more information was desired. The results were generally inconsistent with a balanced search hypothesis; a better characterization of the respondents' strategy might be "confirmatory bias."

  19. Prediction of Vitamin D Deficiency Among Tabriz Elderly and Nursing Home Residents Using Stereotype Regression Model

    Directory of Open Access Journals (Sweden)

    Zohreh Razzaghi

    2011-07-01

    Full Text Available Objectives: Vitamin D deficiency is one of the most important health problems of any society. It is more common in the elderly, even in those dwelling in rest homes. To date, several studies have been conducted on vitamin D deficiency using current statistical models. In this study, proportional odds and stereotype regression methods were used to identify threatening factors related to vitamin D deficiency in elderly people living in rest homes and to compare them with those not living in such facilities. Methods & Materials: In this case-control study, there were 140 older persons living in rest homes and 140 not dwelling in these centers. The 25(OH)D serum level variable was regarded as the response, and the age, sex, body mass index, and duration of exposure to sunlight variables were regarded as predictors of vitamin D deficiency. The analyses were carried out using the proportional odds and stereotype regression methods and estimating the parameters of these two models. The deviance statistic (AIC) was used to evaluate and compare the mentioned methods. Stata 9.1 software was selected to conduct the analyses. Results: The average serum level of 25(OH)D was 16.10±16.65 ng/ml in individuals living in rest homes and 39.62±24.78 ng/ml in those not living there (P=0.001). Prevalence of vitamin D deficiency (less than 20 ng/ml) was observed in 75% of the rest-home group and in 23.78% of the other group. Using the proportional odds and stereotype regression methods, the age, sex, body mass index, and duration of exposure to sunlight variables and rest-home membership were fitted. In both models, the group and duration of exposure to sunlight variables were significant (P<0.001). The stereotype regression model included the group variable (odds ratio for the group suffering from severe vitamin D deficiency was 42.85, 95%CI: 9.93-185.67 and

  20. Regression models of ultimate methane yields of fruits and vegetable solid wastes, sorghum and napiergrass on chemical composition

    Energy Technology Data Exchange (ETDEWEB)

    Gunaseelan, V.N. [PSG College of Arts and Science, Coimbatore (India). Department of Zoology

    2007-04-15

    Several fractions of fruits and vegetable solid wastes (FVSW), sorghum and napiergrass were analyzed for total solids (TS), volatile solids (VS), total organic carbon, total kjeldahl nitrogen, total soluble carbohydrate, extractable protein, acid-detergent fiber (ADF), lignin, cellulose and ash contents. Their ultimate methane yields (B{sub o}) were determined using the biochemical methane potential (BMP) assay. A series of simple and multiple regression models relating the B{sub o} to the various substrate constituents were generated and evaluated using the computer statistical software Statistical Package for the Social Sciences (SPSS). The results of the simple regression analyses revealed that only a weak relationship existed between the individual components such as carbohydrate, protein, ADF, lignin and cellulose versus B{sub o}. A regression of B{sub o} versus a combination of two variables as a single independent variable, such as carbohydrate/ADF and carbohydrate + protein/ADF, also showed that the relationship was not strong. Thus it does not appear possible to relate the B{sub o} of FVSW, sorghum and napiergrass to a single compositional characteristic. The results of the multiple regression analyses showed promise and the relationships appeared to be good. When ADF and lignin/ADF were used as independent variables, the percentage of variation accounted for by the model was low for FVSW (r{sup 2}=0.665) and for sorghum and napiergrass (r{sup 2}=0.746). Addition of nitrogen, ash and total soluble carbohydrate data to the model had a significantly higher effect on prediction of B{sub o} of these wastes, with the r{sup 2} values ranging from 0.9 to 0.99. More than 90% of the variation in B{sub o} of FVSW could be accounted for by the models when the variables carbohydrate, lignin, lignin/ADF, nitrogen and ash (r{sup 2}=0.904), carbohydrate, ADF, lignin/ADF, nitrogen and ash (r{sup 2}=0.90) and carbohydrate/ADF, lignin/ADF, lignin and ash (r{sup 2}=0.901) were used. All the models have
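    The r{sup 2} values quoted above measure the fraction of variation in B{sub o} captured by each regression model. A minimal sketch of that computation on toy numbers (not the paper's data):

    ```python
    def r_squared(y, y_hat):
        """Coefficient of determination: 1 - SS_residual / SS_total,
        i.e., the fraction of variance in y explained by the fit."""
        mean_y = sum(y) / len(y)
        ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
        ss_tot = sum((a - mean_y) ** 2 for a in y)
        return 1 - ss_res / ss_tot

    print(r_squared([1, 2, 3], [1, 2, 3]))   # 1.0 (perfect fit)
    print(r_squared([1, 2, 3], [2, 2, 2]))   # 0.0 (no better than the mean)
    ```

    Values near 0.9, as in the multivariable models above, mean the chosen compositional variables jointly explain most of the variation in methane yield.
    
    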