WorldWideScience

Sample records for regression analyses determined

  1. Determinants and Economic Impacts of North-South and South-South FDI in ASEAN : Panel Regression Analyses

    OpenAIRE

    Peseth, Seng

    2015-01-01

    This paper uses panel data of 10 ASEAN countries from 1995 to 2008 and studies the cross-country and industrial distribution of North and South FDI, investigates host country-specific determinants of the inflows of total FDI, North FDI and South FDI, and also compares the effects of North and South FDI on economic and industrial growth in the region.

  2. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
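
    A minimal sketch of the noncentral-F power computation that underlies fixed-model multiple linear regression tests of this kind (the function name and example figures are illustrative, not taken from G*Power itself):

    # Power of the F test that a multiple linear regression R^2 differs from zero,
    # computed from the noncentral F distribution (the approach used by power
    # calculators such as G*Power for the fixed-predictor case).
    from scipy.stats import f, ncf

    def regression_power(r_squared, n_obs, n_predictors, alpha=0.05):
        """Return the power of the overall F test for a given population R^2."""
        f2 = r_squared / (1.0 - r_squared)          # Cohen's effect size f^2
        df_num = n_predictors                        # numerator degrees of freedom
        df_den = n_obs - n_predictors - 1            # denominator degrees of freedom
        ncp = f2 * (df_num + df_den + 1)             # noncentrality parameter
        f_crit = f.ppf(1.0 - alpha, df_num, df_den)  # critical value under H0
        return 1.0 - ncf.cdf(f_crit, df_num, df_den, ncp)

    # Example (illustrative numbers): R^2 = 0.13, 5 predictors, 100 observations
    print(round(regression_power(0.13, 100, 5), 3))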

  3. Determination of regression laws: Linear and nonlinear

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1994-01-01

    A detailed mathematical determination of regression laws is presented in the article. Particular emphasis is placed on determining the laws of X_j on X_l to account for source nuclei decay and detector errors in nuclear physics instrumentation. Both linear and nonlinear relations are presented. Linearization of 19 functions is tabulated, including graph, relation, variable substitution, obtained linear function, and remarks. 6 refs., 1 tab

  4. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological, and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rainfall data are sampled daily. The advantage of using MIDAS regression is the flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
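
    As a rough illustration of the mixed-frequency idea (not the authors' implementation), the sketch below aggregates synthetic daily rainfall into each fortnightly observation with exponential Almon lag weights and estimates the model by nonlinear least squares; all variable names and data are assumed:

    # Illustrative MIDAS-style regression: a fortnightly response (e.g. a water
    # quality variable) regressed on daily rainfall through exponential Almon
    # lag weights, estimated by nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n_fortnights, lags = 120, 14
    rain = rng.gamma(2.0, 3.0, size=(n_fortnights, lags))   # daily rain per fortnight

    def almon_weights(theta1, theta2, lags):
        k = np.arange(lags)
        w = np.exp(theta1 * k + theta2 * k**2)
        return w / w.sum()                                   # weights sum to one

    # Simulate a "true" response so the example is self-contained
    true_w = almon_weights(0.2, -0.05, lags)
    y = 1.0 + 0.8 * rain @ true_w + rng.normal(0, 0.5, n_fortnights)

    def residuals(params):
        b0, b1, t1, t2 = params
        return y - (b0 + b1 * rain @ almon_weights(t1, t2, lags))

    fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0])
    b0, b1, t1, t2 = fit.x
    print("intercept %.2f, rain effect %.2f" % (b0, b1))
    print("estimated lag weights:", np.round(almon_weights(t1, t2, lags), 3))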

  5. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    Science.gov (United States)

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches-mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials-to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations. Variables included in the statistical models were limited to those

  6. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies.

    Science.gov (United States)

    Vatcheva, Kristina P; Lee, MinJae; McCormick, Joseph B; Rahbar, Mohammad H

    2016-04-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of the epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in the analysis of data from epidemiologic studies. We used simulated datasets and real life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and encourage researchers to consider multicollinearity diagnostics as one of the steps in regression analysis.
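
    A common way to operationalise the recommended multicollinearity diagnostic is the variance inflation factor; a small sketch on deliberately collinear synthetic data (column names are made up) might look like this:

    # Variance inflation factors (VIF) as a routine multicollinearity diagnostic;
    # values above roughly 5-10 are conventionally taken as a warning sign.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(1)
    n = 500
    age = rng.normal(50, 10, n)
    bmi = rng.normal(27, 4, n)
    weight = 2.5 * bmi + rng.normal(0, 2, n)        # deliberately collinear with BMI

    X = pd.DataFrame({"age": age, "bmi": bmi, "weight": weight})
    X = sm.add_constant(X)                           # include intercept before computing VIF

    for i, name in enumerate(X.columns):
        if name == "const":
            continue
        print(name, round(variance_inflation_factor(X.values, i), 1))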

  7. Statistical and regression analyses of detected extrasolar systems

    Czech Academy of Sciences Publication Activity Database

    Pintr, Pavel; Peřinová, V.; Lukš, A.; Pathak, A.

    2013-01-01

    Roč. 75, č. 1 (2013), s. 37-45 ISSN 0032-0633 Institutional support: RVO:61389021 Keywords : Exoplanets * Kepler candidates * Regression analysis Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.630, year: 2013 http://www.sciencedirect.com/science/article/pii/S0032063312003066

  8. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies

    OpenAIRE

    Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.

    2016-01-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013, illustrated the need for a greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epide...

  9. Analysing inequalities in Germany a structured additive distributional regression approach

    CERN Document Server

    Silbersdorff, Alexander

    2017-01-01

    This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals’ realities and the abstract representation of those realities to be explicitly taken into consideration, rather than being reduced to the arithmetic mean alone. In turn, the method is applied to the question of economic inequality in Germany.

  10. Using Dominance Analysis to Determine Predictor Importance in Logistic Regression

    Science.gov (United States)

    Azen, Razia; Traxel, Nicole

    2009-01-01

    This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R² analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…
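
    A hedged sketch of the underlying idea — averaging the increment in a pseudo-R² (here McFadden's) that each predictor contributes across subsets of the remaining predictors — is shown below on synthetic data; it is a simplified general-dominance calculation, not the authors' exact procedure or their chosen R² analogues:

    # General dominance weights for a logistic regression, using McFadden's
    # pseudo-R^2 as the fit statistic: average each predictor's incremental
    # contribution over all subsets of the remaining predictors.
    from itertools import combinations
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1000
    X = rng.normal(size=(n, 3))
    logit_p = 0.8 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
    names = ["x1", "x2", "x3"]

    def mcfadden_r2(cols):
        exog = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
        res = sm.Logit(y, exog).fit(disp=0)
        ll_null = sm.Logit(y, np.ones((n, 1))).fit(disp=0).llf
        return 1 - res.llf / ll_null

    r2 = {subset: mcfadden_r2(list(subset))
          for k in range(4) for subset in combinations(range(3), k)}

    for i, name in enumerate(names):
        others = [j for j in range(3) if j != i]
        # Simple average over all subsets; a full dominance analysis averages
        # within each subset size first and then across sizes.
        increments = [r2[tuple(sorted(s + (i,)))] - r2[s]
                      for k in range(3) for s in combinations(others, k)]
        print(name, "general dominance ~", round(float(np.mean(increments)), 3))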

  11. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

    NARCIS (Netherlands)

    Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

    2011-01-01

    Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure and serum creatinine), the researcher

  12. Determination of gaussian peaks in gamma spectra by iterative regression

    International Nuclear Information System (INIS)

    Nordemann, D.J.R.

    1987-05-01

    The parameters of the peaks in gamma-ray spectra are determined by a simple iterative regression method. For each peak, the parameters are associated with a gaussian curve (3 parameters) located above a linear continuum (2 parameters). This method may produce the complete calculation of statistical uncertainties and an accuracy higher than that of other methods. (author) [pt
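
    The peak model described (a three-parameter Gaussian on a two-parameter linear continuum) can be illustrated with a standard nonlinear least-squares fit; the sketch below uses scipy's curve_fit on a synthetic spectrum and is not the author's iterative scheme:

    # Fit a gamma-ray peak as a Gaussian (3 parameters) sitting on a linear
    # continuum (2 parameters); curve_fit also returns the covariance matrix,
    # from which parameter uncertainties follow.
    import numpy as np
    from scipy.optimize import curve_fit

    def peak_model(ch, area, centroid, sigma, a, b):
        gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((ch - centroid) / sigma) ** 2)
        return gauss + a + b * ch

    rng = np.random.default_rng(3)
    channels = np.arange(480, 560, dtype=float)
    truth = (1500.0, 520.0, 3.0, 40.0, -0.02)                    # invented "true" parameters
    counts = rng.poisson(peak_model(channels, *truth)).astype(float)

    p0 = (1000.0, 518.0, 2.5, 30.0, 0.0)                         # rough starting values
    popt, pcov = curve_fit(peak_model, channels, counts, p0=p0,
                           sigma=np.sqrt(np.clip(counts, 1, None)))  # approximate Poisson weights
    perr = np.sqrt(np.diag(pcov))
    print("area = %.0f +/- %.0f, centroid = %.2f +/- %.2f" % (popt[0], perr[0], popt[1], perr[1]))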

  13. Determinants of Inequality in Cameroon: A Regression-Based ...

    African Journals Online (AJOL)

    This paper applies the regression-based inequality decomposition approach to explore determinants of income inequality in Cameroon using the 2007 Cameroon household consumption survey. The contribution of each source to measured income inequality is the sum of its weighted marginal contributions in all possible ...

  14. The number of subjects per variable required in linear regression analyses

    NARCIS (Netherlands)

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression

  15. Logistic regression and multiple classification analyses to explore risk factors of under-5 mortality in bangladesh

    International Nuclear Information System (INIS)

    Bhowmik, K.R.; Islam, S.

    2016-01-01

    Logistic regression (LR) analysis is the most common statistical methodology to find out the determinants of childhood mortality. However, the significant predictors cannot be ranked according to their influence on the response variable. Multiple classification (MC) analysis can be applied to identify the significant predictors with a priority index which helps to rank the predictors. The main objective of the study is to find the socio-demographic determinants of childhood mortality at neonatal, post-neonatal, and post-infant period by fitting LR models as well as to rank those through MC analysis. The study is conducted using the data of Bangladesh Demographic and Health Survey 2007 where birth and death information of children were collected from their mothers. Three dichotomous response variables are constructed from children's age at death to fit the LR and MC models. Socio-economic and demographic variables significantly associated with the response variables separately are considered in LR and MC analyses. Both the LR and MC models identified the same significant predictors for specific childhood mortality. For both neonatal and child mortality, biological factors of children, regional settings, and parents' socio-economic status are found as the 1st, 2nd, and 3rd significant groups of predictors respectively. Mother's education and household environment are detected as major significant predictors of post-neonatal mortality. This study shows that MC analysis with or without LR analysis can be applied to detect determinants with rank, which helps policy makers take initiatives on a priority basis. (author)

  16. Determinants of Non-Performing Assets in India - Panel Regression

    Directory of Open Access Journals (Sweden)

    Saikat Ghosh Roy

    2014-12-01

    It is well known that the level of banks' credit plays an important role in economic development. The Indian banking sector has played a seminal role in supporting economic growth in India. Recently, Indian banks have been experiencing a consistent increase in non-performing assets (NPA). In this perspective, this paper investigates the trends in NPA in Indian banks and its determinants. A panel regression with fixed effects allows evaluating the impact of selected macroeconomic variables on the NPA. The panel regression results indicate that GDP growth, change in the exchange rate and global volatility have major effects on the NPA level of the Indian banking sector.

  17. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.

  18. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R(2) of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R(2), although adjusted R(2) estimates behaved well. The bias in estimating the model R(2) statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals.
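
    A compact sketch of this type of Monte Carlo experiment — varying subjects per variable and measuring the relative bias of the estimated coefficients — is given below; the simulation settings are illustrative and much simpler than the authors' design:

    # Monte Carlo check of how subjects-per-variable (SPV) affects the relative
    # bias of linear regression coefficients (a simplified version of the kind
    # of simulation described in the abstract).
    import numpy as np

    rng = np.random.default_rng(4)
    n_predictors, n_sims = 10, 2000
    beta = np.ones(n_predictors)

    for spv in (2, 5, 10, 20):
        n = spv * n_predictors
        estimates = np.empty((n_sims, n_predictors))
        for s in range(n_sims):
            X = rng.normal(size=(n, n_predictors))
            y = X @ beta + rng.normal(0, 2, n)
            Xd = np.column_stack([np.ones(n), X])
            coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
            estimates[s] = coef[1:]
        rel_bias = (estimates.mean(axis=0) - beta) / beta * 100
        print(f"SPV={spv:2d}: mean relative bias = {rel_bias.mean():+.2f}%")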

  19. Determinants of Birthweight Outcomes: Quantile Regressions Based on Panel Data

    DEFF Research Database (Denmark)

    Bache, Stefan Holst; Dahl, Christian Møller; Kristensen, Johannes Tang

    Low birthweight outcomes are associated with large social and economic costs, and therefore the possible determinants of low birthweight are of great interest. One such determinant which has received considerable attention is maternal smoking. From an economic perspective this is in part due to the possibility that smoking habits can be influenced through policy conduct. It is widely believed that maternal smoking reduces birthweight; however, the crucial difficulty in estimating such effects is the unobserved heterogeneity among mothers. We consider extensions of three panel data models to a quantile regression framework in order to control for heterogeneity and to infer conclusions about causality across the entire birthweight distribution. We obtain estimation results for maternal smoking and other interesting determinants, applying these to data obtained from Aarhus University Hospital, Skejby...

  20. Finding determinants of audit delay by pooled OLS regression analysis

    OpenAIRE

    Vuko, Tina; Čular, Marko

    2014-01-01

    The aim of this paper is to investigate determinants of audit delay. Audit delay is measured as the length of time (i.e. the number of calendar days) from the fiscal year-end to the audit report date. It is important to understand factors that influence audit delay since it directly affects the timeliness of financial reporting. The research is conducted on a sample of Croatian listed companies, covering the period of four years (from 2008 to 2011). We use pooled OLS regression analysis, mode...

  1. Finding determinants of audit delay by pooled OLS regression analysis

    Directory of Open Access Journals (Sweden)

    Tina Vuko

    2014-03-01

    The aim of this paper is to investigate determinants of audit delay. Audit delay is measured as the length of time (i.e. the number of calendar days) from the fiscal year-end to the audit report date. It is important to understand factors that influence audit delay since it directly affects the timeliness of financial reporting. The research is conducted on a sample of Croatian listed companies, covering the period of four years (from 2008 to 2011). We use pooled OLS regression analysis, modelling audit delay as a function of the following explanatory variables: audit firm type, audit opinion, profitability, leverage, inventory and receivables to total assets, absolute value of total accruals, company size and audit committee existence. Our results indicate that audit committee existence, profitability and leverage are statistically significant determinants of audit delay in Croatia.

  2. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application.

  3. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assorted among two phacoemulsification groups: one assigned to receive AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ=-0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  4. Determinants of orphan drugs prices in France: a regression analysis.

    Science.gov (United States)

    Korchagina, Daria; Millier, Aurelie; Vataire, Anne-Lise; Aballea, Samuel; Falissard, Bruno; Toumi, Mondher

    2017-04-21

    The introduction of the orphan drug legislation led to the increase in the number of available orphan drugs, but the access to them is often limited due to the high price. Social preferences regarding funding orphan drugs as well as the criteria taken into consideration while setting the price remain unclear. The study aimed at identifying the determinants of orphan drug prices in France using a regression analysis. All drugs with a valid orphan designation at the moment of launch for which the price was available in France were included in the analysis. The selection of covariates was based on a literature review and included drug characteristics (Anatomical Therapeutic Chemical (ATC) class, treatment line, age of target population), disease characteristics (severity, prevalence, availability of alternative therapeutic options), health technology assessment (HTA) details (actual benefit (AB) and improvement in actual benefit (IAB) scores, delay between the HTA and commercialisation), and study characteristics (type of study, comparator, type of endpoint). The main data sources were European public assessment reports, HTA reports, summaries of opinion on orphan designation of the European Medicines Agency, and the French insurance database of drugs and tariffs. A generalized regression model was developed to test the association between the annual treatment cost and selected covariates. A total of 68 drugs were included. The mean annual treatment cost was €96,518. In the univariate analysis, the ATC class (p = 0.01), availability of alternative treatment options (p = 0.02) and the prevalence (p = 0.02) showed a significant correlation with the annual cost. The multivariate analysis demonstrated significant association between the annual cost and availability of alternative treatment options, ATC class, IAB score, type of comparator in the pivotal clinical trial, as well as commercialisation date and delay between the HTA and commercialisation. The

  5. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from second until sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability and such a breeding program would have no negative genetic impact on egg production.

  6. Reducing Inter-Laboratory Differences between Semen Analyses Using Z Score and Regression Transformations

    Directory of Open Access Journals (Sweden)

    Esther Leushuis

    2016-12-01

    Background: Standardization of the semen analysis may improve reproducibility. We assessed variability between laboratories in semen analyses and evaluated whether a transformation using Z scores and regression statistics was able to reduce this variability. Materials and Methods: We performed a retrospective cohort study. We calculated between-laboratory coefficients of variation (CVB) for sperm concentration and for morphology. Subsequently, we standardized the semen analysis results by calculating laboratory-specific Z scores, and by using regression. We used analysis of variance for four semen parameters to assess systematic differences between laboratories before and after the transformations, both in the circulation samples and in the samples obtained in the prospective cohort study in the Netherlands between January 2002 and February 2004. Results: The mean CVB was 7% for sperm concentration (range 3 to 13%) and 32% for sperm morphology (range 18 to 51%). The differences between the laboratories were statistically significant for all semen parameters (all P<0.001). Standardization using Z scores did not reduce the differences in semen analysis results between the laboratories (all P<0.001). Conclusion: There exists large between-laboratory variability for sperm morphology and small, but statistically significant, between-laboratory variation for sperm concentration. Standardization using Z scores does not eliminate between-laboratory variability.
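
    A minimal sketch of the two quantities involved, the between-laboratory coefficient of variation and laboratory-specific Z scores, on synthetic data (laboratory names and offsets are invented):

    # Between-laboratory coefficient of variation and laboratory-specific Z scores
    # for a semen parameter (synthetic data; column names are illustrative).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    labs = np.repeat(["lab_A", "lab_B", "lab_C", "lab_D"], 50)
    shift = {"lab_A": 0, "lab_B": 5, "lab_C": -4, "lab_D": 8}      # systematic lab offsets
    conc = np.array([60 + shift[l] for l in labs]) + rng.normal(0, 15, len(labs))
    df = pd.DataFrame({"lab": labs, "concentration": conc})

    lab_means = df.groupby("lab")["concentration"].mean()
    cv_between = lab_means.std(ddof=1) / lab_means.mean() * 100     # between-lab CV (%)
    print(f"between-laboratory CV: {cv_between:.1f}%")

    # Standardize within each laboratory (Z score transformation)
    df["z"] = df.groupby("lab")["concentration"].transform(lambda x: (x - x.mean()) / x.std(ddof=1))
    print(df.groupby("lab")["z"].agg(["mean", "std"]).round(2))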

  7. Determinants of LSIL Regression in Women from a Colombian Cohort

    International Nuclear Information System (INIS)

    Molano, Monica; Gonzalez, Mauricio; Gamboa, Oscar; Ortiz, Natasha; Luna, Joaquin; Hernandez, Gustavo; Posso, Hector; Murillo, Raul; Munoz, Nubia

    2010-01-01

    Objective: To analyze the role of Human Papillomavirus (HPV) and other risk factors in the regression of cervical lesions in women from the Bogota Cohort. Methods: 200 HPV positive women with abnormal cytology were included for regression analysis. The time of lesion regression was modeled using methods for interval censored survival time data. Median duration of total follow-up was 9 years. Results: 80 (40%) women were diagnosed with Atypical Squamous Cells of Undetermined Significance (ASCUS) or Atypical Glandular Cells of Undetermined Significance (AGUS) while 120 (60%) were diagnosed with Low Grade Squamous Intra-epithelial Lesions (LSIL). Globally, 40% of the lesions were still present at first year of follow up, while 1.5% were still present at the 5 year check-up. The multivariate model showed similar regression rates for lesions in women with ASCUS/AGUS and women with LSIL (HR=0.82, 95% CI 0.59-1.12). Women infected with HR HPV types and those with mixed infections had lower regression rates for lesions than did women infected with LR types (HR=0.526, 95% CI 0.33-0.84, for HR types and HR=0.378, 95% CI 0.20-0.69, for mixed infections). Furthermore, women over 30 years had a higher lesion regression rate than did women under 30 years (HR=1.53, 95% CI 1.03-2.27). The study showed that the median time for lesion regression was 9 months while the median time for HPV clearance was 12 months. Conclusions: In the studied population, the type of infection and the age of the women are critical factors for the regression of cervical lesions.

  8. Determination of accident-prone road sections using quantile regression

    Directory of Open Access Journals (Sweden)

    Thomas Edison Guerrero-Barbosa

    2016-01-01

    Accurate identification of accident-prone sites allows the government agencies responsible for road safety improvements to direct investments to the truly critical road sections. Given this immediate need, the present research focuses on determining accident-prone sections and subsequently ranking, by dangerousness, the critical sections found within the urban perimeter of Ocaña (Colombia), using Quantile Regression (QR). From the estimated model for the 95th quantile it was possible to establish causal relationships between characteristics such as road-section length, carriageway width, number of lanes, number of intersections, average daily traffic and mean speed, and accident frequency, identifying a total of 7 critical sections for which a danger ranking was established.
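
    A sketch of a 95th-quantile regression for section-level accident counts, in the spirit of the abstract, using statsmodels on synthetic data (variable names and coefficients are assumptions):

    # 95th-quantile regression of accident frequency on road-section covariates,
    # used to flag sections whose observed counts exceed the conditional quantile.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 200
    df = pd.DataFrame({
        "length_km":  rng.uniform(0.2, 2.0, n),
        "lanes":      rng.integers(1, 4, n),
        "adt":        rng.uniform(500, 15000, n),   # average daily traffic
        "mean_speed": rng.uniform(30, 70, n),
    })
    lam = 0.5 + 2.0 * df.length_km + 0.0003 * df.adt + 0.03 * df.mean_speed
    df["accidents"] = rng.poisson(lam)

    model = smf.quantreg("accidents ~ length_km + lanes + adt + mean_speed", df)
    fit95 = model.fit(q=0.95)
    print(fit95.params.round(4))

    # Sections whose observed counts exceed the fitted 95th percentile
    critical = df[df.accidents > fit95.predict(df)]
    print("candidate critical sections:", len(critical))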

  9. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  10. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  11. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

    Science.gov (United States)

    Wu, Dane W.

    2002-01-01

    The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly, due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

  12. Check-all-that-apply data analysed by Partial Least Squares regression

    DEFF Research Database (Denmark)

    Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

    2015-01-01

    are analysed by multivariate techniques. CATA data can be analysed both by setting the CATA as the X and the Y. The former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding...

  13. Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.

    Directory of Open Access Journals (Sweden)

    David S Boukal

    Temperature drives development in insects and other ectotherms because their metabolic rate and growth depends directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.

  14. Correlation and regression analyses of genetic effects for different types of cells in mammals under radiation and chemical treatment

    International Nuclear Information System (INIS)

    Slutskaya, N.G.; Mosseh, I.B.

    2006-01-01

    Data on genetic mutations under radiation and chemical treatment for different types of cells have been analyzed with correlation and regression analyses. Linear correlations between different genetic effects in sex cells and somatic cells have been found. The results may be extrapolated to the sex cells of humans and mammals. (authors)

  15. Determination of benzo(a)pyrene content in PM10 using regression methods

    Directory of Open Access Journals (Sweden)

    Jacek Gębicki

    2015-12-01

    The paper presents an attempt to apply multidimensional linear regression to the estimation of an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The conducted investigations confirm that the reasons for the increase in B(a)P concentration are low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependences between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity and air humidity. A measure of the model's fit to the actual B(a)P concentration in PM10 was the coefficient of determination of the model. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season. This parameter ranged from 0.54 to 0.80 during the analyzed period.

  16. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  17. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  18. The Determinants of Equity Risk and Their Forecasting Implications: A Quantile Regression Perspective

    Directory of Open Access Journals (Sweden)

    Giovanni Bonaccolto

    2016-07-01

    Several market and macro-level variables influence the evolution of equity risk in addition to the well-known volatility persistence. However, the impact of those covariates might change depending on the risk level, being different between low and high volatility states. By combining equity risk estimates, obtained from the Realized Range Volatility, corrected for microstructure noise and jumps, and quantile regression methods, we evaluate the forecasting implications of the equity risk determinants in different volatility states and, without distributional assumptions on the realized range innovations, we recover both the point and the conditional distribution forecasts. In addition, we analyse how the relationships among the involved variables evolve over time, through a rolling window procedure. The results show evidence of the selected variables' relevant impacts and, particularly during periods of market stress, highlight heterogeneous effects across quantiles.

  19. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W.; Fayers, Peter M.; Aaronson, Neil K.

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise...

  20. Correlation, Regression and Path Analyses of Seed Yield Components in Crambe abyssinica, a Promising Industrial Oil Crop

    OpenAIRE

    Huang, Banglian; Yang, Yiming; Luo, Tingting; Wu, S.; Du, Xuezhu; Cai, Detian; Loo, van, E.N.; Huang Bangquan

    2013-01-01

    In the present study correlation, regression and path analyses were carried out to determine correlations among the agronomic traits and their contributions to seed yield per plant in Crambe abyssinica. Partial correlation analysis indicated that plant height (X1) was significantly correlated with branching height and the number of first branches (P <0.01); Branching height (X2) was significantly correlated with pod number of primary inflorescence (P <0.01) and number of secondary branch...

  1. Determinants of the probability of adopting quality protein maize (QPM) technology in Tanzania: A logistic regression analysis

    OpenAIRE

    Gregory, T.; Sewando, P.

    2013-01-01

    Adoption of technology is an important factor in economic development. The thrust of this study was to establish factors affecting adoption of QPM technology in Northern zone of Tanzania. Primary data was collected from a random sample of 120 smallholder maize farmers in four villages. Data collected were analysed using descriptive and quantitative methods. Logit model was used to determine factors that influence adoption of QPM technology. The regression results indicated that education of t...

  2. Uncertainty of pesticide residue concentration determined from ordinary and weighted linear regression curve.

    Science.gov (United States)

    Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek

    2018-03-28

    Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. In the scope of this study, the applicability of both ordinary linear and weighted linear regression (OLR and WLR) for pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron capture, nitrogen phosphorus or mass spectrophotometer detectors. Quality criteria of the linear curves including regression coefficient, standard deviation of relative residuals and deviation of back calculated concentrations were calculated both for WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that calibration curve based on WLR complies with all the quality criteria set by international guidelines compared to those calculated with OLR. It means that all the data fit well with WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, uncertainty of calibration curve ranged between 0.1% and 16.3% for OLR and 0% and 12.2% for WLR, and therefore, the two methods gave comparable results.
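
    A hedged sketch contrasting ordinary and 1/x²-weighted least-squares calibration, and the back-calculated concentration at the lowest level, on synthetic calibration data (levels and noise model are assumptions):

    # Ordinary vs weighted (1/x^2) least-squares calibration for a residue assay:
    # compare back-calculated concentrations at the lowest calibration level.
    import numpy as np
    import statsmodels.api as sm

    conc = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])    # mg/kg calibration levels
    rng = np.random.default_rng(7)
    resp = 1000 * conc * (1 + rng.normal(0, 0.05, conc.size))  # detector response, ~5% relative noise

    X = sm.add_constant(conc)
    ols = sm.OLS(resp, X).fit()
    wls = sm.WLS(resp, X, weights=1.0 / conc**2).fit()         # weights favour the low end

    def back_calc(fit, response):
        intercept, slope = fit.params
        return (response - intercept) / slope

    for name, fit in (("OLR", ols), ("WLR", wls)):
        est = back_calc(fit, resp[0])
        print(f"{name}: back-calculated lowest level = {est:.4f} mg/kg "
              f"(true {conc[0]:.2f}, deviation {100*(est-conc[0])/conc[0]:+.1f}%)")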

  3. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage for both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study efficiently examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, and opencast coal mine occupational accident records were used for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used in this study for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. Social facilities-area of surface installations, workshops and opencast mining areas are the areas with the highest probability for accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested for such reported accidents that occurred in 2012 for the ELI in Soma and estimated the probability of exposure to accidents with lost workdays correctly by 70%.

  4. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton

    Science.gov (United States)

    Gusriani, N.; Firdaniza

    2018-03-01

    The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it will produce a model that cannot represent most of the data. For that, we need a regression method that is robust against outliers. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
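
    One way to sketch an MCD-based regression is to plug the Minimum Covariance Determinant location and scatter estimates into the usual coefficient formula; the example below does this with scikit-learn on synthetic outlier-contaminated data (the TELBS method is not shown):

    # Robust regression coefficients derived from the Minimum Covariance
    # Determinant (MCD) estimate of location and scatter, compared with OLS
    # on data contaminated by outliers.
    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(8)
    n = 200
    X = rng.normal(size=(n, 2))
    y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, n)
    y[:15] += 25                                   # a block of gross outliers

    Z = np.column_stack([X, y])                    # joint (predictors, response) data
    mcd = MinCovDet(random_state=0).fit(Z)
    S, mu = mcd.covariance_, mcd.location_

    beta = np.linalg.solve(S[:2, :2], S[:2, 2])    # Sxx^{-1} Sxy
    intercept = mu[2] - mu[:2] @ beta
    print("MCD-based:", np.round(np.r_[intercept, beta], 2))

    # Ordinary least squares for comparison (pulled towards the outliers)
    Xd = np.column_stack([np.ones(n), X])
    print("OLS      :", np.round(np.linalg.lstsq(Xd, y, rcond=None)[0], 2))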

  5. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory.
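
    Minimal sketches of the two techniques discussed — Bland-Altman bias/limits of agreement and a Deming regression with the error-variance ratio set to 1 — on synthetic paired assay values (the data and noise levels are invented):

    # Bland-Altman agreement statistics and Deming regression (error-variance
    # ratio assumed equal to 1) for comparing two quantitative assays.
    import numpy as np

    rng = np.random.default_rng(9)
    truth = rng.uniform(0.05, 0.95, 60)                 # e.g. variant allele fractions
    assay_ref = truth + rng.normal(0, 0.02, truth.size)
    assay_new = 0.02 + 0.97 * truth + rng.normal(0, 0.02, truth.size)  # small constant + proportional error

    # Bland-Altman: mean bias and 95% limits of agreement
    diff = assay_new - assay_ref
    bias, sd = diff.mean(), diff.std(ddof=1)
    print(f"bias = {bias:+.3f}, limits of agreement = [{bias-1.96*sd:.3f}, {bias+1.96*sd:.3f}]")

    # Deming regression with lambda = 1 (closed-form solution)
    x, y = assay_ref, assay_new
    sxx = np.var(x, ddof=1); syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    print(f"Deming: slope = {slope:.3f} (proportional error), intercept = {intercept:+.3f} (constant error)")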

  6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD-Level Meta-Regression Analyses

    Directory of Open Access Journals (Sweden)

    Kevin D. Cashman

    2017-05-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

  7. A method to determine the necessity for global signal regression in resting-state fMRI studies.

    Science.gov (United States)

    Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang

    2012-12-01

    In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses. Therefore, the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether or not global signal regression should be applied. We introduce a method to quantify global noise levels. We show that a criterion for global signal regression can be derived from this method. By using this criterion, one can determine whether to include or exclude the global signal regression in minimizing errors in functional connectivity measures.
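
    The preprocessing step under discussion amounts to regressing the global mean time course out of every voxel time series; a simplified numpy sketch on synthetic data:

    # Global signal regression: remove the global mean time course from each
    # voxel's resting-state time series by least squares.
    import numpy as np

    rng = np.random.default_rng(10)
    n_timepoints, n_voxels = 240, 5000
    data = rng.normal(size=(n_timepoints, n_voxels))
    data += 0.5 * rng.normal(size=(n_timepoints, 1))       # shared ("global") fluctuation

    global_signal = data.mean(axis=1)                      # operational definition in the abstract
    G = np.column_stack([np.ones(n_timepoints), global_signal])
    beta, *_ = np.linalg.lstsq(G, data, rcond=None)        # fit intercept + global signal per voxel
    cleaned = data - G @ beta                              # residuals = "GSR-corrected" data

    print("mean |corr with global signal| before:",
          round(np.abs(np.corrcoef(global_signal, data.T[:200])[0, 1:]).mean(), 3))
    print("mean |corr with global signal| after: ",
          round(np.abs(np.corrcoef(global_signal, cleaned.T[:200])[0, 1:]).mean(), 3))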

  8. Utilizing the Quantile Regression to Explore the Determinants on the Application of E-Learning

    OpenAIRE

    Quang Linh Huynh; Thuy Lan Le Thi

    2014-01-01

    In this research, quantile regression is applied to investigate the factors associated with the application of e-learning. The findings provide a comprehensive picture of the relationships between the application of e-learning and its determinants. They shed light on these complicated relationships: at different quantiles of the conditional distribution of e-learning adoption levels, the influence of the determinants on the application of e-learning differs. More...

  9. Sintering equation: determination of its coefficients by experiments - using multiple regression

    International Nuclear Information System (INIS)

    Windelberg, D.

    1999-01-01

    Sintering is a method for volume compression (or volume contraction) of powdered or grained material applying high temperature (less than the melting point of the material). Maekipirtti tried to find an equation which describes the process of sintering by its main parameters: sintering time, sintering temperature and volume contraction. Such an equation is called a sintering equation. It also contains some coefficients which characterise the behaviour of the material during the process of sintering. These coefficients have to be determined by experiments. Here we show that some linear regressions will produce wrong coefficients, but multiple regression results in a useful sintering equation. (orig.)

  10. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Science.gov (United States)

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259
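
    A hedged sketch of the IPD approach — pooling individual observations from several trials and fitting the dose-response with a study-level random intercept — using statsmodels MixedLM; the synthetic data and the log-intake functional form are assumptions, not the authors' model:

    # Individual participant data (IPD) meta-regression sketch: serum 25(OH)D as a
    # function of log total vitamin D intake, with a random intercept per trial.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    rows = []
    for study in range(7):                                   # seven winter-based RCTs
        n = 126
        intake = rng.uniform(2, 50, n)                       # µg/day total intake (invented)
        study_shift = rng.normal(0, 4)                       # between-study heterogeneity
        serum = 20 + 12 * np.log(intake) + study_shift + rng.normal(0, 8, n)
        rows.append(pd.DataFrame({"study": study, "intake": intake, "serum_25ohd": serum}))
    ipd = pd.concat(rows, ignore_index=True)

    model = smf.mixedlm("serum_25ohd ~ np.log(intake)", ipd, groups=ipd["study"])
    fit = model.fit()
    print(fit.params.round(2))

    # Deriving the intake that keeps 97.5% of individuals above a target such as
    # 50 nmol/L would additionally use the residual and between-study variability.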

  11. Automated monosegmented flow analyser. Determination of glucose, creatinine and urea.

    Science.gov (United States)

    Raimundo Júnior, I M; Pasquini, C

    1997-10-01

    An automated monosegmented flow analyser containing a sampling valve and a reagent addition module and employing a laboratory-made photodiode array spectrophotometer as detection system is described. The instrument was controlled by a 386SX IBM compatible microcomputer through an IC8255 parallel port that communicates with the interface which controls the sampling valve and reagent addition module. The spectrophotometer was controlled by the same microcomputer through an RS232 serial standard interface. The software for the instrument was written in QuickBasic 4.5. Opto-switches were employed to detect the air bubbles limiting the monosegment, allowing precise sample localisation for reagent addition and signal reading. The main characteristics of the analyser are low reagent consumption and high sensitivity which is independent of the sample volume. The instrument was designed to determine glucose, creatinine or urea in blood plasma and serum without hardware modification. The results were compared against those obtained by the Clinical Hospital of UNICAMP using commercial analysers. Correlation coefficients among the methods were 0.997, 0.982 and 0.996 for glucose, creatinine and urea, respectively.

  12. LOGISTIC REGRESSION AS A TOOL FOR DETERMINATION OF THE PROBABILITY OF DEFAULT FOR ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Erika SPUCHLAKOVA

    2017-12-01

    In a rapidly changing world it is necessary to adapt to new conditions, and approaches can vary from day to day. For the proper management of a company it is essential to know its financial situation. Assessment of the company's financial health can be carried out by financial analysis, which provides a number of methods for evaluating it. Analysis indicators are often included in the company assessment, in obtaining bank loans and other financial resources to ensure the functioning of the company. As the company focuses on the future and its planning, it is essential to forecast the future financial situation. According to the results of the company's financial health prediction, the company decides on the extension or limitation of its business. It depends mainly on the capabilities of the company's management how they will use the information obtained from financial analysis in practice. The findings on logistic regression methods were first published in the 1960s, as an alternative to the least squares method. The essence of logistic regression is to determine the relationship between the explained (dependent) variable and the explanatory (independent) variables. The basic principle of this statistical method is based on regression analysis, but unlike linear regression, it can predict the probability that a phenomenon has occurred or not. The aim of this paper is to determine the probability of default of enterprises.

  13. Determining factors influencing survival of breast cancer by fuzzy logistic regression model.

    Science.gov (United States)

    Nikbakht, Roya; Bahrampour, Abbas

    2017-01-01

    Fuzzy logistic regression models can be used to determine the influential factors of a disease. This study explores the factors that actually predict survival in breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were entered into the fuzzy logistic regression model. Performance of the model was assessed in terms of the mean degree of membership (MDM). The results showed that almost 41% of patients were in the neoplasm and malignant group and that more than two-thirds of them were still alive after 5 years of follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, respectively. Furthermore, the MDM criterion shows that the fuzzy logistic regression has a good fit to the data (MDM = 0.86). The fuzzy logistic regression model showed that chemotherapy is more important than radiotherapy for survival of patients with breast cancer. Another ability of this model is the calculation of the possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research and, since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.

  14. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Morales P, R.

    1992-06-01

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode spacing. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimates obtained with a second-degree polynomial regression model; at the same time, the standard error improves by up to 50% for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
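
    The point about the two fits can be illustrated with a few lines of curve fitting: the slope of the extrapolation curve at zero electrode spacing, which determines the dose, differs between a straight-line and a quadratic fit when the data are curved. The numbers below are invented for illustration and are not the chamber readings from the study.

        # Linear vs. quadratic extrapolation-curve fit (synthetic readings).
        import numpy as np

        spacing = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])        # electrode spacing, mm
        reading = np.array([1.10, 2.35, 3.75, 5.30, 7.00, 8.85])  # ionisation reading

        lin = np.polyfit(spacing, reading, 1)    # [slope, intercept]
        quad = np.polyfit(spacing, reading, 2)   # [a, b, c]; slope at zero spacing is b

        print("linear slope:", lin[0])
        print("quadratic slope at zero spacing:", quad[1])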

  15. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Full Text Available Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans, short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets, the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed significantly in their RLTL profiles, those differences were not correlated with productive lifespan (p = 0.954).
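
    As an aside on the modelling machinery, a quadratic Legendre basis of the kind used in such random regression models can be generated directly once age is rescaled to the interval [-1, 1]. The ages below are arbitrary, and the basis is shown without the normalisation constants some animal-breeding software applies.

        # Quadratic Legendre basis over a rescaled age trajectory (illustrative).
        import numpy as np
        from numpy.polynomial import legendre

        age_months = np.array([1, 12, 24, 48, 72, 96])        # hypothetical ages
        a_min, a_max = age_months.min(), age_months.max()
        x = 2 * (age_months - a_min) / (a_max - a_min) - 1    # rescale to [-1, 1]

        basis = legendre.legvander(x, 2)   # columns: Legendre orders 0, 1, 2
        print(basis)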

  16. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    Science.gov (United States)

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetical means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% had pulmonary hypertension that was completely reversible on reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors were the level of free thyroxine, pulmonary vascular resistance and cardiac output; new factors identified in this study were the pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically a parabola) better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is therefore given by a polynomial equation of the second degree.
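
    The comparison the authors describe, a first-degree against a second-degree fit judged by the determination coefficient, AIC and an F-test, can be reproduced on any data set in a few lines. The data below are synthetic and stand in for the PAPs-versus-factor relationship only schematically.

        # Linear vs. quadratic model compared by R-squared, AIC and an F-test.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, 60)
        y = 5 + 0.8 * x + 0.15 * x**2 + rng.normal(0, 1.5, 60)

        m1 = sm.OLS(y, sm.add_constant(x)).fit()                           # linear
        m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()  # quadratic

        print("R2 linear/quadratic:", m1.rsquared, m2.rsquared)
        print("AIC linear/quadratic:", m1.aic, m2.aic)
        print("F-test for the quadratic term:", m2.compare_f_test(m1))  # (F, p, df_diff)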

  17. Regression Techniques for Determining the Effective Impervious Area in Southern California Watersheds

    Science.gov (United States)

    Sultana, R.; Mroczek, M.; Dallman, S.; Sengupta, A.; Stein, E. D.

    2016-12-01

    The portion of the Total Impervious Area (TIA) that is hydraulically connected to the storm drainage network is called the Effective Impervious Area (EIA). The remaining fraction of impervious area, called the non-effective impervious area, drains onto pervious surfaces and does not contribute to runoff for smaller events. Using the TIA instead of the EIA in models and calculations can lead to overestimates of runoff volumes and peak discharges and to oversizing of drainage systems, since it is assumed that all impervious areas produce urban runoff that is directly connected to storm drains. This makes EIA a better predictor of actual runoff from urban catchments for the hydraulic design of storm drain systems and for modeling non-point source pollution. Compared with TIA, EIA is considerably more difficult to determine, since it cannot be found from remote sensing techniques, readily available EIA datasets, or aerial imagery interpretation alone. For this study, EIA percentages were calculated by two successive regression methods for five watersheds (with areas of 8.38-158 mi²) located in Southern California using rainfall-runoff event data for the years 2004-2007. Runoff generated by the smaller storm events is considered to emanate only from the effective impervious areas. Therefore, larger events considered to have runoff from both impervious and pervious surfaces were successively removed in the regression methods using a criterion of (1) 1 mm and (2) a max (2, 1 mm) above the regression line, where MSE is calculated from actual runoff and runoff predicted by the regression. Analysis of standard deviations showed that the max (2, 1 mm) criterion better fit the regression line and is the preferred method for predicting the EIA percentage. The estimated EIAs were found to be approximately 78% to 43% of the TIA, which shows that use of EIA instead of TIA can have a significant impact on the cost of building urban hydraulic systems and stormwater capture devices.

  18. Determination of osteoporosis risk factors using a multiple logistic regression model in postmenopausal Turkish women.

    Science.gov (United States)

    Akkus, Zeki; Camdeviren, Handan; Celik, Fatma; Gur, Ali; Nas, Kemal

    2005-09-01

    To determine the risk factors of osteoporosis using a multiple binary logistic regression method and to assess the risk variables for osteoporosis, which is a major and growing health problem in many countries. We present a case-control study consisting of 126 postmenopausal healthy women as the control group and 225 postmenopausal osteoporotic women as the case group. The study was carried out in the Department of Physical Medicine and Rehabilitation, Dicle University, Diyarbakir, Turkey between 1999 and 2002. The data from the 351 participants were collected using a standard questionnaire that contains 43 variables. A multiple logistic regression model was then used to evaluate the data and to find the best regression model. We classified 80.1% (281/351) of the participants using the regression model. Furthermore, the specificity of the model was 67% (84/126) in the control group while the sensitivity was 88% (197/225) in the case group. Using the Kolmogorov-Smirnov test, we found the distribution of standardized residual values for the final model to be exponential (p=0.193). The receiver operating characteristic curve was found successful in predicting patients at risk for osteoporosis. This study suggests that low levels of dietary calcium intake, physical activity and education, and a longer duration of menopause are independent predictors of the risk of low bone density in our population. Adequate dietary calcium intake in combination with maintaining daily physical activity, increasing educational level, decreasing birth rate, and duration of breast-feeding may contribute to healthy bones and play a role in the practical prevention of osteoporosis in Southeast Anatolia. In addition, the findings of the present study indicate that the use of a multivariate statistical method such as multiple logistic regression in osteoporosis, which may be influenced by many variables, is better than univariate statistical evaluation.

  19. Comparison of cranial sex determination by discriminant analysis and logistic regression.

    Science.gov (United States)

    Amores-Ampuero, Anabel; Alemán, Inmaculada

    2016-04-05

    Various methods have been proposed for estimating dimorphism. The objective of this study was to compare sex determination results from cranial measurements using discriminant analysis or logistic regression. The study sample comprised 130 individuals (70 males) of known sex, age, and cause of death from San José cemetery in Granada (Spain). Measurements of 19 neurocranial dimensions and 11 splanchnocranial dimensions were subjected to discriminant analysis and logistic regression, and the percentages of correct classification were compared between the sex functions obtained with each method. The discriminant capacity of the selected variables was evaluated with a cross-validation procedure. The percentage accuracy with discriminant analysis was 78.2% for the neurocranium (82.4% in females and 74.6% in males) and 73.7% for the splanchnocranium (79.6% in females and 68.8% in males). These percentages were higher with logistic regression analysis: 85.7% for the neurocranium (in both sexes) and 94.1% for the splanchnocranium (100% in females and 91.7% in males).

  20. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Directory of Open Access Journals (Sweden)

    Abdelfattah M. Selim

    2018-03-01

    Full Text Available Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between being seropositive for BVDV and all risk factors except species of animal. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes, which was 2.237. Likelihood ratio tests showed a significant drop of the -2LL from univariable to multivariable LR models. Conclusion: There was evidence of a high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was shown to provide more information for association and prediction purposes than univariable LR models and Chi-square tests when more than one predictor is available.

  1. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of a multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is a function of the foreign trade balance on the one hand and of credit cards, i.e. indebtedness of the population on this basis, on the other hand, in the USA from 1999 to 2008. We used an extended application model which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended to check the compatibility of the data with the model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model was carried out with the PASW Statistics 17 program.

  2. Automated Decisional Model for Optimum Economic Order Quantity Determination Using Price Regressive Rates

    Science.gov (United States)

    Roşu, M. M.; Tarbă, C. I.; Neagu, C.

    2016-11-01

    The current models for inventory management are complementary, but together they offer a large palette of elements for solving the complex problems companies face when establishing the optimum economic order quantity for unfinished products, raw materials, goods, etc. The main objective of this paper is to elaborate an automated decisional model for calculating the economic order quantity, taking into account price rates that regress with the total order quantity. This model has two main objectives: first, to determine the ordering periodicity n or the order quantity q; second, to determine the stock levels: the control level, the security stock, etc. In this way we can answer two fundamental questions: How much must be ordered? When must it be ordered? In current practice, a company's relationships with its suppliers are based on regressive price rates, meaning that suppliers may grant discounts above a certain ordered quantity. Thus, the unit price of the products is a variable which depends on the order size, and the most important element for choosing the optimum economic order quantity is the total ordering cost, which depends on the following elements: the average unit price, the stock-holding cost, the ordering cost, etc.
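
    A common textbook version of this problem is the all-units quantity-discount EOQ: compute the unconstrained EOQ for each price tier, push it up to the tier's minimum quantity if necessary, and pick the quantity with the lowest total annual cost. The sketch below follows that logic with invented demand, cost and price-break figures; it is not the paper's decisional model.

        # All-units quantity-discount EOQ sketch (hypothetical figures).
        import math

        D = 12000        # annual demand, units
        S = 50.0         # ordering cost per order
        h_rate = 0.20    # holding cost as a fraction of unit price per year
        breaks = [(0, 10.0), (500, 9.5), (1000, 9.0)]   # (minimum quantity, unit price)

        best = None
        for min_qty, price in breaks:
            h = h_rate * price
            q = math.sqrt(2 * D * S / h)    # unconstrained EOQ at this price
            q = max(q, min_qty)             # make the quantity feasible for the break
            total = D * price + D / q * S + q / 2 * h
            if best is None or total < best[0]:
                best = (total, q, price)

        print("order %.0f units at %.2f per unit, total annual cost %.2f"
              % (best[1], best[2], best[0]))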

  3. An attempt to evaluate some regression models used for radiometric ash determination in the brown coal

    International Nuclear Information System (INIS)

    Karamuz, S.; Urbanski, P.; Antoniak, W.; Wagner, D.

    1984-01-01

    Five different regression models for determination of the ash, iron and calcium contents in brown coal using fluorescence and scattering of X-rays have been evaluated. Calculations were done using experimental results obtained from natural brown coal samples to which appropriate quantities of iron, calcium and silicon oxides were added. The secondary radiation was excited by a Pu-238 source and detected by an argon-filled X-ray proportional counter. The investigation has shown the superiority of the multiparametric models over radiometric ash determination in pit coal using an aluminium filter to correct for the influence of iron content on the intensity of scattered radiation. The standard error of estimation for the best algorithm is about three times smaller than that for the algorithm simulating application of the aluminium filter. Statistical parameters of the considered algorithms are reviewed and discussed. (author)

  4. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis.

    Science.gov (United States)

    Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O

    2017-03-05

    The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in the pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for the analysis of a PAH (anthracene) and chiral-PAH analogue derivatives (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of the anthracene and TFE-enantiomer complexes with methyl-β-cyclodextrin (Me-β-CD) were also determined. Chemometric partial-least-squares (PLS) regression analysis of the emission spectra of the Me-β-CD guest-host inclusion complexes was used to determine anthracene and TFE enantiomer concentrations in guest-host inclusion complex samples. The calculated Kb values and negative ΔG suggest that the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions are thermodynamically favorable. However, the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity and thermodynamic properties. The PLS regression analysis yielded squared correlation coefficients of 0.997530 or better and low LODs of 3.81×10⁻⁷ M for anthracene and 3.48×10⁻⁸ M for the TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with average errors as low as 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of the high sensitivity and accuracy achieved for analysis of PAH and chiral PAH analogue derivatives without the need for an expensive chiral column, enantiomeric resolution, or use of a polarized
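
    A bare-bones version of the PLS calibration step reads as follows: spectra are the predictors, known concentrations the response, and cross-validated predictions gauge the calibration error. The simulated spectra and concentration range are placeholders, not the fluorescence data of the study.

        # PLS calibration of concentration against spectra (synthetic data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(2)
        n_samples, n_channels = 40, 200
        conc = rng.uniform(1e-7, 1e-5, n_samples)   # hypothetical molar concentrations
        band = np.exp(-0.5 * ((np.arange(n_channels) - 90) / 15) ** 2)
        spectra = np.outer(conc, band) + rng.normal(0, 1e-8, (n_samples, n_channels))

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, spectra, conc, cv=5).ravel()
        print("mean relative error: %.2f%%" % (100 * np.mean(np.abs(pred - conc) / conc)))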

  5. Determinants of the probability of adopting quality protein maize (QPM) technology in Tanzania: A logistic regression analysis

    Directory of Open Access Journals (Sweden)

    Gregory, T.

    2013-06-01

    Full Text Available Adoption of technology is an important factor in economic development. The thrust of this study was to establish the factors affecting adoption of QPM technology in the Northern zone of Tanzania. Primary data were collected from a random sample of 120 smallholder maize farmers in four villages. The data collected were analysed using descriptive and quantitative methods. A logit model was used to determine the factors that influence adoption of QPM technology. The regression results indicated that education of the household head, farmers' participation in demonstration trials, attendance at field days, and the number of livestock owned positively influenced the rate of adoption of the technology. Access to credit and farmers' perception of poor QPM marketing negatively influenced the rate of adoption. The study recommends that the government ensure efficient input-output linkages for QPM production.

  6. Determination of baroreflex sensitivity during the modified Oxford maneuver by trigonometric regressive spectral analysis.

    Directory of Open Access Journals (Sweden)

    Julia Gasch

    Full Text Available BACKGROUND: Differences in spontaneous and drug-induced baroreflex sensitivity (BRS) have been attributed to its different operating ranges. The current study compared BRS estimates during cardiovascular steady state and pharmacological stimulation using an innovative algorithm for dynamic determination of baroreflex gain. METHODOLOGY/PRINCIPAL FINDINGS: Forty-five volunteers underwent the modified Oxford maneuver in the supine and 60° tilted positions with blood pressure and heart rate being continuously recorded. Drug-induced BRS estimates were calculated from data obtained by bolus injections of nitroprusside and phenylephrine. Spontaneous indices were derived from data obtained during rest (stationary) and under pharmacological stimulation (non-stationary) using the algorithm of trigonometric regressive spectral (TRS) analysis. Spontaneous and drug-induced BRS values were significantly correlated and displayed directionally similar changes under different situations. Using the Bland-Altman method, systematic differences between spontaneous and drug-induced estimates were found, revealing that the discrepancy can be as large as the gain itself. Fixed bias was not evident with ordinary least products regression. The correlation and agreement between the estimates increased significantly when BRS was calculated by TRS in non-stationary mode during the drug injection period. TRS-BRS significantly increased during phenylephrine and decreased under nitroprusside. CONCLUSIONS/SIGNIFICANCE: TRS analysis provides a reliable, non-invasive assessment of human BRS not only under static steady-state conditions, but also during pharmacological perturbation of the cardiovascular system.

  7. Application of Logistic Regression Tree Model in Determining Habitat Distribution of Astragalus verus

    Directory of Open Access Journals (Sweden)

    M. Saki

    2013-03-01

    Full Text Available The relationship between plant species and environmental factors has always been a central issue in plant ecology. With the rising power of statistical techniques, geo-statistics and geographic information systems (GIS), the development of predictive habitat distribution models of organisms has rapidly increased in ecology. This study aimed to evaluate the ability of a Logistic Regression Tree model to create a potential habitat map for Astragalus verus. This species produces tragacanth and has economic value. A stratified random sampling was applied to 100 sites (50 presence, 50 absence) of the given species, and environmental and edaphic factor maps were produced for the whole study area using Kriging and Inverse Distance Weighting methods in the ArcGIS software. Relationships between species occurrence and environmental factors were determined by the Logistic Regression Tree model and extended to the whole study area. The results indicated that species occurrence has a strong correlation with environmental factors such as mean daily temperature and the clay, EC and organic carbon content of the soil. Species occurrence showed a direct relationship with mean daily temperature, clay and organic carbon, and an inverse relationship with EC. Model accuracy was evaluated both by Cohen's kappa statistic (κ) and by the area under the Receiver Operating Characteristic curve based on an independent test data set. Their values (kappa = 0.9, AUC of ROC = 0.96) indicated the high power of LRT to create potential habitat maps at local scales. This model, therefore, can be applied to recognize potential sites for rangeland reclamation projects.

  8. Structured Additive Quantile Regression for Assessing the Determinants of Childhood Anemia in Rwanda.

    Science.gov (United States)

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2017-06-17

    Childhood anemia is among the most significant health problems faced by public health departments in developing countries. This study aims at assessing the determinants and possible spatial effects associated with childhood anemia in Rwanda. The 2014/2015 Rwanda Demographic and Health Survey (RDHS) data was used. The analysis was done using the structured spatial additive quantile regression model. The findings of this study revealed that the child's age; the duration of breastfeeding; gender of the child; the nutritional status of the child (whether underweight and/or wasting); whether the child had a fever; had a cough in the two weeks prior to the survey or not; whether the child received vitamin A supplementation in the six weeks before the survey or not; the household wealth index; literacy of the mother; mother's anemia status; and mother's age at birth are all significant factors associated with childhood anemia in Rwanda. Furthermore, significant structured spatial location effects on childhood anemia were found.
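
    Setting aside the structured spatial and additive terms, the core of such an analysis is a quantile regression of the outcome on the covariates. The sketch below fits the 25th percentile of a simulated haemoglobin-like outcome; the variable names and effect sizes are assumptions for illustration, not RDHS results.

        # Quantile regression sketch (simulated data, 25th percentile).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n = 400
        df = pd.DataFrame({
            "child_age": rng.uniform(6, 59, n),   # months, hypothetical
            "wealth": rng.integers(1, 6, n),      # wealth index 1-5, hypothetical
        })
        df["hb"] = 9.5 + 0.02 * df["child_age"] + 0.3 * df["wealth"] + rng.normal(0, 1, n)

        model = smf.quantreg("hb ~ child_age + wealth", df).fit(q=0.25)
        print(model.summary())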

  9. Identification of Determinants of Sports Skill Level in Badminton Players Using the Multiple Regression Model

    Directory of Open Access Journals (Sweden)

    Jaworski Janusz

    2016-03-01

    Full Text Available Purpose. The aim of the study was to evaluate somatic and functional determinants of sports skill level in badminton players at three consecutive stages of training. Methods. The study examined 96 badminton players aged 11 to 19 years. The scope of the study included somatic characteristics, physical abilities and neurosensory abilities. Thirty-nine variables were analysed for each athlete. Coefficients of multiple determination were used to evaluate the effect of structural and functional parameters on sports skill level in badminton players. Results. In the group of younger cadets, quality and effectiveness of play were mostly determined by the level of physical abilities. In the group of cadets, the most important determinants were physical abilities, followed by somatic characteristics; in this group, coordination abilities were also important. In juniors, the set of variables reflecting physical abilities was the most pronounced. Conclusions. Models of determination of sports skill level are most noticeable in the group of cadets. In all three groups of badminton players, the dominant effect on the quality of play is due to the set of variables that determine physical abilities.

  10. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

    1994-01-01

    Multiple regression modeling of monitored building energy use data is often faulted as a reliable means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected, and synthetic data sequences representative of daily energy use in large institutional buildings were generated at each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. The MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified and preliminary recommendations on the use of either modeling approach were formulated. (orig.)
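
    The contrast between the two approaches can be seen directly on synthetic data with deliberately collinear weather regressors: an ordinary multiple regression on the raw variables versus a regression on the leading principal components. The weather model and coefficients below are invented, not the study's data-generation scheme.

        # Multiple regression vs. principal component regression (synthetic data).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        n = 365
        temp = rng.normal(20, 8, n)
        humidity = 0.6 * temp + rng.normal(0, 2, n)   # deliberately collinear with temp
        solar = rng.normal(500, 100, n)
        X = np.column_stack([temp, humidity, solar])
        energy = 100 + 3 * temp + 1.5 * humidity + 0.05 * solar + rng.normal(0, 10, n)

        mra = LinearRegression().fit(X, energy)                 # multiple regression
        pca = PCA(n_components=2).fit(X)
        pcr = LinearRegression().fit(pca.transform(X), energy)  # regression on components

        print("MRA R2:", mra.score(X, energy))
        print("PCR R2 (2 components):", pcr.score(pca.transform(X), energy))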

  11. DETERMINATION OF FACTORS AFFECTING LENGTH OF STAY WITH MULTINOMIAL LOGISTIC REGRESSION IN TURKEY

    Directory of Open Access Journals (Sweden)

    Öğr. Gör. Rukiye NUMAN TEKİN

    2016-08-01

    Full Text Available Length of stay (LOS) has important implications for various aspects of health services and can vary according to a wide range of factors. It is notable that LOS has been largely neglected in both theoretical studies and the practice of health care management in Turkey. The main purpose of this study is to identify factors related to LOS in Turkey. A retrospective analysis was conducted of 2,255,836 patients hospitalized between January 1 and December 31, 2010, in private, university, foundation-university and other (municipality, association and foreign/minority) hospitals that have an agreement with the Social Security Institution (SSI) in Turkey. Patients' data were taken from MEDULA (the national electronic invoice system), and SPSS 18.0 was used to perform the statistical analysis. In this study, t-tests, one-way ANOVA and multinomial logistic regression were used to determine the variables that may affect LOS. The average LOS was 3.93 days (SD = 5.882). LOS showed a statistically significant difference according to all independent variables used in the study (age, gender, disease class, type of hospitalization, presence of comorbidity, type and number of surgery, season of hospitalization, and hospital ownership/bed capacity/geographical region/residential area/type of service). According to the results of the multinomial logistic regression analysis, LOS was negatively affected by gender, presence of comorbidity and geographical region of the hospital, and positively affected by age, season of hospitalization, and hospital bed capacity/ownership/type of service/residential area.
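
    For readers unfamiliar with the method, a multinomial logistic model relates a categorical outcome with more than two levels (here a categorised length of stay) to a set of predictors, estimating one coefficient vector per non-reference category. The sketch below uses simulated patients and an invented short/medium/long split; it does not reproduce the MEDULA data or the paper's variable set.

        # Multinomial logistic regression on a categorised length of stay (simulated).
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 1000
        age = rng.uniform(18, 90, n)
        comorbidity = rng.binomial(1, 0.3, n)
        score = 0.03 * age + 1.2 * comorbidity + rng.normal(0, 1, n)
        # 0 = short, 1 = medium, 2 = long stay (arbitrary cut points)
        los_class = pd.cut(score, bins=[-np.inf, 1.5, 3.0, np.inf], labels=False)

        X = sm.add_constant(np.column_stack([age, comorbidity]))
        model = sm.MNLogit(los_class, X).fit(disp=False)
        print(model.params)   # one coefficient column per non-reference category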

  12. Determination of the Static Anthropometric Characteristics of Iranian Microscope Users Via Regression Model

    Directory of Open Access Journals (Sweden)

    Toktam Balandeh

    2016-04-01

    Full Text Available Background: Anthropometry is a branch of ergonomics concerned with the measurement and description of human body dimensions. Accordingly, equipment, environments, and workstations should be designed using user-centered design processes that take ergonomic and anthropometric principles into account, since anthropometric dimensions differ considerably across gender, race, ethnicity and age. The aim of this study was to determine the anthropometric characteristics of microscope users and to provide a regression model for anthropometric dimensions. Methods: In this cross-sectional study, 18 anthropometric dimensions of microscope users (N=174; 78 males and 96 females) in Shiraz were measured. Instruments included a stadiometer, two types of calipers, adjustable seats, a 40-cm ruler, a tape measure, and scales. The study data were analyzed using SPSS, version 20. Results: The mean ages of male and female microscope users were 31.64±8.86 and 35±10.9 years, respectively, and their mean heights were 161.03±6.87 cm and 174.81±5.45 cm, respectively. The results showed that sitting and standing eye height and the sitting horizontal range of accessibility had a significant correlation with stature. Conclusion: The established anthropometric database can be used as a source for designing workstations for working with microscopes in this group of users. The regression analysis showed that three dimensions, i.e. standing eye height, sitting eye height, and the sitting horizontal range of accessibility, had a significant correlation with stature. Therefore, given one's stature, these dimensions can be estimated with less measurement.

  13. Investigating the Determinants of Toxoplasma gondii Prevalence in Meat: A Systematic Review and Meta-Regression.

    Directory of Open Access Journals (Sweden)

    Simone Belluco

    Full Text Available Toxoplasma gondii is one of the most widespread parasites in humans and can cause severe illness in immunocompromised individuals. However, its role in healthy people is probably under-appreciated. The complex epidemiology of this protozoan recognizes several infection routes, but consumption of contaminated food is likely the predominant one. Among foods, consumption of raw and undercooked meat is a relevant route of transmission, but the role of different meat-producing animal species and the meats derived from them is controversial. The aim of the present work is to summarize and analyse literature data reporting prevalence estimates of T. gondii in meat animals/meats. We searched Medline, Web of Science, and Science Direct (last update 31/03/2015). Relevant papers had to report data from primary studies dealing with the prevalence of T. gondii in meat from livestock species as obtained through direct detection methods. Meta-analysis and meta-regression were performed. Of 1915 papers screened, 69 papers were included, dealing mainly with cattle, pigs and sheep. Pooled prevalences, based on random-effect models, were 2.6% (CI95 [0.5-5.8]) for cattle, 12.3% (CI95 [7.6-17.8]) for pigs and 14.7% (CI95 [8.9-21.5]) for sheep. Due to the high heterogeneity observed, univariable and multivariable meta-regression models were fitted, showing that the geographic area for cattle (p = 0.032), the farming type for pigs (p = 0.0004) and the sample composition for sheep (p = 0.03) had significant effects on the prevalences of Toxoplasma detected/estimated. Moreover, the role of different animal species depended on the geographic location of the animals' origin. Limitations were due mainly to possible publication bias. The present work confirms the role of meat, including beef, as a T. gondii source, and highlights the need for a control system for this parasite to be implemented along the meat production chain. Moreover, consumer knowledge should be strengthened in order to reduce

  14. Evaluation Of Three Methods Of Sugar Analyses For Determination ...

    African Journals Online (AJOL)

    Chemical methods developed by Lane and Eynon, Knight and Allen, and colorimetric method by Dubois et al were used to determine reducing sugar in eight fruit samples. The methods showed detection limits as follows: Lane and Eynon (1ppt); Knight and Allen (0.1ppt); and Dubois et al (<2ppm). The coefficients of ...

  15. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

    Science.gov (United States)

    Laszlo, Sarah; Federmeier, Kara D.

    2010-01-01

    Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

  16. Using the Coefficient of Determination R² to Test the Significance of Multiple Linear Regression

    Science.gov (United States)

    Quinino, Roberto C.; Reis, Edna A.; Bessegato, Lupercio F.

    2013-01-01

    This article proposes the use of the coefficient of determination as a statistic for hypothesis testing in multiple linear regression based on distributions acquired by beta sampling. (Contains 3 figures.)
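
    For context, the conventional route from R² to a significance statement is the overall F-test; the proposed beta-sampling approach addresses the same hypothesis. A quick numerical check of the standard test, with made-up values of R², sample size and number of predictors:

        # Overall F-test computed from R-squared (hypothetical numbers).
        from scipy import stats

        r2, n, k = 0.35, 120, 4                      # R-squared, cases, predictors
        F = (r2 / k) / ((1 - r2) / (n - k - 1))
        p_value = stats.f.sf(F, k, n - k - 1)
        print(F, p_value)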

  17. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    Science.gov (United States)

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of isoniazid during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage forms for tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO consisting of 20 different combinations was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of both methods for the quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.

  18. Multiple regression as a preventive tool for determining the risk of Legionella spp.

    Directory of Open Access Journals (Sweden)

    Enrique Gea-Izquierdo

    2012-04-01

    Full Text Available Objective. To determine the interrelationship between health and hygiene conditions for the prevention of legionellosis, the composition of materials used in water distribution systems, the water origin and the risk of Legionella pneumophila. Material and methods. The work comprises a descriptive study and multiple regression analysis on a sample of golf course sprinkler irrigation systems (n=31) pertaining to hotels located on the Costa del Sol (Malaga, Spain). The study was carried out in 2009. Results. The model presented a significant linear relation, with all the independent variables contributing significantly (p<0.05) to the model's fit. The relationship between water type and Legionella risk, as well as between material composition and that risk, is linear and positive. In contrast, the relationship between health-hygiene conditions and Legionella risk is linear and negative. Conclusion. The characterization of Legionella pneumophila concentration, as defined by the risk in water and through use of the predictive method, can contribute to the consideration of new influence variables in the development of the agent, resulting in improved control and prevention of the disease.

  19. Structured Additive Quantile Regression for Assessing the Determinants of Childhood Anemia in Rwanda

    Directory of Open Access Journals (Sweden)

    Faustin Habyarimana

    2017-06-01

    Full Text Available Childhood anemia is among the most significant health problems faced by public health departments in developing countries. This study aims at assessing the determinants and possible spatial effects associated with childhood anemia in Rwanda. The 2014/2015 Rwanda Demographic and Health Survey (RDHS) data was used. The analysis was done using the structured spatial additive quantile regression model. The findings of this study revealed that the child's age; the duration of breastfeeding; gender of the child; the nutritional status of the child (whether underweight and/or wasting); whether the child had a fever; had a cough in the two weeks prior to the survey or not; whether the child received vitamin A supplementation in the six weeks before the survey or not; the household wealth index; literacy of the mother; mother's anemia status; and mother's age at birth are all significant factors associated with childhood anemia in Rwanda. Furthermore, significant structured spatial location effects on childhood anemia were found.

  20. Determination of carbohydrates present in Saccharomyces cerevisiae using mid-infrared spectroscopy and partial least squares regression.

    Science.gov (United States)

    Plata, Maria R; Koch, Cosima; Wechselberger, Patrick; Herwig, Christoph; Lendl, Bernhard

    2013-10-01

    A fast and simple method to control variations in carbohydrate composition of Saccharomyces cerevisiae, baker's yeast, during fermentation was developed using mid-infrared (mid-IR) spectroscopy. The method allows for precise and accurate determinations with minimal or no sample preparation and reagent consumption based on mid-IR spectra and partial least squares (PLS) regression. The PLS models were developed employing the results from reference analysis of the yeast cells. The reference analyses quantify the amount of trehalose, glucose, glycogen, and mannan in S. cerevisiae. The selection and optimization of pretreatment steps of samples such as the disruption of the yeast cells and the hydrolysis of mannan and glycogen to obtain monosaccharides were carried out. Trehalose, glucose, and mannose were determined using high-performance liquid chromatography coupled with a refractive index detector and total carbohydrates were measured using the phenol-sulfuric method. Linear concentration range, accuracy, precision, LOD and LOQ were examined to check the reliability of the chromatographic method for each analyte.

  1. New strategy for determination of anthocyanins, polyphenols and antioxidant capacity of Brassica oleracea liquid extract using infrared spectroscopies and multivariate regression

    Science.gov (United States)

    de Oliveira, Isadora R. N.; Roque, Jussara V.; Maia, Mariza P.; Stringheta, Paulo C.; Teófilo, Reinaldo F.

    2018-04-01

    A new method was developed to determine the antioxidant properties of red cabbage extract (Brassica oleracea) by mid (MID) and near (NIR) infrared spectroscopies and partial least squares (PLS) regression. A 70% (v/v) ethanolic extract of red cabbage was concentrated to 9° Brix and further diluted (12 to 100%) in water. The dilutions were used as external standards for building the PLS models; for the first time, this strategy was applied to building multivariate regression models. Reference analyses and spectral data were obtained from the diluted extracts. The determined properties were total and monomeric anthocyanins, total polyphenols and antioxidant capacity by the ABTS (2,2-azino-bis(3-ethyl-benzothiazoline-6-sulfonate)) and DPPH (2,2-diphenyl-1-picrylhydrazyl) methods. Ordered predictors selection (OPS) and a genetic algorithm (GA) were used for feature selection before PLS regression (PLS-1). In addition, a PLS-2 regression was applied to all properties simultaneously. PLS-1 models were more predictive than PLS-2 regression. The PLS-OPS and PLS-GA models presented excellent prediction results with correlation coefficients higher than 0.98. However, the best models were obtained using PLS with variable selection by the OPS algorithm, and the models based on NIR spectra were considered more predictive for all properties. These models therefore provide a simple, rapid and accurate method for determining the antioxidant properties of red cabbage extract and are suitable for use in the food industry.

  2. Structural vascular disease in Africans: performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: the SABPA study

    OpenAIRE

    Botha, J.; De Ridder, J.H.; Potgieter, J.C.; Steyn, H.S.; Malan, L.

    2013-01-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fa...

  3. Using interval maxima regression (IMR) to determine environmental optima controlling Microcystis spp. growth in Lake Taihu.

    Science.gov (United States)

    Li, Ming; Peng, Qiang; Xiao, Man

    2016-01-01

    Fortnightly investigations at 12 sampling sites in Meiliang Bay and Gonghu Bay of Lake Taihu (China) were carried out from June to early November 2010. The relationship between abiotic factors and cell density of different Microcystis species was analyzed using the interval maxima regression (IMR) to determine the optimum temperature and nutrient concentrations for growth of different Microcystis species. Our results showed that cell density of all the Microcystis species increased along with the increase of water temperature, but Microcystis aeruginosa adapted to a wide range of temperatures. The optimum total dissolved nitrogen concentrations for M. aeruginosa, Microcystis wesenbergii, Microcystis ichthyoblabe, and unidentified Microcystis were 3.7, 2.0, 2.4, and 1.9 mg L(-1), respectively. The optimum total dissolved phosphorus concentrations for different species were M. wesenbergii (0.27 mg L(-1)) > M. aeruginosa (0.1 mg L(-1)) > M. ichthyoblabe (0.06 mg L(-1)) ≈ unidentified Microcystis, and the iron (Fe(3+)) concentrations were M. wesenbergii (0.73 mg L(-1)) > M. aeruginosa (0.42 mg L(-1)) > M. ichthyoblabe (0.35 mg L(-1)) > unidentified Microcystis (0.09 mg L(-1)). The above results suggest that if phosphorus concentration was reduced to 0.06 mg L(-1) or/and iron concentration was reduced to 0.35 mg L(-1) in Lake Taihu, the large colonial M. wesenbergii and M. aeruginosa would be replaced by small colonial M. ichthyoblabe and unidentified Microcystis. Thereafter, the intensity and frequency of the occurrence of Microcystis blooms would be reduced by changing Microcystis species composition.

  4. Determining Balıkesir’s Energy Potential Using a Regression Analysis Computer Program

    Directory of Open Access Journals (Sweden)

    Bedri Yüksel

    2014-01-01

    Full Text Available Solar power and wind energy are used concurrently during specific periods, while at other times only the more efficient is used, and hybrid systems make this possible. When establishing a hybrid system, the extent to which these two energy sources support each other needs to be taken into account. This paper is a study of the effects of wind speed, insolation levels, and the meteorological parameters of temperature and humidity on the energy potential in Balıkesir, in the Marmara region of Turkey. The relationship between the parameters was studied using a multiple linear regression method. Using a designed-for-purpose computer program, two different regression equations were derived, with wind speed being the dependent variable in the first and insolation levels in the second. The regression equations yielded accurate results. The computer program allowed for the rapid calculation of different acceptance rates. The results of the statistical analysis proved the reliability of the equations. An estimate of identified meteorological parameters and unknown parameters could be produced with a specified precision by using the regression analysis method. The regression equations also worked for the evaluation of energy potential.

  5. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
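
    The residual-asymmetry idea is easy to demonstrate by simulation: when the true predictor is non-normal, the residuals of the correctly specified model are closer to symmetric than those of the reverse regression. The distributions and sample size below are arbitrary choices, not the simulation design of the paper.

        # Direction of dependence via residual skewness (small simulation).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        x = rng.exponential(1.0, 2000)          # skewed explanatory variable
        y = 0.7 * x + rng.normal(0, 1, 2000)

        res_yx = y - np.polyval(np.polyfit(x, y, 1), x)   # residuals of y on x
        res_xy = x - np.polyval(np.polyfit(y, x, 1), y)   # residuals of x on y

        print("skewness, y on x:", stats.skew(res_yx))
        print("skewness, x on y:", stats.skew(res_xy))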

  6. Exploratory regression analysis: a tool for selecting models and determining predictor importance.

    Science.gov (United States)

    Braun, Michael T; Oswald, Frederick L

    2011-06-01

    Linear regression analysis is one of the most important tools in a researcher's toolbox for creating and testing predictive models. Although linear regression analysis indicates how strongly a set of predictor variables, taken together, will predict a relevant criterion (i.e., the multiple R), the analysis cannot indicate which predictors are the most important. Although there is no definitive or unambiguous method for establishing predictor variable importance, there are several accepted methods. This article reviews those methods for establishing predictor importance and provides a program (in Excel) for implementing them (available for direct download at http://dl.dropbox.com/u/2480715/ERA.xlsm?dl=1). The program investigates all 2^p - 1 submodels and produces several indices of predictor importance. This exploratory approach to linear regression, similar to other exploratory data analysis techniques, has the potential to yield both theoretical and practical benefits.
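
    The all-subsets idea behind the tool is straightforward to sketch: enumerate every non-empty predictor subset, fit each submodel, and record a fit statistic from which importance indices can be built. The snippet below records R² only and uses simulated predictors; it is not the Excel program itself.

        # Enumerating all 2^p - 1 submodels and recording R-squared (illustrative).
        import itertools
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n, p = 200, 3
        X = rng.normal(size=(n, p))
        y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)

        results = {}
        for r in range(1, p + 1):
            for subset in itertools.combinations(range(p), r):
                cols = list(subset)
                results[subset] = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)

        for subset, r2 in sorted(results.items(), key=lambda kv: -kv[1]):
            print(subset, round(r2, 3))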

  7. JT-60 configuration parameters for feedback control determined by regression analysis

    Energy Technology Data Exchange (ETDEWEB)

    Matsukawa, Makoto; Hosogane, Nobuyuki; Ninomiya, Hiromasa (Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment)

    1991-12-01

    The stepwise regression procedure was applied to obtain measurement formulas for equilibrium parameters used in the feedback control of JT-60. This procedure automatically selects variables necessary for the measurements, and selects a set of variables which are not likely to be picked up by physical considerations. Regression equations with stable and small multicollinearity were obtained and it was experimentally confirmed that the measurement formulas obtained through this procedure were accurate enough to be applicable to the feedback control of plasma configurations in JT-60. (author).

  8. JT-60 configuration parameters for feedback control determined by regression analysis

    International Nuclear Information System (INIS)

    Matsukawa, Makoto; Hosogane, Nobuyuki; Ninomiya, Hiromasa

    1991-12-01

    The stepwise regression procedure was applied to obtain measurement formulas for equilibrium parameters used in the feedback control of JT-60. This procedure automatically selects variables necessary for the measurements, and selects a set of variables which are not likely to be picked up by physical considerations. Regression equations with stable and small multicollinearity were obtained and it was experimentally confirmed that the measurement formulas obtained through this procedure were accurate enough to be applicable to the feedback control of plasma configurations in JT-60. (author)

  9. Proposition of Regression Equations to Determine Outdoor Thermal Comfort in Tropical and Humid Environment

    Directory of Open Access Journals (Sweden)

    Sangkertadi Sangkertadi

    2012-05-01

    Full Text Available This study reports field experiments conducted to construct regression equations for the perception of thermal comfort during outdoor activities in a hot and humid environment. Relationships between thermal-comfort perceptions, microclimate variables (temperature and humidity) and body parameters (activity, clothing, body size) were observed and analyzed. A total of 180 adults, men and women, participated as respondents. This study is limited to situations where the wind velocity touching the body of the respondents is about 1 m/s. From questionnaires and field measurements, three regression equations were developed, one each for normal walking, brisk walking, and sitting.

  10. Use of a Regression Model to Study Host-Genomic Determinants of Phage Susceptibility in MRSA

    DEFF Research Database (Denmark)

    Zschach, Henrike; Larsen, Mette V; Hasman, Henrik

    2018-01-01

    strains to 12 (nine monovalent) different therapeutic phage preparations and subsequently employed linear regression models to estimate the influence of individual host gene families on resistance to phages. Specifically, we used a two-step regression model setup with a preselection step based on gene family enrichment. We show that our models are robust and capture the data's underlying signal by comparing their performance to that of models built on randomized data. In doing so, we have identified 167 gene families that govern phage resistance in our strain set and performed functional analysis on them. This revealed genes of possible prophage or mobile genetic element origin, along with genes involved in restriction-modification and transcription regulators, though the majority were genes of unknown function. This study is a step in the direction of understanding the intricate host

  11. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

    Science.gov (United States)

    Dyer, Betsey D.; Kahn, Michael J.; LeBlanc, Mark D.

    2008-01-01

    Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results. PMID:19054742

  12. A study on direct determination of uranium in ore by analyzing γ-ray spectrum with dual linear regression

    International Nuclear Information System (INIS)

    Liu Chunkui

    1996-01-01

    The method introduced is based on the different energies of the γ-rays emitted by radionuclides in the uranium-radium decay series in ore. The pulse counting rates of two spectral bands, N1 (55–193 keV) and N2 (260–1500 keV), are measured with a portable HYX-3 400-channel γ-ray spectrometer. In parallel, the uranium content (QU) is obtained by chemical analysis of channel samples. The regression coefficients (b0, b1, b2) are then determined through dual linear regression using QU, N1 and N2, and the uranium content can be determined directly from the regression equation QU = b0 + b1·N1 + b2·N2
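
    As a rough illustration of the dual linear regression described above, the sketch below fits QU = b0 + b1*N1 + b2*N2 by ordinary least squares; the count rates and uranium contents are invented placeholder values, not data from the study.

        import numpy as np

        # Hypothetical calibration data: band count rates and uranium content from chemical assay
        N1 = np.array([120.0,  95.0, 210.0,  60.0, 150.0])   # counts/s, 55-193 keV band
        N2 = np.array([ 80.0,  70.0, 160.0,  40.0, 110.0])   # counts/s, 260-1500 keV band
        QU = np.array([0.052, 0.041, 0.098, 0.025, 0.067])   # uranium content (e.g. %)

        A = np.column_stack([np.ones_like(N1), N1, N2])      # design matrix [1, N1, N2]
        (b0, b1, b2), *_ = np.linalg.lstsq(A, QU, rcond=None)
        print(f"QU = {b0:.4f} + {b1:.6f}*N1 + {b2:.6f}*N2")

        # Direct determination for a new field measurement
        print(b0 + b1 * 130.0 + b2 * 90.0)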

  13. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects.
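
    The following sketch illustrates, with invented summary statistics, the two quantities discussed in this record: an I² statistic computed from the SNP-exposure estimates (taken here as the I²GX adaptation) and an MR-Egger fit as an inverse-variance weighted regression of the SNP-outcome estimates on the SNP-exposure estimates with a free intercept; it is a simplified reading of the method, not the authors' code.

        import numpy as np

        # Hypothetical two-sample MR summary data for K = 6 variants
        bx = np.array([0.12, 0.08, 0.15, 0.05, 0.10, 0.07])   # SNP-exposure estimates
        sx = np.array([0.02, 0.02, 0.03, 0.02, 0.02, 0.02])   # their standard errors
        by = np.array([0.06, 0.03, 0.08, 0.02, 0.05, 0.04])   # SNP-outcome estimates
        sy = np.array([0.03, 0.03, 0.04, 0.03, 0.03, 0.03])   # their standard errors

        # I²GX: Cochran's Q of the SNP-exposure estimates, expressed as a proportion
        w = 1.0 / sx**2
        bx_bar = np.sum(w * bx) / np.sum(w)
        Q = np.sum(w * (bx - bx_bar)**2)
        i2_gx = max(0.0, (Q - (len(bx) - 1)) / Q)
        print("I2_GX =", round(i2_gx, 3))

        # MR-Egger: weighted regression of by on bx with an intercept (weights 1/sy^2)
        sw = 1.0 / sy
        A = np.column_stack([np.ones_like(bx), bx]) * sw[:, None]
        (intercept, slope), *_ = np.linalg.lstsq(A, by * sw, rcond=None)
        print("pleiotropy intercept =", round(intercept, 4), " causal slope =", round(slope, 3))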

  14. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
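
    A minimal sketch of the idea, assuming the usual nested-model F statistic built only from the two coefficients of determination; the R² values, sample size and numbers of predictors below are invented.

        from scipy.stats import f as f_dist

        def partial_f_test(r2_full, r2_reduced, n, p_full, q):
            """F-test for jointly dropping q predictors from a model with p_full predictors
            (plus intercept) fitted on n observations, using only the two R² values."""
            df1, df2 = q, n - p_full - 1
            f_stat = ((r2_full - r2_reduced) / df1) / ((1.0 - r2_full) / df2)
            return f_stat, df1, df2, f_dist.sf(f_stat, df1, df2)

        f_stat, df1, df2, p = partial_f_test(r2_full=0.62, r2_reduced=0.55, n=200, p_full=5, q=2)
        print(f"F({df1},{df2}) = {f_stat:.2f}, p = {p:.4f}")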

  15. Determination of boiling point of petrochemicals by gas chromatography-mass spectrometry and multivariate regression analysis of structural activity relationship.

    Science.gov (United States)

    Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A

    2014-08-01

    Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-square (PLS1) multivariate regression analysis of petrochemical structural activity relationship (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analytes BP in D3710 and MA VHP calibration gas mix samples, with a root-mean-square-%-relative-error (RMS%RE) of 6.4%, and 10.8% respectively. In contrast, the overall RMS%RE of 32.9% and 40.4%, respectively obtained for BP determination in D3710 and MA VHP using a traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using MRA, demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can be potentially used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
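
    As a rough sketch of PLS calibration of boiling points from structural descriptors, the code below uses invented descriptor and boiling-point data; it only mirrors the general workflow (fit, predict, RMS%RE), not the study's actual calibration set.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 20))                              # 60 analytes, 20 SAR descriptors
        y = X[:, :3] @ np.array([40.0, 25.0, 10.0]) + 150 + rng.normal(scale=5, size=60)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
        y_hat = pls.predict(X_te).ravel()

        rms_pct_rel_err = np.sqrt(np.mean(((y_hat - y_te) / y_te) ** 2)) * 100
        print(f"RMS%RE = {rms_pct_rel_err:.1f}%")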

  16. Prevalence and Determinants of Preterm Birth in Tehran, Iran: A Comparison between Logistic Regression and Decision Tree Methods.

    Science.gov (United States)

    Amini, Payam; Maroufizadeh, Saman; Samani, Reza Omani; Hamidi, Omid; Sepidarkish, Mahdi

    2017-06-01

    Preterm birth (PTB) is a leading cause of neonatal death and the second biggest cause of death in children under five years of age. The objective of this study was to determine the prevalence of PTB and its associated factors using logistic regression and decision tree classification methods. This cross-sectional study was conducted on 4,415 pregnant women in Tehran, Iran, from July 6-21, 2015. Data were collected by a researcher-developed questionnaire through interviews with mothers and review of their medical records. To evaluate the accuracy of the logistic regression and decision tree methods, several indices such as sensitivity, specificity, and the area under the curve were used. The PTB rate was 5.5% in this study. The logistic regression outperformed the decision tree for the classification of PTB based on risk factors. Logistic regression showed that multiple pregnancies, mothers with preeclampsia, and those who conceived with assisted reproductive technology had an increased risk for PTB (p < 0.05). These results support the use of the logistic regression model for the classification of risk groups for PTB.
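
    A minimal sketch of the model comparison described above, on simulated data with a rare binary outcome; the predictors, prevalence and tree depth are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 5))                       # stand-ins for maternal risk factors
        logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] - 2.8          # rare positive outcome (~6%)
        y = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

        for name, model in [("logistic regression", lr), ("decision tree", dt)]:
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(name, "AUC =", round(auc, 3))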

  17. Determinants of tourist arrivals in Africa: a panel data regression analysis

    OpenAIRE

    Naudé, Wim; Saayman, Andrea

    2005-01-01

    Africa’s tourism potential is acknowledged to be significant but underdeveloped. This paper uses both cross-section data and panel data for the period 1996–2000 to identify the determinants of tourism arrivals in 43 African countries, taking into account tourists’ country of origin. The results strongly suggest that political stability, tourism infrastructure, marketing and information, and the level of development at the destination are key determinants of travel to Africa. Typical ‘devel...

  18. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

    Science.gov (United States)

    Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

    2017-01-01

    Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while there were 29 articles containing 3,204 participants included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p < 0.05), retention performance (d = 0.27, 95% CI = [0.08, 0.46], p < 0.01), and transfer performance (d = 0.34, 95% CI = [0.12, 0.56], p < 0.01). The subsequent meta-regression analyses showed that d(SCL) for cueing significantly predicted d(retention) for cueing (β = -0.70, 95% CI = [-1.02, -0.38], p < 0.001), as well as d(transfer) for cueing (β = -0.60, 95% CI = [-0.92, -0.28], p < 0.001). Thus in line with CLT, adding cues in multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better retention and transfer of multimedia learning.

  19. Forsmark - System 522. Recursive linear regression for the determination of heating rate

    International Nuclear Information System (INIS)

    Carlsson, B.

    1980-01-01

    The heating rate for the reactor tank and steam tubes is limited. The heating-rate algorithm has been implemented on the computer and compared with real data from Forsmark-2. The evaluation of the data shows a considerable improvement in the determination of the derivative (the heating rate), which adds useful information during heating events. (G.B.)
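
    A minimal recursive least-squares sketch in the spirit of this record: the heating rate is the slope of a straight-line temperature model updated one sample at a time; the forgetting factor, initial covariance and temperature data are hypothetical.

        import numpy as np

        def recursive_heating_rate(times, temps, lam=0.98):
            """Recursive least squares for T(t) = a + b*t; returns the running estimate of b."""
            theta = np.zeros(2)                    # [a, b]
            P = np.eye(2) * 1e4                    # large initial covariance
            rates = []
            for t, T in zip(times, temps):
                x = np.array([1.0, t])
                k = P @ x / (lam + x @ P @ x)      # gain vector
                theta = theta + k * (T - x @ theta)
                P = (P - np.outer(k, x @ P)) / lam
                rates.append(theta[1])
            return np.array(rates)

        t = np.arange(0.0, 2.0, 0.1)                                           # hours
        T = 20.0 + 30.0 * t + np.random.default_rng(3).normal(scale=0.3, size=t.size)
        print(recursive_heating_rate(t, T)[-1])    # should approach 30 degrees per hour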

  20. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
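
    The sketch below illustrates one plausible reading of the idea, with hypothetical data: fit a support vector regression and reuse its support vectors as candidate kernel centers for a subsequent kernel model; the kernel settings are arbitrary.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)
        X = np.sort(rng.uniform(-3, 3, size=120)).reshape(-1, 1)    # 1-D toy approximation task
        y = np.sin(X).ravel() + rng.normal(scale=0.05, size=120)

        svr = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=1.0).fit(X, y)
        centers = svr.support_vectors_        # candidate kernel centers for an RBF-type model
        print(f"{len(centers)} support vectors selected out of {len(X)} samples")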

  1. Determinants of unmet need for family planning in rural Burkina Faso: a multilevel logistic regression analysis.

    Science.gov (United States)

    Wulifan, Joseph K; Jahn, Albrecht; Hien, Hervé; Ilboudo, Patrick Christian; Meda, Nicolas; Robyn, Paul Jacob; Saidou Hamadou, T; Haidara, Ousmane; De Allegri, Manuela

    2017-12-19

    Unmet need for family planning has implications for women and their families, such as unsafe abortion, physical abuse, and poor maternal health. Contraceptive knowledge has increased across low-income settings, yet unmet need remains high with little information on the factors explaining it. This study assessed factors associated with unmet need among pregnant women in rural Burkina Faso. We collected data on pregnant women through a population-based survey conducted in 24 rural districts between October 2013 and March 2014. Multivariate multilevel logistic regression was used to assess the association between unmet need for family planning and a selection of relevant demand- and supply-side factors. Of the 1309 pregnant women covered in the survey, 239 (18.26%) reported experiencing unmet need for family planning. Pregnant women with more than three living children [OR = 1.80; 95% CI (1.11-2.91)], those with a child younger than 1 year [OR = 1.75; 95% CI (1.04-2.97)], pregnant women whose partner disapproves of contraceptive use [OR = 1.51; 95% CI (1.03-2.21)] and women who desired fewer children than their partner's preferred number of children [OR = 1.907; 95% CI (1.361-2.672)] were significantly more likely to experience unmet need for family planning, while health staff training in family planning logistics management [OR = 0.46; 95% CI (0.24-0.73)] was associated with a lower probability of experiencing unmet need for family planning. Findings suggest the need to strengthen family planning interventions in Burkina Faso to ensure greater uptake of contraceptive use and thus reduce unmet need for family planning.

  2. Predictors of success of external cephalic version and cephalic presentation at birth among 1253 women with non-cephalic presentation using logistic regression and classification tree analyses.

    Science.gov (United States)

    Hutton, Eileen K; Simioni, Julia C; Thabane, Lehana

    2017-08-01

    Among women with a fetus with a non-cephalic presentation, external cephalic version (ECV) has been shown to reduce the rate of breech presentation at birth and cesarean birth. Compared with ECV at term, beginning ECV prior to 37 weeks' gestation decreases the number of infants in a non-cephalic presentation at birth. The purpose of this secondary analysis was to investigate factors associated with a successful ECV procedure and to present this in a clinically useful format. Data were collected as part of the Early ECV Pilot and Early ECV2 Trials, which randomized 1776 women with a fetus in breech presentation to either early ECV (34-36 weeks' gestation) or delayed ECV (at or after 37 weeks). The outcome of interest was successful ECV, defined as the fetus being in a cephalic presentation immediately following the procedure, as well as at the time of birth. The importance of several factors in predicting successful ECV was investigated using two statistical methods: logistic regression and classification and regression tree (CART) analyses. Among nulliparas, non-engagement of the presenting part and an easily palpable fetal head were independently associated with success. Among multiparas, non-engagement of the presenting part, gestation less than 37 weeks and an easily palpable fetal head were found to be independent predictors of success. These findings were consistent with results of the CART analyses. Regardless of parity, descent of the presenting part was the most discriminating factor in predicting successful ECV and cephalic presentation at birth. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  3. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    Science.gov (United States)

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A

    2013-01-01

    Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.
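
    As a simplified sketch of regression at the 35% quantile used in this record (plain linear quantile regression rather than the additive, spatial version described above), with an invented determinant and Z-score data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        x = rng.uniform(15, 35, size=500)                               # e.g. maternal BMI (hypothetical)
        z = -2.0 + 0.05 * x + rng.normal(size=500) * (0.5 + 0.02 * x)   # height-for-age Z-score

        X = sm.add_constant(x)
        fit_35 = sm.QuantReg(z, X).fit(q=0.35)                          # 35% quantile, as in the study
        fit_50 = sm.QuantReg(z, X).fit(q=0.50)
        print(fit_35.params, fit_50.params)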

  4. Determination of carbohydrates present in Saccharomyces cerevisiae using mid-infrared spectroscopy and partial least squares regression

    OpenAIRE

    Plata, Maria R.; Koch, Cosima; Wechselberger, Patrick; Herwig, Christoph; Lendl, Bernhard

    2013-01-01

    A fast and simple method to control variations in carbohydrate composition of Saccharomyces cerevisiae, baker's yeast, during fermentation was developed using mid-infrared (mid-IR) spectroscopy. The method allows for precise and accurate determinations with minimal or no sample preparation and reagent consumption based on mid-IR spectra and partial least squares (PLS) regression. The PLS models were developed employing the results from reference analysis of the yeast cells. The reference anal...

  5. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    Directory of Open Access Journals (Sweden)

    Nora Fenske

    Full Text Available BACKGROUND: Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. OBJECTIVE: We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. DESIGN: Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. RESULTS: At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. CONCLUSIONS: Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.

  6. Determination of Selection Index of Cocoa (Theobroma Cacao L.) Yield Traits Using Regression Methods

    OpenAIRE

    Setyawan, Bayu; Taryono; Mitrowihardjo, Suyadi

    2016-01-01

    Increasing chocolate consumption has not been matched by growing production of dry cocoa beans. To support an increase in cocoa production, high-yielding planting materials are needed. The objective of this research was to determine the components of cocoa traits affecting the weight of dry cocoa beans and to set a selection index for superior cocoa trees. The experimental material consisted of four cocoa hybrid populations whose family ancestry was unknown, planted on Sam...

  7. Regression approach to non-invasive determination of bilirubin in neonatal blood

    Science.gov (United States)

    Lysenko, S. A.; Kugeiko, M. M.

    2012-07-01

    A statistical ensemble of structural and biophysical parameters of neonatal skin was modeled based on experimental data. Diffuse scattering coefficients of the skin in the visible and infrared regions were calculated by applying a Monte-Carlo method to each realization of the ensemble. The potential accuracy of recovering the bilirubin concentration in dermis (which correlates closely with that in blood) was estimated from spatially resolved spectrometric measurements of diffuse scattering. The possibility to determine noninvasively the bilirubin concentration was shown by measurements of diffuse scattering at λ = 460, 500, and 660 nm at three source-detector separations under conditions of total variability of the skin biophysical parameters.

  8. THE DETERMINATION OF BETA COEFFICIENTS OF PUBLICLY-HELD COMPANIES BY A REGRESSION MODEL AND AN APPLICATION ON PRIVATE FIRMS

    Directory of Open Access Journals (Sweden)

    METİN KAMİL ERCAN

    2013-06-01

    Full Text Available It is possible to determine the value of private companies by means of suggestions and assumptions derived from their financial statements. However, a serious problem arises in determining the equity costs of these private companies with the Capital Asset Pricing Model (CAPM), because their beta coefficients are unknown or unavailable. In this study, a regression model that represents the relationship between the beta coefficients of publicly-held companies and their financial statement variables is first developed. This model is then tested and applied to private companies.
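
    A rough two-step sketch of the approach described above, with invented returns and balance-sheet figures: estimate market-model betas of listed firms by ordinary least squares, regress those betas on a financial-statement variable, then read off a proxy beta for a private firm.

        import numpy as np

        rng = np.random.default_rng(6)
        T, n_firms = 250, 6
        r_m = rng.normal(0.0004, 0.01, size=T)                      # market returns
        true_beta = np.array([0.7, 0.9, 1.0, 1.1, 1.3, 1.5])
        r_i = true_beta[:, None] * r_m + rng.normal(0, 0.015, size=(n_firms, T))

        # Step 1: market-model betas of the publicly-held firms
        X = np.column_stack([np.ones(T), r_m])
        betas = np.array([np.linalg.lstsq(X, r, rcond=None)[0][1] for r in r_i])

        # Step 2: relate betas to a balance-sheet variable (hypothetical leverage ratio)
        leverage = np.array([0.2, 0.3, 0.35, 0.4, 0.55, 0.6])
        A = np.column_stack([np.ones(n_firms), leverage])
        (c0, c1), *_ = np.linalg.lstsq(A, betas, rcond=None)

        # Step 3: proxy beta for a private firm from its leverage alone
        print("estimated beta of private firm:", round(c0 + c1 * 0.45, 2))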

  9. The Spatial Distribution of Hepatitis C Virus Infections and Associated Determinants--An Application of a Geographically Weighted Poisson Regression for Evidence-Based Screening Interventions in Hotspots.

    Science.gov (United States)

    Kauhl, Boris; Heil, Jeanne; Hoebe, Christian J P A; Schweikart, Jürgen; Krafft, Thomas; Dukers-Muijrers, Nicole H T M

    2015-01-01

    Hepatitis C Virus (HCV) infections are a major cause for liver diseases. A large proportion of these infections remain hidden to care due to its mostly asymptomatic nature. Population-based screening and screening targeted on behavioural risk groups had not proven to be effective in revealing these hidden infections. Therefore, more practically applicable approaches to target screenings are necessary. Geographic Information Systems (GIS) and spatial epidemiological methods may provide a more feasible basis for screening interventions through the identification of hotspots as well as demographic and socio-economic determinants. Analysed data included all HCV tests (n = 23,800) performed in the southern area of the Netherlands between 2002-2008. HCV positivity was defined as a positive immunoblot or polymerase chain reaction test. Population data were matched to the geocoded HCV test data. The spatial scan statistic was applied to detect areas with elevated HCV risk. We applied global regression models to determine associations between population-based determinants and HCV risk. Geographically weighted Poisson regression models were then constructed to determine local differences of the association between HCV risk and population-based determinants. HCV prevalence varied geographically and clustered in urban areas. The main population at risk were middle-aged males, non-western immigrants and divorced persons. Socio-economic determinants consisted of one-person households, persons with low income and mean property value. However, the association between HCV risk and demographic as well as socio-economic determinants displayed strong regional and intra-urban differences. The detection of local hotspots in our study may serve as a basis for prioritization of areas for future targeted interventions. Demographic and socio-economic determinants associated with HCV risk show regional differences underlining that a one-size-fits-all approach even within small geographic
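
    Before fitting a geographically weighted model, a global Poisson regression of area-level counts on population-based determinants is the usual starting point; the sketch below shows only that global step, with simulated areas and made-up covariates.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 150
        pct_male_4060 = rng.uniform(0.1, 0.3, size=n)        # share of middle-aged males
        pct_one_person = rng.uniform(0.2, 0.5, size=n)       # share of one-person households
        population = rng.integers(2000, 20000, size=n)
        rate = np.exp(-7.5 + 3.0 * pct_male_4060 + 2.0 * pct_one_person)
        cases = rng.poisson(rate * population)               # simulated positive-test counts

        X = sm.add_constant(np.column_stack([pct_male_4060, pct_one_person]))
        model = sm.GLM(cases, X, family=sm.families.Poisson(), exposure=population).fit()
        print(model.params)     # log relative risks per unit change in each determinant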

  10. Regression Analysis

    CERN Document Server

    Freund, Rudolf J; Sa, Ping

    2006-01-01

    The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analyses to be performed in a given situation, and to have some appreciation of what constitutes good experimental design.

  11. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076), at ≤120 μg bisphenol-A. In contrast, only bisphenol-A-treated males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004), at ≤100 μg/day. A significant increment of aggressiveness was observed in both sexes (β = 67.9, C.I. 3.4/172.5, P = 0.038), at >4.0 μg. Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P < 0.05). Overall, low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  12. Exploring reasons for the observed inconsistent trial reports on intra-articular injections with hyaluronic acid in the treatment of osteoarthritis: Meta-regression analyses of randomized trials.

    Science.gov (United States)

    Johansen, Mette; Bahrt, Henriette; Altman, Roy D; Bartels, Else M; Juhl, Carsten B; Bliddal, Henning; Lund, Hans; Christensen, Robin

    2016-08-01

    The aim was to identify factors explaining inconsistent observations concerning the efficacy of intra-articular hyaluronic acid compared to intra-articular sham/control, or non-intervention control, in patients with symptomatic osteoarthritis, based on randomized clinical trials (RCTs). A systematic review and meta-regression analyses of available randomized trials were conducted. The outcome, pain, was assessed according to a pre-specified hierarchy of potentially available outcomes. Hedges's standardized mean difference [SMD (95% CI)] served as effect size. REstricted Maximum Likelihood (REML) mixed-effects models were used to combine study results, and heterogeneity was calculated and interpreted as Tau-squared and I-squared, respectively. Overall, 99 studies (14,804 patients) met the inclusion criteria; of these, only 71 studies (72%), including 85 comparisons (11,216 patients), had adequate data available for inclusion in the primary meta-analysis. Overall, compared with placebo, intra-articular hyaluronic acid reduced pain with an effect size of -0.39 [-0.47 to -0.31; P < 0.001] in favour of intra-articular hyaluronic acid. Based on available trial data, intra-articular hyaluronic acid showed a better effect than intra-articular saline on pain reduction in osteoarthritis. Publication bias and the risk of selective outcome reporting suggest only a small clinical effect compared to saline. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  14. Simultaneous determination of estrogens (ethinylestradiol and norgestimate) concentrations in human and bovine serum albumin by use of fluorescence spectroscopy and multivariate regression analysis.

    Science.gov (United States)

    Hordge, LaQuana N; McDaniel, Kiara L; Jones, Derick D; Fakayode, Sayo O

    2016-05-15

    The endocrine disruption property of estrogens necessitates the immediate need for effective monitoring and development of analytical protocols for their analyses in biological and human specimens. This study explores the first combined utility of a steady-state fluorescence spectroscopy and multivariate partial-least-square (PLS) regression analysis for the simultaneous determination of two estrogens (17α-ethinylestradiol (EE) and norgestimate (NOR)) concentrations in bovine serum albumin (BSA) and human serum albumin (HSA) samples. The influence of EE and NOR concentrations and temperature on the emission spectra of EE-HSA, EE-BSA, NOR-HSA, and NOR-BSA complexes was also investigated. The binding of EE with HSA and BSA resulted in an increase in the emission characteristics of HSA and BSA and a significant blue spectral shift. In contrast, the interaction of NOR with HSA and BSA quenched the emission characteristics of HSA and BSA. The observed emission spectral shifts preclude the effective use of traditional univariate regression analysis of fluorescent data for the determination of EE and NOR concentrations in HSA and BSA samples. Multivariate partial-least-squares (PLS) regression analysis was utilized to correlate the changes in emission spectra with EE and NOR concentrations in HSA and BSA samples. The figures-of-merit of the developed PLS regression models were excellent, with limits of detection as low as 1.6×10⁻⁸ M for EE and 2.4×10⁻⁷ M for NOR and good linearity (R² > 0.994985). The PLS models correctly predicted EE and NOR concentrations in independent validation HSA and BSA samples with a root-mean-square-percent-relative-error (RMS%RE) of less than 6.0% at physiological condition. On the contrary, the use of univariate regression resulted in poor predictions of EE and NOR in HSA and BSA samples, with RMS%RE larger than 40% at physiological conditions. High accuracy, low sensitivity, simplicity, low-cost with no prior analyte extraction or separation

  15. Effective Surfactants Blend Concentration Determination for O/W Emulsion Stabilization by Two Nonionic Surfactants by Simple Linear Regression.

    Science.gov (United States)

    Hassan, A K

    2015-01-01

    In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants. The two surfactants, Tween 80 (HLB = 15.0) and Span 80 (HLB = 4.3), were used in a fixed proportion of 0.55:0.45, respectively, so that the HLB value of the surfactant blends was fixed at 10.185. The surfactant blend concentration ranged from 3% up to 19%. For each O/W emulsion set the conductivity was measured at room temperature (25±2°), 40, 50, 60, 70 and 80°. Applying simple linear least-squares regression to the temperature-conductivity data determines the effective surfactant blend concentration required for preparing the most stable O/W emulsion. These results were confirmed by centrifugation testing of physical stability and by phase inversion temperature range measurements. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity, and this relationship is linear up to 80°. This work shows that the most stable O/W emulsion is identified via the maximum R² value obtained when simple linear least-squares regression is applied to the temperature-conductivity data up to 80°; in addition, the true maximum slope is given by the equation with the maximum R² value. Because the conditions would change in a more complex formulation, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation of 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
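
    A small sketch of the R²-selection criterion described above: fit a straight line to each temperature-conductivity series and keep the blend concentration with the largest R²; the conductivity readings are invented.

        import numpy as np
        from scipy.stats import linregress

        temps = np.array([25, 40, 50, 60, 70, 80], dtype=float)          # degrees C

        # Hypothetical conductivity readings (arbitrary units) per surfactant blend concentration
        emulsions = {
            "3%":  np.array([1.10, 1.31, 1.42, 1.58, 1.66, 1.90]),
            "11%": np.array([1.05, 1.35, 1.55, 1.75, 1.95, 2.15]),
            "19%": np.array([1.20, 1.45, 1.50, 1.85, 1.80, 2.30]),
        }

        best = max(emulsions, key=lambda c: linregress(temps, emulsions[c]).rvalue ** 2)
        for conc, cond in emulsions.items():
            res = linregress(temps, cond)
            print(f"{conc}: slope = {res.slope:.4f}, R^2 = {res.rvalue**2:.4f}")
        print("most stable blend concentration by the R^2 criterion:", best)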

  16. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

    Science.gov (United States)

    Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

    2017-03-28

    This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥ 12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥ 40 years, mean BMI ≥ 30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤6 months) and long term (≥12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour predicted the effect at short and long term. The total number of BCTs in both intervention arms and using the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy supportive and person-centred methods as in Motivational Interviewing, the BCTs goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between study variations in effects at long term. There are similarities, but also differences in effective BCTs promoting change in healthy eating and physical activity and

  17. Determination of regression functions for the charging and discharging processes of valve regulated lead-acid batteries

    Directory of Open Access Journals (Sweden)

    Vukić Vladimir Đ.

    2012-01-01

    Full Text Available Following a deep discharge of AGM SVT 300 valve-regulated lead-acid batteries at the ten-hour discharge current, the batteries were charged with a variable current. On the basis of the obtained results, exponential and polynomial functions for approximating the specified processes were analyzed. The main criterion for the quality of the implemented approximations was the adjusted coefficient of determination R². It was observed that the battery discharge process can be successfully approximated with both an exponential and a second-order polynomial function; in all the cases analyzed, the values of the adjusted coefficient of determination were greater than 0.995. The charging process of the deeply discharged batteries was successfully approximated with the exponential function, the measured values of the adjusted coefficient of determination being nearly 0.95. Despite the high values of the adjusted coefficient of determination, polynomial approximations of the second and third order did not provide satisfactory results for interpolating the battery charging characteristics. A possibility for practical implementation of the obtained regression functions in uninterruptible power supply systems is described.
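
    A minimal nonlinear-regression sketch in the same spirit: fit an exponential voltage model to a made-up discharge record and report the adjusted coefficient of determination; the model form and data are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def vexp(t, a, b, tau):
            """Exponential model for cell voltage versus time."""
            return a + b * np.exp(-t / tau)

        t = np.linspace(0, 10, 40)                                       # hours
        v = 1.95 + 0.20 * np.exp(-t / 3.0) + np.random.default_rng(8).normal(0, 0.005, t.size)

        popt, _ = curve_fit(vexp, t, v, p0=(2.0, 0.1, 1.0))
        resid = v - vexp(t, *popt)
        ss_res, ss_tot = np.sum(resid**2), np.sum((v - v.mean())**2)
        n, k = t.size, len(popt)
        r2 = 1 - ss_res / ss_tot
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
        print("adjusted R^2 =", round(r2_adj, 4))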

  18. The application of artificial neural networks and support vector regression for simultaneous spectrophotometric determination of commercial eye drop contents

    Science.gov (United States)

    Valizadeh, Maryam; Sohrabi, Mahmoud Reza

    2018-03-01

    In the present study, artificial neural networks (ANNs) and support vector regression (SVR) were used as intelligent methods, coupled with UV spectroscopy, for the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed to validate the proposed methods. At first, a neural network time series, one type of artificial neural network, was employed and its efficiency was evaluated. Afterwards, a radial basis network was applied as another neural network; the results showed that the performance of this method is suitable for prediction. Finally, support vector regression was proposed to construct the Zilomole prediction model. Root mean square error (RMSE) and mean recovery (%) were also calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them. The effect of interferences was also investigated in spiked solutions.

  19. Conformational determination of [Leu]enkephalin based on theoretical and experimental VA and VCD spectral analyses

    DEFF Research Database (Denmark)

    Abdali, Salim; Jalkanen, Karl J.; Cao, X.

    2004-01-01

    Conformational determination of [Leu]enkephalin in DMSO-d6 is carried out using VA and VCD spectral analyses. Conformational energies, vibrational frequencies and VA and VCD intensities are calculated using DFT at B3LYP/6-31G* level of theory. Comparison between the measured spectra...

  20. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    Science.gov (United States)

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.

  1. Comparison of FTIR-ATR and Raman spectroscopy in determination of VLDL triglycerides in blood serum with PLS regression

    Science.gov (United States)

    Oleszko, Adam; Hartwich, Jadwiga; Wójtowicz, Anna; Gąsior-Głogowska, Marlena; Huras, Hubert; Komorowska, Małgorzata

    2017-08-01

    Hypertriglyceridemia, related to a plasma triglyceride (TG) level above 1.7 mmol/L, is one of the cardiovascular risk factors. Very low density lipoproteins (VLDL) are the main TG carriers. Despite being time consuming and demanding well-qualified staff and expensive instrumentation, the ultracentrifugation technique still remains the gold standard for VLDL isolation. Therefore, a faster and simpler method of VLDL-TG determination is needed. Vibrational spectroscopy, including FT-IR and Raman, is a widely used technique in lipid and protein research. The aim of this study was to assess Raman and FT-IR spectroscopy for the determination of VLDL-TG directly in serum, with the isolation step omitted. TG concentrations in serum and in ultracentrifuged VLDL fractions from 32 patients were measured with a reference colorimetric method. FT-IR and Raman spectra of VLDL and serum samples were acquired. Partial least squares (PLS) regression was used for calibration, with leave-one-out cross-validation. Our results confirmed the possibility of reagent-free determination of VLDL-TG directly in serum with both Raman and FT-IR spectroscopy. Quantitative VLDL testing by FT-IR and/or Raman spectroscopy applied directly to maternal serum seems to be a promising screening test to identify women with an increased risk of adverse pregnancy outcomes and a patient-friendly method of choice based on ease of performance, accuracy and efficiency.

  2. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  3. Personal, social, and game-related correlates of active and non-active gaming among dutch gaming adolescents: survey-based multivariable, multilevel logistic regression analyses.

    Science.gov (United States)

    Simons, Monique; de Vet, Emely; Chinapaw, Mai Jm; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

    2014-04-04

    Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games (active games) seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8; P<.001), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1; P<.001), and a lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3; P<.001), and having friends who spend >7 h/wk on non-active gaming. Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a correlate of both active and non-active gaming

  4. Personal, Social, and Game-Related Correlates of Active and Non-Active Gaming Among Dutch Gaming Adolescents: Survey-Based Multivariable, Multilevel Logistic Regression Analyses

    Science.gov (United States)

    de Vet, Emely; Chinapaw, Mai JM; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

    2014-01-01

    Background: Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games—active games—seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. Objective: The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. Methods: A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Results: Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8; P<.001), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1; P<.001), and a lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035), a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3; P<.001), and having friends who spend >7 h/wk on non-active gaming. Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a

  5. Determination of DPPH Radical Oxidation Caused by Methanolic Extracts of Some Microalgal Species by Linear Regression Analysis of Spectrophotometric Measurements

    Directory of Open Access Journals (Sweden)

    Ulf-Peter Hansen

    2007-10-01

    Full Text Available The demonstrated modified spectrophotometric method makes use of the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical and its specific absorbance properties. The absorbance decreases when the radical is reduced by antioxidants. In contrast to other investigations, the absorbance was measured at a wavelength of 550 nm. This wavelength enabled the measurement of the stable free DPPH radical without interference from microalgal pigments. This approach was applied to methanolic microalgae extracts for two different DPPH concentrations. The changes in absorbance measured versus the concentration of the methanolic extract resulted in curves with a linear decrease ending in a saturation region. Linear regression analysis of the linear part of DPPH reduction versus extract concentration enabled the determination of the antioxidative potentials of the microalgae's methanolic extracts, which were independent of the employed DPPH concentrations. The resulting slopes showed significant differences (6-34 μmol DPPH g⁻¹ extract concentration) between the individual microalgal species (Anabaena sp., Isochrysis galbana, Phaeodactylum tricornutum, Porphyridium purpureum, Synechocystis sp. PCC6803) in their ability to reduce the DPPH radical. The independence of the signal from the DPPH concentration is a valuable advantage over the determination of the EC50 value.

  6. Comparison of two regression-based approaches for determining nutrient and sediment fluxes and trends in the Chesapeake Bay watershed

    Science.gov (United States)

    Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth

    2012-01-01

    Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent
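
    The sketch below fits a simple ESTIMATOR-like log-linear model (log concentration regressed on log discharge, a time trend and seasonal terms) to invented daily data and converts the fitted concentrations to a daily flux; it omits the retransformation bias correction and the WRTDS weighting, so it is only a schematic of the model form discussed here.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 600
        t = 2000 + np.arange(n) / 365.25                         # decimal years
        Q = np.exp(rng.normal(3.0, 0.6, size=n))                 # discharge, m^3/s (synthetic)
        lnc = (0.5 + 0.4 * np.log(Q) - 0.02 * (t - 2000)
               + 0.3 * np.sin(2 * np.pi * t) + 0.1 * np.cos(2 * np.pi * t)
               + rng.normal(0, 0.2, size=n))                     # ln concentration, mg/L

        X = np.column_stack([np.ones(n), np.log(Q), t - 2000,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(X, lnc, rcond=None)
        print(coef)                  # intercept, ln(Q), trend and seasonal coefficients

        # Daily flux from back-transformed concentration (bias correction omitted): kg/day
        flux = np.exp(X @ coef) * Q * 86400 / 1000
        print(flux[:3])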

  7. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

    Science.gov (United States)

    Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

    2015-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.

  8. Determination of the cutting forces regression functions for milling machining of the X105CrMo17 material

    Science.gov (United States)

    Popovici, T. D.; Dijmărescu, M. R.

    2017-08-01

    The aim of the research presented in this paper is to determine a cutting force prediction model for milling of the X105CrMo17 stainless steel. The analysed material is a martensitic stainless steel which, due to its high carbon (∼1%) and chromium (∼17%) content, has high hardness and good corrosion resistance. This material is used for steel structural parts subject to wear in corrosive environments, for making valve seats, bearings, various types of cutters, high-hardness bushings, casting shells and nozzles, measuring instruments, etc. The paper is structured into three main parts in accordance with the considered research programme; they are preceded by an introduction and followed by relevant conclusions. In the first part, for a more detailed knowledge of the material characteristics, a qualitative and quantitative X-ray micro-analysis and a spectral analysis were performed. The second part presents the physical experiment in terms of its inputs, the necessary means, the process and the registration of the experimental data. In the third part, the experimental data are analysed and the cutting force model is developed in terms of the cutting regime parameters, such as cutting speed, feed rate, axial depth and radial depth.

  9. Multivariate regression analysis for determining short-term values of radon and its decay products from filter measurements

    International Nuclear Information System (INIS)

    Kraut, W.; Schwarz, W.; Wilhelm, A.

    1994-01-01

    A multivariate regression analysis is applied to decay measurements of α- and β-filter activity. Activity concentrations for Po-218, Pb-214 and Bi-214, and the Rn-222 equilibrium equivalent concentration, are obtained explicitly. The regression analysis properly takes into account the variances of the measured count rates and their influence on the resulting activity concentrations. (orig.) [de]
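
    The core of such an analysis is a weighted least-squares fit in which the weights come from the variances of the measured count rates. The generic sketch below assumes the decay/response matrix relating nuclide activity concentrations to filter count rates has already been computed from the sampling and counting schedule (it is not derived here), and simply shows the weighted solution and its covariance.

```python
# Generic weighted least-squares sketch: measured filter count rates are
# modelled as a linear combination of the unknown activity concentrations of
# Po-218, Pb-214 and Bi-214. The response matrix A is assumed to be
# precomputed from the sampling and counting schedule.
import numpy as np

def weighted_activity_fit(A, count_rates, count_rate_vars):
    """Solve min ||W^(1/2) (A x - y)||^2 and propagate count-rate variances.

    A : (m, 3) response matrix, counts/s per Bq/m^3 for each nuclide
    count_rates : (m,) measured alpha/beta count rates
    count_rate_vars : (m,) variances of the measured count rates
    Returns the activity concentrations and their covariance matrix.
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(count_rates, dtype=float)
    W = np.diag(1.0 / np.asarray(count_rate_vars, dtype=float))
    cov = np.linalg.inv(A.T @ W @ A)      # covariance of the estimates
    x = cov @ A.T @ W @ y                  # weighted least-squares solution
    return x, cov
```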

  10. An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid.

    Science.gov (United States)

    van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I

    2002-09-01

    An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
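
    The calibration arithmetic implied above (peak width linear in the logarithm of acid concentration, then inverted to read back unknown samples) can be sketched as follows; the standard concentrations and peak widths are made-up illustrative numbers, not the published calibration data.

```python
# Sketch of a peak-width vs. log-concentration calibration and its inversion.
# All data values are illustrative.
import numpy as np

conc = np.array([1, 2, 3, 5, 7, 9], dtype=float)              # g/100 mL standards
peak_width = np.array([12.1, 14.9, 16.6, 18.8, 20.3, 21.4])   # s (illustrative)

slope, intercept = np.polyfit(np.log10(conc), peak_width, 1)

def acid_concentration(width):
    """Invert the calibration line: width = slope*log10(c) + intercept."""
    return 10 ** ((width - intercept) / slope)

print(round(acid_concentration(17.5), 2), "g/100 mL")
```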

  11. Partial Least Squares Regression for Determining the Control Factors for Runoff and Suspended Sediment Yield during Rainfall Events

    Directory of Open Access Journals (Sweden)

    Nufang Fang

    2015-07-01

    Full Text Available Multivariate statistics are commonly used to identify the factors that control the dynamics of runoff or sediment yields during hydrological processes. However, one issue with the use of conventional statistical methods to address relationships between variables and runoff or sediment yield is multicollinearity. The main objectives of this study were to apply a method for effectively identifying runoff and sediment control factors during hydrological processes and apply that method to a case study. The method combines the clustering approach and partial least squares regression (PLSR) models. The case study was conducted in a mountainous watershed in the Three Gorges Area. A total of 29 flood events in three hydrological years in areas with different land uses were obtained. In total, fourteen related variables were separated from hydrographs using the classical hydrograph separation method. Twenty-nine rainfall events were classified into two rainfall regimes (heavy Rainfall Regime I and moderate Rainfall Regime II) based on rainfall characteristics and K-means clustering. Four separate PLSR models were constructed to identify the main variables that control runoff and sediment yield for the two rainfall regimes. For Rainfall Regime I, the dominant first-order factors affecting the changes in sediment yield in our study were all of the four rainfall-related variables, flood peak discharge, maximum flood suspended sediment concentration, runoff, and the percentages of forest and farmland. For Rainfall Regime II, antecedent condition-related variables have more effects on both runoff and sediment yield than in Rainfall Regime I. The results suggest that the different control factors of the two rainfall regimes are determined by the rainfall characteristics and thus different runoff mechanisms.
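
    A minimal sketch of this clustering-plus-PLSR workflow with scikit-learn is given below. The synthetic rainfall features, predictors, and sediment yields are placeholders for the study's hydrograph-derived variables, and the number of PLS components is an arbitrary choice.

```python
# Minimal sketch of the K-means + PLSR workflow with scikit-learn.
# Variable names and synthetic data are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
rain_features = rng.normal(size=(29, 4))     # e.g. rainfall amount, duration, intensity
predictors = rng.normal(size=(29, 10))       # event and antecedent-condition variables
sediment_yield = predictors @ rng.normal(size=10) + rng.normal(scale=0.5, size=29)

# Step 1: split events into two rainfall regimes with K-means clustering.
regimes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(rain_features)

# Step 2: fit a separate PLSR model per regime; the coefficients indicate the
# dominant control factors for that regime.
for regime in (0, 1):
    mask = regimes == regime
    pls = PLSRegression(n_components=2).fit(predictors[mask], sediment_yield[mask])
    print(f"regime {regime}: coefficients {pls.coef_.ravel().round(2)}")
```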

  12. Application of annual ring analyses to the determination of smoke damage. II. Contribution to the evaluation of annual ring analyses

    Energy Technology Data Exchange (ETDEWEB)

    Vins, B

    1962-01-01

    The most emission-endangered forested areas of Czechoslovakia are Krusne Hory and Decinsky Sneznik. The condition of the forest cover today is such that not only the productivity of the forests but also their hydrological and ecological functions are in jeopardy. Measurements by annual ring analysis were made on trees at 102 selected experimental sites to determine the growth gain decrease due to air pollution. The decrease in growth gains was first noted in 1952. Up to 1958 this decrease was differentiated according to the degree of damage caused in different areas. Thus, forests in the Chomutov area exhibited a 30% damage while forests in the Most and Teplice areas exhibited a 90% damage. In the Decinsky Sneznik area the growth gain drop was already noted in 1947 and from then on the annual growth impairment was about 15%. From 1953 on, this area suffered growth gain damage of the same magnitude as the Krusne Hory area. During 1954-1958, the growth gain decreases in four areas exposed to pollution of different severity were 23, 37, 40, and 70% respectively, compared to normal growth gains of trees grown in an unpolluted environment.

  13. Novel HPTLC and UV-AUC analyses: For simple, economical, and rapid determination of Zileuton racemate

    Directory of Open Access Journals (Sweden)

    Saurabh B. Ganorkar

    2017-03-01

    Full Text Available Novel, simple, rapid and reliable High-Performance Thin-Layer Chromatographic (HPTLC) and UV-spectroscopic area under the curve (UV-AUC) methods were developed and validated for the analysis of zileuton racemate in bulk and in an in-house tablet formulation. HPTLC quantitation of zileuton was done by UV detection at 260 nm, and analysis was performed on (20 × 10 cm) aluminium sheets precoated with silica gel 60-F254 (E. Merck) as the stationary phase and toluene–methanol–glacial acetic acid (3.5:1.5:0.1 v/v) as the mobile phase. Quantitation by the HPTLC method was performed over the concentration range of 200–1200 ng/band. The HPTLC method resulted in a compact and well-resolved band for zileuton at a retention factor (Rf) of 0.51 ± 0.02. Linear regression analysis of the HPTLC calibration data showed a good linear relationship, with r² = 0.997. The UV-AUC method was developed using sodium lauryl sulphate (0.05 M) as a hydrotropic agent to enhance water solubility, and the area was determined over the wavelength range 248.40–271.0 nm. The correlation coefficient for the UV-AUC analysis was found to be r² = 0.999. The developed UV-AUC method showed a good linear relationship for zileuton racemate in the concentration range of 2–12 μg/mL. Both developed methods were validated for precision, robustness, ruggedness, accuracy and sensitivity as per the guidelines laid down by the International Conference on Harmonisation (ICH). Statistical analysis proved that the developed methods were precise, robust, sensitive and accurate and can be used effectively for the analysis of zileuton in bulk and pharmaceutical formulations.

  14. Comparison of autoregressive (AR) strategy with that of regression approach for determining ozone layer depletion as a physical process

    International Nuclear Information System (INIS)

    Yousufzai, M.A.K; Aansari, M.R.K.; Quamar, J.; Iqbal, J.; Hussain, M.A.

    2010-01-01

    This communication presents the development of a comprehensive characterization of the ozone layer depletion (OLD) phenomenon as a physical process, in the form of mathematical models that comprise usual regression, multiple or polynomial regression and a stochastic strategy. The relevance of these models has been illustrated using predicted values of different parameters under a changing environment. The information obtained from such analysis can be employed to alter the possible factors and variables to achieve optimum performance. This kind of analysis initiates a study towards formulating the phenomenon of OLD as a physical process with special reference to the stratospheric region of Pakistan. The data presented here establish that autoregressive (AR) modeling of OLD as a physical process is a more appropriate scenario than the usual regression. Data reported in the literature suggest quantitatively that OLD is occurring in our region. For this purpose we have modeled this phenomenon using the data recorded at the Geophysical Centre Quetta during the period 1960-1999. The predictions made by this analysis are useful for public, private and other relevant organizations. (author)
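
    The contrast between the two strategies can be sketched on a synthetic annual ozone series: an ordinary least-squares trend versus an autoregressive model fitted with statsmodels. The data, lag order, and trend specification below are assumptions for illustration, not the Quetta record or the authors' model.

```python
# Sketch contrasting a linear-trend regression with an autoregressive (AR)
# model on a simulated annual total-ozone series (not the Quetta data).
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
years = np.arange(1960, 2000)
noise = np.zeros(years.size)
for i in range(1, years.size):          # AR(1) noise to mimic serial dependence
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 3)
ozone = 300 - 0.4 * (years - 1960) + noise

# Usual regression: straight-line trend fitted by least squares.
slope, intercept = np.polyfit(years, ozone, 1)
print(f"linear trend: {slope:.2f} DU/yr")

# AR strategy: model each year from its recent past (constant + trend + AR(2)).
ar_fit = AutoReg(ozone, lags=2, trend="ct").fit()
print(ar_fit.params)
forecast = ar_fit.predict(start=len(ozone), end=len(ozone) + 4)
print("5-year AR forecast:", np.round(forecast, 1))
```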

  15. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  16. Univariate and multiple linear regression analyses for 23 single nucleotide polymorphisms in 14 genes predisposing to chronic glomerular diseases and IgA nephropathy in Han Chinese.

    Science.gov (United States)

    Wang, Hui; Sui, Weiguo; Xue, Wen; Wu, Junyong; Chen, Jiejing; Dai, Yong

    2014-09-01

    Immunoglobulin A nephropathy (IgAN) is a complex trait regulated by the interaction among multiple physiologic regulatory systems and probably involving numerous genes, which leads to inconsistent findings in genetic studies. One possible reason for the failure to replicate some single-locus results is that the underlying genetics of IgAN is based on multiple genes with minor effects. To study the association between 23 single nucleotide polymorphisms (SNPs) in 14 genes predisposing to chronic glomerular diseases and IgAN in Han males, the genotypes of the 23 SNPs in 21 Han males were detected with a BaiO gene chip, and their associations were analyzed with univariate analysis and multiple linear regression analysis. The analysis showed that CTLA4 rs231726 and CR2 rs1048971 revealed a significant association with IgAN. These findings support the multi-gene nature of the etiology of IgAN and propose a potential gene-gene interactive model for future studies.

  17. Meta-regression analyses to explain statistical heterogeneity in a systematic review of strategies for guideline implementation in primary health care.

    Directory of Open Access Journals (Sweden)

    Susanne Unverzagt

    Full Text Available This study is an in-depth analysis to explain statistical heterogeneity in a systematic review of implementation strategies to improve guideline adherence of primary care physicians in the treatment of patients with cardiovascular diseases. The systematic review included randomized controlled trials from a systematic search in MEDLINE, EMBASE, CENTRAL, conference proceedings and registers of ongoing studies. Implementation strategies were shown to be effective, with substantial heterogeneity of treatment effects across all investigated strategies. The primary aim of this study was to explain the different effects of eligible trials and to identify methodological and clinical effect modifiers. Random effects meta-regression models were used to simultaneously assess the influence of multimodal implementation strategies and effect modifiers on physician adherence. Effect modifiers included the staff responsible for implementation, level of prevention, definition of the primary outcome, unit of randomization, duration of follow-up and risk of bias. Six clinical and methodological factors were investigated as potential effect modifiers of the efficacy of different implementation strategies on guideline adherence in primary care practices on the basis of information from 75 eligible trials. Five effect modifiers were able to explain a substantial amount of the statistical heterogeneity. Physician adherence was improved by 62% (95% confidence interval (95% CI) 29 to 104%) or 29% (95% CI 5 to 60%) in trials where other non-medical professionals or nurses were included in the implementation process. Improvement of physician adherence was more successful in primary and secondary prevention of cardiovascular diseases, by around 30% (30%; 95% CI -2 to 71% and 31%; 95% CI 9 to 57%, respectively) compared to tertiary prevention. This study aimed to identify effect modifiers of implementation strategies on physician adherence. Especially the cooperation of different health

  18. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.

  19. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.

  20. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
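
    The two verification scores used above, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD), can be computed from a thresholded probabilistic forecast as sketched below; the forecast probabilities and "observations" are synthetic, and the two thresholds mirror the choices discussed in the abstract (0.5 and the climatological frequency).

```python
# Sketch of the PC and HKD verification scores for a probabilistic contrail
# model converted to yes/no forecasts at two critical probability thresholds.
# Probabilities and observations are synthetic.
import numpy as np

def percent_correct(obs, yes):
    return np.mean(obs == yes)

def hanssen_kuipers(obs, yes):
    """HKD = hit rate - false-alarm rate."""
    hits = np.sum(yes & obs)
    misses = np.sum(~yes & obs)
    false_alarms = np.sum(yes & ~obs)
    correct_neg = np.sum(~yes & ~obs)
    return hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)

rng = np.random.default_rng(0)
prob = rng.uniform(0, 1, 5000)               # model probability of contrail occurrence
obs = rng.uniform(0, 1, 5000) < prob         # synthetic "truth"
climatology = obs.mean()

for threshold in (0.5, climatology):
    yes = prob >= threshold
    print(f"threshold {threshold:.2f}: "
          f"PC={percent_correct(obs, yes):.3f}, HKD={hanssen_kuipers(obs, yes):.3f}")
```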

  1. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    Science.gov (United States)

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R²). For this purpose,…
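
    The comparison described above can be illustrated with scikit-learn by fitting ordinary least squares and a Huber M-estimator to the same contaminated sample and comparing the slopes and R² values; the data and contamination level below are assumptions, not the study's social-science data.

```python
# Illustrative comparison of ordinary least squares and a Huber M-estimator
# on synthetic data containing a few outliers, judged by slope and R^2.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 1, 100)
y[:5] += 40                                   # contaminate 5% of the sample

ols = LinearRegression().fit(X, y)
m_est = HuberRegressor(epsilon=1.35).fit(X, y)

print("OLS   slope %.2f  R^2 %.3f" % (ols.coef_[0], ols.score(X, y)))
print("Huber slope %.2f  R^2 %.3f" % (m_est.coef_[0], m_est.score(X, y)))
```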

  2. Determination of crack morphology parameters from service failures for leak-rate analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilkowski, G.; Ghadiali, N.; Paul, D. [Battelle Memorial Institute, Columbus, OH (United States)] [and others]

    1997-04-01

    In leak-rate analyses described in the literature, the crack morphology parameters are typically not well agreed upon by different investigators. This paper presents results of a review of crack morphology parameters determined from examination of service-induced cracks. Service-induced cracks were found to have a much more tortuous flow path than laboratory-induced cracks, due to the crack branching associated with the service-induced cracks. Several new parameters, such as local and global surface roughnesses as well as local and global numbers of turns, were identified. The effect of each of these parameters is dependent on the crack-opening displacement. Additionally, the crack path is typically assumed to be straight through the pipe thickness, but the service data show that the flow path can be longer due to the crack following a fusion line and/or the number of turns; in the past the turns were included only as a pressure-drop term, not as a longer flow path length. These parameters were statistically evaluated for fatigue cracks in air, corrosion-fatigue, IGSCC, and thermal fatigue cracks. A refined version of the SQUIRT leak-rate code was developed to account for these variables. Sample calculations are provided in this paper that show how the crack size can vary for a given leak rate and the statistical variation of the crack morphology parameters.

  3. Determination of threshold values for operating transients via 3-D parametric analyses

    International Nuclear Information System (INIS)

    Raju, P.P.; Baylac, G.; Faidy, C.

    1983-01-01

    The main objective of the work reported herein was to determine the threshold values of operating parameters such as internal pressure and temperature fluctuations in order that the monitoring of these parameters could be optimized in an operating nuclear power plant on the basis that these fluctuations would not adversely affect the structural integrity and/or fatigue life of the systems and components involved. Accordingly, a parametric study was performed, using a typical and potentially critical lateral connection commonly used in the PWR system. The d/D and D/T ratios for the selected configuration were 0.36 and 10.6, respectively. A three dimensional finite element model was generated for the study using the latest modeling techniques. The stresses due to 1 MPa internal pressure were computed first. Then, a transient thermal analysis was performed for the specified fluid temperature fluctuation of 30 °C in 60 seconds. Subsequently, a thermal stress analysis was performed using the calculated thermal gradients through the wall. The results of the foregoing analyses are presented and discussed with the help of a threshold equation formulated to prevent fatigue failure. Stress intensification factors are also reported for critical areas

  4. Code conforming determination of cumulative usage factors for general elastic-plastic finite element analyses

    International Nuclear Information System (INIS)

    Rudolph, Juergen; Goetz, Andreas; Hilpert, Roland

    2012-01-01

    The procedures for fatigue analyses in several relevant nuclear and conventional design codes (ASME, KTA, EN, AD) for power plant components differentiate between an elastic, simplified elastic-plastic and elastic-plastic fatigue check. As a rule, operational load levels will exclude the purely elastic fatigue check. The application of the code procedure of the simplified elastic-plastic fatigue check is common practice. Nevertheless, the resulting cumulative usage factors may be overly conservative, mainly due to high code-based plastification penalty factors Ke. As a consequence, the more complex and still code-conforming general elastic-plastic fatigue analysis methodology based on non-linear finite element analysis (FEA) is applied for fatigue design as an alternative. The requirements of the FEA and the material law to be applied have to be clarified in a first step. Current design codes only give rough guidelines on these relevant items. While the procedure for the simplified elastic-plastic fatigue analysis and the associated code passages are based on stress-related cycle counting and the determination of pseudo-elastic equivalent stress ranges, an adaptation to elastic-plastic strains and strain ranges is required for the elastic-plastic fatigue check. The associated requirements are explained in detail in the paper. If the established and implemented evaluation mechanism (cycle counting according to the peak-and-valley or rainflow method, calculation of stress ranges from arbitrary load-time histories and determination of cumulative usage factors based on all load events) is to be retained, a conversion of elastic-plastic strains and strain ranges into pseudo-elastic stress ranges is required. The algorithm to be applied is described in the paper. It has to be implemented as an extended post-processing operation of the FEA, e.g. by APDL scripts in ANSYS®. Variations of principal stress (strain) directions during the loading

  5. Analysing risk factors of co-occurrence of schistosomiasis haematobium and hookworm using bivariate regression models: Case study of Chikwawa, Malawi

    Directory of Open Access Journals (Sweden)

    Bruce B.W. Phiri

    2016-06-01

    Full Text Available Schistosomiasis and soil-transmitted helminth (STH) infections constitute a major public health problem in many parts of sub-Saharan Africa. In areas where the prevalence of geo-helminths and schistosomes is high, co-infection with multiple parasite species is common, resulting in a disproportionately elevated burden compared with single infections. Determining risk factors for co-infection intensity is important for better design of targeted interventions. In this paper, we examined risk factors for hookworm and S. haematobium co-infection intensity in Chikwawa district, southern Malawi, in 2005, using bivariate count models. Results show that hookworm and S. haematobium infections were highly localised, with a small proportion of individuals harbouring more parasites, especially among school-aged children. The risk of co-intensity with both hookworm and S. haematobium was high for all ages, although it diminished with increasing age, and increased with fishing (hookworm: coef. = 12.29, 95% CI = 11.50–13.09; S. haematobium: coef. = 0.040, 95% CI = 0.0037–3.832). Both infections were abundant in those with primary education (hookworm: coef. = 0.072, 95% CI = 0.056–0.401; S. haematobium: coef. = 0.286, 95% CI = 0.034–0.538). However, a much lower risk was observed for those who were farmers (hookworm: coef. = −0.349, 95% CI = −0.547 to −0.150; S. haematobium: coef. = −0.239, 95% CI = −0.406 to −0.072). In conclusion, our findings suggest that efforts to control helminth infection should be co-integrated, and health promotion campaigns should be aimed at school-going children and adults who are in constant contact with water.

  6. STELLAR LOCUS REGRESSION: ACCURATE COLOR CALIBRATION AND THE REAL-TIME DETERMINATION OF GALAXY CLUSTER PHOTOMETRIC REDSHIFTS

    International Nuclear Information System (INIS)

    High, F. William; Stubbs, Christopher W.; Rest, Armin; Stalder, Brian; Challis, Peter

    2009-01-01

    We present stellar locus regression (SLR), a method of directly adjusting the instrumental broadband optical colors of stars to bring them into accord with a universal stellar color-color locus, producing accurately calibrated colors for both stars and galaxies. This is achieved without first establishing individual zero points for each passband, and can be performed in real-time at the telescope. We demonstrate how SLR naturally makes one wholesale correction for differences in instrumental response, for atmospheric transparency, for atmospheric extinction, and for Galactic extinction. We perform an example SLR treatment of Sloan Digital Sky Survey data over a wide range of Galactic dust values and independently recover the direction and magnitude of the canonical Galactic reddening vector with 14-18 mmag rms uncertainties. We then isolate the effect of atmospheric extinction, showing that SLR accounts for this and returns precise colors over a wide range of air mass, with 5-14 mmag rms residuals. We demonstrate that SLR-corrected colors are sufficiently accurate to allow photometric redshift estimates for galaxy clusters (using red sequence galaxies) with an uncertainty σ(z)/(1 + z) = 0.6% per cluster for redshifts 0.09 < z < 0.25. Finally, we identify our objects in the 2MASS all-sky catalog, and produce i-band zero points typically accurate to 18 mmag using only SLR. We offer open-source access to our IDL routines, validated and verified for the implementation of this technique, at http://stellar-locus-regression.googlecode.com.

  7. Information on the Department of Energy's analyses to determine the need for appliance efficiency standards

    Energy Technology Data Exchange (ETDEWEB)

    1981-12-23

    A historical overview of three separate Department of Energy analyses performed to determine the need for appliance efficiency standards is presented. An identification of the assumptions used in each of the analyses and the conclusions reached in each analysis are covered. Standards for furnaces, water heaters, central air conditioners, refrigerators, ranges/ovens, clothes dryers, freezers, and room air conditioners are considered. (MCW)

  8. The cost determinants of routine infant immunization services: a meta-regression analysis of six country studies.

    Science.gov (United States)

    Menzies, Nicolas A; Suharlim, Christian; Geng, Fangli; Ward, Zachary J; Brenzel, Logan; Resch, Stephen C

    2017-10-06

    Evidence on immunization costs is a critical input for cost-effectiveness analysis and budgeting, and can describe variation in site-level efficiency. The Expanded Program on Immunization Costing and Financing (EPIC) Project represents the largest investigation of immunization delivery costs, collecting empirical data on routine infant immunization in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. We developed a pooled dataset from individual EPIC country studies (316 sites). We regressed log total costs against explanatory variables describing service volume, quality, access, other site characteristics, and income level. We used Bayesian hierarchical regression models to combine data from different countries and account for the multi-stage sample design. We calculated output elasticity as the percentage increase in outputs (service volume) for a 1% increase in inputs (total costs), averaged across the sample in each country, and reported first differences to describe the impact of other predictors. We estimated average and total cost curves for each country as a function of service volume. Across countries, average costs per dose ranged from $2.75 to $13.63. Average costs per child receiving diphtheria-tetanus-pertussis vaccine ranged from $27 to $139. Within countries, costs per dose varied widely; on average, sites in the highest quintile were 440% more expensive than those in the lowest quintile. In each country, higher service volume was strongly associated with lower average costs. A doubling of service volume was associated with a 19% (95% interval, 4.0-32%) reduction in costs per dose delivered (range 13% to 32% across countries), and the largest 20% of sites in each country realized costs per dose that were on average 61% lower than those for the smallest 20% of sites, controlling for other factors. Other factors associated with higher costs included hospital status, provision of outreach services, share of effort to management, level of staff training
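
    A simplified, non-hierarchical sketch of the central regression logic (log total cost on log service volume plus controls, with the output elasticity and the cost-per-dose effect of doubling volume read off the volume coefficient) is shown below. The actual study used Bayesian hierarchical models and a multi-stage sample design; the data, coefficients, and variable set here are illustrative assumptions.

```python
# Simplified sketch of a log-cost regression and the derived output elasticity.
# Data and coefficients are simulated; the real analysis is hierarchical.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 316
log_volume = rng.normal(7.0, 1.0, n_sites)
hospital = rng.integers(0, 2, n_sites).astype(float)
log_cost = 2.0 + 0.7 * log_volume + 0.3 * hospital + rng.normal(0, 0.4, n_sites)

X = np.column_stack([np.ones(n_sites), log_volume, hospital])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
b_volume = beta[1]                          # cost elasticity with respect to volume

output_elasticity = 1.0 / b_volume          # % more doses per 1% more spending
doubling_effect = 2 ** (b_volume - 1) - 1   # change in cost per dose if volume doubles
print(f"output elasticity ~ {output_elasticity:.2f}")
print(f"doubling volume changes cost per dose by {100 * doubling_effect:.0f}%")
```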

  9. Determination of foodborne pathogenic bacteria by multiplex PCR-microchip capillary electrophoresis with genetic algorithm-support vector regression optimization.

    Science.gov (United States)

    Li, Yongxin; Li, Yuanqian; Zheng, Bo; Qu, Lingli; Li, Can

    2009-06-08

    A rapid and sensitive method based on microchip capillary electrophoresis with condition optimization by genetic algorithm-support vector regression (GA-SVR) was developed and applied to the simultaneous analysis of multiplex PCR products of four foodborne pathogenic bacteria. Four pairs of oligonucleotide primers were designed to exclusively amplify the targeted genes of Vibrio parahemolyticus, Salmonella, Escherichia coli (E. coli) O157:H7 and Shigella, and the quadruplex PCR parameters were optimized. At the same time, GA-SVR was employed to optimize the separation conditions of DNA fragments in microchip capillary electrophoresis. The proposed method was applied to simultaneously detect the multiplex PCR products of the four foodborne pathogenic bacteria under the optimal conditions within 8 min. The detection limits were as low as 1.2 × 10² CFU mL⁻¹ for Vibrio parahemolyticus, 2.9 × 10² CFU mL⁻¹ for Salmonella, 8.7 × 10¹ CFU mL⁻¹ for E. coli O157:H7 and 5.2 × 10¹ CFU mL⁻¹ for Shigella. The relative standard deviation of migration time was in the range of 0.74-2.09%. The results demonstrated that good resolution and shorter analysis times were achieved due to the application of the multivariate strategy. This study offers an efficient alternative to routine foodborne pathogenic bacteria detection in a fast, reliable, and sensitive way.

  10. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  11. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  12. Determination of Ethanol in Blood Samples Using Partial Least Square Regression Applied to Surface Enhanced Raman Spectroscopy.

    Science.gov (United States)

    Açikgöz, Güneş; Hamamci, Berna; Yildiz, Abdulkadir

    2018-04-01

    Alcohol consumption has toxic effects on organs and tissues in the human body. The risks are essentially thought to be related to the ethanol content of alcoholic beverages. The identification of ethanol in blood samples requires rapid, non-destructive analysis with minimal sample handling, such as Raman spectroscopy. This study aims to apply Raman spectroscopy to the identification of ethanol in blood samples. Silver nanoparticles were synthesized to obtain Surface Enhanced Raman Spectroscopy (SERS) spectra of blood samples. The SERS spectra were used with Partial Least Squares (PLS) regression to determine ethanol quantitatively. To apply the PLS method, the 920-820 cm⁻¹ band interval was chosen and the spectral changes were statistically associated with the observed concentrations. The blood samples were examined according to this model and the quantity of ethanol was determined as follows: first, a calibration model was established. A strong relationship was observed between the known concentration values and the values obtained by the PLS method (R² = 1). Second, the quantities of ethanol in 40 blood samples were predicted according to the calibration model. Quantitative analysis of ethanol in blood was done by analyzing the data obtained by Raman spectroscopy with the PLS method.

  13. Assessing the Determinants of Renewable Electricity Acceptance Integrating Meta-Analysis Regression and a Local Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Simona Bigerna

    2015-08-01

    Full Text Available In dealing with renewable electricity (RE), individuals are involved both as end-consumers on the demand side and as stakeholders (citizens) in the local production process on the supply side. Empirical evidence shows that in many countries, consumers are willing to pay a significant amount to facilitate adoption of RE. By contrast, environmental externalities are often the cause of strong opposition to RE adoption if local communities are involved as stakeholders in wind, solar or biomass investment projects. Looking at the literature on willingness to pay and on willingness to accept, we have investigated RE acceptance mechanisms. First, we have used the meta-analysis to assess the major determinants of RE acceptance on both demand and supply sides. Meta-analysis has provided some insights useful for managing field research on an onshore wind farm enlargement project located in the Umbria region. Meta-analysis and survey results confirm that the local community plays a central role in local RE acceptance. Furthermore, people who have previous experience with windmills require less compensation, or are willing to pay more, for RE development. Results suggest that these attributes should be included in future research to improve understanding of determinants of RE acceptance.

  14. Analysis of multi-layered films. [determining dye densities by applying a regression analysis to the spectral response of the composite transparency

    Science.gov (United States)

    Scarpace, F. L.; Voss, A. W.

    1973-01-01

    Dye densities of multi-layered films are determined by applying a regression analysis to the spectral response of the composite transparency. The amount of dye in each layer is determined by fitting the sum of the individual dye layer densities to the measured dye densities. From this, dye content constants are calculated. Methods of calculating equivalent exposures are discussed. Equivalent exposures are a constant amount of energy over a limited band-width that will give the same dye content constants as the real incident energy. Methods of using these equivalent exposures for analysis of photographic data are presented.

  15. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed by applying the regression equations to published Cmax data. The quotient of observed/predicted values gave the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The Cmax versus AUCinf relationship exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference), and the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. the end of a 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
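
    The single-point regression strategy can be sketched as below: linear, log-linear, and power models of AUCinf versus Cmax are fitted and compared by relative RMSE and observed/predicted fold difference. The 21 synthetic subject pairs and the assumed "true" relationship are illustrations only, not the dalbavancin data.

```python
# Sketch of fitting linear, log-linear and power models of AUCinf vs Cmax and
# judging them by relative RMSE and observed/predicted fold difference.
# The 21 subject pairs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
cmax = rng.uniform(200, 450, 21)                       # illustrative concentrations
auc = 25 * cmax**1.05 * rng.lognormal(0, 0.05, 21)     # "observed" AUCinf

def fit_models(cmax, auc):
    lin = np.polyfit(cmax, auc, 1)                     # AUC = b*Cmax + a
    loglin = np.polyfit(cmax, np.log(auc), 1)          # ln(AUC) = b*Cmax + a
    power = np.polyfit(np.log(cmax), np.log(auc), 1)   # ln(AUC) = b*ln(Cmax) + a
    return {
        "linear":     lambda c: np.polyval(lin, c),
        "log-linear": lambda c: np.exp(np.polyval(loglin, c)),
        "power":      lambda c: np.exp(np.polyval(power, np.log(c))),
    }

for name, predict in fit_models(cmax, auc).items():
    pred = predict(cmax)
    rel_rmse = 100 * np.sqrt(np.mean((pred - auc) ** 2)) / auc.mean()  # % of mean AUC
    fold = auc / pred                                   # observed / predicted
    print(f"{name:10s} RMSE {rel_rmse:5.1f}%  fold range {fold.min():.2f}-{fold.max():.2f}")
```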

  16. Molecular responses of genetically modified maize to abiotic stresses as determined through proteomic and metabolomic analyses.

    Directory of Open Access Journals (Sweden)

    Rafael Fonseca Benevenuto

    Full Text Available Some genetically modified (GM) plants have transgenes that confer tolerance to abiotic stressors. Meanwhile, other transgenes may interact with abiotic stressors, causing pleiotropic effects that will affect the plant physiology. Thus, physiological alteration might have an impact on product safety. However, routine risk assessment (RA) analyses do not evaluate the response of GM plants exposed to different environmental conditions. Therefore, we here present a proteome profile of herbicide-tolerant maize, including the levels of phytohormones and related compounds, compared to its near-isogenic non-GM variety under drought and herbicide stresses. Twenty differentially abundant proteins were detected between GM and non-GM hybrids under different water deficiency conditions and herbicide sprays. Pathway enrichment analysis showed that most of these proteins are assigned to energetic/carbohydrate metabolic processes. Among phytohormones and related compounds, different levels of ABA, CA, JA, MeJA and SA were detected in the maize varieties and stress conditions analysed. In the pathway and proteome analyses, environment was found to be the major source of variation, followed by the genetic transformation factor. Nonetheless, differences were detected in the levels of JA, MeJA and CA and in the abundance of 11 proteins when comparing the GM plant and its non-GM near-isogenic variety under the same environmental conditions. Thus, these findings support the inclusion of molecular studies in GM plant risk assessment analyses.

  17. An alternative approach to the determination of scaling law expressions for the L–H transition in Tokamaks utilizing classification tools instead of regression

    International Nuclear Information System (INIS)

    Gaudio, P; Gelfusa, M; Lupelli, I; Murari, A; Vega, J

    2014-01-01

    A new approach to determine the power law expressions for the threshold between the H and L mode of confinement is presented. The method is based on two powerful machine learning tools for classification: neural networks and support vector machines. Using as inputs clear examples of the systems on either side of the transition, the machine learning tools learn the input–output mapping corresponding to the equations of the boundary separating the confinement regimes. Systematic tests with synthetic data show that the machine learning tools provide results competitive with traditional statistical regression and more robust against random noise and systematic errors. The developed tools have then been applied to the multi-machine International Tokamak Physics Activity International Global Threshold Database of validated ITER-like Tokamak discharges. The machine learning tools converge on the same scaling law parameters obtained with non-linear regression. On the other hand, the developed tools allow a reduction of 50% of the uncertainty in the extrapolations to ITER. Therefore the proposed approach can effectively complement traditional regression since its application poses much less stringent requirements on the experimental data, to be used to determine the scaling laws, because they do not require examples exactly at the moment of the transition. (paper)
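
    A toy version of the classification idea is sketched below: points labelled L-mode/H-mode in log space are separated with a linear support-vector machine, and the separating hyperplane is re-read as a power-law threshold. The assumed "true" exponents, noise level, and variable set are arbitrary illustrations, not the ITPA database or the paper's scaling law.

```python
# Toy illustration: a linear SVM trained on L-mode/H-mode labels in log space,
# with the separating hyperplane re-read as a power-law threshold
# P_thr ∝ n^a B^b. All data and exponents are synthetic assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 2000
log_density = rng.uniform(-1, 1, n)
log_field = rng.uniform(0, 1, n)
log_power = rng.uniform(-1, 2, n)
# Assumed "true" threshold: P_thr = n^0.7 * B^0.8 (arbitrary illustrative law).
h_mode = log_power > 0.7 * log_density + 0.8 * log_field + rng.normal(0, 0.05, n)

X = np.column_stack([log_power, log_density, log_field])
clf = SVC(kernel="linear", C=10.0).fit(X, h_mode)

w = clf.coef_[0]
b = clf.intercept_[0]
# Boundary: w0*logP + w1*logn + w2*logB + b = 0  ->  P_thr ∝ n^(-w1/w0) * B^(-w2/w0)
print("recovered exponents:", -w[1] / w[0], -w[2] / w[0])
```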

  18. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

    Science.gov (United States)

    Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

    2017-11-01

    Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
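
    A much-simplified sketch of deriving a regression-based 95% reference interval for a platelet-function readout as a function of platelet count is given below; the real analysis also models hematocrit, uses proper prediction intervals, and works with measured MEA/PFA-100 data rather than the synthetic values assumed here.

```python
# Simplified sketch: a regression-based 95% reference interval for a
# platelet-function readout as a function of platelet count (synthetic data,
# normal-approximation interval of width ±1.96 residual SD).
import numpy as np

rng = np.random.default_rng(0)
platelets = rng.uniform(10, 150, 120)                    # x10^9/L, diluted range
mea_auc = 5 + 0.55 * platelets + rng.normal(0, 8, 120)   # aggregation units

coef = np.polyfit(platelets, mea_auc, 1)
residual_sd = np.std(mea_auc - np.polyval(coef, platelets), ddof=2)

def reference_interval(platelet_count):
    """Approximate 95% reference interval at a given platelet count."""
    centre = np.polyval(coef, platelet_count)
    return centre - 1.96 * residual_sd, centre + 1.96 * residual_sd

print([round(v, 1) for v in reference_interval(30.0)])
```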

  19. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF and PAR and of CAF, PAR and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
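
    The CWT-PLS idea can be sketched with PyWavelets and scikit-learn as below: each spectrum is transformed with a continuous wavelet transform, the coefficients at a chosen scale are used as the processed signal, and PLS is regressed against the concentration matrix. The simulated spectra, the wavelet family, the scale, and the number of PLS components are all assumptions for illustration.

```python
# Sketch of CWT pre-processing followed by PLS regression for a four-component
# mixture. Spectra and concentrations are simulated; wavelet and scale are
# arbitrary choices. Requires PyWavelets (pywt) and scikit-learn.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 256)
concentrations = rng.uniform(2, 12, size=(25, 4))          # four components

def band(center, width):
    return np.exp(-((wavelengths - center) / width) ** 2)

pure = np.vstack([band(230, 12), band(255, 10), band(280, 15), band(300, 9)])
spectra = concentrations @ pure + rng.normal(0, 0.01, (25, 256))

def cwt_features(spectrum, scale=20, wavelet="mexh"):
    coeffs, _ = pywt.cwt(spectrum, scales=[scale], wavelet=wavelet)
    return coeffs.ravel()

X = np.array([cwt_features(s) for s in spectra])
pls = PLSRegression(n_components=4).fit(X, concentrations)
print("R^2 on the calibration set:", round(pls.score(X, concentrations), 3))
```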

  20. Multi-level analyses of spatial and temporal determinants for dengue infection.

    Science.gov (United States)

    Vanwambeke, Sophie O; van Benthem, Birgit H B; Khantikul, Nardlada; Burghoorn-Maas, Chantal; Panart, Kamolwan; Oskam, Linda; Lambin, Eric F; Somboon, Pradya

    2006-01-18

    Dengue is a mosquito-borne viral infection that is now endemic in most tropical countries. In Thailand, dengue fever/dengue hemorrhagic fever is a leading cause of hospitalization and death among children. A longitudinal study among 1750 people in two rural and one urban sites in northern Thailand from 2001 to 2003 studied spatial and temporal determinants for recent dengue infection at three levels (time, individual and household). Determinants for dengue infection were measured by questionnaire, land-cover maps and GIS. IgM antibodies against dengue were detected by ELISA. Three-level multi-level analysis was used to study the risk determinants of recent dengue infection. Rates of recent dengue infection varied substantially in time from 4 to 30%, peaking in 2002. Determinants for recent dengue infection differed per site. Spatial clustering was observed, demonstrating variation in local infection patterns. Most of the variation in recent dengue infection was explained at the time-period level. Location of a person and the environment around the house (including irrigated fields and orchards) were important determinants for recent dengue infection. We showed the focal nature of asymptomatic dengue infections. The great variation of determinants for recent dengue infection in space and time should be taken into account when designing local dengue control programs.

  1. Multi-level analyses of spatial and temporal determinants for dengue infection

    Directory of Open Access Journals (Sweden)

    Oskam Linda

    2006-01-01

    Full Text Available Abstract Background Dengue is a mosquito-borne viral infection that is now endemic in most tropical countries. In Thailand, dengue fever/dengue hemorrhagic fever is a leading cause of hospitalization and death among children. A longitudinal study among 1750 people in two rural and one urban sites in northern Thailand from 2001 to 2003 studied spatial and temporal determinants for recent dengue infection at three levels (time, individual and household). Methods Determinants for dengue infection were measured by questionnaire, land-cover maps and GIS. IgM antibodies against dengue were detected by ELISA. Three-level multi-level analysis was used to study the risk determinants of recent dengue infection. Results Rates of recent dengue infection varied substantially in time from 4 to 30%, peaking in 2002. Determinants for recent dengue infection differed per site. Spatial clustering was observed, demonstrating variation in local infection patterns. Most of the variation in recent dengue infection was explained at the time-period level. Location of a person and the environment around the house (including irrigated fields and orchards) were important determinants for recent dengue infection. Conclusion We showed the focal nature of asymptomatic dengue infections. The great variation of determinants for recent dengue infection in space and time should be taken into account when designing local dengue control programs.

  2. Age determination by teeth examination: a comparison between different morphologic and quantitative analyses.

    Science.gov (United States)

    Amariti, M L; Restori, M; De Ferrari, F; Paganelli, C; Faglia, R; Legnani, G

    1999-06-01

    Age determination by teeth examination is one of the main means of personal identification. Current studies have suggested different techniques for determining the age of a subject by means of the analysis of microscopic and macroscopic structural modifications of the tooth with ageing. The histological approach is useful among the various methodologies utilized for this purpose. It is still unclear which is the best technique, as almost all authors suggest the use of the approach they themselves have tested. In the present study, age determination by means of microscopic techniques has been based on the quantitative analysis of three parameters, all well recognized in the specialized literature: (1) dentinal tubule density/sclerosis; (2) tooth translucency; (3) cementum thickness. After a description of the three methodologies (with automatic image processing of the dentinal sclerosis using a computer program developed by the authors), the results obtained on cases using the three different approaches are presented, and the merits and failings of each technique are identified, with the intention of identifying the one offering the least degree of error in age determination.

  3. Improvement in the determination of elemental concentrations in PIXE analyses using artificial neural system

    International Nuclear Information System (INIS)

    Correa, R.; Dinator, M.I.; Morales, J.R.; Miranda, P.A.; Cancino, S.A.; Vila, I.; Requena, I.

    2008-01-01

    An Artificial Neural System (ANS) has been designed to operate in the analysis of spectra obtained from a PIXE (Proton Induced X-ray Emission) application. The specially designed ANS was used in the calculation of the concentrations of the major elements in the samples. Neural systems using several feed-forward ANNs of similar topology working in parallel were trained with the error back-propagation algorithm using sets of spectra of known elemental concentrations. Following the training phase of the neural networks, other PIXE spectra were analyzed with this methodology, providing unknown elemental concentrations. ANS results were compared with results obtained by traditional computer codes like AXIL and GUPIX, obtaining correlation factors close to one. The rather short time required to process each spectrum, of the order of microseconds, allows fast analysis of a large number of samples. Here we present applications of the ANS to the PIXE analysis of samples of organic nature, such as liver, gills and muscle from fish. ANS results were compared with elemental concentrations obtained in a previous application in which a single ANN was used for each analyzed element. PIXE analyses were performed at the Nuclear Physics Laboratory of the University of Chile, using 2.2 MeV proton beams provided by a Van de Graaff accelerator. (author)

  4. Determination of Watershed Infiltration and Erosion Parameters from Field Rainfall Simulation Analyses

    Directory of Open Access Journals (Sweden)

    Mark E. Grismer

    2016-06-01

    Full Text Available Realistic modeling of infiltration, runoff and erosion processes from watersheds requires estimation of the effective hydraulic conductivity (Km) of the hillslope soils and how it varies with soil tilth, depth and cover conditions. Field rainfall simulation (RS) plot studies provide an opportunity to assess the surface soil hydraulic and erodibility conditions, but a standardized interpretation and comparison of results of this kind from a wide variety of test conditions has been difficult. Here, we develop solutions to the combined set of time-to-ponding/runoff and Green–Ampt infiltration equations to determine Km values from RS test plot results and compare them to the simpler calculation of steady rain minus runoff rates. Relating soil detachment rates to stream power, we also examine the determination of “erodibility” as the ratio thereof. Using data from over 400 RS plot studies across the Lake Tahoe Basin area that employ a wide range of rain rates across a range of soil slopes and conditions, we find that the Km values can be determined from the combined infiltration equation for ~80% of the plot data and that the laminar flow form of stream power best described a constant “erodibility” across a range of volcanic ski-run soil conditions. Moreover, the definition of stream power based on laminar flows obviates the need for the assumption of an arbitrary Manning's “n” value and the restriction to mild slopes (<10%). The infiltration-equation-based Km values, though more variable, were on average equivalent to those determined from the simpler calculation of steady rain minus steady runoff rates from the RS plots. However, these Km values were much smaller than those determined from other field test methods. Finally, we compare RS plot results from use of different rainfall simulators in the basin and demonstrate that despite the varying configurations and rain intensities, similar erodibilities were determined across a range of

  5. On stream radioisotope X-ray fluorescence analyser and a method for the determination of copper in slurry

    International Nuclear Information System (INIS)

    Holynska, B.; Lankosz, M.; Lacki, E.; Ostachowicz, J.; Baran, W.; Owsiak, T.

    1975-01-01

    The paper presents an "on stream" analyser and a radioisotope X-ray fluorescence method for the continuous determination of copper content in feed (0.5-2.5% Cu), concentrates (15-25% Cu) and tailings (0.01-0.03% Cu). The analyser consists essentially of a radioisotope X-ray fluorescence measuring head, a γ-density gauge, an electronic unit, an analog processor and recorders. The method is based on the measurement of the characteristic radiation of the Cu series, selected by nickel-cobalt filters. The total relative error (1s) of the determination of copper in feed is 6-8%, in concentrates 5-7% and in tailings about 18%. The "on stream" analyser has been successfully operated in a pilot plant. (author)

  6. Analyses and quantitative determination of the strontium radioisotopes 89 and 90 in milk powder

    International Nuclear Information System (INIS)

    Jeanmaire, L.; Michon, G.

    1959-01-01

    The authors describe a procedure for the determination of the strontium radioisotopes 89 and 90. The concentration of strontium is made possible by the insolubility of its nitrate salt in strong nitric acid, which allows the removal of the greatest part of the calcium. The purification is performed on a cation exchange column. The amount of strontium-90 is determined by means of its daughter product yttrium-90; the necessary calibrations and computations are treated in separate paragraphs. With regard to the reproducibility of the measurements, the fluctuations are less than 20 per cent. This seems satisfactory for a technique that has high sensitivity but is lengthy and requires great care. (author) [fr]

  7. Determination of trace elements in bottled water in Greece by instrumental and radiochemical neutron activation analyses

    International Nuclear Information System (INIS)

    Soupioni, M.J.; Symeopoulos, B.D.; Papaefthymiou, H.V.

    2006-01-01

    Four different bottled water brands sold in Greece in the winter of 2001-2002 were analyzed for a wide range of chemical elements, using neutron activation analysis (NAA). The elements Na and Br were determined instrumentally (INAA), whereas the other metals and trace elements were determined radiochemically (RNAA). The results indicated that the mean levels of all the elements determined in the samples were well within the limits of the European Union (EU) directive on drinking water and comply with the drinking water standards of the World Health Organisation (WHO) as well as those of the Food and Drug Administration (FDA). (author)

  8. Determination of trace elements in BCR single cell protein via destructive neutron activation analyses

    International Nuclear Information System (INIS)

    Tjioe, P.S.; Goeij, J.J.M. de; Nooijen, J.L.; Kroon, J.J.

    1978-10-01

    The amount of some trace elements in single cell protein (SCP), a product of the BP Research Centre at Sunbury-on-Thames, England, was determined by neutron activation analysis. The SCP samples were irradiated in the reactor of the Interuniversity Reactor Institute at Delft in a neutron flux of 1.0×10¹³ n/cm²·s for 12 hours. Samples of Bowen's Kale were used as reference material. After a decay period of two or three days the samples were chemically decomposed and the trace elements were separated. The quantities of the following elements were determined by measuring the γ-activity with a scintillation counter: antimony, cadmium, mercury, arsenic and selenium. The amounts of these elements in the SCP and in the reference material are tabulated

  9. Determination of radioactive emission origins based on analyses of isotopic composition

    International Nuclear Information System (INIS)

    Devell, L.

    1987-01-01

    The nature of radioactive emissions can be determined with good precision through gamma spectroscopy of air samples, which means that the type of source of the emission may be identified, e.g. a nuclear weapons test or a nuclear power plant accident. Combined with information on wind trajectories it is normally possible to determine the time and area of the emission. In this preliminary study, the knowledge of and preparedness for such measurements are described. (L.E.)

  10. Determining significant endpoints for ecological risk analyses. 1997 annual progress report

    Energy Technology Data Exchange (ETDEWEB)

    Hinton, T.G.; Congdon, J.; Rowe, C.; Scott, D. [Univ. of Georgia, Aiken, SC (US). Savannah River Ecology Lab.; Bedford, J.; Whicker, F.W. [Colorado State Univ., Fort Collins, CO (US)

    1997-11-01

    This report summarizes the first year's progress of research funded under the Department of Energy's Environmental Management Science Program. The research was initiated to better determine ecological risks from toxic and radioactive contaminants. More precisely, the research is designed to determine the relevancy of sublethal cellular damage to the performance of individuals and to identify characteristics of non-human populations exposed to chronic, low-level radiation, as is typically found on many DOE sites. The authors propose to establish a protocol to assess risks to non-human species at higher levels of biological organization by relating molecular damage to more relevant responses that reflect population health. They think that they can achieve this by coupling changes in metabolic rates and energy allocation patterns to meaningful population response variables, and by using novel biological dosimeters in controlled, manipulative dose/effects experiments. They believe that a scientifically defensible endpoint for measuring ecological risks can only be determined once it is understood to what extent molecular damage from contaminant exposure is detrimental at the individual and population levels of biological organization.

  11. Dispensing processes impact apparent biological activity as determined by computational and statistical analyses.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.

  12. The diversity of arthropods in homes across the United States as determined by environmental DNA analyses.

    Science.gov (United States)

    Madden, Anne A; Barberán, Albert; Bertone, Matthew A; Menninger, Holly L; Dunn, Robert R; Fierer, Noah

    2016-12-01

    We spend most of our lives inside homes, surrounded by arthropods that impact our property as pests and our health as disease vectors and producers of sensitizing allergens. Despite their relevance to human health and well-being, we know relatively little about the arthropods that exist in our homes and the factors structuring their diversity. As previous work has been limited in scale by the costs and time associated with collecting arthropods and the subsequent morphological identification, we used a DNA-based method for investigating the arthropod diversity in homes via high-throughput marker gene sequencing of home dust. Settled dust samples were collected by citizen scientists from both inside and outside more than 700 homes across the United States, yielding the first continental-scale estimates of arthropod diversity associated with our residences. We were able to document food webs and previously unknown geographic distributions of diverse arthropods - from allergen producers to invasive species and nuisance pests. Home characteristics, including the presence of basements, home occupants and surrounding land use, were more useful than climate parameters in predicting arthropod diversity in homes. These noninvasive, scalable tools and resultant findings not only provide the first continental-scale maps of household arthropod diversity, but our analyses also provide valuable baseline information on arthropod allergen exposures and the distributions of invasive pests inside homes. © 2016 John Wiley & Sons Ltd.

  13. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal

  14. Determining Volcanic Deformation at San Miguel Volcano, El Salvador by Integrating Radar Interferometry and Seismic Analyses

    Science.gov (United States)

    Schiek, C. G.; Hurtado, J. M.; Velasco, A. A.; Buckley, S. M.; Escobar, D.

    2008-12-01

    From the early 1900s to the present day, San Miguel volcano has experienced many small eruptions and several periods of heightened seismic activity, making it one of the most active volcanoes in the El Salvadoran volcanic chain. Prior to 1969, the volcano experienced many explosive eruptions with Volcano Explosivity Indices (VEI) of 2. Since then, eruptions have decreased in intensity to an average VEI of 1. Eruptions mostly consist of phreatic explosions and central vent eruptions. Due to the explosive nature of this volcano, it is important to study the origins of the volcanism and its relationship to surface deformation and earthquake activity. We analyze these interactions by integrating interferometric synthetic aperture radar (InSAR) results with earthquake source location data from a ten-month (March 2007-January 2008) seismic deployment. The InSAR results show a maximum of 7 cm of volcanic inflation from March 2007 to mid-October 2007. During this time, seismic activity increased to a Real-time Seismic-Amplitude Measurement (RSAM) value of >400, well above the normal RSAM values for this volcano. The earthquakes that occurred between March 2007 and January 2008 suggest a fault zone through the center of the San Miguel volcanic cone. This fault zone is most likely where dyke propagation is occurring. Source mechanisms will be determined for the earthquakes associated with this fault zone, and they will be compared to the InSAR deformation field to determine if the mid-October seismic activity and observed surface deformation are compatible.

  15. International law's effects on health and its social determinants: protocol for a systematic review, meta-analysis, and meta-regression analysis.

    Science.gov (United States)

    Hoffman, Steven J; Hughsam, Matthew; Randhawa, Harkanwal; Sritharan, Lathika; Guyatt, Gordon; Lavis, John N; Røttingen, John-Arne

    2016-04-16

    In recent years, there have been numerous calls for global institutions to develop and enforce new international laws. International laws are, however, often blunt instruments with many uncertain benefits, costs, risks of harm, and trade-offs. Thus, they are probably not always appropriate solutions to global health challenges. Given these uncertainties and international law's potential importance for improving global health, the paucity of synthesized evidence addressing whether international laws achieve their intended effects or whether they are superior in comparison to other approaches is problematic. Ten electronic bibliographic databases were searched using predefined search strategies, including MEDLINE, Global Health, CINAHL, Applied Social Sciences Index and Abstracts, Dissertations and Theses, International Bibliography of Social Sciences, International Political Science Abstracts, Social Sciences Abstracts, Social Sciences Citation Index, PAIS International, and Worldwide Political Science Abstracts. Two reviewers will independently screen titles and abstracts using predefined inclusion criteria. Pairs of reviewers will then independently screen the full-text of articles for inclusion using predefined inclusion criteria and then independently extract data and assess risk of bias for included studies. Where feasible, results will be pooled through subgroup analyses, meta-analyses, and meta-regression techniques. The findings of this review will contribute to a better understanding of the expected benefits and possible harms of using international law to address different kinds of problems, thereby providing important evidence-informed guidance on when and how it can be effectively introduced and implemented by countries and global institutions. PROSPERO CRD42015019830.

  16. Smoking and Its Determinants in Chinese Internal Migrants: Nationally Representative Cross-Sectional Data Analyses.

    Science.gov (United States)

    Ji, Ying; Liu, Shenglan; Zhao, Xiaoping; Jiang, Ying; Zeng, Qingqi; Chang, Chun

    2016-08-01

    Migrants often face multiple risk factors for smoking initiation. Previous studies that have explored the smoking habits of Chinese migrants have provided inconsistent findings and lacked nationally representative samples. Using data from the 2012 Migrant Dynamics Monitoring Survey in China published by the National Population and Family Planning Commission, this study explored current smoking rates and their determinants among migrants in China. The smoking rates of men (46.9%, 46.3%-47.3%) and women (1.8%, 1.7%-1.9%) differed significantly. Although the overall smoking rate among migrants was slightly lower than in the general population, the rates in certain subgroups were much higher. Among men, the three leading associated factors were the following: higher smoking rates among the divorced or widowed (odds ratio [OR] = 1.53, 95% confidence interval [CI]: 1.34-1.74); lower smoking rates among those with an educational level of senior high school or above (OR = 0.73, 95% CI: 0.71-0.76); and higher smoking rates in the migrant-receiving area (OR = 1.29, 95% CI: 1.18-1.42). Among women, smoking rates were also higher in the migrant-receiving area (OR = 1.78, 95% CI: 1.34-2.34), when monthly income was more than 3000 Renminbi (OR = 1.65, 95% CI: 1.43-1.90), and among those with an educational level of senior high school or above (OR = 0.65, 95% CI: 0.56-0.75). The social integration of migrants, the duration of stay, and working hours had weaker associations with smoking risk. The sociodemographic features, work pressure, and migration-related features were sex-dependent determinants of smoking rates. These factors need to be considered when planning tobacco control interventions among migrants. Our study was the first to analyze a nationally representative Chinese migrant sample with respect to smoking, its differential rates across various subgroups, and its determinants. Our results provided overall levels of migrant smoking rates. The findings also demonstrated the

  17. Determining significant endpoints for ecological risk analyses. 1998 annual progress report

    Energy Technology Data Exchange (ETDEWEB)

    Hinton, T.G.; Congdon, J.; Scott, D. [Univ. of Georgia, Aiken, SC (US). Savannah River Ecology Lab.; Rowe, C. [Univ. of Puerto Rico, San Juan (PR); Bedford, J.; Whicker, W. [Colorado State Univ., Fort Collins, CO (US)

    1998-06-01

    The goal of this report is to establish a protocol for assessing risks to non-human populations exposed to environmental stresses typically found on many DOE sites. The authors think that they can achieve this by using novel biological dosimeters in controlled, manipulative dose/effects experiments, and by coupling changes in metabolic rates and energy allocation patterns to meaningful population response variables (such as age-specific survivorship, reproductive output, age at maturity and longevity). This research is needed to determine the relevancy of sublethal cellular damage to the performance of individuals and populations exposed to chronic, low-level radiation, and radiation with concomitant exposure to chemicals. They believe that a scientifically defensible endpoint for measuring ecological risks can only be determined once it is understood to what extent molecular damage from contaminant exposure is detrimental at the individual and population levels of biological organization. The experimental facility will allow them to develop a credible assessment tool for appraising ecological risks, and to evaluate the effects of radionuclide/chemical synergisms on non-human species. This report summarizes work completed midway through a 3-year project that began in November 1996. Emphasis to date has centered on three areas: (1) developing a molecular probe to measure stable chromosomal aberrations known as reciprocal translocations, (2) constructing an irradiation facility where the statistical power inherent in replicated mesocosms can be used to address the response of non-human organisms to exposures from low levels of radiation and metal contaminants, and (3) quantifying responses of organisms living in contaminated mesocosms and field sites.

  18. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
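
    The Tech Brief above describes order-recursive equations for the coefficients as the model order grows; the sketch below shows the closely related time-recursive least-squares update as an illustration only, not the NASA algorithm itself, with synthetic data.

```python
import numpy as np

# Minimal recursive least-squares (RLS) sketch: coefficients are updated one
# observation at a time without refitting from scratch. This is the standard
# time-recursive update, shown for illustration; the Tech Brief above describes
# an order-recursive variant (increasing model order), not reproduced here.

def rls_update(theta, P, x, y):
    """One RLS step. theta: (p,) coefficients, P: (p,p) inverse-information
    matrix, x: (p,) regressor row, y: scalar response."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # coefficient update
    P = P - np.outer(k, Px)              # covariance update
    return theta, P

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=200)

theta, P = np.zeros(3), 1e6 * np.eye(3)   # diffuse initialisation
for xi, yi in zip(X, y):
    theta, P = rls_update(theta, P, xi, yi)
print(theta)                              # ~ [1.0, 2.0, -0.5]
```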

  19. Vanadium NMR Chemical Shifts of (Imido)vanadium(V) Dichloride Complexes with Imidazolin-2-iminato and Imidazolidin-2-iminato Ligands: Cooperation with Quantum-Chemical Calculations and Multiple Linear Regression Analyses.

    Science.gov (United States)

    Yi, Jun; Yang, Wenhong; Sun, Wen-Hua; Nomura, Kotohiro; Hada, Masahiko

    2017-11-30

    The NMR chemical shifts of vanadium (⁵¹V) in (imido)vanadium(V) dichloride complexes with imidazolin-2-iminato and imidazolidin-2-iminato ligands were calculated by the density functional theory (DFT) method with GIAO. The calculated ⁵¹V NMR chemical shifts were analyzed by the multiple linear regression (MLR) analysis (MLRA) method with a series of calculated molecular properties. Some of the calculated NMR chemical shifts were incorrect when the optimized molecular geometries of the X-ray structures were used. After the global minimum geometries of all of the molecules were determined, the trend of the observed chemical shifts was well reproduced by the present DFT method. The MLRA method was performed to investigate the correlation between the ⁵¹V NMR chemical shift and the natural charge, band energy gap, and Wiberg bond index of the V═N bond. The present MLR model reproduced the ⁵¹V NMR chemical shifts well, with a correlation coefficient of 0.97.

  20. The comparison of partial least squares and principal component regression in simultaneous spectrophotometric determination of ascorbic acid, dopamine and uric acid in real samples

    Directory of Open Access Journals (Sweden)

    Habiboallah Khajehsharifi

    2017-05-01

    Full Text Available Partial least squares (PLS1) and principal component regression (PCR) are two multivariate calibration methods that allow simultaneous determination of several analytes in spite of their overlapping spectra. In this research, a spectrophotometric method using PLS1 is proposed for the simultaneous determination of ascorbic acid (AA), dopamine (DA) and uric acid (UA). The linear concentration ranges for AA, DA and UA were 1.76–47.55, 0.57–22.76 and 1.68–28.58 μg mL⁻¹, respectively. PLS1 and PCR were applied to a calibration set designed from the absorption spectra in the 250–320 nm range of 36 different mixtures of AA, DA and UA; in all cases, the PLS1 calibration method showed better quantitative prediction ability than the PCR method. Cross-validation was used to select the optimum number of principal components (NPC). The NPC for AA, DA and UA was found to be 4 by PLS1 and 5, 12, 8 by PCR. The prediction error sum of squares (PRESS) for AA, DA and UA was 1.2461, 1.1144, 2.3104 for PLS1 and 11.0563, 1.3819, 4.0956 for PCR, respectively. Satisfactory results were achieved for the simultaneous determination of AA, DA and UA in some real samples such as human urine, serum and pharmaceutical formulations.
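
    A minimal sketch of the PLS-versus-PCR comparison described above, using scikit-learn on synthetic stand-ins for the 36 mixture spectra. PLSRegression fitted to all three analytes at once stands in for the per-analyte PLS1 models, and PRESS is computed by cross-validation; all numbers are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

# Hypothetical data shapes: 36 mixture spectra x 71 wavelengths (250-320 nm),
# 3 analyte concentrations per mixture (AA, DA, UA). Random numbers stand in
# for the real absorbance matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(36, 71))            # absorbance spectra
Y = rng.uniform(1, 40, size=(36, 3))     # concentrations, ug/mL

pls = PLSRegression(n_components=4)                        # PLS model for all analytes
pcr = make_pipeline(PCA(n_components=5), LinearRegression())

for name, model in [("PLS", pls), ("PCR", pcr)]:
    Y_hat = cross_val_predict(model, X, Y, cv=6)           # cross-validated predictions
    press = np.sum((Y - Y_hat) ** 2, axis=0)               # PRESS per analyte
    print(name, np.round(press, 3))
```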

  1. Determinations of elements in pepperbush standard reference material by neutron activation and X-ray fluorescence analyses

    International Nuclear Information System (INIS)

    Mizumoto, Yoshihiko; Okada, Takayuki; Tatsumi, Toshiya; Kusakabe, Toshio; Katsurayama, Kousuke; Iwata, Shiro.

    1988-01-01

    Elemental contents in Pepperbush standard reference material have been determined by neutron activation and X-ray fluorescence analyses. The standard samples of orchard leaves, tomato leaves, pine needles and Kale are used for the experiment. In the neutron activation analysis, gamma-ray spectra of nuclei produced by (n,γ) reaction on Pepperbush and standard samples are measured with Ge detectors. In the X-ray fluorescence analysis, the samples are excited with X-rays from X-ray tube with rhodium anode, and the characteristic X-rays from samples are measured with a proportional counter or NaI(Tl) detector. From the gamma- and X-ray intensities, the elemental contents in Pepperbush are determined. As a result, the contents of seventeen elements, such as sodium, calcium, iron, etc., in Pepperbush are determined. (author)

  2. Improved intact soil-core carbon determination applying regression shrinkage and variable selection techniques to complete spectrum laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Bricklemyer, Ross S; Brown, David J; Turk, Philip J; Clegg, Sam M

    2013-10-01

    Laser-induced breakdown spectroscopy (LIBS) provides a potential method for rapid, in situ soil C measurement. In previous research on the application of LIBS to intact soil cores, we hypothesized that ultraviolet (UV) spectrum LIBS (200-300 nm) might not provide sufficient elemental information to reliably discriminate between soil organic C (SOC) and inorganic C (IC). In this study, using a custom complete spectrum (245-925 nm) core-scanning LIBS instrument, we analyzed 60 intact soil cores from six wheat fields. Predictive multi-response partial least squares (PLS2) models using full and reduced spectrum LIBS were compared for directly determining soil total C (TC), IC, and SOC. Two regression shrinkage and variable selection approaches, the least absolute shrinkage and selection operator (LASSO) and sparse multivariate regression with covariance estimation (MRCE), were tested for soil C predictions and the identification of wavelengths important for soil C prediction. Using complete spectrum LIBS for PLS2 modeling reduced the calibration standard error of prediction (SEP) 15 and 19% for TC and IC, respectively, compared to UV spectrum LIBS. The LASSO and MRCE approaches provided significantly improved calibration accuracy and reduced SEP 32-55% over UV spectrum PLS2 models. We conclude that (1) complete spectrum LIBS is superior to UV spectrum LIBS for predicting soil C for intact soil cores without pretreatment; (2) LASSO and MRCE approaches provide improved calibration prediction accuracy over PLS2 but require additional testing with increased soil and target analyte diversity; and (3) measurement errors associated with analyzing intact cores (e.g., sample density and surface roughness) require further study and quantification.
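
    A minimal sketch of LASSO-style wavelength selection in the spirit of the study above (the MRCE approach is not shown), using scikit-learn's LassoCV on a synthetic stand-in for the LIBS spectra; array sizes and responses are invented.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the LIBS data: 60 core spectra x 2000 emission-line
# intensities, with soil total C as the response. LassoCV chooses the shrinkage
# penalty by cross-validation and zeroes out uninformative wavelengths, which is
# the spirit of the LASSO variable selection described above.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2000))                 # spectra
beta = np.zeros(2000); beta[[50, 400, 1200]] = [2.0, -1.5, 1.0]
y = X @ beta + 0.2 * rng.normal(size=60)        # total C (synthetic)

model = make_pipeline(StandardScaler(), LassoCV(cv=5, max_iter=50000))
model.fit(X, y)
lasso = model.named_steps["lassocv"]
selected = np.flatnonzero(lasso.coef_)
print("non-zero wavelengths:", selected)        # indices of selected channels
```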

  3. Use of Quantile Regression to Determine the Impact on Total Health Care Costs of Surgical Site Infections Following Common Ambulatory Procedures.

    Science.gov (United States)

    Olsen, Margaret A; Tian, Fang; Wallace, Anna E; Nickel, Katelin B; Warren, David K; Fraser, Victoria J; Selvam, Nandini; Hamilton, Barton H

    2017-02-01

    To determine the impact of surgical site infections (SSIs) on health care costs following common ambulatory surgical procedures throughout the cost distribution. Data on costs of SSIs following ambulatory surgery are sparse, particularly variation beyond just mean costs. We performed a retrospective cohort study of persons undergoing cholecystectomy, breast-conserving surgery, anterior cruciate ligament reconstruction, and hernia repair from December 31, 2004 to December 31, 2010 using commercial insurer claims data. SSIs within 90 days post-procedure were identified; infections during a hospitalization or requiring surgery were considered serious. We used quantile regression, controlling for patient, operative, and postoperative factors to examine the impact of SSIs on 180-day health care costs throughout the cost distribution. The incidence of serious and nonserious SSIs was 0.8% and 0.2%, respectively, after 21,062 anterior cruciate ligament reconstruction procedures, 0.5% and 0.3% after 57,750 cholecystectomies, 0.6% and 0.5% after 60,681 hernia repairs, and 0.8% and 0.8% after 42,489 breast-conserving surgery procedures. Serious SSIs were associated with significantly higher costs than nonserious SSIs for all 4 procedures throughout the cost distribution. The attributable cost of serious SSIs increased for both cholecystectomy and hernia repair as the quantile of total costs increased ($38,410 for cholecystectomy with serious SSI vs no SSI at the 70th percentile of costs, up to $89,371 at the 90th percentile). SSIs, particularly serious infections resulting in hospitalization or surgical treatment, were associated with significantly increased health care costs after 4 common surgical procedures. Quantile regression illustrated the differential effect of serious SSIs on health care costs at the upper end of the cost distribution.
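
    A minimal sketch of the quantile-regression idea above, using statsmodels' QuantReg on a synthetic, right-skewed cost variable and a hypothetical SSI indicator; it shows how an attributable cost can be estimated at several quantiles rather than only at the mean.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative quantile-regression sketch with a synthetic claims-style dataset:
# 180-day cost regressed on an SSI indicator at several quantiles, mirroring the
# idea of estimating the attributable cost of infection across the cost
# distribution rather than only at the mean. Variable names are hypothetical.
rng = np.random.default_rng(3)
n = 5000
ssi = rng.binomial(1, 0.01, size=n)
cost = np.exp(8 + 0.6 * ssi + rng.normal(scale=0.9, size=n))   # right-skewed costs
df = pd.DataFrame({"cost": cost, "ssi": ssi})

for q in (0.5, 0.7, 0.9):
    fit = smf.quantreg("cost ~ ssi", df).fit(q=q)
    print(f"q={q}: attributable cost of SSI ~ {fit.params['ssi']:.0f}")
```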

  4. Analyses of the Short Periodical Part of the Spectrum of Pole Coordinate Variations Determined by the Astrometric and Laser Technique

    Science.gov (United States)

    Kołaczek, B.; Kosek, W.; Galas, R.

    Series of BIH astrometric (BIH-ASTR) pole coordinates and of CSR LAGEOS laser ranging (CSR-LALAR) pole coordinates determined in the MERIT Campaign in the years 1972 - 1986 and 1983 - 1986, respectively, have been filtered by different band-pass filters consisting of the low-pass Gauss filter and the high-pass Butterworth filter. The filtered residuals were analysed by maximum entropy spectral analysis (MESA) and by Ormsby narrow band-pass filters in order to find numerically modelled signals that best approximate these residuals.

  5. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    Science.gov (United States)

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L⁻¹ assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
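
    A minimal sketch of the probit-based 95% LOD calculation described above, assuming made-up spiking data: detection outcomes are regressed on log concentration with a probit link and the fitted curve is inverted at 95% detection probability.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Sketch of a 95% limit-of-detection (LOD95) calculation by probit regression:
# fit detection (1/0) against log10 spiked concentration across whole-process
# replicates, then invert the fitted probit at p = 0.95. Data below are made up.
conc = np.repeat([2.0, 5.0, 10.0, 20.0], 10)            # gc/reaction spiked
detected = np.concatenate([np.r_[np.ones(3), np.zeros(7)],
                           np.r_[np.ones(6), np.zeros(4)],
                           np.r_[np.ones(9), np.zeros(1)],
                           np.ones(10)])

X = sm.add_constant(np.log10(conc))
fit = sm.Probit(detected, X).fit(disp=0)
b0, b1 = fit.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)              # concentration at 95% detection
print(f"LOD95 ~ {lod95:.1f} gc/reaction")
```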

  6. Comparing near-infrared conventional diffuse reflectance spectroscopy and hyperspectral imaging for determination of the bulk properties of solid samples by multivariate regression: determination of Mooney viscosity and plasticity indices of natural rubber.

    Science.gov (United States)

    Juliano da Silva, Carlos; Pasquini, Celio

    2015-01-21

    Conventional reflectance spectroscopy (NIRS) and hyperspectral imaging (HI) in the near-infrared region (1000-2500 nm) are evaluated and compared, using, as the case study, the determination of relevant properties related to the quality of natural rubber. Mooney viscosity (MV) and plasticity indices (PI) (PI0 - original plasticity, PI30 - plasticity after accelerated aging, and PRI - the plasticity retention index after accelerated aging) of rubber were determined using multivariate regression models. Two hundred and eighty six samples of rubber were measured using conventional and hyperspectral near-infrared imaging reflectance instruments in the range of 1000-2500 nm. The sample set was split into regression (n = 191) and external validation (n = 95) sub-sets. Three instruments were employed for data acquisition: a line scanning hyperspectral camera and two conventional FT-NIR spectrometers. Sample heterogeneity was evaluated using hyperspectral images obtained with a resolution of 150 × 150 μm and principal component analysis. The probed sample area (5 cm²; 24,000 pixels) to achieve representativeness was found to be equivalent to the average of 6 spectra for a 1 cm diameter probing circular window of one FT-NIR instrument. The other spectrophotometer can probe the whole sample in only one measurement. The results show that the rubber properties can be determined with very similar accuracy and precision by Partial Least Square (PLS) regression models regardless of whether HI-NIR or conventional FT-NIR produce the spectral datasets. The best Root Mean Square Errors of Prediction (RMSEPs) of external validation for MV, PI0, PI30, and PRI were 4.3, 1.8, 3.4, and 5.3%, respectively. Though the quantitative results provided by the three instruments can be considered equivalent, the hyperspectral imaging instrument presents a number of advantages, being about 6 times faster than conventional bulk spectrometers, producing robust spectral data by ensuring sample

  7. Novel liquid chromatography method based on linear weighted regression for the fast determination of isoprostane isomers in plasma samples using sensitive tandem mass spectrometry detection.

    Science.gov (United States)

    Aszyk, Justyna; Kot, Jacek; Tkachenko, Yurii; Woźniak, Michał; Bogucka-Kocka, Anna; Kot-Wasik, Agata

    2017-04-15

    A simple, fast, sensitive and accurate methodology based on liquid-liquid extraction (LLE) followed by liquid chromatography-tandem mass spectrometry for the simultaneous determination of four regioisomers (8-iso-prostaglandin F2α, 8-iso-15(R)-prostaglandin F2α, 11β-prostaglandin F2α, 15(R)-prostaglandin F2α) in routine analysis of human plasma samples was developed. Isoprostanes are stable products of arachidonic acid peroxidation and are regarded as the most reliable markers of oxidative stress in vivo. Validation of the method was performed by evaluation of key analytical parameters: matrix effect, analytical curve, trueness, precision, limits of detection and limits of quantification. As homoscedasticity was not met for the analytical data, weighted linear regression was applied in order to improve the accuracy at the lower end of the calibration curve. The detection limits (LODs) ranged from 1.0 to 2.1 pg/mL. For plasma samples spiked with the isoprostanes at the level of 50 pg/mL, intra- and inter-day repeatability ranged from 2.1 to 3.5% and 0.1 to 5.1%, respectively. The applicability of the proposed approach has been verified by monitoring isoprostane isomer levels in plasma samples collected from young patients (n=8) subjected to hyperbaric hyperoxia (100% oxygen at 280 kPa(a) for 30 min) in a multiplace hyperbaric chamber. Copyright © 2017 Elsevier B.V. All rights reserved.
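
    A minimal sketch of a weighted linear calibration of the kind described above, assuming a hypothetical calibration series: statsmodels' WLS with 1/x² weights is compared with ordinary least squares to show how weighting protects accuracy at the low end of the curve.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of a weighted-least-squares calibration curve: low-concentration
# calibrators get larger weights (1/x^2 here, a common choice when variance
# grows with concentration), improving accuracy at the lower end of the curve.
# Concentrations and peak-area ratios below are invented.
conc = np.array([5, 10, 25, 50, 100, 250, 500, 1000.0])           # pg/mL
area = np.array([0.9, 2.2, 5.4, 10.8, 22.5, 54.0, 111.0, 225.0])  # peak-area ratio

X = sm.add_constant(conc)
ols = sm.OLS(area, X).fit()                          # unweighted fit
wls = sm.WLS(area, X, weights=1.0 / conc**2).fit()   # 1/x^2 weighted fit

for name, fit in [("OLS", ols), ("WLS 1/x^2", wls)]:
    pred5 = fit.params[0] + fit.params[1] * 5.0
    print(f"{name}: slope={fit.params[1]:.4f}, predicted response at 5 pg/mL={pred5:.3f}")
```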

  8. Pitfalls in the assessment of radioresponse as determined by tumor regression. Consideration based on the location and histologic constitution of tumors

    Energy Technology Data Exchange (ETDEWEB)

    Ohara, Kiyoshi; Shimizu, Wakako; Itai, Yuji [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine

    2000-05-01

    To test the following hypotheses regarding tumor shrinkage after radiotherapy: (1) tumors located on an outer tissue surface, e.g. esophageal tumors, shrink faster than parenchymal tumors, e.g. lymph-node metastases, because two clearance mechanisms, exfoliation and absorption, can operate in the former type of tumor whereas only absorption can function in the latter; (2) tumors that are being controlled do not necessarily respond completely, because tumors are constituted not only of tumor cells but also of stromal tissues that are difficult to absorb. Long-term shrinkage patterns of a parenchymal tumor were determined using 18 curatively irradiated hepatomas. Preoperatively irradiated thymomas (10) and lymph-node metastases (37) from head and neck cancers were examined histopathologically. Twenty-one esophageal cancers were used for an intra-patient comparison of response between the primary disease and the lymph-node metastases. Shrinkage patterns were generally biphasic: rapid exponential regression followed by a plateau phase. Histologically, thymomas generally consisted of predominant fibrous tissues and few remaining tumor cells. Radioresponse did not predict the presence of remaining cancer cells in the lymph nodes. Esophageal-cancer radioresponse was always higher for the primary disease than for the lymph-node metastases. The location and histologic constitution of tumors must be taken into account in predicting radiocurability using radioresponse. (author)

  9. Determining the Relationship between U.S. County-Level Adult Obesity Rate and Multiple Risk Factors by PLS Regression and SVM Modeling Approaches

    Directory of Open Access Journals (Sweden)

    Chau-Kuang Chen

    2015-02-01

    Full Text Available Data from the Centers for Disease Control and Prevention (CDC) have shown that the obesity rate doubled among adults within the past two decades. This upsurge was the result of changes in human behavior and environment. Partial least squares (PLS) regression and support vector machine (SVM) models were used to determine the relationship between the U.S. county-level adult obesity rate and multiple risk factors. The outcome variable was the adult obesity rate. The 23 risk factors were categorized into four domains of the social ecological model: biological/behavioral factors, socioeconomic status, food environment, and physical environment. Of the 23 risk factors related to adult obesity, the top eight significant risk factors with high normalized importance were identified, including physical inactivity, natural amenity, percent of households receiving SNAP benefits, and percent of all restaurants being fast food. The study results were consistent with those in the literature. The study showed that the adult obesity rate was influenced by biological/behavioral factors, socioeconomic status, food environment, and physical environment embedded in the social ecological theory. Analyzing multiple risk factors for obesity in communities may lead to more comprehensive and integrated policies and intervention programs to address this population-based problem.

  10. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

    Science.gov (United States)

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-01

    Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, effects such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing with a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, an RMSEP of 1.59 mg kg⁻¹, a REP of 7.69% and an RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided and neither chemical reagents nor solvents are used, in line with the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis.
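
    A minimal sketch of the preprocessing reported above (first-derivative Savitzky-Golay smoothing, 19-point window, second-order polynomial) followed by a plain PLS model; the spectra and fat values are synthetic placeholders and the iSPA interval-selection step is not reproduced.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Sketch of Savitzky-Golay first-derivative preprocessing of NIR spectra
# (window = 19 points, 2nd-order polynomial) before PLS regression.
# Spectra and fat values here are synthetic placeholders.
rng = np.random.default_rng(4)
X = rng.normal(size=(70, 500))                      # 70 NIR spectra
y = rng.uniform(14, 32, size=70)                    # fat content

X_d1 = savgol_filter(X, window_length=19, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=6)
rmsecv = np.sqrt(-cross_val_score(pls, X_d1, y, cv=7,
                                  scoring="neg_mean_squared_error").mean())
print(f"RMSECV on synthetic data: {rmsecv:.2f}")
```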

  11. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Applying of factor analyses for determination of trace elements distribution in water from Vardar and its tributaries, Macedonia/Greece.

    Science.gov (United States)

    Popov, Stanko Ilić; Stafilov, Trajče; Sajn, Robert; Tănăselia, Claudiu; Bačeva, Katerina

    2014-01-01

    A systematic study was carried out to investigate the distribution of fifty-six elements in water samples from the river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma atomic emission spectrometry (ICP-AES). Cluster analysis and R-mode factor analysis (FA) were used to identify and characterise element associations, and four associations of elements were determined by multivariate statistics. Three of the factors represent associations of elements that occur naturally in the river water, while Factor 3 represents an anthropogenic association of the elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced into the river waters by waste waters from the mining and metallurgical activities in the country.

  13. Passive Suicide Ideation Among Older Adults in Europe: A Multilevel Regression Analysis of Individual and Societal Determinants in 12 Countries (SHARE).

    Science.gov (United States)

    Stolz, Erwin; Fux, Beat; Mayerl, Hannes; Rásky, Éva; Freidl, Wolfgang

    2016-09-01

    Passive suicide ideation (PSI) is common among older adults, but prevalences have been reported to vary considerably across European countries. The goal of this study was to assess the role of individual-level risk factors and societal contextual factors associated with PSI in old age. We analyzed longitudinal data from the Survey of Health, Ageing, and Retirement in Europe (SHARE) on 6,791 community-dwelling respondents (75+) from 12 countries. Bayesian logistic multilevel regression models were used to assess variance components, individual-level and country-level risk factors. About 4% of the total variance of PSI was located at the country level, a third of which was attributable to compositional effects of individual-level predictors. Predictors for the development of PSI at the individual level were female gender, depression, older age, poor health, smaller social network size, loneliness, nonreligiosity, and low perceived control (R² = 25.8%). At the country level, cultural acceptance of suicide, religiosity, and intergenerational cohabitation were associated with the rates of PSI. Cross-national variation in old-age PSI is mostly attributable to individual-level determinants and compositional differences, but there is also evidence for contextual effects of country-level characteristics. Suicide prevention programs should be intensified in high-risk countries and attitudes toward suicide should be addressed in information campaigns. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
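
    The statement that about 4% of the variance lies at the country level corresponds to the variance partition coefficient (intraclass correlation) of a multilevel logistic model. A minimal sketch follows, assuming the latent-variable approximation with individual-level residual variance fixed at π²/3; the country-level variance used here is invented for illustration, not the SHARE estimate.

```python
import math

# Variance partition coefficient (intraclass correlation) for a multilevel
# logistic model, using the latent-variable approximation in which the
# individual-level residual variance is fixed at pi^2/3. The country-level
# variance below is a made-up illustration, not the SHARE estimate.
sigma_u2 = 0.14                      # hypothetical country-level random-intercept variance
vpc = sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)
print(f"share of variance at the country level: {vpc:.1%}")   # ~4%
```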

  14. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method.

    Science.gov (United States)

    Grant, D A; Dunseath, G J; Churm, R; Luzio, S D

    2017-08-01

    As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory-based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a BioRad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at -20 °C for 14 and 28 days post collection. Median (IQR) HbA1c was 60 (44.0-71.2) mmol/mol (7.6 (6.17-8.66) %) and 62 (45.0-69.0) mmol/mol (7.8 (6.27-8.46) %) for D10 and Quo-Test, respectively, with very good agreement (R² = 0.969, P < […]) […] for diagnosis of glucose intolerance (IGT and T2DM) and 100% for diagnosis of T2DM. Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory-based HPLC method.

  15. Determining Effects of Genes, Environment, and Gene X Environment Interaction That Are Common to Breast and Ovarian Cancers Via Bivariate Logistic Regression

    National Research Council Canada - National Science Library

    Ramakrishnan, Viswanathan

    2003-01-01

    .... A generalized estimating equations (GEE) logistic regression model was used for the modeling. A shared trait is defined for two discrete traits based upon explicit patterns of trait concordance and discordance within twin pairs...

  16. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    Misiones, Argentina, contains the largest remaining tract of Upper Paraná Atlantic Forest ecoregion; however, ~50% of native forest is unprotected and located in a mosaic of plantations, agriculture, and pastures. Existing protected areas are becoming increasingly isolated due to ongoing habitat modification. These factors, combined with lower than expected regional carnivore densities, emphasize the need to understand the effect of fragmentation on animal movement and connectivity between protected areas. Using detection dogs and genetic analyses of scat, we collected data on jaguars (Panthera onca), pumas (Puma concolor), ocelots (Leopardus pardalis), oncillas (Leopardus tigrinus), and bush dogs (Speothos venaticus) across habitats that varied in vegetation, disturbance, human proximity, and protective status. With MaxEnt we evaluated habitat use, habitat suitability, and potential species richness for the five carnivores across northern-central Misiones, Argentina. Through a multifaceted cost analysis that included unique requirements of each carnivore and varying degrees of overlap among them, we determined the optimal location for primary/secondary corridors that would link the northern-central zones of the Green Corridor in Misiones and identified areas within these corridors needing priority management. A secondary analysis, comparing these multispecies corridors with the jaguar’s unique requirements, demonstrated that this multispecies approach balanced the preferences of all five species and effectively captured areas required by this highly restricted and endangered carnivore. We emphasize the potential importance of expanding beyond a single umbrella or focal species when developing biological corridors that aim to capture the varied ecological requirements of coexisting species and ecological processes across the landscape. Detection dogs and genetic analyses of scat allow data on multiple species to be collected efficiently across multiple habitat

  17. Determinants of the over-anticoagulation response during warfarin initiation therapy in Asian patients based on population pharmacokinetic-pharmacodynamic analyses.

    Science.gov (United States)

    Ohara, Minami; Takahashi, Harumi; Lee, Ming Ta Michael; Wen, Ming-Shien; Lee, Tsong-Hai; Chuang, Hui-Ping; Luo, Chen-Hui; Arima, Aki; Onozuka, Akiko; Nagai, Rui; Shiomi, Mari; Mihara, Kiyoshi; Morita, Takashi; Chen, Yuan-Tsong

    2014-01-01

    To clarify pharmacokinetic-pharmacodynamic (PK-PD) factors associated with the over-anticoagulation response in Asians during warfarin induction therapy, population PK-PD analyses were conducted in an attempt to predict the time-courses of the plasma S-warfarin concentration, Cp(S), and coagulation and anti-coagulation (INR) responses. In 99 Chinese patients we analyzed the relationships between dose and Cp(S) to estimate the clearance of S-warfarin, CL(S), and that between Cp(S) and the normal prothrombin concentration (NPT) as a coagulation marker for estimation of IC50. We also analyzed the non-linear relationship between NPT inhibition and the increase in INR to derive the non-linear index λ. Population analyses accurately predicted the time-courses of Cp(S), NPT and INR. Multivariate analysis showed that CYP2C9*3 mutation and body surface area were predictors of CL(S), that VKORC1 and CYP4F2 polymorphisms were predictors of IC50, and that baseline NPT was a predictor of λ. CL(S) and λ were significantly lower in patients with INR≥4 than in those with INR<4 (190 mL/h vs 265 mL/h, P<0.01 and 3.2 vs 3.7, P<0.01, respectively). Finally, logistic regression analysis revealed that CL(S), ALT and hypertension contributed significantly to INR≥4. All these results indicate that factors associated with the reduced metabolic activity of warfarin represented by CL(S), might be critical determinants of the over-anticoagulation response during warfarin initiation in Asians. ClinicalTrials.gov NCT02065388.

  18. Determinants of the over-anticoagulation response during warfarin initiation therapy in Asian patients based on population pharmacokinetic-pharmacodynamic analyses.

    Directory of Open Access Journals (Sweden)

    Minami Ohara

    Full Text Available To clarify pharmacokinetic-pharmacodynamic (PK-PD) factors associated with the over-anticoagulation response in Asians during warfarin induction therapy, population PK-PD analyses were conducted in an attempt to predict the time-courses of the plasma S-warfarin concentration, Cp(S), and coagulation and anti-coagulation (INR) responses. In 99 Chinese patients we analyzed the relationships between dose and Cp(S) to estimate the clearance of S-warfarin, CL(S), and that between Cp(S) and the normal prothrombin concentration (NPT) as a coagulation marker for estimation of IC50. We also analyzed the non-linear relationship between NPT inhibition and the increase in INR to derive the non-linear index λ. Population analyses accurately predicted the time-courses of Cp(S), NPT and INR. Multivariate analysis showed that CYP2C9*3 mutation and body surface area were predictors of CL(S), that VKORC1 and CYP4F2 polymorphisms were predictors of IC50, and that baseline NPT was a predictor of λ. CL(S) and λ were significantly lower in patients with INR≥4 than in those with INR<4 (190 mL/h vs 265 mL/h, P<0.01 and 3.2 vs 3.7, P<0.01, respectively). Finally, logistic regression analysis revealed that CL(S), ALT and hypertension contributed significantly to INR≥4. All these results indicate that factors associated with the reduced metabolic activity of warfarin, represented by CL(S), might be critical determinants of the over-anticoagulation response during warfarin initiation in Asians. ClinicalTrials.gov NCT02065388.

  19. Determinants of LSIL Regression in Women from a Colombian Cohort; Determinantes de la regresion de lesiones cervicales de bajo grado en una cohorte de mujeres colombianas.

    Energy Technology Data Exchange (ETDEWEB)

    Molano, Monica; Gonzalez, Mauricio; Gamboa, Oscar; Ortiz, Natasha; Luna, Joaquin; Hernandez, Gustavo; Posso, Hector; Murillo, Raul; Munoz, Nubia

    2010-07-01

    Objective: To analyze the role of Human Papillomavirus (HPV) and other risk factors in the regression of cervical lesions in women from the Bogota Cohort. Methods: 200 HPV positive women with abnormal cytology were included for regression analysis. The time of lesion regression was modeled using methods for interval censored survival time data. Median duration of total follow-up was 9 years. Results: 80 (40%) women were diagnosed with Atypical Squamous Cells of Undetermined Significance (ASCUS) or Atypical Glandular Cells of Undetermined Significance (AGUS) while 120 (60%) were diagnosed with Low Grade Squamous Intra-epithelial Lesions (LSIL). Globally, 40% of the lesions were still present at the first year of follow-up, while 1.5% were still present at the 5-year check-up. The multivariate model showed similar regression rates for lesions in women with ASCUS/AGUS and women with LSIL (HR = 0.82, 95% CI 0.59-1.12). Women infected with HR HPV types and those with mixed infections had lower regression rates for lesions than did women infected with LR types (HR = 0.526, 95% CI 0.33-0.84, for HR types and HR = 0.378, 95% CI 0.20-0.69, for mixed infections). Furthermore, women over 30 years had a higher lesion regression rate than did women under 30 years (HR = 1.53, 95% CI 1.03-2.27). The study showed that the median time for lesion regression was 9 months while the median time for HPV clearance was 12 months. Conclusions: In the studied population, the type of infection and the age of the women are critical factors for the regression of cervical lesions.

  20. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  1. Prevalence and Social Determinants of Smoking in 15 Countries from North Africa, Central and Western Asia, Latin America and Caribbean: Secondary Data Analyses of Demographic and Health Surveys.

    Science.gov (United States)

    Sreeramareddy, Chandrashekhar T; Pradhan, Pranil Man Singh

    2015-01-01

    Article 20 of the World Health Organisation Framework Convention on Tobacco Control calls for a cross-country surveillance of tobacco use through population-based surveys. We aimed to provide country-level prevalence estimates for current smoking and current smokeless tobacco use and to assess social determinants of smoking. Data from Demographic and Health Surveys done between 2005 and 2012, among men and women from nine North African, Central and West Asian countries and six Latin American and Caribbean countries were analyzed. Weighted country-level prevalence rates were estimated for 'current smoking' and 'current use of smokeless tobacco (SLT) products' among men and women. In each country, social determinants of smoking among men and women were assessed by binary logistic regression analyses by including men's and women's sampling weights to account for the complex survey design. Prevalence of smoking among men was higher than 40% in Armenia (63.1%), Moldova (51.1%), Ukraine (52%), Azerbaijan (49.8 %), Kyrgyz Republic (44.3 %) and Albania (42.52%) but the prevalence of smoking among women was less than 10% in most countries except Ukraine (14.81%) and Jordan (17.96%). The prevalence of smokeless tobacco use among men and women was less than 5% in all countries except among men in the Kyrgyz Republic (10.6 %). Smoking was associated with older age, lower education and poverty among men and higher education and higher wealth among women. Smoking among both men and women was associated with unskilled work, living in urban areas and being single. Smoking among men was very high in Central and West Asian countries. Social pattern of smoking among women that was different from men in education and wealth should be considered while formulating tobacco control policies in some Central and West Asian countries.
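
    As a rough illustration of the weighted logistic regression described above, the sketch below fits a smoking indicator on a few social determinants with sampling weights passed as sample weights; it is a simplified stand-in, not the DHS analysis itself (no stratum or cluster adjustment of standard errors), and all variable names and data are invented.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Rough sketch of a weighted logistic regression of current smoking on a few
# social determinants, with survey sampling weights passed as sample weights.
# All variables and data below are synthetic.
rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(15, 60, size=n),
    "urban": rng.binomial(1, 0.5, size=n),
    "educ_high": rng.binomial(1, 0.4, size=n),
    "weight": rng.uniform(0.5, 2.0, size=n),      # sampling weights
})
logit_p = -1.5 + 0.02 * df["age"] + 0.4 * df["urban"] - 0.5 * df["educ_high"]
df["smoker"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = df[["age", "urban", "educ_high"]].to_numpy()
model = LogisticRegression(C=1e6, max_iter=1000)       # near-unpenalised fit
model.fit(X, df["smoker"], sample_weight=df["weight"])
odds_ratios = np.round(np.exp(model.coef_[0]), 2)
print(dict(zip(["age", "urban", "educ_high"], odds_ratios)))
```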

  2. Prevalence and Social Determinants of Smoking in 15 Countries from North Africa, Central and Western Asia, Latin America and Caribbean: Secondary Data Analyses of Demographic and Health Surveys.

    Directory of Open Access Journals (Sweden)

    Chandrashekhar T Sreeramareddy

    Full Text Available Article 20 of the World Health Organisation Framework Convention on Tobacco Control calls for a cross-country surveillance of tobacco use through population-based surveys. We aimed to provide country-level prevalence estimates for current smoking and current smokeless tobacco use and to assess social determinants of smoking. Data from Demographic and Health Surveys done between 2005 and 2012, among men and women from nine North African, Central and West Asian countries and six Latin American and Caribbean countries were analyzed. Weighted country-level prevalence rates were estimated for 'current smoking' and 'current use of smokeless tobacco (SLT) products' among men and women. In each country, social determinants of smoking among men and women were assessed by binary logistic regression analyses by including men's and women's sampling weights to account for the complex survey design. Prevalence of smoking among men was higher than 40% in Armenia (63.1%), Moldova (51.1%), Ukraine (52%), Azerbaijan (49.8%), Kyrgyz Republic (44.3%) and Albania (42.52%) but the prevalence of smoking among women was less than 10% in most countries except Ukraine (14.81%) and Jordan (17.96%). The prevalence of smokeless tobacco use among men and women was less than 5% in all countries except among men in the Kyrgyz Republic (10.6%). Smoking was associated with older age, lower education and poverty among men and higher education and higher wealth among women. Smoking among both men and women was associated with unskilled work, living in urban areas and being single. Smoking among men was very high in Central and West Asian countries. Social pattern of smoking among women that was different from men in education and wealth should be considered while formulating tobacco control policies in some Central and West Asian countries.

  3. Aplicación de modelos de regresión lineal para determinar las armónicas de tensión y corriente; Application of linear regression models to determine the current and voltage harmonics

    Directory of Open Access Journals (Sweden)

    Juan Miguel Astorga Gómez

    2015-04-01

    Full Text Available This article evaluates the individual voltage harmonics as a function of the individual current harmonics using simple linear regression, polynomial regression and multiple linear regression. For model selection, the coefficient of determination R2 and the Akaike information criterion (AIC) are used. The electrical system of a mining process located in the Atacama region (Copiapó, Chile), which uses copper electrowinning as the main part of its production process, serves as the case study. The results for the different statistical models are shown and compared, and their usefulness for the power quality study is discussed. Finally, using the model that best fits the measurements of voltage and current harmonics, some predictions for the dominant voltage harmonic component are shown.
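
    The model-selection step described above, comparing simple linear, polynomial and multiple linear fits by R2 and AIC, can be sketched as follows; the harmonic measurements below are synthetic placeholders, not the data from the Chilean plant.

        # Compare simple linear, polynomial and multiple linear regressions
        # of a voltage harmonic on current harmonics by R2 and AIC.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        i5 = rng.uniform(1, 10, 200)   # 5th current harmonic (arbitrary units)
        i7 = rng.uniform(1, 8, 200)    # 7th current harmonic
        v5 = 0.4 * i5 + 0.05 * i5**2 + 0.2 * i7 + rng.normal(0, 0.3, 200)

        def fit(X):
            return sm.OLS(v5, sm.add_constant(X)).fit()

        models = {
            "simple": fit(i5),
            "polynomial": fit(np.column_stack([i5, i5**2])),
            "multiple": fit(np.column_stack([i5, i7])),
        }
        for name, m in models.items():
            print(f"{name:10s}  R2={m.rsquared:.3f}  AIC={m.aic:.1f}")
        # Higher R2 and lower AIC point to the preferred model, as in the study above.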

  4. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.

  5. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    Directory of Open Access Journals (Sweden)

    D.A. Grant

    2017-08-01

    Full Text Available Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a BioRad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at −20 °C for 14 and 28 days post collection. Results: Median (IQR) HbA1c was 60 (44.0–71.2) mmol/mol (7.6 (6.17–8.66) %) and 62 (45.0–69.0) mmol/mol (7.8 (6.27–8.46) %) for D10 and Quo-Test, respectively, with very good agreement (R2=0.969, P<0.0001). Mean (range) intra- and inter-assay variation was 1.2% (0.0–2.7%) and 1.6% (0.0–2.7%) for the D10 and 3.5% (0.0–6.7%) and 2.7% (0.7–5.1%) for the Quo-Test. Mean change in HbA1c after 28 days storage at −20 °C was −0.7% and +0.3% for D10 and Quo-Test respectively. Compared to the D10, Quo-Test showed 98% agreement for diagnosis of glucose intolerance (IGT and T2DM) and 100% for diagnosis of T2DM. Conclusion: Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory based HPLC method. Keywords: Point of care testing, HbA1c measurement

  6. Determination of S₁₇ from systematic analyses on ⁸B Coulomb breakup with the Eikonal-CDCC method

    International Nuclear Information System (INIS)

    Ogata, K.; Matsumoto, T.; Yamashita, N.; Kamimura, M.; Yahiro, M.; Iseri, Y.

    2003-01-01

    Systematic analysis of ⁸B Coulomb dissociation with the Asymptotic Normalization Coefficient (ANC) method is proposed to determine the astrophysical factor S₁₇(0) accurately. An important advantage of the analysis is that uncertainties of the extracted S₁₇(0) coming from the use of the ANC method can quantitatively be evaluated, in contrast to previous analyses using the Virtual Photon Theory (VPT). Calculation of measured spectra in dissociation experiments is done by means of the method of Continuum-Discretized Coupled-Channels (CDCC). From the analysis of ⁵⁸Ni(⁸B, ⁷Be+p)⁵⁸Ni at 25.8 MeV, S₁₇(0) = 22.83 ± 0.51(theo) ± 2.28(expt) eV b is obtained; the ANC method turned out to work in this case within 1% of error. Preceding systematic analysis of experimental data at intermediate energies, we propose a hybrid (HY) Coupled-Channels (CC) calculation of ⁸B Coulomb dissociation, which makes the numerical calculation much simpler while retaining its accuracy. The validity of the HY calculation is tested for ⁵⁸Ni(⁸B, ⁷Be+p)⁵⁸Ni at 240 MeV. The ANC method combined with the HY CC calculation is shown to be a powerful technique to obtain a reliable S₁₇(0).

  7. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
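
    As a concrete illustration of the classification idea described above, the sketch below fits a binary logistic regression to simulated data; the two predictors and the pass/fail group membership are invented for illustration only.

        # Binary logistic regression: classify individuals into two groups from
        # a combination of independent variables (simulated data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(2)
        n = 500
        study_hours = rng.normal(5, 2, n)
        prior_score = rng.normal(60, 10, n)
        # Hypothetical dichotomous outcome driven by both predictors
        p = 1 / (1 + np.exp(-(-8 + 0.6 * study_hours + 0.08 * prior_score)))
        passed = rng.binomial(1, p)

        X = np.column_stack([study_hours, prior_score])
        X_tr, X_te, y_tr, y_te = train_test_split(X, passed, random_state=0)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print("coefficients:", clf.coef_, "intercept:", clf.intercept_)
        print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))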

  8. Determination of the sanguine iodine content by activation analysis (1962); Determination de l'iode sanguin par analyse d'activation (1962)

    Energy Technology Data Exchange (ETDEWEB)

    Kellershohn, C; Comar, D; Le Poec, C [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-07-01

    Two methods of measuring the iodine content of the blood by activation analysis after 30 min irradiation in a flux of 6×10¹² n/cm²·s are described. One method includes a chemical separation of iodine before irradiation. The radioactivity of this specimen is determined on the basis of the amplitude of the iodine-128 photoelectric peak measured by γ-spectrometry. The sensitivity of this method is about 10⁻⁸ g. The other includes chemical separation after irradiation, and radioactivity is subsequently measured with a Geiger-Muller counter. The sensitivity of this second method is about 10⁻¹⁰ g, sufficient for determination of the plasmatic mineral content. The advantages of activation analysis over the conventional method based on colorimetric measurement of the catalytic action of iodine in the oxidation of arsenious anhydride with ceric sulphate are discussed. Initial results obtained with this new form of analysis are detailed, as are the possibilities of measuring plasmatic proteic iodine by scintillation spectrometry without prior chemical separation. (authors)

  9. Logistic Regression Analysis of Operational Errors and Routine Operations Using Sector Characteristics

    National Research Council Canada - National Science Library

    Pfleiderer, Elaine M; Scroggins, Cheryl L; Manning, Carol A

    2009-01-01

    Two separate logistic regression analyses were conducted for low- and high-altitude sectors to determine whether a set of dynamic sector characteristics variables could reliably discriminate between operational error (OE...

  10. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    Science.gov (United States)

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-03-26

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
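
    The two robust fits discussed above can be sketched on a toy calibration set with one gross outlier: IRLS via statsmodels' RLM with Huber weights, and a crude least-median-of-squares (LMS) estimate obtained by searching candidate lines through pairs of points. The concentrations and fluorescence signals below are invented, and the pairwise LMS search is only an approximation of the exact estimator.

        # OLS vs IRLS vs approximate LMS on a toy calibration line with an outlier.
        import itertools
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        conc = np.linspace(1, 10, 12)                         # e.g. µg/mL
        signal = 2.0 * conc + 1.0 + rng.normal(0, 0.1, 12)    # response
        signal[4] += 3.0                                      # one gross outlier

        X = sm.add_constant(conc)
        ols = sm.OLS(signal, X).fit()
        irls = sm.RLM(signal, X, M=sm.robust.norms.HuberT()).fit()  # iteratively re-weighted LS

        def lms_fit(x, y):
            """Approximate LMS: best line through any pair of points by median squared residual."""
            best = None
            for i, j in itertools.combinations(range(len(x)), 2):
                slope = (y[j] - y[i]) / (x[j] - x[i])
                intercept = y[i] - slope * x[i]
                med = np.median((y - (intercept + slope * x)) ** 2)
                if best is None or med < best[0]:
                    best = (med, intercept, slope)
            return best[1], best[2]

        lms_intercept, lms_slope = lms_fit(conc, signal)
        print("OLS :", ols.params)            # pulled toward the outlier
        print("IRLS:", irls.params)           # down-weights the outlier
        print("LMS :", lms_intercept, lms_slope)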

  11. Echinococcus oligarthrus in the subtropical region of Argentina: First integration of morphological and molecular analyses determines two distinct populations.

    Science.gov (United States)

    Arrabal, Juan Pablo; Avila, Hector Gabriel; Rivero, Maria Romina; Camicia, Federico; Salas, Martin Miguel; Costa, Sebastián A; Nocera, Carlos G; Rosenzvit, Mara C; Kamenetzky, Laura

    2017-06-15

    Echinococcosis is a parasitic zoonosis that is considered as a neglected disease by the World Health Organization. The species Echinococcus oligarthrus is one of the causative agents of Neotropical echinococcosis, which is a poorly understood disease that requires a complex medical examination, may threaten human life, and is frequently associated with a low socioeconomic status. Morphological and genetic diversity in E. oligarthrus remains unknown. The aim of this work is to identify and characterize E. oligarthrus infections in sylvatic animals from the Upper Paraná Atlantic Forest in the province of Misiones, Argentina, by following an integrative approach that links morphological, genetic and ecological aspects. This study demonstrates, for the first time, one of the complete life cycles of E. oligarthrus in an important ecoregion. The Upper Paraná Atlantic Forest constitutes the largest remnant continuous forest of the Atlantic Forest, representing 7% of the world's biodiversity. This is the first molecular determination of E. oligarthrus in Argentina. In addition, the agouti (Dasyprocta azarae), the ocelot (Leopardus pardalis) and the puma (Puma concolor) were identified as sylvatic hosts of Neotropical echinococcosis caused by E. oligarthrus. Mitochondrial and nuclear molecular marker analyses showed a high genetic diversity in E. oligarthrus. Moreover, the genetic distance found among E. oligarthrus isolates is higher than the one observed among Echinococcus granulosus genotypes, which clearly indicates that there are at least two different E. oligarthrus populations in Argentina. This study provides valuable information to understand the underlying conditions that favour the maintenance of E. oligarthrus in sylvatic cycles and to evaluate its zoonotic significance for devising preventive measures for human and animal wellbeing. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  13. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a categorical scale.

  14. Equações de regressão para estimar valores energéticos do grão de trigo e seus subprodutos para frangos de corte, a partir de análises químicas Regression equations to evaluate the energy values of wheat grain and its by-products for broiler chickens from chemical analyses

    Directory of Open Access Journals (Sweden)

    F.M.O. Borges

    2003-12-01

    One experiment was run with broiler chickens to obtain prediction equations for metabolizable energy (ME) based on feedstuff chemical analyses, and to determine the ME of wheat grain and its by-products using four different methodologies. Seven wheat grain by-products were used in five treatments: wheat grain, wheat germ, white wheat flour, dark wheat flour, wheat bran for human use, wheat bran for animal use and rough wheat bran. Based on chemical analyses of crude fiber (CF), ether extract (EE), crude protein (CP), ash (AS) and starch (ST) of the feeds and the determined values of apparent energy (MEA), true energy (MEV), apparent energy corrected by nitrogen balance (MEAn) and true energy corrected by nitrogen balance (MEVn) in the five treatments, prediction equations were obtained using the stepwise procedure. CF showed the best relationship with metabolizable energy values; however, this variable alone was not enough for a good estimate of the energy values (R² below 0.80). When EE and CP were included in the equations, R² increased to 0.90 or higher in most estimates. When the equations were calculated with all treatments, the equations for MEA were less precise and R² decreased. When ME data of the traditional or force-feeding methods were used separately, the precision of the equations increased (R² higher than 0.85). For MEV and MEVn values, the best multiple linear equations included CF, EE and CP (R²>0.90), independently of using all experimental data or separating by methodology. The estimates of MEVn values showed high precision and the linear coefficients (a) of the equations were similar for all treatments or methodologies, which indicates the small influence of the different methodologies on this parameter. NDF was not a better predictor of ME than CF.

  15. Linear support vector regression and partial least squares chemometric models for determination of Hydrochlorothiazide and Benazepril hydrochloride in presence of related impurities: A comparative study

    Science.gov (United States)

    Naguib, Ibrahim A.; Abdelaleem, Eglal A.; Draz, Mohammed E.; Zaazaa, Hala E.

    2014-09-01

    Partial least squares regression (PLSR) and support vector regression (SVR) are two popular chemometric models that are being subjected to a comparative study in the presented work. The comparison shows their characteristics via applying them to analyze Hydrochlorothiazide (HCZ) and Benazepril hydrochloride (BZ) in presence of HCZ impurities; Chlorothiazide (CT) and Salamide (DSA) as a case study. The analysis results prove to be valid for analysis of the two active ingredients in raw materials and pharmaceutical dosage form through handling UV spectral data in the range 220-350 nm. For proper analysis, a 4-factor, 4-level experimental design was established, resulting in a training set consisting of 16 mixtures containing different ratios of interfering species. An independent test set consisting of 8 mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze HCZ and BZ in presence of the HCZ impurities CT and DSA with high selectivity and accuracy, with mean percentage recoveries of (101.01 ± 0.80) and (100.01 ± 0.87) for HCZ and BZ, respectively, using the PLSR model and of (99.78 ± 0.80) and (99.85 ± 1.08) for HCZ and BZ, respectively, using the SVR model. The analysis results of the dosage form were statistically compared to the reference HPLC method with no significant differences regarding accuracy and precision. The SVR model gives more accurate results compared to the PLSR model and shows high generalization ability; however, PLSR still keeps the advantage of being fast to optimize and implement.
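
    A minimal sketch of the PLSR-versus-SVR comparison described above is given below, using synthetic "spectra" in place of the real UV data; the wavelength grid, band shape and concentrations are placeholders, not the published calibration set.

        # Compare PLS regression and support vector regression on simulated spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(4)
        wavelengths = np.linspace(220, 350, 131)

        def spectrum(c):
            # toy Gaussian absorption band whose height scales with concentration
            return c[:, None] * np.exp(-((wavelengths - 270) ** 2) / 400)

        conc = rng.uniform(5, 25, 60)                           # hypothetical analyte levels
        X = spectrum(conc) + rng.normal(0, 0.01, (60, 131))     # noisy mixture spectra
        X_tr, X_te, y_tr, y_te = train_test_split(X, conc, random_state=0)

        pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
        svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

        print("PLSR RMSE:", mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5)
        print("SVR  RMSE:", mean_squared_error(y_te, svr.predict(X_te)) ** 0.5)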

  16. Fit-To-Fight: Waist vs Waist/Height Measurements to Determine an Individual's Fitness Level - A Study in Statistical Regression and Analysis

    National Research Council Canada - National Science Library

    Swiderski, Steven J

    2005-01-01

    .... The abdominal measurement is a "one-size-fits-all" fitness standard. This research determines that a person's waist-to-height ratio is a better measurement than the waist measurement to estimate an individual's fitness level...

  17. Determining clinical benefits of drug-eluting coronary stents according to the population risk profile: a meta-regression from 31 randomized trials.

    Science.gov (United States)

    Moreno, Raul; Martin-Reyes, Roberto; Jimenez-Valero, Santiago; Sanchez-Recalde, Angel; Galeote, Guillermo; Calvo, Luis; Plaza, Ignacio; Lopez-Sendon, Jose-Luis

    2011-04-01

    The use of drug-eluting stents (DES) in unfavourable patients has been associated with higher rates of clinical complications and stent thrombosis, and because of that concerns about the use of DES in high-risk settings have been raised. This study sought to demonstrate that the clinical benefit of DES increases as the risk profile of the patients increases. A meta-regression analysis from 31 randomized trials that compared DES and bare-metal stents, including overall 12,035 patients, was performed. The relationship between the clinical benefit of using DES (number of patients to treat [NNT] to prevent one episode of target lesion revascularization [TLR]), and the risk profile of the population (rate of TLR in patients allocated to bare-metal stents) in each trial was evaluated. The clinical benefit of DES increased as the risk profile of each study population increased (NNT for TLR = 31.1 − 1.2 × TLR rate with bare-metal stents). The safety of DES did not depend on the risk profile of each study population, since the effect of DES on mortality, myocardial infarction, and stent thrombosis was not adversely affected by the risk profile (95% confidence interval for the β value 0.09 to 0.11, −0.12 to 0.19, and −0.03 to −0.15 for mortality, myocardial infarction, and stent thrombosis, respectively). The clinical benefit of DES increases as the risk profile of the patients increases, without affecting safety. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  18. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators; Modelos de regresion en la determinacion de la dosis absorbida con camara de extrapolacion para aplicadores oftalmologicos

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez R, J T; Morales P, R

    1992-06-15

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmologic applicators (⁹⁰Sr/⁹⁰Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode separation. When the slope of the extrapolation curve is estimated using a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimates obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, it is recommended to replace the linear regression model with the quadratic regression model when determining the slope of the extrapolation curve, in order to obtain more exact and accurate measurements of the absorbed dose. (Author)
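
    The point made above, that a first-degree fit can bias the extrapolated slope when the curve has curvature, can be illustrated with a toy calculation; the "measured" ionization-current data below are simulated with a small quadratic component and are not the published applicator measurements.

        # Linear vs quadratic fit of a simulated extrapolation curve.
        import numpy as np

        rng = np.random.default_rng(5)
        gap = np.linspace(0.5, 3.0, 10)                                # electrode spacing (mm)
        current = 4.0 * gap + 0.6 * gap**2 + rng.normal(0, 0.05, 10)   # arbitrary units

        lin = np.polynomial.Polynomial.fit(gap, current, deg=1).convert()
        quad = np.polynomial.Polynomial.fit(gap, current, deg=2).convert()

        # The quantity of interest is the slope of the curve as the gap tends to zero
        print("linear model slope   :", lin.coef[1])
        print("quadratic model slope:", quad.coef[1])   # closer to the true value 4.0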

  19. Incorporation of star measurements for the determination of orbit and attitude parameters of a geosynchronous satellite: An iterative application of linear regression

    Science.gov (United States)

    Phillips, D.

    1980-01-01

    Currently on NOAA/NESS's VIRGS system at the World Weather Building star images are being ingested on a daily basis. The image coordinates of the star locations are measured and stored. Subsequently, the information is used to determine the attitude, the misalignment angles between the spin axis and the principal axis of the satellite, and the precession rate and direction. This is done for both the 'East' and 'West' operational geosynchronous satellites. This orientation information is then combined with image measurements of earth based landmarks to determine the orbit of each satellite. The method for determining the orbit is simple. For each landmark measurement one determines a nominal position vector for the satellite by extending a ray from the landmark's position towards the satellite and intersecting the ray with a sphere with center coinciding with the Earth's center and with radius equal to the nominal height for a geosynchronous satellite. The apparent motion of the satellite around the Earth's center is then approximated with a Keplerian model. In turn the variations of the satellite's height, as a function of time found by using this model, are used to redetermine the successive satellite positions by again using the Earth based landmark measurements and intersecting rays from these landmarks with the newly determined spheres. This process is performed iteratively until convergence is achieved. Only three iterations are required.

  20. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  1. Simple Linear Regression and Reflectance Sensitivity Analysis Used to Determine the Optimum Wavelength for Nondestructive Assessment of Chlorophyll in Fresh Leaves Using Spectral Reflectance

    Science.gov (United States)

    The accuracy of nondestructive optical methods for chlorophyll (Chl) assessment based on leaf spectral characteristics depends on the wavelengths used for Chl assessment. Using spectroscopy, the optimum wavelengths for Chl assessment (OWChl) were determined for almond, poplar, and apple trees grown ...

  2. Magnitude And Distance Determination From The First Few Seconds Of One Three Components Seismological Station Signal Using Support Vector Machine Regression Methods

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.

    2011-12-01

    The "Sabana de Bogota" (Bogota Savannah) is the most important social and economical center of Colombia. Almost the third of population is concentrated in this region and generates about the 40% of Colombia's Internal Brute Product (IBP). According to this, the zone presents an elevated vulnerability in case that a high destructive seismic event occurs. Historical evidences show that high magnitude events took place in the past with a huge damage caused to the city and indicate that is probable that such events can occur in the next years. This is the reason why we are working in an early warning generation system, using the first few seconds of a seismic signal registered by three components and wide band seismometers. Such system can be implemented using Computational Intelligence tools, designed and calibrated to the particular Geological, Structural and environmental conditions present in the region. The methods developed are expected to work on real time, thus suitable software and electronic tools need to be developed. We used Support Vector Machines Regression (SVMR) methods trained and tested with historic seismic events registered by "EL ROSAL" Station, located near Bogotá, calculating descriptors or attributes as the input of the model, from the first 6 seconds of signal. With this algorithm, we obtained less than 10% of mean absolute error and correlation coefficients greater than 85% in hypocentral distance and Magnitude estimation. With this results we consider that we can improve the method trying to have better accuracy with less signal time and that this can be a very useful model to be implemented directly in the seismological stations to generate a fast characterization of the event, broadcasting not only raw signal but pre-processed information that can be very useful for accurate Early Warning Generation.

  3. Strategies for cloud-top phase determination: differentiation between thin cirrus clouds and snow in manual (ground truth) analyses

    Science.gov (United States)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1996-12-01

    Quantitative assessments on the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switch to a nighttime-only transmission with the launch of future NOAA satellites.

  4. Determination of pKa values of alendronate sodium in aqueous solution by piecewise linear regression based on acid-base potentiometric titration.

    Science.gov (United States)

    Ke, Jing; Dou, Hanfei; Zhang, Ximin; Uhagaze, Dushimabararezi Serge; Ding, Xiali; Dong, Yuming

    2016-12-01

    As the mono-sodium salt form of alendronic acid, alendronate sodium exhibits multi-level ionization through the dissociation of its four hydroxyl groups. The dissociation constants of alendronate sodium were determined in this work by studying the piecewise linear relationship between the volume of titrant and the pH value, based on an acid-base potentiometric titration reaction. The distribution curves of alendronate sodium were drawn according to the determined pKa values. Alendronate sodium was found to have four dissociation constants (pKa1=2.43, pKa2=7.55, pKa3=10.80 and pKa4=11.99) and 12 existing forms, of which 4 could be ignored, present in different pH environments.
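
    The piecewise-linear idea used above can be sketched for the simplest case of a single break point: scan candidate break volumes and fit a straight line to each side, keeping the split with the smallest total squared error. The titration curve below is simulated, not the alendronate data, and a real analysis would repeat this for several break points.

        # Locate one break point in simulated titrant-volume vs pH data.
        import numpy as np

        rng = np.random.default_rng(6)
        volume = np.linspace(0, 10, 60)     # mL of titrant
        pH = np.where(volume < 6.0, 3.0 + 0.3 * volume, 4.8 + 0.9 * (volume - 6.0))
        pH = pH + rng.normal(0, 0.02, volume.size)

        def sse_of_line(x, y):
            slope, intercept = np.polyfit(x, y, 1)
            return np.sum((y - (slope * x + intercept)) ** 2)

        candidates = volume[3:-3]           # keep a few points on each side
        total_sse = [sse_of_line(volume[volume <= b], pH[volume <= b]) +
                     sse_of_line(volume[volume > b], pH[volume > b]) for b in candidates]
        break_volume = candidates[int(np.argmin(total_sse))]
        print("estimated break point at", round(float(break_volume), 2), "mL")  # ~6.0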

  5. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
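
    The contrast drawn above between conditional-mean and conditional-quantile fits can be illustrated with a short sketch; the wage-style data are simulated with a spread that grows with experience, so the quantile slopes differ, and the variable names are placeholders.

        # Conditional mean (OLS) vs conditional quantile (quantile regression) fits.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        experience = rng.uniform(0, 30, 400)
        # heteroscedastic outcome: spread grows with experience
        wage = 10 + 0.5 * experience + rng.normal(0, 1 + 0.2 * experience, 400)
        df = pd.DataFrame({"wage": wage, "experience": experience})

        ols = smf.ols("wage ~ experience", df).fit()
        print("OLS slope        :", round(ols.params["experience"], 3))
        for q in (0.1, 0.5, 0.9):
            fit = smf.quantreg("wage ~ experience", df).fit(q=q)
            print(f"quantile {q} slope:", round(fit.params["experience"], 3))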

  6. Oridonin Targets Multiple Drug-Resistant Tumor Cells as Determined by in Silico and in Vitro Analyses

    Directory of Open Access Journals (Sweden)

    Onat Kadioglu

    2018-04-01

    Full Text Available Drug resistance is one of the main reasons of chemotherapy failure. Therefore, overcoming drug resistance is an invaluable approach to identify novel anticancer drugs that have the potential to bypass or overcome resistance to established drugs and to substantially increase life span of cancer patients for effective chemotherapy. Oridonin is a cytotoxic diterpenoid isolated from Rabdosia rubescens with in vivo anticancer activity. In the present study, we evaluated the cytotoxicity of oridonin toward a panel of drug-resistant cancer cells overexpressing ABCB1, ABCG2, or ΔEGFR or with a knockout deletion of TP53. Interestingly, oridonin revealed lower degree of resistance than the control drug, doxorubicin. Molecular docking analyses pointed out that oridonin can interact with Akt/EGFR pathway proteins with comparable binding energies and similar docking poses as the known inhibitors. Molecular dynamics results validated the stable conformation of oridonin docking pose on Akt kinase domain. Western blot experiments clearly revealed dose-dependent downregulation of Akt and STAT3. Pharmacogenomics analyses pointed to a mRNA signature that predicted sensitivity and resistance to oridonin. In conclusion, oridonin bypasses major drug resistance mechanisms and targets Akt pathway and might be effective toward drug refractory tumors. The identification of oridonin-specific gene expressions may be useful for the development of personalized treatment approaches.

  7. APPLICATION OF ELASTICITY ANALYSES AND PERTURBATION SIMULATIONS IN DETERMINING STRESSOR IMPACTS ON POPULATION GROWTH RATE AND EXTINCTION RISK

    Science.gov (United States)

    Population structure and life history strategies are determinants of how populations respond to stressor-induced impairments in individual-level responses, but a consistent and holistic analysis has not been reported. Effects on population growth rate were modeled using five theo...

  8. Determination of micro-quantities of several elements in soil solution by isotope dilution and activation analyses

    International Nuclear Information System (INIS)

    Cho, C.M.; Axmann, H.

    1965-01-01

    Determination of small quantities of plant nutrients in the soil solution of flooded rice soils is a difficult problem. The concentrations of Mn, Fe and P, for example, in some soil solutions are so small that no chemical method gives any accurate result. Neutron activation analysis was reported to give a much lower limit of detectability for several elements, while for elements with low-induced activity after neutron irradiation, substoichiometric isotopic dilution analysis was applied. One of the advantages of neutron activation analysis lies in the fact that simultaneous activation of every inducible element in a sample takes place. This gives an opportunity to determine many elements by one sample preparation and irradiation. This, however, is not a simple task since identification of the activated products and their quantitative estimation becomes very difficult. Certain operations of separation must be carried out before activity measurements. Ion-exchange resin columns and chemical separation following the addition of carriers were successfully used for the determination of many elements after neutron irradiation. These procedures, however, cannot be directly applied to the determination of the elements of agronomic interest. A procedure was developed to determine several elements of agronomic interest. Times of irradiation and cooling, quick separation by ion-exchange columns, together with chemical precipitation for β-emitters of relatively long half-lives, were all combined to get the maximum benefit from neutron activation analysis. For Fe, for which no satisfactory neutron activation analysis has yet been developed, a modified substoichiometric double isotope dilution procedure is applied

  9. [Development of a quantitative analysis method for determination of the alkaloid cytisine in Spartium junceum L., growing in Georgia].

    Science.gov (United States)

    Iavich, P A; Churadze, L I; Suladze, T Sh; Rukhadze, T A

    2011-12-01

    The aim of the research was to develop a method for the quantitative determination of cytisine in Spartium junceum L. We used the above-ground parts of the plants. In developing the method of analysis we used three-phase extraction; the best results were obtained with the system chopped raw material - aqueous ammonia solution - chloroform, in which the alkaloids are extracted almost entirely from the plant and pass into the chloroform phase. The results were evaluated by validation. A method for the determination of cytisine in the raw product was proposed. The method comprises the following steps: extraction of the raw material, separation and evaporation of the chloroform phase, transfer of the solids into methanol, chromatographic separation of cytisine and its quantification by spectrophotometry. The method is reproducible, has the required accuracy, and is easy to perform (less than 9 hours per analysis).

  10. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    OpenAIRE

    Grant, D.A.; Dunseath, G.J.; Churm, R.; Luzio, S.D.

    2017-01-01

    Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples...

  11. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...

  12. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  13. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  14. Determination of polyphenolic content, HPLC analyses and DNA cleavage activity of Malaysian Averrhoa carambola L. fruit extracts

    Directory of Open Access Journals (Sweden)

    Zakia Khanam

    2015-10-01

    Full Text Available In developing countries, the increasing gap between population growth and food supply has created renewed interest in finding reliable and cheap natural resources of nutraceutical value and health-promoting properties. Therefore, the present study deals with the phytochemical analyses and DNA cleavage activity of Averrhoa carambola L. fruit (starfruit) extracts. The phytochemical studies involve colour tests and quantification of phenolics and flavonoids in the prepared ethanolic and aqueous extracts. Identification of the phenolic acids and flavonoids present in the extracts was conducted by high performance liquid chromatography (HPLC) equipped with a diode array detector (DAD). DNA cleavage activity of the extracts was evaluated through gel electrophoresis against plasmid Escherichia coli DNA at different concentrations (0.125–0.60 μg/μl). The results of the study showed that starfruit is a rich source of polyphenols and all the extracts exhibited a dose-dependent DNA cleavage activity, with the ethanolic extract inducing more cleavage than the aqueous extract. In conclusion, the present study provides preliminary evidence with regard to the nutraceutical value of the fruit. Further extensive study is therefore a prerequisite to exploit the DNA-cleaving properties of the fruit extracts for therapeutic application.

  15. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    Science.gov (United States)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in cases of some nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful for removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

  16. Determination of male strobilus developmental stages by cytological and gene expression analyses in Japanese cedar (Cryptomeria japonica).

    Science.gov (United States)

    Tsubomura, Miyoko; Kurita, Manabu; Watanabe, Atsushi

    2016-05-01

    The molecular mechanisms that control male strobilus development in conifers are largely unknown because the developmental stages and related genes have not yet been characterized. The determination of male strobilus developmental stages will contribute to genetic research and reproductive biology in conifers. Our objectives in this study were to determine the developmental stages of male strobili by cytological and transcriptome analysis, and to determine the stages at which aberrant morphology is observed in a male-sterile mutant of Cryptomeria japonica D. Don to better understand the molecular mechanisms that control male strobilus and pollen development. Male strobilus development was observed for 8 months, from initiation to pollen dispersal. A set of 19,209 expressed sequence tags (ESTs) collected from a male reproductive library and a pollen library was used for microarray analysis. We divided male strobilus development into 10 stages by cytological and transcriptome analysis. Eight clusters (7324 ESTs) exhibited major changes in transcriptome profiles during male strobili and pollen development in C. japonica Two clusters showed a gradual increase and decline in transcript abundance, respectively, while the other six clusters exhibited stage-specific changes. The stages at which the male sterility trait of Sosyun was expressed were identified using information on male strobilus and pollen developmental stages and gene expression profiles. Aberrant morphology was observed cytologically at Stage 6 (microspore stage), and differences in expression patterns compared with wild type were observed at Stage 4 (tetrad stage). © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression and the method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity; carrying it out with SPSS makes the analysis simpler, faster and more accurate.
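
    The same principal component regression idea can be sketched outside SPSS; the sketch below uses scikit-learn on synthetic, deliberately collinear predictors, so the variable names and coefficients are placeholders.

        # Principal component regression: PCA on standardized predictors, then OLS
        # on the leading components, to sidestep multicollinearity.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)
        n = 300
        x1 = rng.normal(0, 1, n)
        x2 = x1 + rng.normal(0, 0.05, n)          # nearly collinear with x1
        x3 = rng.normal(0, 1, n)
        X = np.column_stack([x1, x2, x3])
        y = 2 * x1 + 0.5 * x3 + rng.normal(0, 0.5, n)

        pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
        pcr.fit(X, y)
        print("R^2 of the principal component regression:", round(pcr.score(X, y), 3))
        print("explained variance ratio:", pcr.named_steps["pca"].explained_variance_ratio_)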

  18. LS-SVM: uma nova ferramenta quimiométrica para regressão multivariada. Comparação de modelos de regressão LS-SVM e PLS na quantificação de adulterantes em leite em pó empregando NIR LS-SVM: a new chemometric tool for multivariate regression. Comparison of LS-SVM and PLS regression for determination of common adulterants in powdered milk by NIR spectroscopy

    Directory of Open Access Journals (Sweden)

    Marco F. Ferrão

    2007-08-01

    Full Text Available Least-squares support vector machines (LS-SVM) were used as an alternative multivariate calibration method for the simultaneous quantification of some common adulterants found in powdered milk samples, using near-infrared spectroscopy. Excellent models were built using LS-SVM, as judged by the R², RMSECV and RMSEP values. LS-SVM showed superior performance to PLSR for quantifying starch, whey and sucrose in powdered milk samples. This study shows that it is possible to determine precisely the amount of one or two common adulterants simultaneously in powdered milk samples using LS-SVM and NIR spectra.
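
    A hedged sketch of the comparison above follows. Scikit-learn has no LS-SVM estimator, so kernel ridge regression is used here as a closely related stand-in (both solve a regularized least-squares problem in a kernel feature space); the NIR-like spectra and adulterant levels are simulated placeholders.

        # Cross-validated RMSE for PLS vs a kernel-ridge stand-in for LS-SVM.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(9)
        n_samples, n_wavelengths = 80, 200
        starch = rng.uniform(0, 10, n_samples)                    # % adulterant (hypothetical)
        base = np.sin(np.linspace(0, 6, n_wavelengths))           # toy spectral shape
        X = starch[:, None] * base + rng.normal(0, 0.05, (n_samples, n_wavelengths))

        pls = PLSRegression(n_components=4)
        lssvm_like = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-4)

        for name, model in [("PLS", pls), ("KernelRidge (LS-SVM-like)", lssvm_like)]:
            scores = cross_val_score(model, X, starch, cv=5,
                                     scoring="neg_root_mean_squared_error")
            print(f"{name:26s} RMSECV = {-scores.mean():.3f}")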

  19. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  20. Scientometric and patentometric analyses to determine the knowledge landscape in innovative technologies: The case of 3D bioprinting.

    Science.gov (United States)

    Rodríguez-Salvador, Marisela; Rio-Belver, Rosa María; Garechana-Anacabe, Gaizka

    2017-01-01

    This research proposes an innovative data model to determine the landscape of emerging technologies. It is based on a competitive technology intelligence methodology that incorporates the assessment of scientific publications and patent analysis production, and is further supported by experts' feedback. It enables the definition of the growth rate of scientific and technological output in terms of the top countries, institutions and journals producing knowledge within the field as well as the identification of main areas of research and development by analyzing the International Patent Classification codes including keyword clusterization and co-occurrence of patent assignees and patent codes. This model was applied to the evolving domain of 3D bioprinting. Scientific documents from the Scopus and Web of Science databases, along with patents from 27 authorities and 140 countries, were retrieved. In total, 4782 scientific publications and 706 patents were identified from 2000 to mid-2016. The number of scientific documents published and patents in the last five years showed an annual average growth of 20% and 40%, respectively. Results indicate that the most prolific nations and institutions publishing on 3D bioprinting are the USA and China, including the Massachusetts Institute of Technology (USA), Nanyang Technological University (Singapore) and Tsinghua University (China), respectively. Biomaterials and Biofabrication are the predominant journals. The most prolific patenting countries are China and the USA; while Organovo Holdings Inc. (USA) and Tsinghua University (China) are the institutions leading. International Patent Classification codes reveal that most 3D bioprinting inventions intended for medical purposes apply porous or cellular materials or biologically active materials. Knowledge clusters and expert drivers indicate that there is a research focus on tissue engineering including the fabrication of organs, bioinks and new 3D bioprinting systems. Our

  1. Determination of detection limits for a VPD ICPMS method of analysis; Determination des limites de detection d'une methode d'analyse VPD ICPMS

    Energy Technology Data Exchange (ETDEWEB)

    Badard, M.; Veillerot, M

    2007-07-01

    This training course report presents the different methods for detecting and quantifying metallic impurities in semiconductors. One of the most precise techniques is the collection of metal impurities by vapor phase decomposition (VPD) followed by their analysis by ICPMS (inductively coupled plasma mass spectrometry). The study shows the importance of detection limits in the domain of chemical analysis and the way to determine them for ICPMS analysis. The results found for the detection limits are excellent. Even if the detection limits reached with ICPMS performed after manual or automatic VPD are much higher than the detection limits of ICPMS alone, this method remains one of the most sensitive for ultra-trace analysis. (J.S.)

  2. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  3. Porosity and permeability determination of organic-rich Posidonia shales based on 3-D analyses by FIB-SEM microscopy

    Science.gov (United States)

    Grathoff, Georg H.; Peltz, Markus; Enzmann, Frieder; Kaufhold, Stephan

    2016-07-01

    background permeability of 1 × 10⁻²¹ m² to the calculations, the total permeability increased by up to 1 order of magnitude for the low mature and decreases slightly for the overmature sample from the gas window. Anisotropy of permeability was observed. Permeability coefficients increase by 1 order of magnitude if simulations are performed parallel to the bedding. Our results compare well with experimental data from the literature suggesting that upscaling may be possible in the future as soon as maturity dependent organic matter permeability coefficients can be determined.

  4. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.

  5. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  6. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  7. Multilingual speaker age recognition: regression analyses on the Lwazi corpus

    CSIR Research Space (South Africa)

    Feld, M

    2009-12-01

    Full Text Available Multilinguality represents an area of significant opportunities for automatic speech-processing systems: whereas multilingual societies are commonplace, the majority of speech-processing systems are developed with a single language in mind. As a step...

  8. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
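
    A minimal sketch of the modelling strategy described above, in Python with statsmodels: fit a Poisson GLM to count data, then switch to a negative binomial family if overdispersion is suspected. The data are simulated and the variable names are illustrative, not taken from the ENSPIRE study.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 300
        x = rng.normal(size=n)
        y = rng.poisson(np.exp(0.5 + 0.7 * x))               # simulated count outcome

        X = sm.add_constant(x)
        poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(poisson_fit.summary())

        # If the Pearson chi-square / df ratio is well above 1 (overdispersion),
        # a negative binomial model is a common alternative:
        nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
        print(np.exp(nb_fit.params))                         # rate ratios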

  9. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
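
    The central idea, encoding vector observations as complex numbers so that the estimated coefficients are themselves vectors, can be sketched with ordinary complex-valued least squares, which NumPy handles directly. The data below are synthetic, and the example omits the operational test statistics derived in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100
        # independent vector variable (e.g., a wind/current vector) encoded as x + iy
        z_in = rng.normal(size=n) + 1j * rng.normal(size=n)
        true_coef = 0.8 + 0.6j                                # a rotation-and-scale coefficient
        z_out = true_coef * z_in + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

        # design matrix with a complex intercept and one complex regressor
        A = np.column_stack([np.ones(n, dtype=complex), z_in])
        coef, *_ = np.linalg.lstsq(A, z_out, rcond=None)      # complex least squares
        print("intercept:", coef[0])
        print("slope (magnitude, angle in degrees):", abs(coef[1]), np.degrees(np.angle(coef[1])))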

  10. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    International audience; We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex system model of emotion~\\cite{scherer00, scherer09}. We also present a regression algorithm that determines a person's emotional feeling from sensor m...

  11. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis, correlation between the response and the predictor(s) is expected, but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data and experience. In the end, the selection of the most important predictors is a judgment left to the researcher. Multicollinearity is the phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
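
    A standard diagnostic for the problem described above is the variance inflation factor (VIF). Here is a minimal sketch in Python using statsmodels, with simulated, deliberately near-collinear predictors; the rule-of-thumb threshold in the comment is a common convention, not taken from the paper.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(4)
        n = 200
        x1 = rng.normal(size=n)
        x2 = x1 + 0.1 * rng.normal(size=n)                    # nearly collinear with x1
        x3 = rng.normal(size=n)
        X = sm.add_constant(np.column_stack([x1, x2, x3]))

        # A VIF well above 5-10 is a common (rule-of-thumb) warning sign of multicollinearity.
        for i, name in zip(range(1, 4), ["x1", "x2", "x3"]):
            print(name, variance_inflation_factor(X, i))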

  12. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  13. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product we obtain fast access to the baseline hazards and to predictions of survival probabilities, with confidence intervals and confidence bands based on point-wise asymptotic expansions of the corresponding statistical functionals. The software presented here is implemented in the riskRegression package.

  14. Determination of soil, sand and ore primordial radionuclide concentrations by full-spectrum analyses of high-purity germanium detector spectra

    International Nuclear Information System (INIS)

    Newman, R.T.; Lindsay, R.; Maphoto, K.P.; Mlwilo, N.A.; Mohanty, A.K.; Roux, D.G.; Meijer, R.J. de; Hlatshwayo, I.N.

    2008-01-01

    The full-spectrum analysis (FSA) method was used to determine primordial activity concentrations (ACs) in soil, sand and ore samples, in conjunction with an HPGe detector. FSA involves the least-squares fitting of sample spectra by linear combinations of ²³⁸U, ²³²Th and ⁴⁰K standard spectra. The differences between the FSA results and those from traditional windows analyses (using regions-of-interest around selected photopeaks) are less than 10% for all samples except zircon ore, where FSA yielded an unphysical ⁴⁰K AC.
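
    The core of the FSA step, expressing a measured spectrum as a non-negative linear combination of ²³⁸U, ²³²Th and ⁴⁰K standard spectra, amounts to a constrained least-squares fit. A schematic sketch in Python follows; the spectra are random placeholders standing in for measured standards, not real detector data.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(5)
        n_channels = 1024
        # columns: standard spectra for 238U, 232Th, 40K (placeholders for measured standards)
        standards = np.abs(rng.normal(size=(n_channels, 3)))
        true_activity = np.array([40.0, 55.0, 400.0])         # Bq/kg, illustrative only
        sample = standards @ true_activity + rng.normal(scale=5.0, size=n_channels)

        # non-negative least squares keeps the fitted activity concentrations physical
        activities, residual = nnls(standards, sample)
        print("fitted activity concentrations (U, Th, K):", activities)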

  15. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  16. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  17. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  18. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  19. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  20. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We ... with the proposed explicit noise-model extension.

  2. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  3. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  4. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
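
    The ridge estimator underlying this approach is ordinary least squares with a penalty λ on the coefficient norm, which stabilises the solution when the design (geometry) is poorly conditioned. A minimal closed-form sketch in Python with synthetic data; this is not a RAIM or EKF implementation.

        import numpy as np

        rng = np.random.default_rng(6)
        n, p = 50, 4
        X = rng.normal(size=(n, p))
        X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)         # ill-conditioned design
        y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)

        lam = 1.0                                             # ridge penalty
        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        print("OLS:  ", beta_ols)
        print("ridge:", beta_ridge)                           # shrunk, more stable coefficients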

  5. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near-infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  6. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  7. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  8. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances

    OpenAIRE

    KELEŞ, Taliha; ALTUN, Murat

    2016-01-01

    Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was the trivial presentation of the equation for orthogonal regression (OR) and the comparison of classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were shown by an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa...

  9. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … -Annette J. Dobson, Biometric...

  10. Stepwise versus Hierarchical Regression: Pros and Cons

    Science.gov (United States)

    Lewis, Mitzi

    2007-01-01

    Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

  11. Wheat bran reduces concentrations of digestible, metabolizable, and net energy in diets fed to pigs, but energy values in wheat bran determined by the difference procedure are not different from values estimated from a linear regression procedure.

    Science.gov (United States)

    Jaworski, N W; Liu, D W; Li, D F; Stein, H H

    2016-07-01

    An experiment was conducted to determine effects on DE, ME, and NE for growing pigs of adding 15 or 30% wheat bran to a corn-soybean meal diet and to compare values for DE, ME, and NE calculated using the difference procedure with values obtained using linear regression. Eighteen barrows (54.4 ± 4.3 kg initial BW) were individually housed in metabolism crates. The experiment had 3 diets and 6 replicate pigs per diet. The control diet contained corn, soybean meal, and no wheat bran. Two additional diets were formulated by mixing 15 or 30% wheat bran with 85 or 70% of the control diet, respectively. The experimental period lasted 15 d. During the initial 7 d, pigs were adapted to their experimental diets and housed in metabolism crates and fed 573 kcal ME/kg BW per day. On d 8, metabolism crates with the pigs were moved into open-circuit respiration chambers for measurement of O₂ consumption and CO₂ and CH₄ production. The feeding level was the same as in the adaptation period, and feces and urine were collected during this period. On d 13 and 14, pigs were fed 225 kcal ME/kg BW per day, and pigs were then fasted for 24 h to obtain fasting heat production. Results of the experiment indicated that the apparent total tract digestibility of DM, GE, crude fiber, ADF, and NDF linearly decreased (P ≤ 0.05) as wheat bran inclusion increased in the diets. The daily O₂ consumption and CO₂ and CH₄ production by pigs fed increasing concentrations of wheat bran linearly decreased (P ≤ 0.05), resulting in a linear decrease (P ≤ 0.05) in heat production. The DE (3,454, 3,257, and 3,161 kcal/kg for diets containing 0, 15, and 30% wheat bran, respectively), ME (3,400, 3,209, and 3,091 kcal/kg for diets containing 0, 15, and 30% wheat bran, respectively), and NE (1,808, 1,575, and 1,458 kcal/kg for diets containing 0, 15, and 30% wheat bran, respectively) of diets decreased (linear, P ≤ 0.05) as wheat bran inclusion increased

  12. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    Science.gov (United States)

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide to a maximum coating thickness of 80 µm. Raman spectroscopy was used as an in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors (RMSEP) less than 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Influence of photoisomers in bilirubin determinations on Kodak Ektachem and Hitachi analysers in neonatal specimens: study of the contribution of structural and configurational isomers.

    Science.gov (United States)

    Gulian, J M; Dalmasso, C; Millet, V; Unal, D; Charrel, M

    1995-08-01

    We compared data obtained with the Kodak Ektachem and Hitachi 717 Analysers and HPLC from 83 neonates under phototherapy. Total bilirubin values determined with the Kodak and Hitachi are in good agreement, but we observed a large discrepancy in the results for conjugated (Kodak) and direct (Hitachi) bilirubin. HPLC revealed that all the samples contained configurational isomers, while only 7.7% and 30.8% contained conjugated bilirubin and structural isomers, respectively. We developed a device for the specific and quantitative production of configurational or structural isomers, by irradiation with blue or green light. In vitro, total bilirubin values are coherent for the routine analysers in the presence of configurational or structural isomers. With configurational isomers, unconjugated bilirubin (Kodak) is lower than total bilirubin (Kodak), and conjugated bilirubin (Kodak) is always equal to zero, so the apparatus gives a false positive response for delta bilirubin. In contrast, the direct bilirubin (Hitachi) is constant. Furthermore, in the presence of structural isomers, unconjugated bilirubin (Kodak) is unexpectedly higher than total bilirubin (Kodak), conjugated bilirubin (Kodak) is proportional to the quantity of these isomers, and direct bilirubin (Hitachi) is constant. The contribution of photoisomers in bilirubin measurements is discussed.

  14. Applying of Factor Analyses for Determination of Trace Elements Distribution in Water from River Vardar and Its Tributaries, Macedonia/Greece

    Science.gov (United States)

    Popov, Stanko Ilić; Stafilov, Trajče; Šajn, Robert; Tănăselia, Claudiu; Bačeva, Katerina

    2014-01-01

    A systematic study was carried out to investigate the distribution of fifty-six elements in the water samples from river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by mass spectrometry with inductively coupled plasma (ICP-MS) and atomic emission spectrometry with inductively coupled plasma (ICP-AES). Cluster and R mode factor analysis (FA) was used to identify and characterise element associations and four associations of elements were determined by the method of multivariate statistics. Three factors represent the associations of elements that occur in the river water naturally while Factor 3 represents an anthropogenic association of the elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced in the river waters from the waste waters from the mining and metallurgical activities in the country. PMID:24587756

  15. Applying of Factor Analyses for Determination of Trace Elements Distribution in Water from River Vardar and Its Tributaries, Macedonia/Greece

    Directory of Open Access Journals (Sweden)

    Stanko Ilić Popov

    2014-01-01

    Full Text Available A systematic study was carried out to investigate the distribution of fifty-six elements in the water samples from river Vardar (Republic of Macedonia and Greece) and its major tributaries. The samples were collected from 27 sampling sites. Analyses were performed by mass spectrometry with inductively coupled plasma (ICP-MS) and atomic emission spectrometry with inductively coupled plasma (ICP-AES). Cluster and R mode factor analysis (FA) was used to identify and characterise element associations, and four associations of elements were determined by the method of multivariate statistics. Three factors represent the associations of elements that occur in the river water naturally while Factor 3 represents an anthropogenic association of the elements (Cd, Ga, In, Pb, Re, Tl, Cu, and Zn) introduced in the river waters from the waste waters from the mining and metallurgical activities in the country.

  16. Steganalysis using logistic regression

    Science.gov (United States)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.

  17. SEPARATION PHENOMENA LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

    Full Text Available This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon. Separation generates bias in the estimation and leads to different interpretations of the estimates under the different statistical tests (Wald, Likelihood Ratio and Score) and to different estimates under the different iterative methods (Newton-Raphson and Fisher Score). It also presents an example that demonstrates the direct implications for the validation of the model and the validation of variables, and the implications for estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we present, briefly, the Firth correction to circumvent the phenomenon of separation.

  18. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. As a by-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical functionals.

  19. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  20. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  1. [From clinical judgment to linear regression model].

    Science.gov (United States)

    Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O

    2013-01-01

    When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has normal distribution. Stated in another way, the regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant and is equivalent to the "Y" value when "X" equals 0, and "b" (also called slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases in one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of independent variables in the outcome.
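
    The quantities named in the abstract (intercept a, slope b, and the coefficient of determination R²) can be obtained with a standard routine. A minimal sketch in Python using SciPy, with made-up clinical-style data:

        import numpy as np
        from scipy.stats import linregress

        age = np.array([25, 32, 41, 47, 53, 60, 68, 74])                      # hypothetical predictor X
        blood_pressure = np.array([118, 120, 125, 129, 134, 139, 146, 150])   # hypothetical outcome Y

        fit = linregress(age, blood_pressure)
        print("a (intercept):", fit.intercept)
        print("b (slope):    ", fit.slope)                    # change in Y per one-unit change in X
        print("R squared:    ", fit.rvalue ** 2)
        print("p-value:      ", fit.pvalue)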

  2. Elastic fibers in human skin: quantitation of elastic fibers by computerized digital image analyses and determination of elastin by radioimmunoassay of desmosine.

    Science.gov (United States)

    Uitto, J; Paul, J L; Brockley, K; Pearce, R H; Clark, J G

    1983-10-01

    The elastic fibers in the skin and other organs can be affected in several disease processes. In this study, we have developed morphometric techniques that allow accurate quantitation of the elastic fibers in punch biopsy specimens of skin. In this procedure, the elastic fibers, visualized by elastin-specific stains, are examined through a camera unit attached to the microscope. The black and white images sensing various gray levels are then converted to binary images after selecting a threshold with an analog threshold selection device. The binary images are digitized and the data analyzed by a computer program designed to express the properties of the image, thus allowing determination of the volume fraction occupied by the elastic fibers. As an independent measure of the elastic fibers, alternate tissue sections were used for assay of desmosine, an elastin-specific cross-link compound, by a radioimmunoassay. The clinical applicability of the computerized morphometric analyses was tested by examining the elastic fibers in the skin of five patients with pseudoxanthoma elasticum or Buschke-Ollendorff syndrome. In the skin of 10 healthy control subjects, the elastic fibers occupied 2.1 +/- 1.1% (mean +/- SD) of the dermis. The volume fractions occupied by the elastic fibers in the lesions of pseudoxanthoma elasticum or Buschke-Ollendorff syndrome were increased as much as 6-fold, whereas the values in the unaffected areas of the skin in the same patients were within normal limits. A significant correlation between the volume fraction of elastic fibers, determined by computerized morphometric analyses, and the concentration of desmosine, quantitated by radioimmunoassay, was noted in the total material. These results demonstrate that computerized morphometric techniques are helpful in characterizing disease processes affecting skin. This methodology should also be applicable to other tissues that contain elastic fibers and that are affected in various heritable and

  3. Vectors, a tool in statistical regression theory

    NARCIS (Netherlands)

    Corsten, L.C.A.

    1958-01-01

    Using linear algebra this thesis developed linear regression analysis including analysis of variance, covariance analysis, special experimental designs, linear and fertility adjustments, analysis of experiments at different places and times. The determination of the orthogonal projection, yielding

  4. Simultaneous spectrophotometric determination of crystal violet and malachite green in water samples using partial least squares regression and central composite design after preconcentration by dispersive solid-phase extraction.

    Science.gov (United States)

    Razi-Asrami, Mahboobeh; Ghasemi, Jahan B; Amiri, Nayereh; Sadeghi, Seyed Jamal

    2017-04-01

    In this paper, a simple, fast, and inexpensive method is introduced for the simultaneous spectrophotometric determination of crystal violet (CV) and malachite green (MG) contents in aquatic samples using partial least squares regression (PLS) as a multivariate calibration technique after preconcentration by graphene oxide (GO). The method was based on the sorption and desorption of analytes onto GO and direct determination by ultraviolet-visible spectrophotometric techniques. GO was synthesized according to the Hummers method. To characterize the shape and structure of GO, FT-IR, SEM, and XRD were used. The factors affecting the extraction efficiency, such as pH, extraction time, and the amount of adsorbent, were optimized using central composite design. The optimum values of these factors were 6, 15 min, and 12 mg, respectively. The maximum capacity of GO for the adsorption of CV and MG was 63.17 and 77.02 mg g⁻¹, respectively. Preconcentration factors and extraction recoveries were 19.6 and 98% for CV and 20 and 100% for MG, respectively. LOD and linear dynamic ranges for CV and MG were 0.009, 0.03-0.3, 0.015, and 0.05-0.5 μg mL⁻¹, respectively. The intra-day and inter-day relative standard deviations were 1.99 and 0.58 for CV and 1.69 and 3.13 for MG at the concentration level of 50 ng mL⁻¹, respectively. Finally, the proposed DSPE/PLS method was successfully applied for the simultaneous determination of trace amounts of CV and MG in real water samples.
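
    The multivariate calibration step, PLS regression of the recorded spectra on the known concentrations of the two dyes, can be sketched with scikit-learn. The spectra below are random placeholders standing in for the UV-Vis absorbance matrix, and the number of latent variables is arbitrary.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        n_samples, n_wavelengths = 60, 200
        concentrations = rng.uniform([0.03, 0.05], [0.3, 0.5], size=(n_samples, 2))  # CV, MG
        pure = np.abs(rng.normal(size=(2, n_wavelengths)))     # stand-in pure-component spectra
        spectra = concentrations @ pure + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentrations, random_state=0)
        pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
        print("predicted (CV, MG) for first test sample:", pls.predict(X_te)[0])
        print("R^2 on test set:", pls.score(X_te, y_te))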

  5. Application of support vector regression (SVR) for stream flow prediction on the Amazon basin

    CSIR Research Space (South Africa)

    Du Toit, Melise

    2016-10-01

    Full Text Available A support vector regression technique is used in this study to analyse historical stream flow occurrences and predict stream flow values for the Amazon basin. Predictions up to twelve months ahead are made, and the coefficient of determination and root-mean-square error are used...

  6. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy ... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes.

  7. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in a regression correlation coefficient for the Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and the root mean square error (RMSE).

  8. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  9. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  10. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  11. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Nonparametric regression using the concept of minimum energy

    International Nuclear Information System (INIS)

    Williams, Mike

    2011-01-01

    It has recently been shown that an unbinned distance-based statistic, the energy, can be used to construct an extremely powerful nonparametric multivariate two sample goodness-of-fit test. An extension to this method that makes it possible to perform nonparametric regression using multiple multivariate data sets is presented in this paper. The technique, which is based on the concept of minimizing the energy of the system, permits determination of parameters of interest without the need for parametric expressions of the parent distributions of the data sets. The application and performance of this new method is discussed in the context of some simple example analyses.

  13. The laser microprobe mass analyser for determining partitioning of minor and trace elements among intimately associated macerals: an example from the Swallow Wood coal bed, Yorkshire, UK

    Science.gov (United States)

    Lyons, P.C.; Morelli, J.J.; Hercules, D.M.; Lineman, D.; Thompson-Rizer, C. L.; Dulong, F.T.

    1990-01-01

    A study of the elemental composition of intimately associated coal macerals in the English Swallow Wood coal bed was conducted using a laser microprobe mass analyser, and indicated a similar trace and minor elemental chemistry in the vitrinite and cutinite and a different elemental signature in the fusinite. Three to six sites were analysed within each maceral during the study by laser micro mass spectrometry (LAMMS). Al, Ba, Ca, Cl, Cr, Dy, F, Fe, Ga, K, Li, Mg, Na, S, Si, Sr, Ti, V, and Y were detected by LAMMS in all three macerals but not necessarily at each site analysed. The signal intensities of major isotopic peaks were normalized to the signal intensity of the m/z 85 peak (C7H) to determine the relative minor- and trace-element concentrations among the three dominant macerals. The vitrinite and the cutinite were depleted in Ba, Ca, Dy, Li, Mg, Sr, and Y relative to their concentrations observed in the fusinite. The cutinite was distinguished from vitrinite by less Ti, V, Cr and Ca, and by its K/Ca ratio (relative signal intensities). The fusinite, relative to the cutinite and vitrinite, was relatively depleted in Cr, Sc, Ti, and V. The fusinite, as compared with both the cutinite and vitrinite, was relatively enriched in Ba, Ca, Dy, Li, Mg, Sr, and Y, and also showed the most intense m/z 64, 65, 66 signals (possibly S2+, HS2+, H2S2+, respectively). The LAMMS data indicate a common source for most elements and selective loss from the maceral precursors in the peat or entrapment of certain elements as mineral matter, most likely during the peat stage or during early diagenesis. The relatively high amounts of Ba, Ca, Dy, Li, Mg, Sr, and Y in the fusinite are consistent with micron and submicron mineral-matter inclusions such as carbonates and Ca-Al phosphates (probably crandallite group minerals). Mineralogical data on the whole coal, the LAMMS chemistry of the vitrinite and cutinite, and scanning electron microscopy/energy dispersive X-ray analysis (SEM/EDAX) of

  14. Chronological development of element concentrations in grapes during growth and ripeness and during fermentation of must determined by instrumental neutron-activation analyses

    International Nuclear Information System (INIS)

    Feige, Markus; Hampel, Gabriele; Kratz, Jens Volker; Wiehl, Norbert

    2014-01-01

    The chronological development of element concentrations during growth and ripeness of grapes described in the literature has only been concerned with the macro elements Mg, K, and Ca. Concentrations of trace elements in must are only described as a snapshot at the end of the ripeness. Therefore, the motivation for the present work was to accompany the growth and the ripening process of grapes successively by systematically determining element concentrations in grapes of Riesling and Cabernet Sauvignon by neutron-activation analyses. While for a number of elements, the concentrations in the grapes increased as a function of grape development (e.g., Na, K, Rb, Al), other concentrations decreased (e.g., Mg, Ca, Mn). These decreases are not only to be attributed to a dilution by an increasing uptake of water during growth, but also by an active transport of the cations out of the berries. Furthermore, the interest focused on the influence of mineral substances on the process of fermentation and on the uptake of trace elements by the yeasts. (orig.)

  15. Chronological development of element concentrations in grapes during growth and ripeness and during fermentation of must determined by instrumental neutron-activation analyses

    Energy Technology Data Exchange (ETDEWEB)

    Feige, Markus; Hampel, Gabriele; Kratz, Jens Volker; Wiehl, Norbert [Mainz Univ. (Germany). Inst. fuer Kernchemie; Koenig, Helmut [Mainz Univ. (Germany). Inst. fuer Mikrobiologie und Weinforschung; Wagner, Andreas [Weingut Wagner, Essenheim (Germany)

    2014-07-01

    The chronological development of element concentrations during growth and ripeness of grapes described in the literature has only been concerned with the macro elements Mg, K, and Ca. Concentrations of trace elements in must are only described as a snapshot at the end of the ripeness. Therefore, the motivation for the present work was to accompany the growth and the ripening process of grapes successively by systematically determining element concentrations in grapes of Riesling and Cabernet Sauvignon by neutron-activation analyses. While for a number of elements, the concentrations in the grapes increased as a function of grape development (e.g., Na, K, Rb, Al), other concentrations decreased (e.g., Mg, Ca, Mn). These decreases are not only to be attributed to a dilution by an increasing uptake of water during growth, but also by an active transport of the cations out of the berries. Furthermore, the interest focused on the influence of mineral substances on the process of fermentation and on the uptake of trace elements by the yeasts. (orig.)

  16. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  17. Determination of oxygen to metal ratio for varying UO2 content in sintered (U,Th)O2 pellet by oxidation-reduction method using thermo-gravimetric analyser

    International Nuclear Information System (INIS)

    Mahanty, B.N.; Khan, F.A.; Karande, A.P.; Prakash, A.; Afzal, Md.; Panakkal, J.P.

    2009-01-01

    Experiments were carried out to determine the oxygen to metal ratio in 4%, 6%, 10%, 20%, 50% and 80% UO₂ in sintered (U,Th)O₂ pellets by an oxidation-reduction method using a thermogravimetric analyser. (author)

  18. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper firstly demonstrated the broad usage of the polynomial function and deduced its parameters with the ordinary least squares estimate. Then the significance test method for the polynomial regression function is derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram in accordance with the authors' real work. (authors)
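
    A minimal version of the workflow described above, fitting a polynomial by ordinary least squares and testing the significance of the regression with an F statistic, in Python; the decay-heat data are invented for illustration.

        import numpy as np
        from scipy import stats

        t = np.linspace(0, 10, 25)                            # time, arbitrary units
        power = 5.0 - 0.8 * t + 0.04 * t**2 + 0.1 * np.random.default_rng(8).normal(size=t.size)

        degree = 2
        coeffs = np.polyfit(t, power, degree)                 # ordinary least-squares fit
        fitted = np.polyval(coeffs, t)

        # F-test of the overall regression: explained vs. residual mean squares
        n, p = t.size, degree                                 # p slope-type parameters
        ss_reg = np.sum((fitted - power.mean()) ** 2)
        ss_res = np.sum((power - fitted) ** 2)
        F = (ss_reg / p) / (ss_res / (n - p - 1))
        p_value = stats.f.sf(F, p, n - p - 1)
        print("coefficients:", coeffs, "F:", F, "p-value:", p_value)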

  19. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
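
    The bounded-regression step itself, least squares with explicit lower and upper bounds on the weights, can be sketched with SciPy's lsq_linear. The alpha-returns matrix, the target, and the bounds below are purely illustrative and are not the authors' algorithm, data, or source code.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(9)
        n_days, n_alphas = 120, 10
        alpha_returns = rng.normal(scale=0.01, size=(n_days, n_alphas))   # placeholder alpha streams
        target = alpha_returns.mean(axis=1) + rng.normal(scale=0.002, size=n_days)

        # bounds keep each alpha weight in [0, 0.3], forcing some diversification
        res = lsq_linear(alpha_returns, target, bounds=(0.0, 0.3))
        weights = res.x
        print("bounded weights:", np.round(weights, 3), "sum:", weights.sum())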

  20. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomena of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  1. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
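
    The symmetric fits compared in the paper can be reproduced from the two ordinary least-squares slopes; in particular, the OLS bisector combines the slope of Y on X with the inverted slope of X on Y. A sketch in Python with synthetic data; the bisector expression below follows the standard angle-bisector formula for two lines through a common point, and no measurement-error model is included.

        import numpy as np

        rng = np.random.default_rng(10)
        x = rng.normal(size=200)
        y = 1.5 * x + rng.normal(scale=1.0, size=200)         # intrinsic scatter only

        b1 = np.polyfit(x, y, 1)[0]                           # OLS(Y|X) slope
        b2 = 1.0 / np.polyfit(y, x, 1)[0]                     # OLS(X|Y) slope, expressed as dY/dX

        # OLS bisector slope: symmetric treatment of the two variables
        b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
        print("OLS(Y|X):", b1, "OLS(X|Y):", b2, "bisector:", b3)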

  2. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

  3. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  4. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
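
    The linear-optimization formulation mentioned above can be illustrated with a small static example. The sketch below sets up quantile regression as a linear program and solves it with a general-purpose LP solver; the data, the 0.9 quantile, and the solver choice are illustrative assumptions, and the paper's time-adaptive simplex updating is not reproduced.

    ```python
    # A minimal sketch of the LP formulation of quantile regression.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    n = 200
    x = rng.uniform(0, 1, n)
    y = 1.0 + 3.0 * x + rng.normal(scale=0.5 + x, size=n)   # heteroscedastic noise
    X = np.column_stack([np.ones(n), x])                     # design matrix
    p = X.shape[1]
    tau = 0.9                                                # quantile of interest

    # Decision variables: [beta (free), u (>=0), v (>=0)] with residual = u - v.
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)

    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    beta = res.x[:p]
    print("estimated 0.9-quantile line: intercept %.2f, slope %.2f" % tuple(beta))
    ```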

  5. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby, judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
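
    As a rough illustration of the comparison above, the sketch below fits per-class binary (one-vs-rest) logistic regressions and a single multinomial logistic regression to simulated pixel data and compares allocation accuracy. The class structure, driving factors, and library defaults are assumptions, not the study's data or code.

    ```python
    # A minimal sketch: one-vs-rest binary logistic regressions vs. a multinomial fit.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 3000
    X = rng.normal(size=(n, 4))               # e.g., slope, elevation, road distance, GDP
    logits = X @ rng.normal(size=(4, 3))      # three hypothetical land-use classes
    y = logits.argmax(axis=1)                 # pixel labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    binary = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
    multinom = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # joint multi-class fit

    print("one-vs-rest binary logistic accuracy:", binary.score(X_te, y_te))
    print("multinomial logistic accuracy       :", multinom.score(X_te, y_te))
    ```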

  6. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  7. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and

  8. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Full Text Available Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.

  9. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Science.gov (United States)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.
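
    The replication idea can be sketched as a bootstrap loop around an ordinary logistic regression fit: refit the model on many resamples and record how often each candidate controlling factor is retained as significant. The data, significance threshold, and resampling scheme below are illustrative assumptions rather than the authors' implementation.

    ```python
    # A minimal sketch of rare-event logistic regression with replications.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 800
    X = rng.normal(size=(n, 3))                     # e.g., slope, rainfall, land-cover index
    eta = -4.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]      # only the first two factors matter
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))     # rare events (low intercept)

    Xc = sm.add_constant(X)
    n_rep, alpha = 500, 0.05
    hits = np.zeros(X.shape[1])
    ok = 0
    for _ in range(n_rep):
        idx = rng.integers(0, n, n)                 # bootstrap replication
        try:
            fit = sm.Logit(y[idx], Xc[idx]).fit(disp=0)
        except Exception:                           # e.g., separation in a sparse resample
            continue
        hits += (np.asarray(fit.pvalues)[1:] < alpha).astype(float)
        ok += 1

    print("selection frequency per factor:", hits / ok)
    ```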

  10. Deriving the Regression Line with Algebra

    Science.gov (United States)

    Quintanilla, John A.

    2017-01-01

    Exploration with spreadsheets and reliance on previous skills can lead students to determine the line of best fit. To perform linear regression on a set of data, students in Algebra 2 (or, in principle, Algebra 1) do not have to settle for using the mysterious "black box" of their graphing calculators (or other classroom technologies).…

  11. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  12. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  13. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100

  14. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
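
    The three estimators compared above have simple closed-form slopes in the bivariate case. The sketch below applies the standard expressions for OLS of Y on X, orthogonal regression with equal error variances, and geometric mean (reduced major axis) regression to simulated data with measurement error in both variables; it is an illustration, not the study's analysis.

    ```python
    # A minimal sketch of OLS, orthogonal and geometric mean regression slopes.
    import numpy as np

    rng = np.random.default_rng(5)
    truth = rng.uniform(0, 10, 300)
    x = truth + rng.normal(scale=1.0, size=300)          # measurement error in x
    y = 0.8 * truth + rng.normal(scale=1.0, size=300)    # measurement error in y

    xm, ym = x.mean(), y.mean()
    Sxx = np.sum((x - xm) ** 2)
    Syy = np.sum((y - ym) ** 2)
    Sxy = np.sum((x - xm) * (y - ym))

    b_ols = Sxy / Sxx
    b_orth = (Syy - Sxx + np.sqrt((Syy - Sxx) ** 2 + 4 * Sxy ** 2)) / (2 * Sxy)
    b_gm = np.sign(Sxy) * np.sqrt(Syy / Sxx)

    print(f"OLS(Y|X) slope       : {b_ols:.3f}   (attenuated by error in x)")
    print(f"orthogonal slope     : {b_orth:.3f}")
    print(f"geometric mean slope : {b_gm:.3f}")
    ```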

  16. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  17. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  18. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five-year data from 1998-2002 on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination, as a measure of goodness of fit, for our polynomial regression analysis time series (PRATS). The correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, and indicated homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year rainfall and humidity data, respectively, which showed that the variances in the rainfall data were not homogeneous while those in the humidity data were homogeneous. Our results on regression and regression analysis time series show the best fit for prediction modeling of the climatic data of Quetta, Pakistan. (author)
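
    A polynomial regression trend fit with a coefficient of determination, of the kind described above, can be sketched as follows; the simulated monthly series, cubic degree, and R-squared computation are illustrative and do not use the Quetta data.

    ```python
    # A minimal sketch of a polynomial regression trend fit with R^2 on a climate-like series.
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.arange(60)                                    # 60 months
    temp = 20 + 0.03 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=1.5, size=60)

    coeffs = np.polyfit(t, temp, deg=3)                  # cubic trend fit
    fitted = np.polyval(coeffs, t)

    ss_res = np.sum((temp - fitted) ** 2)
    ss_tot = np.sum((temp - temp.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot                      # goodness of fit
    print("polynomial coefficients:", np.round(coeffs, 4))
    print("coefficient of determination R^2: %.3f" % r_squared)
    ```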

  19. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…
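
    One common way to fold reported summary statistics into a Bayesian linear regression is to encode them as a Gaussian prior on the coefficients. The sketch below does this with a conjugate normal prior and a residual variance treated as known for brevity; the "published" slope estimate, its standard error, and the data are all illustrative assumptions, not the formal models proposed in the article.

    ```python
    # A minimal sketch of Bayesian linear regression with a prior built from published results.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 40
    x = rng.uniform(0, 5, n)
    y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)
    X = np.column_stack([np.ones(n), x])
    sigma2 = 1.0                                   # residual variance, assumed known here

    # Prior from "published" summary statistics: vague intercept, slope 1.8 with SE 0.3.
    m0 = np.array([0.0, 1.8])
    S0 = np.diag([100.0, 0.3 ** 2])

    S0_inv = np.linalg.inv(S0)
    SN = np.linalg.inv(S0_inv + X.T @ X / sigma2)  # posterior covariance
    mN = SN @ (S0_inv @ m0 + X.T @ y / sigma2)     # posterior mean

    print("posterior mean [intercept, slope]:", np.round(mN, 3))
    print("posterior sd   [intercept, slope]:", np.round(np.sqrt(np.diag(SN)), 3))
    ```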

  20. Predicting Word Reading Ability: A Quantile Regression Study

    Science.gov (United States)

    McIlraith, Autumn L.

    2018-01-01

    Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…

  1. Determinants of fruit and vegetable intake in adolescents using quantile regression

    Directory of Open Access Journals (Sweden)

    Roberta Schein Bigio

    2011-06-01

    Full Text Available OBJECTIVE: To analyze the fruit and vegetable (FV) intake of adolescents and identify associated factors. METHODS: Population-based cross-sectional study with a representative sample of 812 adolescents of both sexes in São Paulo, Brazil, in 2003. Food intake was measured with a 24-hour dietary recall. FV intake was described in percentiles, and quantile regression models were used to investigate the association between FV intake and explanatory variables. RESULTS: Of the adolescents interviewed, 6.4% consumed the minimum recommended 400 g/day of FV and 22% consumed no FV at all. In the quantile regression models, adjusted for energy intake, age group and sex, per capita household income and the head of household's education were positively associated with FV intake, while smoking was negatively associated. Income was significantly associated with the lower percentiles of intake (p20 to p55), smoking with the intermediate percentiles (p45 to p75), and the head of household's education with the upper percentiles of FV intake (p70 to p95). CONCLUSIONS: FV intake among adolescents in São Paulo was below the recommendations of the Ministry of Health and is influenced by per capita household income, the head of household's education, and smoking.

  2. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
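
    Two of the multivariate issues mentioned above, interaction terms and multicollinearity, can be illustrated briefly. The sketch below fits a multiple regression with an interaction and computes variance inflation factors by regressing each predictor on the others; the clinical-sounding variables are simulated placeholders, not data from the article.

    ```python
    # A minimal sketch: multiple regression with an interaction plus a VIF check.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 500
    age = rng.uniform(20, 80, n)
    weight = 0.5 * age + rng.normal(scale=10, size=n)        # correlated with age
    y = 5 + 0.3 * age + 0.2 * weight + 0.01 * age * weight + rng.normal(scale=5, size=n)

    X = np.column_stack([np.ones(n), age, weight, age * weight])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)             # multiple regression fit
    print("coefficients [const, age, weight, age*weight]:", np.round(beta, 4))

    def vif(X, j):
        """VIF of column j: regress it on the other columns and use 1/(1 - R^2)."""
        others = np.delete(X, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1 - resid.var() / X[:, j].var()
        return 1.0 / (1.0 - r2)

    for j, name in [(1, "age"), (2, "weight"), (3, "age*weight")]:
        print(f"VIF({name}) = {vif(X, j):.1f}")
    ```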

  3. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Least-Squares Linear Regression and Schrodinger's Cat: Perspectives on the Analysis of Regression Residuals.

    Science.gov (United States)

    Hecht, Jeffrey B.

    The analysis of regression residuals and detection of outliers are discussed, with emphasis on determining how deviant an individual data point must be to be considered an outlier and the impact that multiple suspected outlier data points have on the process of outlier determination and treatment. Only bivariate (one dependent and one independent)…

  5. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

  6. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data.

  7. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

  8. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

  9. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  10. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the autocorrelation patterns of the regression disturbance. It also includes six case studies.

  11. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
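
    The equivalent two-sample recipe described above translates into a short power calculation: convert the slope into a log-odds difference of the slope times twice the covariate standard deviation, split the sample into two equal groups whose average event probability matches the overall rate, and apply a standard two-proportion power formula. The sketch below is an approximation under those assumptions, with purely illustrative inputs.

    ```python
    # A minimal sketch of logistic-regression power via an equivalent two-sample problem.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import expit
    from scipy.stats import norm

    def power_logistic(beta, sd_x, p_overall, n_total, alpha=0.05):
        delta = beta * 2.0 * sd_x                                  # log-odds difference
        # Two equal groups whose mean event probability stays at p_overall.
        l1 = brentq(lambda l: 0.5 * (expit(l) + expit(l + delta)) - p_overall, -20, 20)
        p1, p2 = expit(l1), expit(l1 + delta)
        n = n_total / 2.0                                          # per-group size
        pbar = 0.5 * (p1 + p2)
        se0 = np.sqrt(2 * pbar * (1 - pbar) / n)                   # SE under H0
        se1 = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)       # SE under H1
        z_a = norm.ppf(1 - alpha / 2)
        return norm.cdf((abs(p2 - p1) - z_a * se0) / se1)

    # Example: slope 0.4 per unit of a covariate with SD 1.2, 30% overall event rate.
    print("approximate power:", round(power_logistic(0.4, 1.2, 0.30, n_total=300), 3))
    ```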

  12. Analytical determination of Chemical Oxygen Demand in samples considered to be difficult to analyse: solid substrates and liquid samples with high suspended solid concentrations

    DEFF Research Database (Denmark)

    Raposo, Francisco; Fernández-Cegrí, V.; De la Rubia, M.A.

    In the absence of a general standard method and high quality certified reference materials (CRMs), the traceability of the COD determination in such samples is currently not easy to check. Proficiency testing (PT) is a powerful tool that can be used to test the performance that the participants' laboratories can achieve. Two...

  13. Jaffe-Campanacci syndrome, revisited: detailed clinical and molecular analyses determine whether patients have neurofibromatosis type 1, coincidental manifestations, or a distinct disorder

    NARCIS (Netherlands)

    Stewart, Douglas R.; Brems, Hilde; Gomes, Alicia G.; Ruppert, Sarah L.; Callens, Tom; Williams, Jennifer; Claes, Kathleen; Bober, Michael B.; Hachen, Rachel; Kaban, Leonard B.; Li, Hua; Lin, Angela; McDonald, Marie; Melancon, Serge; Ortenberg, June; Radtke, Heather B.; Samson, Ignace; Saul, Robert A.; Shen, Joseph; Siqveland, Elizabeth; Toler, Tomi L.; van Maarle, Merel; Wallace, Margaret; Williams, Misti; Legius, Eric; Messiaen, Ludwine

    2014-01-01

    "Jaffe-Campanacci syndrome" describes the complex of multiple nonossifying fibromas of the long bones, mandibular giant cell lesions, and café-au-lait macules in individuals without neurofibromas. We sought to determine whether Jaffe-Campanacci syndrome is a distinct genetic entity or a variant of

  14. Determining

    Directory of Open Access Journals (Sweden)

    Bahram Andarzian

    2015-06-01

    Full Text Available Wheat production in the south of Khuzestan, Iran, is constrained by heat stress for late sowing dates. For optimization of yield, sowing at the appropriate time to fit the cultivar maturity length and growing season is critical. Crop models could be used to determine the optimum sowing window for a locality. The objectives of this study were to evaluate the Cropping System Model (CSM-CERES-Wheat) for its ability to simulate growth, development and grain yield of wheat in the tropical regions of Iran, and to study the impact of different sowing dates on wheat performance. The genetic coefficients of cultivar Chamran were calibrated for the CSM-CERES-Wheat model and crop model performance was evaluated with experimental data. Wheat cultivar Chamran was sown on different dates, ranging from 5 November to 9 January, during 5 years of field experiments that were conducted in the Khuzestan province, Iran, under full and deficit irrigation conditions. The model was run for 8 sowing dates starting on 25 October and repeated every 10 days until 5 January using long-term historical weather data from the Ahvaz, Behbehan, Dezful and Izeh locations. The seasonal analysis program of DSSAT was used to determine the optimum sowing window for different locations as well. Evaluation with the experimental data showed that performance of the model was reasonable as indicated by fairly accurate simulation of crop phenology, biomass accumulation and grain yield against measured data. The normalized RMSE values were 3%, 2%, 11.8%, and 3.4% for anthesis date, maturity date, grain yield and biomass, respectively. The optimum sowing window differed among locations. It opened and closed on 5 November and 5 December for Ahvaz; 5 November and 15 December for Behbehan and Dezful; and 1 November and 15 December for Izeh, respectively. The CERES-Wheat model could be used as a tool to evaluate the effect of sowing date on wheat performance in Khuzestan conditions. Further model evaluations

  15. Determination of the distribution of copper and chromium in partly remediated CCA-treated pine wood using SEM and EDX analyses

    DEFF Research Database (Denmark)

    Christensen, Iben Vernegren; Ottosen, Lisbeth M.; Melcher, Eckhard

    2005-01-01

    Soaking in different acids and electrodialytic remediation (EDR) were applied for removing copper and chromium from freshly Chromated Copper Arsenate (CCA) impregnated EN 113 pine wood samples. After remedial treatments, AAS analyses revealed that the concentration of copper (Cu) and chromium (Cr)... After soaking, a small amount of Cu and Cr was still present in the cell walls but larger particles were now found on wall surfaces. Most effective removal of Cu was obtained after soaking in phosphoric and oxalic acid followed by EDR; here numerous rice grain-shaped particles were observed containing large amounts of Cu and no Cr. Cr was most effectively removed after soaking in oxalic acid and subsequent EDR treatment or dual soaking in phosphoric acid and oxalic acid with and without subsequent EDR.

  16. Soil behavior under earthquake loading conditions. In situ impulse test for determination of shear modulus for seismic response analyses. Progress report

    International Nuclear Information System (INIS)

    1974-06-01

    Progress is reported in the determination of the best methods of evaluation and prediction of soil behavior of potential nuclear power plant sites under seismic loading conditions. Results are reported of combined experimental and analytical studies undertaken to continue development of an in situ impulse test for determination of the soil shear modulus. Emphasis of the field work was directed toward making the field measurements at frequent depth intervals and at shear strains in the strong motion earthquake range. Emphasis of the analytical work was aimed toward supporting the field effort through processing and evaluation of the experimental test results combined with additional calculations required to gain insight into data interpretation and the in situ test setup itself. Continuing studies to evaluate free field soil behavior under earthquake loading conditions are discussed. (U.S.)

  17. Analysing of 228Th, 232Th, 228Ra in human bone tissues for the purpose of determining the post mortal interval

    International Nuclear Information System (INIS)

    Kandlbinder, R.; Geissler, V.; Schupfner, R.; Wolfbeis, O.; Zinka, B.

    2009-01-01

    Bone tissues of thirteen deceased persons were analyzed to determine the activity concentrations of the radionuclides 228 Ra, 228 Th, 232 Th and 230 Th. The activity ratios enable assessment of the post-mortem interval (PMI). The samples were prepared for analysis by incineration and pulverization. 228 Ra was directly detected by γ-spectrometry. 228 Th, 230 Th and 232 Th were detected by α-spectrometry after radiochemical purification and electrodeposition. It is shown that the method is principally suited to determining the PMI. A minimum of 300 g (wet weight) of human bone tissue is required for the analysis. Counting times are in the range of one to two weeks. (author)

  18. Comparing probabilistic and descriptive analyses of time–dose–toxicity relationship for determining no-observed-adverse-effect level in drug development

    International Nuclear Information System (INIS)

    Glatard, Anaïs; Berges, Aliénor; Sahota, Tarjinder; Ambery, Claire; Osborne, Jan; Smith, Randall; Hénin, Emilie; Chen, Chao

    2015-01-01

    The no-observed-adverse-effect level (NOAEL) of a drug defined from animal studies is important for inferring a maximal safe dose in human. However, several issues are associated with its concept, determination and application. It is confined to the actual doses used in the study; becomes lower with increasing sample size or dose levels; and reflects the risk level seen in the experiment rather than what may be relevant for human. We explored a pharmacometric approach in an attempt to address these issues. We first used simulation to examine the behaviour of the NOAEL values as determined by current common practice; and then fitted the probability of toxicity as a function of treatment duration and dose to data collected from all applicable toxicology studies of a test compound. Our investigation was in the context of an irreversible toxicity that is detected at the end of the study. Simulations illustrated NOAEL's dependency on experimental factors such as dose and sample size, as well as the underlying uncertainty. Modelling the probability as a continuous function of treatment duration and dose simultaneously to data from multiple studies allowed the estimation of the dose, along with its confidence interval, for a maximal risk level that might be deemed as acceptable for human. The model-based data integration also reconciled between-study inconsistency and explicitly provided maximised estimation confidence. Such alternative NOAEL determination method should be explored for its more efficient data use, more quantifiable insight to toxic doses, and the potential for more relevant animal-to-human translation. - Highlights: • Simulations revealed issues with NOAEL concept, determination and application. • Probabilistic modelling was used to address these issues. • The model integrated time-dose-toxicity data from multiple studies. • The approach uses data efficiently and may allow more meaningful human translation.

  19. Comparing probabilistic and descriptive analyses of time–dose–toxicity relationship for determining no-observed-adverse-effect level in drug development

    Energy Technology Data Exchange (ETDEWEB)

    Glatard, Anaïs; Berges, Aliénor; Sahota, Tarjinder; Ambery, Claire [Clinical Pharmacology Modelling and Simulation, GlaxoSmithKline, 1 Iron Bridge Road, Uxbridge, UB11 1BT London (United Kingdom); Osborne, Jan [Non-Clinical Safety Projects, GlaxoSmithKline, Ware (United Kingdom); Smith, Randall [Computational Toxicology, GlaxoSmithKline, Upper Merion (United States); Hénin, Emilie [UMR 5558 Laboratoire Biométrie et de Biologie Evolutive, Equipe EMET (Evaluation et Modélisation des Effets Thérapeutiques), Université Claude Bernard Lyon1, Service de Pharmacologie Clinique et Essais Thérapeutiques, Hospices Civils de Lyon, Lyon (France); Chen, Chao, E-mail: chao.c.chen@gsk.com [Clinical Pharmacology Modelling and Simulation, GlaxoSmithKline, 1 Iron Bridge Road, Uxbridge, UB11 1BT London (United Kingdom)

    2015-10-15

    The no-observed-adverse-effect level (NOAEL) of a drug defined from animal studies is important for inferring a maximal safe dose in human. However, several issues are associated with its concept, determination and application. It is confined to the actual doses used in the study; becomes lower with increasing sample size or dose levels; and reflects the risk level seen in the experiment rather than what may be relevant for human. We explored a pharmacometric approach in an attempt to address these issues. We first used simulation to examine the behaviour of the NOAEL values as determined by current common practice; and then fitted the probability of toxicity as a function of treatment duration and dose to data collected from all applicable toxicology studies of a test compound. Our investigation was in the context of an irreversible toxicity that is detected at the end of the study. Simulations illustrated NOAEL's dependency on experimental factors such as dose and sample size, as well as the underlying uncertainty. Modelling the probability as a continuous function of treatment duration and dose simultaneously to data from multiple studies allowed the estimation of the dose, along with its confidence interval, for a maximal risk level that might be deemed as acceptable for human. The model-based data integration also reconciled between-study inconsistency and explicitly provided maximised estimation confidence. Such alternative NOAEL determination method should be explored for its more efficient data use, more quantifiable insight to toxic doses, and the potential for more relevant animal-to-human translation. - Highlights: • Simulations revealed issues with NOAEL concept, determination and application. • Probabilistic modelling was used to address these issues. • The model integrated time-dose-toxicity data from multiple studies. • The approach uses data efficiently and may allow more meaningful human translation.

  20. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  1. Vertical distribution of elements in non-polluted estuarine sediments determined by neutron induced prompt gamma-ray and instrumental neutron activation analyses

    International Nuclear Information System (INIS)

    Kuno, A.; Sampei, K.; Matsuo, M.; Sawahata, H.

    1999-01-01

    Neutron induced prompt gamma-ray analysis (PGA) and instrumental neutron activation analysis (INAA) have been applied to the sediments collected from the Yasaka River estuary in Oita Prefecture, Japan. The vertical distribution of 33 elements in the sediments has been determined and compared with that in more polluted estuarine sediments. While the S content increased with increasing depth because of a sulphide accumulation under reducing condition, the increase in sulphide-forming elements such as Ag, Cd, Co and Zn was not observed in the deeper section of the Yasaka River estuarine sediments. (author)

  2. Determinations of its Absolute Dimensions and Distance by the Analyses of Light and Radial-Velocity Curves of the Contact Binary -I. V417 Aquilae

    Directory of Open Access Journals (Sweden)

    Jae Woo Lee

    2004-06-01

    Full Text Available New photometric and spectroscopic solutions of the W-type overcontact binary V417 Aql were obtained by solving the UBV light curves of Samec et al. (1997) and the radial-velocity curves of Lu & Rucinski (1999) with the 2003 version of the Wilson-Devinney binary code. In the light curve synthesis the light of a third body, which Qian (2003) proposed, was considered and obtained as about 2.7%, 2.2%, and 0.4% for the U, B, and V bandpasses, respectively. The model with third light fits the eclipse parts better than the one without third light. Absolute dimensions of V417 Aql are determined from our solution as M1=0.53 M⊙, M2=1.45 M⊙, R1=0.84 R⊙ and R2=1.31 R⊙, and the distance to it is deduced as about 216 pc. Our distance is consistent with that (204 pc) derived from Rucinski & Duerbeck's (1997) relation, MV=MV(log P, B-V), but is more distant than that (131±40 pc) determined by the Hipparcos trigonometric parallax. The difference may result from the relatively large error of the Hipparcos parallax for V417 Aql.

  3. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media- and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired. What becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...

  4. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available Stepwise linear regression is a multi-variable regression technique for identifying statistically significant variables in the linear regression equation. In the present study, we present a Matlab program for stepwise regression.

  5. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
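
    These quantities are available directly in standard statistical libraries. The sketch below computes the Pearson and Spearman coefficients and a simple linear regression fit on simulated data standing in for the imaging measurements; variable names and values are illustrative.

    ```python
    # A minimal sketch of Pearson and Spearman correlation plus simple linear regression.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    x = rng.uniform(1, 10, 50)                        # e.g., lesion size (cm)
    y = 3 + 0.8 * x + rng.normal(scale=1.0, size=50)  # e.g., procedure time (min)

    pearson_r, pearson_p = stats.pearsonr(x, y)       # linear association
    spearman_rho, spearman_p = stats.spearmanr(x, y)  # monotonic (rank) association
    fit = stats.linregress(x, y)                      # simple linear regression

    print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3g})")
    print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.3g})")
    print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} x, R^2 = {fit.rvalue**2:.2f}")
    ```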

  6. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

    The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma ray spectrum or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by transforming the procedure into a computer programme which was used to unfold test spectra obtained by mixing some spectra, from a library of arbitrarily chosen spectra, and adding a noise component. (U.K.)
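
    The unfolding step can be sketched as a constrained multiple linear regression of the measured spectrum on a library of component spectra. The example below uses non-negative least squares on simulated Gaussian peaks rather than the stepwise procedure described in the report; the spectra, noise level, and component count are assumptions.

    ```python
    # A minimal sketch of unfolding a measured spectrum into library components.
    import numpy as np
    from scipy.optimize import nnls

    channels = np.arange(256)

    def peak(center, width=8.0):
        """Gaussian component spectrum centered on a given channel."""
        return np.exp(-0.5 * ((channels - center) / width) ** 2)

    library = np.column_stack([peak(60), peak(120), peak(190)])   # component spectra
    true_weights = np.array([2.0, 0.0, 1.5])                      # second component absent
    rng = np.random.default_rng(10)
    measured = library @ true_weights + rng.normal(scale=0.05, size=channels.size)

    weights, residual_norm = nnls(library, measured)              # constrained regression
    print("estimated component weights:", np.round(weights, 3))
    print("residual norm:", round(residual_norm, 3))
    ```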

  7. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

  8. Regression: The Apple Does Not Fall Far From the Tree.

    Science.gov (United States)

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.

  9. Determinations of silicon and phosphorus in rice planted on a district of high incidence of amyotrophic lateral sclerosis by neutron activation and X-ray fluorescence analyses

    International Nuclear Information System (INIS)

    Mizumoto, Yoshihiko; Ishikawa, Teruumi; Kusakabe, Toshio; Katsurayama, Kousuke; Iwata, Shiro.

    1989-01-01

    Silicon and phosphorus contents in polished and unpolished rice planted on a district of high incidence of amyotrophic lateral sclerosis (ALS) have been determined by neutron activation and X-ray fluorescence methods, and compared with those from control areas. In the neutron activation analysis, β-ray spectra of 32 P produced by the 31 P(n, γ) 32 P reaction on polished and unpolished rice were measured with a low background β-ray spectrometer. In the X-ray fluorescence analysis, characteristic X-rays were analyzed with a wavelength dispersive X-ray fluorescence spectrometer. Silicon contents in polished and unpolished rice from the ALS area are 42 μg.g -1 and 370 μg.g -1 , respectively, and the corresponding phosphorus contents are 1210 μg.g -1 and 3370 μg.g -1 , respectively. The data for the ALS area are equal to those for the control area within a standard deviation. (author)

  10. Lifestyle and Dietary Determinants of Serum Apolipoprotein A1 and Apolipoprotein B Concentrations: Cross-Sectional Analyses within a Swedish Cohort of 24,984 Individuals

    Directory of Open Access Journals (Sweden)

    Kasper Frondelius

    2017-02-01

    Full Text Available Low serum apolipoprotein (Apo) A1 concentrations and high serum ApoB concentrations may be better markers of the risk of cardiovascular disease than high-density lipoprotein (HDL) and low-density lipoprotein (LDL). However, the associations between modifiable lifestyle factors and Apo concentrations have not been investigated in detail. Therefore, this study investigated the associations between Apo concentrations and education, lifestyle factors and dietary intake (macronutrients and 34 food groups). These cross-sectional associations were examined among 24,984 individuals in a Swedish population-based cohort. Baseline examinations of the cohort were conducted between 1991 and 1996. Dietary intake was assessed using a modified diet history method. The main determinants of high ApoA1 concentrations (r between 0.05 and 0.25) were high alcohol consumption, high physical activity, non-smoking, and a low body mass index (BMI), and the main determinants of high ApoB concentrations were smoking and a high BMI. The intake of sucrose and food products containing added sugar (such as pastries, sweets, chocolate, jam/sugar and sugar-sweetened beverages) was negatively correlated with ApoA1 concentrations and positively correlated with ApoB concentrations and the ApoB/ApoA1 ratio, whereas the intake of fermented dairy products, such as fermented milk and cheese, was positively correlated with ApoA1 concentrations and negatively correlated with the ApoB/ApoA1 ratio. These results indicate that smoking, obesity, low physical activity, low alcohol consumption and a diet high in sugar and low in fermented dairy products are correlated with an unfavorable Apo profile.

  11. Lifestyle and Dietary Determinants of Serum Apolipoprotein A1 and Apolipoprotein B Concentrations: Cross-Sectional Analyses within a Swedish Cohort of 24,984 Individuals.

    Science.gov (United States)

    Frondelius, Kasper; Borg, Madelene; Ericson, Ulrika; Borné, Yan; Melander, Olle; Sonestedt, Emily

    2017-02-28

    Low serum apolipoprotein (Apo) A1 concentrations and high serum ApoB concentrations may be better markers of the risk of cardiovascular disease than high-density lipoprotein (HDL) and low-density lipoprotein (LDL). However, the associations between modifiable lifestyle factors and Apo concentrations have not been investigated in detail. Therefore, this study investigated the associations between Apo concentrations and education, lifestyle factors and dietary intake (macronutrients and 34 food groups). These cross-sectional associations were examined among 24,984 individuals in a Swedish population-based cohort. Baseline examinations of the cohort were conducted between 1991 and 1996. Dietary intake was assessed using a modified diet history method. The main determinants of high ApoA1 concentrations ( r between 0.05 and 0.25) were high alcohol consumption, high physical activity, non-smoking, and a low body mass index (BMI), and the main determinants of high ApoB concentrations were smoking and a high BMI. The intake of sucrose and food products containing added sugar (such as pastries, sweets, chocolate, jam/sugar and sugar-sweetened beverages) was negatively correlated with ApoA1 concentrations and positively correlated with ApoB concentrations and the ApoB/ApoA1 ratio, whereas the intake of fermented dairy products, such as fermented milk and cheese, was positively correlated with ApoA1 concentrations and negatively correlated with the ApoB/ApoA1 ratio. These results indicate that smoking, obesity, low physical activity, low alcohol consumption and a diet high in sugar and low in fermented dairy products are correlated with an unfavorable Apo profile.

  12. Cactus: An Introduction to Regression

    Science.gov (United States)

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

  13. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Vol. 17, No. 4 (2015), pp. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  14. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the
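
    As an illustration of the Cox model described above, the following minimal sketch fits a proportional hazards model in Python; it assumes the third-party lifelines package and uses its bundled Rossi recidivism dataset rather than any data from the article.

```python
# Minimal sketch of a Cox proportional hazards fit, assuming the third-party
# lifelines package (and its bundled Rossi dataset) is available.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                      # durations in 'week', event flag in 'arrest'
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
# exponentiated coefficients are hazard ratios, adjusted for the other covariates
print(cph.summary[["coef", "exp(coef)"]])
```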

  15. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  16. Clinical study of combined determination of CYFRA21-1, CEA, NSE in the patients with lung cancer. A retrospective analyses in 239 cases

    International Nuclear Information System (INIS)

    Jin Xiumu; Xie Wenhui; Yu Zhichang; Zhang Peiling

    2000-01-01

    Three tumor markers, CEA, CYFRA 21-1 and NSE, were assayed in 239 cases with lung cancer (adenocarcinoma 129, squamous-cell carcinoma 59, small cell lung cancer 35, mixed type of adenocarcinoma and squamous-cell carcinoma 16) and 66 cases with benign lung disease. The positive rates of CEA, CYFRA 21-1 and NSE for detecting lung cancer were 51.4%, 43.5% and 48.1%, respectively. There appeared to be a relationship between sensitivity and the pathologic pattern of lung cancer. The highest diagnostic sensitivities were 76.3% for CYFRA 21-1 in the detection of squamous-cell carcinoma, 57.4% for CEA in adenocarcinoma and 94.3% for NSE in small cell lung cancer. In 49 cases with pleural effusion (adenocarcinoma 27, small cell lung cancer 7, benign disease 15), the three tumor markers were measured in both serum and pleural fluid. The results indicated that the sensitivity of CEA and CYFRA 21-1 in pleural fluid in patients with adenocarcinoma was superior to that in serum. Measurement of NSE in pleural fluid was a very reliable method for diagnosing small cell lung cancer. The sensitivity and specificity of CEA, CYFRA 21-1 and NSE in the different pathologic patterns of lung cancer were compared, as were the false positives and false negatives in benign lung disease. Moreover, the clinical role and necessity of combined determination are also discussed.

  17. Use of the Accusport semi-automated analyser to determine blood lactate as an aid in the clinical assessment of horses with colic

    Directory of Open Access Journals (Sweden)

    M.L. Schulman

    2001-07-01

    Full Text Available The most useful diagnostic methods in the initial evaluation of horses with colic assess the morphological and functional status of the gastrointestinal tract and cardiovascular status. This evaluation is best achieved using a combination of clinical and laboratory data. Blood lactate concentration (BL) is one of these variables. BL rises mainly due to poor tissue perfusion and anaerobic glycolysis associated with shock, providing an indicator of both the severity of disease and its prognosis. A hand-held lactate meter, Accusport, provides a rapid (60 seconds), inexpensive, dry-chemical-based determination of BL. This trial evaluated the Accusport's ability to provide BL data as an adjunct to the initial clinical evaluation of horses with colic. The accuracy of the Accusport was tested by evaluating its interchangeability with the benchmark enzymatic kit evaluation of BL in a trial using data collected firstly from 10 clinically normal control horses and subsequently from 48 horses presented with signs of colic. The BL values were recorded together with the clinical variables of heart rate (HR), capillary refill time (CRT), haematocrit (Hct), and pain character and severity on the initial assessment of the colic horses. Information regarding the choice of therapeutic management (medical or surgical) and eventual case outcome (full recovery or died/euthanased) was recorded. The Accusport was found to be interchangeable with the enzymatic kit for recording BL values in colic horses; horses with BL above 8 mmol/L died or were euthanased.

  18. QCD analyses and determinations of $\\alpha_{s}$ in $e^{+}e^{-}$ annihilation at energies between 35 and 189 GeV

    CERN Document Server

    Pfeifenschneider, P.; Movilla Fernandez, P.A.; Abbiendi, G.; Ackerstaff, K.; Akesson, P.F.; Alexander, G.; Allison, John; Anderson, K.J.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Bailey, I.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Batley, J.R.; Baumann, S.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Bentvelsen, S.; Bethke, S.; Biguzzi, A.; Bloodworth, I.J.; Bock, P.; Bohme, J.; Boeriu, O.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Cammin, J.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Chrisman, D.; Ciocca, C.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Cooke, O.C.; Couchman, J.; Couyoumtzelis, C.; Coxe, R.L.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Dallison, S.; Davis, R.; de Roeck, A.; Dervan, P.; Desch, K.; Dienes, B.; Dixit, M.S.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Estabrooks, P.G.; Etzion, E.; Fabbri, F.; Fanfani, A.; Fanti, M.; Faust, A.A.; Feld, L.; Ferrari, P.; Fiedler, F.; Fierro, M.; Fleck, I.; Frey, A.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Gingrich, D.M.; Glenzinski, D.; Goldberg, J.; Gorn, W.; Grandi, C.; Graham, K.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Hajdu, C.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Harel, A.; Hargrove, C.K.; Harin-Dirac, M.; Hauke, A.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hildreth, M.D.; Hill, J.C.; Hobson, P.R.; Hocker, James Andrew; Hoffman, Kara Dion; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Imrie, D.C.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jimack, M.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Kayal, P.I.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klier, A.; Kobayashi, T.; Kobel, M.; Kokott, T.P.; Kolrep, M.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kupper, M.; Kyberd, P.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Lawson, I.; Layter, J.G.; Leins, A.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; Lillich, J.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Lu, J.; Ludwig, J.; Macchiolo, A.; Macpherson, A.; Mader, W.; Mannelli, M.; Marcellini, S.; Marchant, T.E.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Mendez-Lorenzo, P.; Merritt, F.S.; Mes, H.; Meyer, I.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Perez-Ochoa, R.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Przybycien, M.; Quadt, A.; Rembser, C.; Rick, H.; Robins, S.A.; Rodning, N.; Roney, J.M.; Rosati, S.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sang, W.M.; Sarkisian, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schmitt, S.; Schoning, A.; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; 
Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Spagnolo, S.; Sproston, M.; Stahl, A.; Stephens, K.; Stoll, K.; Strom, David M.; Strohmer, R.; Surrow, B.; Talbot, S.D.; Tarem, S.; Taylor, R.J.; Teuscher, R.; Thiergen, M.; Thomas, J.; Thomson, M.A.; Torrence, E.; Towers, S.; Trefzger, T.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Van Kooten, Rick J.; Vannerem, P.; Verzocchi, M.; Voss, H.; Waller, D.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Zacek, V.; Zer-Zion, D.; Jade, The

    2000-01-01

    We employ data taken by the JADE and OPAL experiments for an integrated QCD study in hadronic e+e- annihilations at c.m.s. energies ranging from 35 GeV through 189 GeV. The study is based on jet-multiplicity related observables. The observables are obtained to high jet resolution scales with the JADE, Durham, Cambridge and cone jet finders, and compared with the predictions of various QCD and Monte Carlo models. The strong coupling strength, alpha_s, is determined at each energy by fits of O(alpha_s^2) calculations, as well as matched O(alpha_s^2) and NLLA predictions, to the data. Matching schemes are compared, and the dependence of the results on the choice of the renormalization scale is investigated. The combination of the results using matched predictions gives alpha_s(MZ) = 0.1187 (+0.0034, -0.0019). The strong coupling is also obtained, at lower precision, from O(alpha_s^2) fits of the c.m.s. energy evolution of some of the observables. A qualitative comparison is made between the data and a recent MLLA p...

  19. delta 13C analyses of vegetable oil fatty acid components, determined by gas chromatography--combustion--isotope ratio mass spectrometry, after saponification or regiospecific hydrolysis.

    Science.gov (United States)

    Woodbury, S E; Evershed, R P; Rossell, J B

    1998-05-01

    The delta 13C values of the major fatty acids of several different commercially important vegetable oils were measured by gas chromatography--combustion--isotope ratio mass spectrometry. The delta 13C values obtained were found to fall into two distinct groups, representing the C3 and C4 plant classes from which the oils were derived. The delta 13C values of the oils were measured by continuous flow elemental isotope ratio mass spectrometry and were found to be similar to those of their fatty acids, with slight differences between individual fatty acids. Investigations were then made into whether the position a fatty acid occupies on the glycerol backbone influences its delta 13C value. Pancreatic lipase was employed to selectively hydrolyse fatty acids from the 1- and 3-positions, with the progress of the reaction being followed by high-temperature gas chromatography in order to determine the optimum incubation time. The 2-monoacylglycerols were then isolated by thin-layer chromatography and fatty acid methyl esters prepared. The delta 13C values obtained indicate that fatty acids from any position on the glycerol backbone are isotopically identical. Thus, whilst quantification of fatty acid composition at the 2-position and measurement of delta 13C values of oils and their major fatty acids are useful criteria in edible oil purity assessment, measurement of delta 13C values of fatty acids from the 2-position does not assist with oil purity assignments.

  20. QCD analyses and determinations of αs in e+e- annihilation at energies between 35 and 189 GeV

    International Nuclear Information System (INIS)

    Pfeifenschneider, P.; Biebel, O.; Movilla Fernandez, P.A.

    2000-01-01

    We employ data taken by the JADE and OPAL experiments for an integrated QCD study in hadronic e+e- annihilations at c.m.s. energies ranging from 35 GeV through 189 GeV. The study is based on jet-multiplicity related observables. The observables are obtained to high jet resolution scales with the JADE, Durham, Cambridge and cone jet finders, and compared with the predictions of various QCD and Monte Carlo models. The strong coupling strength, α_s, is determined at each energy by fits of O(α_s^2) calculations, as well as matched O(α_s^2) and NLLA predictions, to the data. Matching schemes are compared, and the dependence of the results on the choice of the renormalization scale is investigated. The combination of the results using matched predictions gives α_s(M_Z0) = 0.1187 (+0.0034, -0.0019). The strong coupling is also obtained, at lower precision, from O(α_s^2) fits of the c.m.s. energy evolution of some of the observables. A qualitative comparison is made between the data and a recent MLLA prediction for mean jet multiplicities. (orig.)

  1. Determinations of silicon and phosphorus in rice planted on a district of high incidence of amyotrophic lateral sclerosis by neutron activation and X-ray fluorescence analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mizumoto, Yoshihiko; Ishikawa, Teruumi; Kusakabe, Toshio; Katsurayama, Kousuke (Kinki Univ., Higashi-Osaka, Osaka (Japan)); Iwata, Shiro

    1989-12-01

    Silicon and phosphorus contents in polished and unpolished rice planted in a district with a high incidence of amyotrophic lateral sclerosis (ALS) have been determined by neutron activation and X-ray fluorescence methods, and compared with those from control areas. In the neutron activation analysis, β-ray spectra of ³²P produced by the ³¹P(n,γ)³²P reaction on polished and unpolished rice were measured with a low background β-ray spectrometer. In the X-ray fluorescence analysis, characteristic X-rays were analyzed with a wavelength dispersive X-ray fluorescence spectrometer. Silicon contents in polished and unpolished rice from the ALS area are 42 µg·g⁻¹ and 370 µg·g⁻¹, respectively, and the corresponding phosphorus contents are 1210 µg·g⁻¹ and 3370 µg·g⁻¹, respectively. The data for the ALS area are equal to those for the control area within a standard deviation. (author).

  2. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
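
    The coverage argument above is easy to reproduce. The sketch below, which assumes numpy and statsmodels are available and uses simulated data, checks how often the 95% confidence interval for the OLS slope covers the true value when the errors are strongly skewed.

```python
# Sketch: 95% CI coverage for the OLS slope when errors are skewed
# (mean-zero exponential), illustrating that non-normal errors need not
# hurt large-sample inference. Assumes numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
true_slope, n, n_sim, covered = 0.5, 500, 1000, 0

for _ in range(n_sim):
    x = rng.normal(size=n)
    e = rng.exponential(scale=1.0, size=n) - 1.0   # skewed, mean-zero errors
    y = 1.0 + true_slope * x + e
    res = sm.OLS(y, sm.add_constant(x)).fit()
    lo, hi = res.conf_int()[1]                     # CI for the slope
    covered += lo <= true_slope <= hi

print(f"Empirical coverage: {covered / n_sim:.3f}")  # close to 0.95
```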

  3. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java, using a Gaussian kernel as the weighting function. GWR uses the diagonal matrix obtained from the Gaussian kernel function as a weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. In this research, the coefficient of determination R2 is 68.64%. There are two categories of district, which are influenced by different significant factors.
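
    The core GWR computation the abstract describes is a weighted least-squares fit at every location, with Gaussian kernel weights that decay with distance. A minimal numpy sketch with made-up coordinates, covariates and bandwidth:

```python
# Sketch of the core GWR step: a weighted least-squares fit at each
# location, with Gaussian kernel weights based on distance (numpy only;
# data and bandwidth are hypothetical).
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Return one local coefficient vector (intercept + slopes) per location."""
    Xd = np.column_stack([np.ones(len(y)), X])
    betas = np.empty((len(y), Xd.shape[1]))
    for i, c in enumerate(coords):
        d = np.linalg.norm(coords - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))           # e.g. district centroids
X = rng.normal(size=(50, 2))                        # e.g. % farmers, growth rate
y = 1 + X @ np.array([0.8, -0.5]) + 0.1 * coords[:, 0] + rng.normal(0, 0.2, 50)
print(gwr_coefficients(coords, X, y, bandwidth=2.0)[:3])
```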

  4. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  5. Evaluation of sun holiday, diet habits, origin and other factors as determinants of vitamin D status in Swedish primary health care patients: a cross-sectional study with regression analysis of ethnic Swedish and immigrant women.

    Science.gov (United States)

    Björk, Anne; Andersson, Åsa; Johansson, Gunnar; Björkegren, Karin; Bardel, Annika; Kristiansson, Per

    2013-09-03

    Determinants of vitamin D status measured as 25-OH-vitamin D in blood are exposure to sunlight and intake of vitamin D through food and supplements. It is unclear how large the contributions are from these determinants in Swedish primary care patients, considering the low radiation of UVB in Sweden and the fortification of some foods. Asian and African immigrants in Norway and Denmark have been found to have very low levels, but it is not clear whether the same applies to Swedish patients. The purpose of our study was to identify contributors to vitamin D status in Swedish women attending a primary health care centre at latitude 60°N in Sweden. In this cross-sectional, observational study, 61 female patients were consecutively recruited between January and March 2009, irrespective of reason for attending the clinic. The women were interviewed about their sun habits, smoking, education and food intake at a personal appointment and blood samples were drawn for measurements of vitamin D and calcium concentrations. Plasma concentration of 25-OH-vitamin D below 25 nmol/L was found in 61% (19/31) of immigrant and 7% (2/30) of native women. Multivariate analysis showed that reported sun holiday of one week during the last year at latitude below 40°N with the purpose of sun-bathing and native origin, were significantly, independently and positively associated with 25-OH-vitamin D concentrations in plasma with the strongest association for sun holiday during the past year. Vitamin D deficiency was common among the women in the present study, with sun holiday and origin as main determinants of 25-OH-vitamin D concentrations in plasma. Given a negative effect on health this would imply needs for vitamin D treatment particularly in women with immigrant background who have moved from lower to higher latitudes.

  6. Application of nonlinear regression analysis for ammonium exchange by natural (Bigadic) clinoptilolite

    International Nuclear Information System (INIS)

    Gunay, Ahmet

    2007-01-01

    The experimental data for ammonium exchange by natural Bigadic clinoptilolite were evaluated using nonlinear regression analysis. Three two-parameter isotherm models (Langmuir, Freundlich and Temkin) and three three-parameter isotherm models (Redlich-Peterson, Sips and Khan) were used to analyse the equilibrium data. The fit of the isotherm models was assessed using values from a standard normalization error procedure (SNE) and the coefficient of determination (R2). The HYBRID error function provided the lowest sum of normalized errors, and the Khan model had the best performance for modeling the equilibrium data. Thermodynamic investigation indicated that ammonium removal by clinoptilolite was favorable at lower temperatures and exothermic in nature.
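
    As a concrete example of fitting one of the two-parameter isotherms named above by nonlinear regression, the sketch below fits the Langmuir model with scipy's curve_fit; the equilibrium data and starting values are invented for illustration.

```python
# Sketch: nonlinear least-squares fit of the two-parameter Langmuir
# isotherm q = q_max * K * C / (1 + K * C). Assumes scipy; the
# equilibrium data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # mg/L (hypothetical)
q_eq = np.array([4.8, 9.5, 14.2, 18.0, 20.5, 21.8])   # mg/g (hypothetical)

params, _ = curve_fit(langmuir, C_eq, q_eq, p0=[20.0, 0.1])
residuals = q_eq - langmuir(C_eq, *params)
r2 = 1 - np.sum(residuals**2) / np.sum((q_eq - q_eq.mean())**2)
print(f"q_max = {params[0]:.2f}, K = {params[1]:.3f}, R^2 = {r2:.3f}")
```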

  7. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
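
    The measurement-error correction proposed in the paper is not part of standard libraries; the sketch below shows only ordinary quantile (median) regression with statsmodels on simulated data, as a baseline for what the paper improves upon.

```python
# Sketch of ordinary quantile regression (median regression) with
# statsmodels; the measurement-error correction described in the paper
# is NOT implemented here. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.7 * x + rng.standard_t(df=3, size=200)   # heavy-tailed noise

X = sm.add_constant(x)
median_fit = sm.QuantReg(y, X).fit(q=0.5)
print(median_fit.params)                              # intercept and slope at the median
```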

  8. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  9. Directional quantile regression in R

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

    Vol. 53, No. 3 (2017), pp. 480-492 ISSN 0023-5954 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : multivariate quantile * regression quantile * halfspace depth * depth contour Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

  10. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from the orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented, which is realized using standard software for regression analysis.
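
    The "fictitious data" regularization mentioned above can be illustrated with ridge regression: appending sqrt(lambda) times the identity as extra rows of the design matrix (with zero responses) and solving ordinary least squares gives exactly the ridge estimate. A numpy sketch with simulated data:

```python
# Sketch: regularization by appending a fictitious, orthogonal block of
# "observations" to the design matrix, which is algebraically equivalent
# to ridge regression (numpy only; data and lambda are hypothetical).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(0, 0.1, 30)

lam = 0.5
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(X.shape[1])])  # fictitious rows
y_aug = np.concatenate([y, np.zeros(X.shape[1])])          # fictitious responses

beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print(np.allclose(beta_aug, beta_ridge))                   # True
```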

  11. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate the issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We discuss how to estimate the parameters and hyper-parameters and how to predict. Furthermore, simulation studies are explained in the last section.
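
    A hedged sketch of the idea, using scikit-learn's GaussianProcessClassifier with an RBF kernel over coordinates as a stand-in for GPR with a logit link (scikit-learn approximates the logistic likelihood with a Laplace approximation); the spatial data are simulated.

```python
# Sketch: Gaussian-process classification of binary spatial outcomes with
# an RBF kernel over coordinates (scikit-learn assumed). Data are simulated.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(150, 2))
logit = 2.0 * np.sin(coords[:, 0]) - 0.3 * coords[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0))
gpc.fit(coords, y)
print(gpc.predict_proba(np.array([[5.0, 5.0]])))   # predicted probability at a new site
```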

  12. Changes in persistence, spurious regressions and the Fisher hypothesis

    DEFF Research Database (Denmark)

    Kruse, Robinson; Ventosa-Santaulària, Daniel; Noriega, Antonio E.

    Declining inflation persistence has been documented in numerous studies. When such series are analyzed in a regression framework in conjunction with other persistent time series, spurious regressions are likely to occur. We propose to use the coefficient of determination R2 as a test statistic to...
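
    The spurious-regression phenomenon referred to above is easy to demonstrate: regressing one random walk on an independent one routinely produces a large R2 and a huge t-statistic. A small simulation sketch (numpy and statsmodels assumed):

```python
# Sketch: spurious regression between two independent random walks.
# R^2 and the slope t-statistic are typically far larger than chance
# would suggest (numpy and statsmodels assumed).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
x = np.cumsum(rng.normal(size=n))       # independent random walks,
y = np.cumsum(rng.normal(size=n))       # no true relationship

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 = {res.rsquared:.2f}, slope t-stat = {res.tvalues[1]:.1f}")
```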

  13. Determination of correction coefficients for quantitative analysis by mass spectrometry. Application to uranium impurities analysis; Recherche des coefficients de correction permettant l'analyse quantitative par spectrometrie de masse. Application a l'analyse d'impuretes dans l'uranium

    Energy Technology Data Exchange (ETDEWEB)

    Billon, J P [Commissariat a l' Energie Atomique, Bruyeres-le-Chatel (France). Centre d' Etudes

    1970-07-01

    Some basic principles of spark source mass spectrometry are recalled. It is shown that, provided a number of precautions are taken, this method can be used for quantitative analysis. Assuming that a relation constant in time exists between the analysed solid sample and the ion beam it produces, experimental relative sensitivity factors were determined for impurities in uranium matrices. As the first practical results were in fairly good agreement with a simple theory of the ionization yield in the spark source, the possibility of directly applying the theoretically derived relative sensitivity factors was studied, again for uranium matrices. (author)

  14. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares) either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
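
    The tutorial's worked examples are in R; for consistency with the other sketches in this collection, the following Python sketch (statsmodels assumed) fits a Poisson regression to simulated "classes skipped" counts and reports rate ratios.

```python
# Sketch of Poisson regression for count outcomes (statsmodels assumed);
# the tutorial itself uses R. The "days skipped" data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
gpa = rng.uniform(1.0, 4.0, n)
rate = np.exp(1.5 - 0.6 * gpa)               # higher GPA -> fewer skipped classes
skipped = rng.poisson(rate)

X = sm.add_constant(gpa)
fit = sm.GLM(skipped, X, family=sm.families.Poisson()).fit()
print(np.exp(fit.params))                    # rate ratios per unit change
```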

  15. Development of speciation analysis for selenium in nutritional supplements by the determination of the seleno-methionine; Developpement de l'analyse de speciation du selenium dans des complements alimentaires par la determination de la selenomethionine

    Energy Technology Data Exchange (ETDEWEB)

    Sannac, S.; Labarraque, G.; Fisicaro, P. [Laboratoire National de Metrologie et d' Essai (LNE), 75 - Paris (France); Sannac, S.; Pannier, F.; Potin-Gautier, M. [Pau Univ. et des Pays de l' Adour, CNRS/UMR 5254, Lab. de Chimie Analytique, Bio-Inorganique et Environnement (IPREM), 64 (France)

    2009-07-01

    The development of a reference method in analytical chemistry is presented. Liquid chromatography coupled with inductively coupled plasma mass spectrometry is employed to perform speciation analysis. Applications are developed for the determination of seleno-methionine in nutritional supplements. The use of isotope dilution, a primary method, is required to establish measurement traceability. Method validation is ensured by the study of a certified reference material. (authors)

  16. The evolution of GDP in USA using cyclic regression analysis

    OpenAIRE

    Catalin Angelo IOAN; Gina IOAN

    2013-01-01

    Based on the four major types of economic cycles (Kondratieff, Juglar, Kitchin, Kuznet), the paper aims to determine their actual length (for the U.S. economy) using cyclic regressions based on Fourier analysis.

  17. A method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
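
    A compact version of the described procedure: obtain nominal starting values from a linear fit of log(y), then iterate Taylor-linearized (Gauss-Newton) corrections for the decay model y = a*exp(-b*t). Numpy only; the decay data are simulated.

```python
# Sketch of the described technique: nominal starting values from a linear
# fit of log(y), then iterated Taylor-linearized (Gauss-Newton) corrections
# for the decay model y = a * exp(-b * t). Numpy only; data are simulated.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 40)
y = 5.0 * np.exp(-0.4 * t) + rng.normal(0.0, 0.05, t.size)

# initial nominal estimates from a linear fit of log(y) against t
slope, intercept = np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)
a, b = np.exp(intercept), -slope

for _ in range(20):
    f = a * np.exp(-b * t)
    J = np.column_stack([np.exp(-b * t),                 # df/da
                         -a * t * np.exp(-b * t)])       # df/db
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)    # correction step
    a, b = a + delta[0], b + delta[1]
    if np.max(np.abs(delta)) < 1e-8:                     # predetermined criterion
        break

print(f"a = {a:.3f}, b = {b:.3f}")
```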

  18. Spontaneous regression of pulmonary bullae

    International Nuclear Information System (INIS)

    Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

    2002-01-01

    The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult because of gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bullae in both lungs and/or have a history of pulmonary emphysema. Others have a giant bulla without emphysematous change in the lungs. Our present case had been treated for lung cancer, with no evidence of local recurrence. He had no emphysematous change on lung function testing and had no complaints, although the high resolution CT scan showed evidence of underlying minimal changes of emphysema. Ortin and Gurney presented three cases of spontaneous reduction in the size of bullae. Interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the wall of the bulla, which was also observed in our patient. The case we describe is of interest not only because of the rarity with which regression of pulmonary bullae has been reported in the literature, but also because of the spontaneous improvement in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

  19. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
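
    A brief sketch of the interpretation point: a linear regression coefficient is a mean difference per unit of the covariate, while a logistic regression coefficient becomes an odds ratio after exponentiation. The code assumes statsmodels and uses simulated, hypothetical respiratory-style variables.

```python
# Sketch of coefficient interpretation: a linear-model coefficient is a mean
# difference per unit of exposure, a logistic coefficient becomes an odds
# ratio after exponentiation (statsmodels assumed; data simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 400
smoker = rng.binomial(1, 0.3, n)
age = rng.uniform(20, 70, n)
fev1 = 4.0 - 0.5 * smoker - 0.02 * age + rng.normal(0, 0.3, n)   # lung function
wheeze = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.0 * smoker))))

X = sm.add_constant(np.column_stack([smoker, age]))
print(sm.OLS(fev1, X).fit().params)                # mean change in FEV1 per unit
logit = sm.Logit(wheeze, sm.add_constant(smoker)).fit(disp=False)
print(np.exp(logit.params[1]))                     # odds ratio for smoking
```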

  20. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  1. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project `Feasibility of electricity production from biomass by pressurized gasification systems` within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feed stocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feed stocks. The analyses of 15 Scandinavian and European biomass feed stocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  2. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF‐weights) and call the combination of weights the doubly weighted SVR. We illustrate...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
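
    In scikit-learn, per-observation weights can be passed to SVR through sample_weight, which plays a role comparable to the OF-weights described above (weights on the slack variables in the objective); the proposed CF-weights on the constraints are not available off the shelf. A small sketch:

```python
# Sketch: observation weighting in support vector regression via
# scikit-learn's sample_weight, analogous to the OF-weights described
# above (data and weighting scheme are hypothetical).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X.ravel()) + rng.normal(0, 0.1, 100)
weights = np.exp(-0.3 * np.abs(X.ravel() - 5.0))   # e.g. emphasize nearby observations

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X, y, sample_weight=weights)
print(model.predict([[5.0]]))
```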

  3. Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications

    Directory of Open Access Journals (Sweden)

    Guoqi Qian

    2016-01-01

    Full Text Available Regression clustering is a mixture of unsupervised and supervised statistical learning; it is a data mining method found in a wide range of applications including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to least squares and robust statistical methods. We also provide a model-selection-based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method.
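
    A minimal sketch of the iterative partition-and-regression loop at the heart of regression clustering: assign each point to the regression line that fits it best, refit each line, and repeat until the partition stops changing. Numpy only; the number of clusters is assumed known and the data are simulated.

```python
# Sketch of iterative partition-and-regression: alternate between assigning
# points to the best-fitting regression line and refitting each line
# (numpy only; k is assumed known, data are simulated).
import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, 200)
true_labels = rng.integers(0, 2, 200)
y = np.where(true_labels == 0, 1 + 2 * x, 8 - x) + rng.normal(0, 0.3, 200)

k = 2
X = np.column_stack([np.ones_like(x), x])
labels = rng.integers(0, k, x.size)                  # random initial partition
betas = np.zeros((k, 2))

for _ in range(50):
    for j in range(k):                               # refit each cluster's line
        mask = labels == j
        if mask.sum() >= 2:                          # guard against empty clusters
            betas[j] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    new_labels = np.abs(y[:, None] - X @ betas.T).argmin(axis=1)
    if np.array_equal(new_labels, labels):           # converged partition
        break
    labels = new_labels

print(betas)                                         # one (intercept, slope) per cluster
```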

  4. Determinants of Inequality in Cameroon: A Regression-Based ...

    African Journals Online (AJOL)

    2008-07-22

  5. Analysis of Palm Oil Production, Export, and Government Consumption to Gross Domestic Product of Five Districts in West Kalimantan by Panel Regression

    Science.gov (United States)

    Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.

    2017-04-01

    Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, consisting of cross-section and time series data. Panel regression is a tool which can be utilised to analyse such panel data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM). The model is chosen based on the results of the Chow Test, Hausman Test and Lagrange Multiplier Test. This research analyses the effects of palm oil production, export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang, by panel regression. Based on the results of the analyses, it is concluded that the REM, whose adjusted coefficient of determination is 0.823, is the best model in this case. Also, according to the results, only export and government consumption influence the GDP of the districts.
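
    Of the three panel models named above, the fixed-effect (within) estimator is the easiest to sketch by hand: demean every variable within each entity and run OLS on the demeaned data. The sketch below (pandas/numpy, simulated district data with hypothetical column names) does exactly that; the Chow, Hausman and Lagrange Multiplier tests are not reproduced.

```python
# Sketch of a fixed-effects (within) panel estimator by entity demeaning
# (pandas/numpy only; district names, columns and data are hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
districts, years = ["A", "B", "C", "D", "E"], range(2010, 2018)
rows = [(d, t, rng.normal(), rng.normal()) for d in districts for t in years]
df = pd.DataFrame(rows, columns=["district", "year", "production", "export"])
effect = df["district"].map({d: i for i, d in enumerate(districts)}).astype(float)
df["gdp"] = 2.0 * df["production"] + 1.0 * df["export"] + effect + rng.normal(0, 0.1, len(df))

# demean within each district to sweep out the district fixed effects
demeaned = df.groupby("district")[["gdp", "production", "export"]].transform(lambda s: s - s.mean())
beta, *_ = np.linalg.lstsq(demeaned[["production", "export"]].to_numpy(),
                           demeaned["gdp"].to_numpy(), rcond=None)
print(beta)    # within-estimates of the production and export effects
```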

  6. Improved radiocarbon analyses of modern human hair to determine the year-of-death by cross-flow nanofiltered amino acids: common contaminants, implications for isotopic analysis, and recommendations.

    Science.gov (United States)

    Santos, Guaciara M; De La Torre, Hector A Martinez; Boudin, Mathieu; Bonafini, Marco; Saverwyns, Steven

    2015-10-15

    In forensic investigation, radiocarbon ((14)C) measurements of human tissues (i.e., nails and hair) can help determine the year-of-death. However, the frequent use of cosmetics can bias hair (14)C results as well as stable isotope values. Evidence shows that hair exogenous impurities percolate beyond the cuticle layer, and therefore conventional pretreatments are ineffective in removing them. We conducted isotopic analysis ((14)C, δ(13)C, δ(15)N and C/N) of conventionally treated and cross-flow nanofiltered amino acid (CFNAA)-treated samples (scalp- and body-hair) from a single female subject using fingernails as a reference. The subject studied frequently applies a permanent dark-brown dye kit to her scalp-hair and uses other care products for daily cleansing. We also performed pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) analyses of CFNAA-treated scalp-hair to identify contaminant remnants that could possibly interfere with isotopic analyses. The conventionally treated scalp- and body-hair showed (14)C offsets of ~21‰ and ~9‰, respectively. These offsets confirm the contamination by petrochemicals in modern human hair. A single CFNAA extraction reduced those offsets by ~34%. No significant improvement was observed when sequential extractions were performed, as it appears that the procedure introduced some foreign contaminants. A chromatogram of the CFNAA scalp-hair pyrolysis products showed the presence of petroleum and plant/animal compound residues, which can bias isotopic analyses. We have demonstrated that CFNAA extractions can partially remove cosmetic contaminants embedded in human hair. We conclude that fingernails are still the best source of keratin protein for year-of-death determinations and isotopic analysis, with body-hair and/or scalp-hair coupled with CFNAA extraction a close second. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Mapping geogenic radon potential by regression kriging

    Energy Technology Data Exchange (ETDEWEB)

    Pásztor, László [Institute for Soil Sciences and Agricultural Chemistry, Centre for Agricultural Research, Hungarian Academy of Sciences, Department of Environmental Informatics, Herman Ottó út 15, 1022 Budapest (Hungary); Szabó, Katalin Zsuzsanna, E-mail: sz_k_zs@yahoo.de [Department of Chemistry, Institute of Environmental Science, Szent István University, Páter Károly u. 1, Gödöllő 2100 (Hungary); Szatmári, Gábor; Laborczi, Annamária [Institute for Soil Sciences and Agricultural Chemistry, Centre for Agricultural Research, Hungarian Academy of Sciences, Department of Environmental Informatics, Herman Ottó út 15, 1022 Budapest (Hungary); Horváth, Ákos [Department of Atomic Physics, Eötvös University, Pázmány Péter sétány 1/A, 1117 Budapest (Hungary)

    2016-02-15

    Radon ({sup 222}Rn) gas is produced in the radioactive decay chain of uranium ({sup 238}U) which is an element that is naturally present in soils. Radon is transported mainly by diffusion and convection mechanisms through the soil depending mainly on the physical and meteorological parameters of the soil and can enter and accumulate in buildings. Health risks originating from indoor radon concentration can be attributed to natural factors and is characterized by geogenic radon potential (GRP). Identification of areas with high health risks require spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in the determination of GRP. In order to compile a reliable GRP map for a model area in Central-Hungary, spatial auxiliary information representing GRP forming environmental factors were taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were searched for to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which are interpolated by kriging. The final map is the sum of the two component predictions. Overall accuracy of the map was tested by Leave-One-Out Cross-Validation. Furthermore the spatial reliability of the resultant map is also estimated by the calculation of the 90% prediction interval of the local prediction values. The applicability of the applied method as well as that of the map is discussed briefly. - Highlights: • A new method

  8. Mapping geogenic radon potential by regression kriging

    International Nuclear Information System (INIS)

    Pásztor, László; Szabó, Katalin Zsuzsanna; Szatmári, Gábor; Laborczi, Annamária; Horváth, Ákos

    2016-01-01

    Radon ( 222 Rn) gas is produced in the radioactive decay chain of uranium ( 238 U) which is an element that is naturally present in soils. Radon is transported mainly by diffusion and convection mechanisms through the soil depending mainly on the physical and meteorological parameters of the soil and can enter and accumulate in buildings. Health risks originating from indoor radon concentration can be attributed to natural factors and is characterized by geogenic radon potential (GRP). Identification of areas with high health risks require spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in the determination of GRP. In order to compile a reliable GRP map for a model area in Central-Hungary, spatial auxiliary information representing GRP forming environmental factors were taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were searched for to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which are interpolated by kriging. The final map is the sum of the two component predictions. Overall accuracy of the map was tested by Leave-One-Out Cross-Validation. Furthermore the spatial reliability of the resultant map is also estimated by the calculation of the 90% prediction interval of the local prediction values. The applicability of the applied method as well as that of the map is discussed briefly. - Highlights: • A new method, regression
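
    A simplified regression-kriging sketch along the lines described above: an OLS trend on auxiliary covariates plus simple kriging of the residuals under an assumed exponential covariance. Numpy only; variogram fitting, the exhaustive auxiliary layers and the cross-validation step are omitted.

```python
# Simplified regression-kriging sketch: an OLS trend on auxiliary covariates
# plus simple kriging of the residuals under an assumed exponential
# covariance (numpy only; sill and range are hypothetical).
import numpy as np

def regression_kriging(X_obs, coords_obs, y_obs, X_new, coords_new,
                       sill=1.0, range_=1.0):
    # deterministic component: multiple linear regression on covariates
    A = np.column_stack([np.ones(len(y_obs)), X_obs])
    beta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    resid = y_obs - A @ beta

    # stochastic component: simple kriging of the regression residuals
    def cov(d):
        return sill * np.exp(-d / range_)

    D = np.linalg.norm(coords_obs[:, None, :] - coords_obs[None, :, :], axis=-1)
    C = cov(D) + 1e-9 * np.eye(len(y_obs))            # small jitter for stability
    D_new = np.linalg.norm(coords_new[:, None, :] - coords_obs[None, :, :], axis=-1)
    weights = np.linalg.solve(C, cov(D_new).T)        # (n_obs, n_new)

    trend = np.column_stack([np.ones(len(X_new)), X_new]) @ beta
    return trend + weights.T @ resid                  # sum of the two components

rng = np.random.default_rng(12)
coords = rng.uniform(0, 5, (40, 2))
X = rng.normal(size=(40, 2))                          # e.g. soil and geology covariates
y = 1 + X @ np.array([0.5, -0.3]) + rng.normal(0, 0.2, 40)
print(regression_kriging(X, coords, y, X[:3], coords[:3]))
```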

  9. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. Meanwhile, the aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models and also to analyze the found model functions by statistical tools. Keywords: Data mining, linear regression, logistic regression....

  10. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  11. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

    DEFF Research Database (Denmark)

    Merlo, Juan; Wagner, Philippe; Ghith, Nermin

    2016-01-01

    BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that disting...

  12. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Science.gov (United States)

    Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

    2012-01-01

    Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

  13. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  14. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.

  15. Estimating the causes of traffic accidents using logistic regression and discriminant analysis.

    Science.gov (United States)

    Karacasu, Murat; Ergül, Barış; Altin Yavuz, Arzu

    2014-01-01

    Factors that affect traffic accidents have been analysed in various ways. In this study, we use the methods of logistic regression and discriminant analysis to determine the damages due to injury and non-injury accidents in the Eskisehir Province. Data were obtained from the accident reports of the General Directorate of Security in Eskisehir; 2552 traffic accidents between January and December 2009 were investigated regarding whether they resulted in injury. According to the results, the effects of traffic accidents were reflected in the variables. These results provide a wealth of information that may aid future measures toward the prevention of undesired results.

  16. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model is a representation of the relationship between independent variables and a dependent variable. When the dependent variable is categorical, the logistic regression model is used to calculate the odds. A logistic regression model for a dependent variable whose categories are ordered is an ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units used are 144 villages in Semarang City. The research results in a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.

  17. A hydrologic regression sediment-yield model for two ungaged watershed outlet stations in Africa

    International Nuclear Information System (INIS)

    Moussa, O.M.; Smith, S.E.; Shrestha, R.L.

    1991-01-01

    A hydrologic regression sediment-yield model was established to determine the relationship between water discharge and suspended sediment discharge at the Blue Nile and the Atbara River outlet stations during the flood season. The model consisted of two main submodels: (1) a suspended sediment discharge model, which was used to determine suspended sediment discharge for each basin outlet; and (2) a sediment rating model, which related water discharge and suspended sediment discharge for each outlet station. Due to the absence of suspended sediment concentration measurements at or near the outlet stations, a minimum norm solution, which is based on the minimization of the unknowns rather than the residuals, was used to determine the suspended sediment discharges at the stations. In addition, the sediment rating submodel was regressed by using an observation equations procedure. Verification analyses on the model were carried out and the mean percentage errors were found to be +12.59 and -12.39, respectively, for the Blue Nile and Atbara. The hydrologic regression model was found to be most sensitive to the relative weight matrix, moderately sensitive to the mean water discharge ratio, and slightly sensitive to the concentration variation along the River Nile's course
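
    The sediment rating submodel described above is essentially a power-law relation between water discharge and suspended sediment discharge, fitted as a straight line in log-log space. A numpy sketch with invented discharges (the minimum-norm step for the ungauged outlets is not reproduced):

```python
# Sketch of a sediment rating submodel: a power-law relation
# Qs = a * Qw**b fitted as a straight line in log-log space (numpy only;
# the discharge values are hypothetical).
import numpy as np

rng = np.random.default_rng(13)
Qw = rng.uniform(500, 8000, 60)                           # water discharge (m^3/s)
Qs = 0.002 * Qw**1.6 * np.exp(rng.normal(0, 0.15, 60))    # suspended sediment discharge

b, log_a = np.polyfit(np.log(Qw), np.log(Qs), 1)
a = np.exp(log_a)
predicted = a * Qw**b
mpe = 100 * np.mean((predicted - Qs) / Qs)                # mean percentage error
print(f"a = {a:.4f}, b = {b:.2f}, mean % error = {mpe:.1f}")
```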

  18. Use of probabilistic weights to enhance linear regression myoelectric control.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-12-01

    Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs; similar results were seen in experiments with the two transradial amputees. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.

  19. Independent contrasts and PGLS regression estimators are equivalent.

    Science.gov (United States)

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.

  20. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

  1. Diagnostic Algorithm to Reflect Regressive Changes of Human Papilloma Virus in Tissue Biopsies

    Science.gov (United States)

    Lhee, Min Jin; Cha, Youn Jin; Bae, Jong Man; Kim, Young Tae

    2014-01-01

    Purpose: Landmark indicators have yet to be developed to detect the regression of cervical intraepithelial neoplasia (CIN). We propose that quantitative viral load and indicative histological criteria can be used to differentiate between atypical squamous cells of undetermined significance (ASCUS) and a CIN of grade 1. Materials and Methods: We collected 115 tissue biopsies from women who tested positive for the human papilloma virus (HPV). Nine morphological parameters including nuclear size, perinuclear halo, hyperchromasia, typical koilocyte (TK), abortive koilocyte (AK), bi-/multi-nucleation, keratohyaline granules, inflammation, and dyskeratosis were examined for each case. Correlation analyses, cumulative logistic regression, and binary logistic regression were used to determine optimal cut-off values of HPV copy numbers. The parameters TK, perinuclear halo, multi-nucleation, and nuclear size were significantly correlated quantitatively with HPV copy number. Results: An HPV loading number of 58.9 and an AK number of 20 were optimal to discriminate between negative and subtle findings in biopsies. An HPV loading number of 271.49 and an AK of 20 were optimal for discriminating between equivocal changes and obvious koilocytosis. Conclusion: We propose that a squamous epithelial lesion with AK of >20 and a quantitative HPV copy number between 58.9 and 271.49 represents a new spectrum of subtle pathological findings, characterized by AK in ASCUS. This can be described as a distinct entity and called "regressing koilocytosis". PMID:24532500

  2. Differential item functioning analysis with ordinal logistic regression techniques. DIFdetect and difwithpar.

    Science.gov (United States)

    Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald

    2006-11-01

    We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.
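
    The nested-model logic described above can be illustrated on synthetic data. The Python sketch below assumes a statsmodels version (0.12 or later) that provides OrderedModel; the item responses, ability estimates and group indicator are simulated, and the two likelihood-ratio tests mirror the non-uniform and uniform DIF checks.

      # Sketch of the nested-model DIF test with ordinal logistic regression on synthetic data.
      import numpy as np
      from scipy.stats import chi2
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(1)
      n = 2000
      ability = rng.normal(size=n)                 # IRT ability estimate (stand-in)
      group = rng.integers(0, 2, size=n)           # e.g., language of administration
      # Synthetic ordinal item (0/1/2) with uniform DIF: the group shifts the latent scale
      latent = 1.2 * ability + 0.6 * group + rng.logistic(size=n)
      item = np.digitize(latent, [-0.5, 1.0])

      def fit(X):
          return OrderedModel(item, X, distr="logit").fit(method="bfgs", disp=False)

      m1 = fit(np.column_stack([ability]))                          # ability only
      m2 = fit(np.column_stack([ability, group]))                   # + group (uniform DIF)
      m3 = fit(np.column_stack([ability, group, ability * group]))  # + interaction (non-uniform DIF)

      lr_nonuniform = 2 * (m3.llf - m2.llf)        # test of the interaction term
      lr_uniform = 2 * (m2.llf - m1.llf)           # test of the group term
      print("non-uniform DIF p =", chi2.sf(lr_nonuniform, df=1))
      print("uniform DIF p     =", chi2.sf(lr_uniform, df=1))
      # The article also inspects the change in the ability coefficient between models
      print("ability coefficient change:", m2.params[0] - m1.params[0])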

  3. Analysis of quantile regression as alternative to ordinary least squares

    OpenAIRE

    Ibrahim Abdullahi; Abubakar Yahaya

    2015-01-01

    In this article, an alternative to ordinary least squares (OLS) regression based on an analytical solution in the Statgraphics software is considered, and this alternative is none other than the quantile regression (QR) model. We also present a goodness-of-fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness-of-fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is well ...
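
    As a rough illustration of the approach (not the Statgraphics implementation referred to above), the Python sketch below fits a median regression with statsmodels' QuantReg on synthetic heavy-tailed data and computes an LAD analogue of the coefficient of determination.

      # Median (LAD) regression plus a LAD "coefficient of determination".
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      x = rng.uniform(0, 10, 300)
      y = 2.0 + 0.8 * x + rng.standard_t(df=3, size=300)   # heavy-tailed noise favours LAD
      X = sm.add_constant(x)

      res = sm.QuantReg(y, X).fit(q=0.5)                   # median regression
      resid = y - res.predict(X)

      # 1 - SAE(model) / SAE(median-only model)
      r1 = 1.0 - np.abs(resid).sum() / np.abs(y - np.median(y)).sum()
      print(res.params, "LAD R1 =", round(r1, 3))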

  4. Regression analysis of nuclear plant capacity factors

    International Nuclear Information System (INIS)

    Stocks, K.J.; Faulkner, J.I.

    1980-07-01

    Operating data on all commercial nuclear power plants of the PWR, HWR, BWR and GCR types in the Western World are analysed statistically to determine whether the explanatory variables size, year of operation, vintage and reactor supplier are significant in accounting for the variation in capacity factor. The results are compared with a number of previous studies which analysed only United States reactors. The possibility of specification errors affecting the results is also examined. Although, in general, the variables considered are statistically significant, they explain only a small portion of the variation in the capacity factor. The equations thus obtained should certainly not be used to predict the lifetime performance of future large reactors

  5. Determinants for gallstone formation

    DEFF Research Database (Denmark)

    Shabanzadeh, Daniel Monsted; Sorensen, Lars Tue; Jørgensen, Torben

    2016-01-01

    Gallstone incidence was assessed through repeated ultrasound examinations. Body mass index (BMI), blood pressure, self-rated health, lifestyle variables, blood lipids, and use of female sex hormones were measured at the baseline examination. Statistical analyses included logistic regression. Based ... re-examination were followed up completely (mean 11.6 years, N = 2848). The overall cumulative incidence of gallstones was 0.60% per year. Independent positive determinants for incident gallstones were age, female sex, non-high density lipoprotein (non-HDL) cholesterol, and gallbladder polyps ... associations were found for blood pressure, smoking, alcohol consumption, HDL cholesterol, or triglycerides in meta-analyses. Conclusions: Age, female sex, BMI, non-HDL cholesterol, and polyps are independent determinants for gallstone formation. Incident gallstones and the metabolic syndrome share common risk...

  6. Social determinants of alcohol use among drivers in Calabar | Bello ...

    African Journals Online (AJOL)

    A semistructured questionnaire, which included the World Health Organization Alcohol Use Disorders Identification Test, was administered at interview. Binary and multinomial logistic regression analyses were used to identify social determinants of any and hazardous alcohol use. Results: Determinants of any alcohol use ...

  7. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical ...

  8. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

  9. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
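
    A bare-bones Python version of the underlying machinery, for orientation only: with a squared-exponential kernel and Gaussian noise, the posterior mean and variance of a Gaussian process regression follow from standard linear algebra. The kernel, hyperparameters and data below are all illustrative.

      # Minimal Gaussian process regression sketch (squared-exponential kernel).
      import numpy as np

      def sqexp(a, b, ell=1.0, sf=1.0):
          d = a[:, None] - b[None, :]
          return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

      rng = np.random.default_rng(3)
      x_train = np.sort(rng.uniform(0, 5, 12))
      y_train = np.sin(x_train) + 0.1 * rng.normal(size=12)
      x_test = np.linspace(0, 5, 100)

      noise = 0.1
      K = sqexp(x_train, x_train) + noise**2 * np.eye(len(x_train))
      Ks = sqexp(x_test, x_train)
      Kss = sqexp(x_test, x_test)

      L = np.linalg.cholesky(K)
      alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
      mean = Ks @ alpha                                  # posterior mean
      v = np.linalg.solve(L, Ks.T)
      var = np.diag(Kss) - np.sum(v**2, axis=0)          # posterior variance
      print(mean[:5], var[:5])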

  10. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  11. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  12. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  13. Efficiency of insurance companies: Application of DEA and Tobit analyses

    Directory of Open Access Journals (Sweden)

    Eva Grmanová

    2017-10-01

    The aim of this paper is to determine the relationship between technical efficiency and profitability of insurance companies. The profitability of insurance companies was expressed by such indicators as ROA, ROE and the size of assets. We analysed 15 commercial insurance companies in Slovakia in the period of 2013-2015. Technical efficiency scores were expressed using DEA models. The relationship between the technical efficiency score and the indicators of profitability was expressed using censored regression, i.e. the Tobit regression model, and the Mann-Whitney U-test. The relationship between the technical efficiency score in the CCR and BCC models and all the groups formed on the basis of the return on assets and the group formed based on the return on equity was not confirmed. Statistically significant difference between average technical efficiency score in the CCR model in the group of insurance companies with ROA

  14. A simple beam analyser

    International Nuclear Information System (INIS)

    Lemarchand, G.

    1977-01-01

    (ee'p) experiments make it possible to measure the missing energy distribution as well as the momentum distribution of the extracted proton in the nucleus versus the missing energy. Such experiments are presently conducted on SACLAY's A.L.S. 300 Linac. Electrons and protons are respectively analysed by two spectrometers and detected in their focal planes. Counting rates are usually low and include time coincidences and accidentals. The signal-to-noise ratio depends on the physics of the experiment and the resolution of the coincidence; it is therefore mandatory to obtain a beam current distribution that is as flat as possible. The use of new technologies has made it possible to monitor the behavior of the beam pulse in real time and to determine when the duty cycle can be considered good with respect to a numerical basis

  15. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  16. Establishment of regression dependences. Linear and nonlinear dependences

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1994-01-01

    The main problems of determining linear and 19 types of nonlinear regression dependences are discussed in full. It is taken into consideration that the total dispersions are the sum of the measurement dispersions and the dispersions of the parameter variations themselves. Approaches to determining all of these dispersions are described. It is shown that the least-squares fit gives inconsistent estimates for industrial objects and processes. Correction methods that take into account comparable measurement errors in both variables make it possible to obtain consistent estimates of the regression equation parameters. The condition under which application of the correction technique is expedient is given. The technique for determining nonlinear regression dependences, taking into account the form of the dependence and comparable errors in both variables, is described. 6 refs., 1 tab
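
    The central point above, that least squares becomes inconsistent when both variables carry comparable measurement errors, can be demonstrated numerically. The Python sketch below compares the attenuated OLS slope with a Deming (errors-in-variables) estimate that assumes a known error-variance ratio; it is a generic illustration, not the specific correction technique of the paper.

      # OLS attenuation versus a Deming errors-in-variables fit on synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 10000
      true_x = rng.normal(0, 1, n)
      true_y = 2.0 * true_x + 1.0
      x = true_x + rng.normal(0, 0.5, n)     # measurement error in x
      y = true_y + rng.normal(0, 0.5, n)     # comparable measurement error in y

      sxx = np.var(x, ddof=1)
      syy = np.var(y, ddof=1)
      sxy = np.cov(x, y)[0, 1]

      ols_slope = sxy / sxx                  # attenuated towards zero

      # Deming regression with error-variance ratio delta = var_ey / var_ex = 1
      delta = 1.0
      deming_slope = (syy - delta * sxx
                      + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy)
      print(f"OLS slope: {ols_slope:.3f}, Deming slope: {deming_slope:.3f}  (true slope 2.0)")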

  17. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected ... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  18. Spontaneous regression of a congenital melanocytic nevus

    Directory of Open Access Journals (Sweden)

    Amiya Kumar Nath

    2011-01-01

    Congenital melanocytic nevus (CMN) may rarely regress, which may also be associated with a halo or vitiligo. We describe a 10-year-old girl who presented with a CMN on the left leg since birth, which recently started to regress spontaneously with associated depigmentation in the lesion and at a distant site. Dermoscopy performed at different sites of the regressing lesion demonstrated loss of epidermal pigments first followed by loss of dermal pigments. Histopathology and Masson-Fontana staining demonstrated lymphocytic infiltration and loss of pigment production in the regressing area. Immunohistochemistry staining (S100 and HMB-45), however, showed that nevus cells were present in the regressing areas.

  19. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression ... and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences ... within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric ...
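
    As a reminder of the nonparametric building block used throughout such analyses, the following Python snippet is a minimal Nadaraya-Watson kernel regression estimator on synthetic data; the bandwidth and the "firm size" interpretation are purely illustrative.

      # Nadaraya-Watson kernel regression with a Gaussian kernel.
      import numpy as np

      def nw_regression(x_query, x, y, bandwidth):
          # Gaussian kernel weights for every (query, observation) pair
          w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / bandwidth) ** 2)
          return (w @ y) / w.sum(axis=1)

      rng = np.random.default_rng(5)
      x = rng.uniform(0, 10, 400)                       # e.g., (log) firm size
      y = 1.5 * np.log1p(x) + rng.normal(0, 0.3, 400)   # unknown "production" relationship
      grid = np.linspace(0.5, 9.5, 50)
      print(nw_regression(grid, x, y, bandwidth=0.6)[:5])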

  20. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying ... and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model (typically 2- or 3-D positions) and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain ... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application, both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between...
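
    A compact Python sketch of a weighted least squares adjustment in the sense used above: solve the normal equations with a weight matrix, then derive the variance of unit weight and the parameter covariance as the uncertainty estimate. The design matrix, observations and weights below are invented for illustration.

      # Weighted least squares adjustment with parameter covariance.
      import numpy as np

      rng = np.random.default_rng(6)
      # Hypothetical design: 6 observations of a 2-parameter model y = a + b*t
      t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
      A = np.column_stack([np.ones_like(t), t])           # design matrix
      sigma = np.array([0.1, 0.1, 0.2, 0.2, 0.5, 0.5])    # per-observation std. deviations
      P = np.diag(1.0 / sigma**2)                         # weight matrix
      y = 1.0 + 0.5 * t + rng.normal(0, sigma)

      N = A.T @ P @ A                                     # normal equations
      x_hat = np.linalg.solve(N, A.T @ P @ y)             # adjusted parameters
      v = y - A @ x_hat                                   # residuals
      dof = len(y) - A.shape[1]
      s0_sq = (v @ P @ v) / dof                           # variance of unit weight
      cov_x = s0_sq * np.linalg.inv(N)                    # parameter covariance (uncertainty)
      print(x_hat, np.sqrt(np.diag(cov_x)))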

  1. Analyses of developmental rate isomorphy in ectotherms: Introducing the dirichlet regression

    Czech Academy of Sciences Publication Activity Database

    Boukal S., David; Ditrich, Tomáš; Kutcherov, D.; Sroka, Pavel; Dudová, Pavla; Papáček, M.

    2015-01-01

    Roč. 10, č. 6 (2015), e0129341 E-ISSN 1932-6203 R&D Projects: GA ČR GAP505/10/0096 Grant - others:European Fund(CZ) PERG04-GA-2008-239543; GA JU(CZ) 145/2013/P Institutional support: RVO:60077344 Keywords : ectotherms Subject RIV: ED - Physiology Impact factor: 3.057, year: 2015 http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0129341

  2. The benefits of using quantile regression for analysing the effect of weeds on organic winter wheat

    NARCIS (Netherlands)

    Casagrande, M.; Makowski, D.; Jeuffroy, M.H.; Valantin-Morison, M.; David, C.

    2010-01-01

    In organic farming, weeds are one of the threats that limit crop yield. An early prediction of weed effect on yield loss and the size of late weed populations could help farmers and advisors to improve weed management. Numerous studies predicting the effect of weeds on yield have already been

  3. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  4. The determination of iodine in biological media using radioactivation analysis (1962); Dosage de l'iode dans les milieux biologiques au moyen de l'analyse par radioactivation (1962)

    Energy Technology Data Exchange (ETDEWEB)

    Comar, D [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The object of this study is to show that the application of radioactivation analysis to the determination of iodine in biological media makes it possible to measure iodine concentrations of the order of 0.0001 µg. After a review of the chemical methods with a mention of the difficulties they present, the optimum conditions for the determination of iodine in biological liquids are given. Three methods are described: - the first consists of a chemical treatment which liberates the protein-bound iodine in an inorganic form. After distillation this iodine is irradiated in a flux of thermal neutrons. The induced radioactivity is compared to that of a standard sample irradiated in the same conditions by γ spectrometry. - the second method, which is of more general application, consists in irradiating the sample and then extracting the iodine; its induced radioactivity is then measured by β-counting. - the third method measures the iodine directly in the thyroid tissue by anti-Compton spectrometry. The sensitivity, the reproducibility and the accuracy are discussed. Some applications are described: determination of iodine in its various organic forms in serum, determination of iodine in urines, in food-stuffs, in the thyroid tissue, etc. (author)

  5. Intermediate and advanced topics in multilevel logistic regression analysis.

    Science.gov (United States)

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
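
    The cluster-level summary measures mentioned above follow directly from the estimated random-intercept variance and a cluster-level coefficient. The Python sketch below evaluates the standard formulas for the variance partition coefficient, the median odds ratio and an 80% interval odds ratio; the variance and coefficient values are made up, not taken from the myocardial infarction example.

      # Summary measures for a random-intercept logistic model.
      import numpy as np
      from scipy.stats import norm

      var_u = 0.35                      # hypothetical between-cluster (random-intercept) variance

      # Variance partition coefficient on the latent scale (level-1 variance = pi^2 / 3)
      vpc = var_u / (var_u + np.pi**2 / 3)

      # Median odds ratio between two randomly chosen clusters
      mor = np.exp(np.sqrt(2 * var_u) * norm.ppf(0.75))

      # 80% interval odds ratio for a cluster-level covariate with log-odds coefficient beta
      beta = 0.25                       # hypothetical cluster-level effect
      ior_low = np.exp(beta - np.sqrt(2 * var_u) * norm.ppf(0.90))
      ior_high = np.exp(beta + np.sqrt(2 * var_u) * norm.ppf(0.90))
      print(f"VPC={vpc:.3f}  MOR={mor:.3f}  IOR-80=({ior_low:.2f}, {ior_high:.2f})")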

  6. Spatial vulnerability assessments by regression kriging

    Science.gov (United States)

    Pásztor, László; Laborczi, Annamária; Takács, Katalin; Szatmári, Gábor

    2016-04-01

    information representing IEW or GRP forming environmental factors were taken into account to support the spatial inference of the locally experienced IEW frequency and measured GRP values respectively. An efficient spatial prediction methodology was applied to construct reliable maps, namely regression kriging (RK) using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which are interpolated by kriging. The final map is the sum of the two component predictions. Application of RK also provides the possibility of inherent accuracy assessment. The resulting maps are characterized by global and local measures of its accuracy. Additionally the method enables interval estimation for spatial extension of the areas of predefined risk categories. All of these outputs provide useful contribution to spatial planning, action planning and decision making. Acknowledgement: Our work was partly supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
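
    The two-part structure of regression kriging described above can be sketched on synthetic one-dimensional data: a linear regression on auxiliary covariates supplies the deterministic component, and kriging of the regression residuals with an assumed exponential covariance supplies the stochastic component. All locations, covariates and covariance parameters in this Python sketch are illustrative.

      # Rough regression-kriging sketch on synthetic 1-D data.
      import numpy as np

      rng = np.random.default_rng(7)
      s = np.sort(rng.uniform(0, 100, 60))               # observation locations
      aux = 0.1 * s + rng.normal(0, 1, 60)               # auxiliary covariate (e.g., elevation)
      z = 2.0 + 1.5 * aux + np.sin(s / 10.0) + rng.normal(0, 0.2, 60)

      # 1) deterministic component: multiple linear regression on the auxiliary data
      A = np.column_stack([np.ones_like(aux), aux])
      beta, *_ = np.linalg.lstsq(A, z, rcond=None)
      resid = z - A @ beta

      # 2) stochastic component: simple kriging of the residuals
      def cov(h, sill=0.5, rng_par=15.0):
          return sill * np.exp(-np.abs(h) / rng_par)     # assumed exponential covariance model

      C = cov(s[:, None] - s[None, :]) + 1e-6 * np.eye(len(s))
      s0 = np.linspace(0, 100, 200)                      # prediction locations
      aux0 = 0.1 * s0                                    # auxiliary covariate at targets
      c0 = cov(s0[:, None] - s[None, :])
      weights = np.linalg.solve(C, c0.T).T               # simple-kriging weights

      prediction = np.column_stack([np.ones_like(aux0), aux0]) @ beta + weights @ resid
      print(prediction[:5])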

  7. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    performance difficult. Likewise, a demonstration of the magnitude of conservatisms in the dose estimates that result from conservative inputs is difficult to determine. To respond to these issues, the DOE explored the significance of uncertainties and the magnitude of conservatisms in the SSPA Volumes 1 and 2 (BSC 2001 [DIRS 155950]; BSC 2001 [DIRS 154659]). The three main goals of this report are: (1) To briefly summarize and consolidate the discussion of much of the work that has been done over the past few years to evaluate, clarify, and improve the representation of uncertainties in the TSPA and performance projections for a potential repository. This report does not contain any new analyses of those uncertainties, but it summarizes in one place the main findings of that work. (2) To develop a strategy for how uncertainties may be handled in the TSPA and supporting analyses and models to support a License Application, should the site be recommended. It should be noted that the strategy outlined in this report is based on current information available to DOE. The strategy may be modified pending receipt of additional pertinent information, such as the Yucca Mountain Review Plan. (3) To discuss issues related to communication about uncertainties, and propose some approaches the DOE may use in the future to improve how it communicates uncertainty in its models and performance assessments to decision-makers and to technical audiences

  8. Determinants of the distribution and concentration of biogas production in Germany. A spatial econometric analysis; Bestimmungsfaktoren der Verteilung und Konzentration der Biogasproduktion in Deutschland. Eine raeumlich-oekonometrische Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, Lukas

    2015-07-01

    Biogas production in Germany is characterized by a heterogeneous distribution and the formation of regional centers. In the present study the determinants of the spatial distribution and concentration are analyzed with methods of spatial statistics and spatial econometrics. In addition to the consideration of 'classic' site factors of agricultural production, the analysis focuses on the possible relevance of agglomeration effects. The results of the work contribute to a better understanding of the regional distribution and concentration of biogas production in Germany.

  9. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  10. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring and diagnostics of normal and anomalous conditions. A diagnostic method based on a regression-type diagnostic data base and on regression spectral diagnostics is described. The diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor is described. (author)

  11. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though the ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
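
    The variance-inflation problem that motivates ridge-type estimators can be shown in a few lines of Python: with two nearly collinear predictors, repeated simulation gives wildly varying OLS coefficients, while a modest ridge penalty stabilises them at the price of some bias. This is a generic OLS-versus-ridge sketch (scikit-learn assumed), not the normalization ridge regression procedure itself.

      # OLS versus ridge coefficient variability under acute multicollinearity.
      import numpy as np
      from sklearn.linear_model import LinearRegression, Ridge

      rng = np.random.default_rng(8)
      coef_ols, coef_ridge = [], []
      for _ in range(500):
          x1 = rng.normal(size=100)
          x2 = x1 + rng.normal(scale=0.05, size=100)     # nearly collinear with x1
          X = np.column_stack([x1, x2])
          y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=100)
          coef_ols.append(LinearRegression().fit(X, y).coef_)
          coef_ridge.append(Ridge(alpha=1.0).fit(X, y).coef_)

      print("OLS coefficient std:  ", np.std(coef_ols, axis=0))
      print("Ridge coefficient std:", np.std(coef_ridge, axis=0))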

  12. Multivariate Regression Analysis and Slaughter Livestock,

    Science.gov (United States)

    AGRICULTURE, *ECONOMICS), (*MEAT, PRODUCTION), MULTIVARIATE ANALYSIS, REGRESSION ANALYSIS , ANIMALS, WEIGHT, COSTS, PREDICTIONS, STABILITY, MATHEMATICAL MODELS, STORAGE, BEEF, PORK, FOOD, STATISTICAL DATA, ACCURACY

  13. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control)-an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.

  14. Use of probabilistic weights to enhance linear regression myoelectric control

    Science.gov (United States)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
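
    The weighting idea can be condensed into a toy Python example for a single DOF: Gaussian class models over two EMG features give the posterior probability that any movement is intended, and that probability scales the linear regression output. Feature values, class means and regression weights below are all invented, and a shared covariance is assumed only for simplicity.

      # Probability-weighted linear regression output for one DOF (toy example).
      import numpy as np
      from scipy.stats import multivariate_normal

      # Two hypothetical EMG features; three classes per DOF: no movement, flex, extend
      means = {"rest": [0.1, 0.1], "flex": [1.0, 0.3], "extend": [0.3, 1.0]}
      cov = 0.05 * np.eye(2)                             # shared (equal) covariance assumption
      priors = {k: 1 / 3 for k in means}

      w = np.array([1.2, -1.1])                          # linear regression weights for this DOF

      def weighted_output(features):
          likelihoods = {k: priors[k] * multivariate_normal.pdf(features, m, cov)
                         for k, m in means.items()}
          total = sum(likelihoods.values())
          p_move = (likelihoods["flex"] + likelihoods["extend"]) / total
          return p_move * (w @ features)                 # attenuate output when movement is unlikely

      print(weighted_output(np.array([0.12, 0.11])))     # near rest -> output pulled towards zero
      print(weighted_output(np.array([1.05, 0.28])))     # clear flexion -> nearly unchanged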

  15. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  16. Information fusion via constrained principal component regression for robust quantification with incomplete calibrations

    International Nuclear Information System (INIS)

    Vogt, Frank

    2013-01-01

    Graphical abstract: Analysis task: Determine the albumin (= protein) concentration in microalgae cells as a function of the cells’ nutrient availability. Left panel: The predicted albumin concentrations as obtained by conventional principal component regression feature low reproducibility and are partially higher than the concentrations of algae in which albumin is contained. Right panel: Augmenting an incomplete PCR calibration with additional expert information derives reasonable albumin concentrations which now reveal a significant dependency on the algae's nutrient situation. -- Highlights: •Make quantitative analyses of compounds embedded in largely unknown chemical matrices robust. •Improved concentration prediction with originally insufficient calibration models. •Chemometric approach for incorporating expertise from other fields and/or researchers. •Ensure chemical, biological, or medicinal meaningfulness of quantitative analyses. -- Abstract: Incomplete calibrations are encountered in many applications and hamper chemometric data analyses. Such situations arise when target analytes are embedded in a chemically complex matrix from which calibration concentrations cannot be determined with reasonable efforts. In other cases, the samples’ chemical composition may fluctuate in an unpredictable way and thus cannot be comprehensively covered by calibration samples. The reason for a calibration model to fail is the regression principle itself, which seeks to explain measured data optimally in terms of the (potentially incomplete) calibration model but does not consider chemical meaningfulness. This study presents a novel chemometric approach which is based on experimentally feasible calibrations, i.e. concentration series of the target analytes outside the chemical matrix (‘ex situ calibration’). The inherent lack of information is then compensated by incorporating additional knowledge in the form of regression constraints. Any outside knowledge can be
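
    As a generic illustration of constraining a regression-based calibration (not the authors' constrained principal component regression itself), the Python sketch below compares an unconstrained least-squares estimate of two analyte contributions with a non-negativity-constrained fit obtained via scipy's nnls; the spectra and the unmodelled background are synthetic.

      # Unconstrained versus non-negativity-constrained calibration fit.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(10)
      wavelengths = np.linspace(0, 1, 80)
      # Hypothetical pure-component spectra (the "ex situ" calibration)
      s1 = np.exp(-((wavelengths - 0.3) / 0.05) ** 2)
      s2 = np.exp(-((wavelengths - 0.5) / 0.07) ** 2)
      S = np.column_stack([s1, s2])

      # Measured mixture with an unmodelled matrix background plus noise
      background = 0.3 * wavelengths
      mixture = S @ np.array([0.8, 0.1]) + background + rng.normal(0, 0.02, 80)

      c_ols, *_ = np.linalg.lstsq(S, mixture, rcond=None)   # may return meaningless values
      c_nnls, _ = nnls(S, mixture)                          # constrained to be >= 0
      print("OLS estimate: ", c_ols)
      print("NNLS estimate:", c_nnls)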

  17. Millifluidic droplet analyser for microbiology

    NARCIS (Netherlands)

    Baraban, L.; Bertholle, F.; Salverda, M.L.M.; Bremond, N.; Panizza, P.; Baudry, J.; Visser, de J.A.G.M.; Bibette, J.

    2011-01-01

    We present a novel millifluidic droplet analyser (MDA) for precisely monitoring the dynamics of microbial populations over multiple generations in numerous (≈10³) aqueous emulsion droplets (100 nL). As a first application, we measure the growth rate of a bacterial strain and determine the minimal

  18. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
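
    The residual-skewness idea can be demonstrated directly: when the true predictor is non-normal and the error is normal, the residuals of the correctly directed model are roughly symmetric, whereas the residuals of the reversed model inherit the predictor's skewness. The Python sketch below is a bare illustration of that principle, not the full inference procedures (combined D'Agostino, skewness difference, bootstrap tests) discussed in the article.

      # Third-moment comparison of residuals from the two competing regression models.
      import numpy as np
      from scipy.stats import skew
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      x = rng.exponential(scale=1.0, size=2000)          # non-normal "cause"
      y = 0.8 * x + rng.normal(0, 1.0, 2000)             # true direction: x -> y

      res_yx = sm.OLS(y, sm.add_constant(x)).fit().resid  # target model: y ~ x
      res_xy = sm.OLS(x, sm.add_constant(y)).fit().resid  # alternative model: x ~ y

      print("skewness, y~x residuals:", round(skew(res_yx), 3))   # close to zero
      print("skewness, x~y residuals:", round(skew(res_xy), 3))   # markedly non-zero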

  19. Reduction of interferences in graphite furnace atomic absorption spectrometry by multiple linear regression modelling

    Science.gov (United States)

    Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto

    2000-12-01

    The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.

  20. Extralobar pulmonary sequestration in neonates: The natural course and predictive factors associated with spontaneous regression

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hee Mang; Jung, Ah Young; Cho, Young Ah; Yoon, Chong Hyun; Lee, Jin Seong [Asan Medical Center Children' s Hospital, University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Songpa-gu, Seoul (Korea, Republic of); Kim, Ellen Ai-Rhan [University of Ulsan College of Medicine, Division of Neonatology, Asan Medical Center Children' s Hospital, Seoul (Korea, Republic of); Chung, Sung-Hoon [Kyung Hee University School of Medicine, Department of Pediatrics, Seoul (Korea, Republic of); Kim, Seon-Ok [Asan Medical Center, Department of Clinical Epidemiology and Biostatistics, Seoul (Korea, Republic of)

    2017-06-15

    To describe the natural course of extralobar pulmonary sequestration (EPS) and identify factors associated with spontaneous regression of EPS. We retrospectively searched for patients diagnosed with EPS on initial contrast CT scan within 1 month after birth and had a follow-up CT scan without treatment. Spontaneous regression of EPS was assessed by percentage decrease in volume (PDV) and percentage decrease in sum of the diameter of systemic feeding arteries (PDD) by comparing initial and follow-up CT scans. Clinical and CT features were analysed to determine factors associated with PDV and PDD rates. Fifty-one neonates were included. The cumulative proportions of patients reaching PDV > 50 % and PDD > 50 % were 93.0 % and 73.3 % at 4 years, respectively. Tissue attenuation was significantly associated with PDV rate (B = -21.78, P <.001). The tissue attenuation (B = -22.62, P =.001) and diameter of the largest systemic feeding arteries (B = -48.31, P =.011) were significant factors associated with PDD rate. The volume and diameter of systemic feeding arteries of EPS spontaneously decreased within 4 years without treatment. EPSs showing a low tissue attenuation and small diameter of the largest systemic feeding arteries on initial contrast-enhanced CT scans were likely to regress spontaneously. (orig.)

  1. Prediction of radiation levels in residences: A methodological comparison of CART [Classification and Regression Tree Analysis] and conventional regression

    International Nuclear Information System (INIS)

    Janssen, I.; Stebbings, J.H.

    1990-01-01

    In environmental epidemiology, trace and toxic substance concentrations frequently have very highly skewed distributions ranging over one or more orders of magnitude, and prediction by conventional regression is often poor. Classification and Regression Tree Analysis (CART) is an alternative in such contexts. To compare the techniques, two Pennsylvania data sets and three independent variables are used: house radon progeny (RnD) and gamma levels as predicted by construction characteristics in 1330 houses; and ∼200 house radon (Rn) measurements as predicted by topographic parameters. CART may identify structural variables of interest not identified by conventional regression, and vice versa, but in general the regression models are similar. CART has major advantages in dealing with other common characteristics of environmental data sets, such as missing values, continuous variables requiring transformations, and large sets of potential independent variables. CART is most useful in the identification and screening of independent variables, greatly reducing the need for cross-tabulations and nested breakdown analyses. There is no need to discard cases with missing values for the independent variables because surrogate variables are intrinsic to CART. The tree-structured approach is also independent of the scale on which the independent variables are measured, so that transformations are unnecessary. CART identifies important interactions as well as main effects. The major advantages of CART appear to be in exploring data. Once the important variables are identified, conventional regressions seem to lead to results similar but more interpretable by most audiences. 12 refs., 8 figs., 10 tabs
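
    The comparison can be mimicked on synthetic data: fit both a regression tree (CART-style) and a linear model to a skewed response with an interaction and compare in-sample fit. Variable names and the data-generating process in this Python sketch are invented, not the Pennsylvania radon data, and scikit-learn's tree implementation stands in for CART.

      # Regression tree versus linear regression on skewed, interaction-laden data.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(12)
      n = 1500
      basement = rng.integers(0, 2, n)                   # construction characteristic
      draftiness = rng.uniform(0, 1, n)
      soil = rng.normal(0, 1, n)
      # Skewed response with an interaction, loosely mimicking indoor radon behaviour
      radon = np.exp(0.8 * basement * (1 - draftiness) + 0.4 * soil + rng.normal(0, 0.3, n))

      X = np.column_stack([basement, draftiness, soil])
      tree = DecisionTreeRegressor(max_depth=4).fit(X, radon)
      ols = LinearRegression().fit(X, radon)
      print("tree R^2:", round(tree.score(X, radon), 3))
      print("OLS  R^2:", round(ols.score(X, radon), 3))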

  2. RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,

    Science.gov (United States)

    This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program ... of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)

  3. Hierarchical regression analysis in structural Equation Modeling

    NARCIS (Netherlands)

    de Jong, P.F.

    1999-01-01

    In a hierarchical or fixed-order regression analysis, the independent variables are entered into the regression equation in a prespecified order. Such an analysis is often performed when the extra amount of variance accounted for in a dependent variable by a specific independent variable is the main

  4. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  5. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  6. Suppression Situations in Multiple Linear Regression

    Science.gov (United States)

    Shieh, Gwowen

    2006-01-01

    This article proposes alternative expressions for the two most prevailing definitions of suppression without resorting to the standardized regression modeling. The formulation provides a simple basis for the examination of their relationship. For the two-predictor regression, the author demonstrates that the previous results in the literature are…

  7. Gibrat’s law and quantile regressions

    DEFF Research Database (Denmark)

    Distante, Roberta; Petrella, Ivan; Santoro, Emiliano

    2017-01-01

    The nexus between firm growth, size and age in U.S. manufacturing is examined through the lens of quantile regression models. This methodology allows us to overcome serious shortcomings entailed by linear regression models employed by much of the existing literature, unveiling a number of important...

  8. Regression Analysis and the Sociological Imagination

    Science.gov (United States)

    De Maio, Fernando

    2014-01-01

    Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.

  9. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  10. Principles of Quantile Regression and an Application

    Science.gov (United States)

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

  11. ON REGRESSION REPRESENTATIONS OF STOCHASTIC-PROCESSES

    NARCIS (Netherlands)

    RUSCHENDORF, L; DEVALK

    We construct a.s. nonlinear regression representations of general stochastic processes (X(n)), n ∈ N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

  12. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
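
    The core regression step, predicting the coherent part of the target channel from a witness channel with an FIR Wiener filter estimated by least squares and subtracting it, can be reduced to a toy Python example. The filter length, coupling and noise levels below are arbitrary, and real pipelines add the time-frequency banks, multi-channel analysis and regularisation described above.

      # Toy FIR Wiener-filter regression of a witness channel against a target channel.
      import numpy as np

      rng = np.random.default_rng(13)
      n, taps = 20000, 32
      witness = rng.normal(size=n)                        # environmental monitor channel
      true_filter = np.exp(-np.arange(taps) / 6.0)        # unknown coupling to the target
      coupled = np.convolve(witness, true_filter, mode="full")[:n]
      target = coupled + 0.5 * rng.normal(size=n)         # target = coupling + other noise

      # Build the lagged design matrix and solve the least-squares (Wiener) problem
      X = np.column_stack([np.roll(witness, k) for k in range(taps)])
      X[:taps] = 0.0                                      # drop wrap-around samples
      w, *_ = np.linalg.lstsq(X, target, rcond=None)

      cleaned = target - X @ w
      print("residual variance before:", round(target.var(), 3),
            "after:", round(cleaned.var(), 3))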

  13. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  14. Should metacognition be measured by logistic regression?

    Science.gov (United States)

    Rausch, Manuel; Zehetleitner, Michael

    2017-03-01

    Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
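
    For readers who want to see the quantity under discussion, the sketch below fits a logistic regression of trial-by-trial accuracy on confidence ratings and reports the slope, the candidate metacognitive-sensitivity index. The simulated ratings are purely illustrative and do not implement the signal-detection models compared in the paper.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        correct = rng.integers(0, 2, n)                    # primary-task accuracy (0/1)
        # Confidence on a 1-6 scale, higher on average after correct responses.
        conf = np.clip(np.round(3.0 + 1.0 * correct + rng.normal(0, 1.2, n)), 1, 6)

        fit = sm.Logit(correct, sm.add_constant(conf)).fit(disp=0)
        print(f"logistic slope (metacognitive-sensitivity candidate): {fit.params[1]:.3f}")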

  15. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis of a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general SA strategy to determine and identify the influential parameters that affect the operation of the reactor is described. Identification of design parameters and validation of the linearity assumption for applying LR to the reactor design are performed on the basis of a set of tests. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas-cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used to demonstrate this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform the regression analysis, other multivariate variance analyses, and the analysis of the collinearity of the data

  16. Seismic fragility analyses

    International Nuclear Information System (INIS)

    Kostov, Marin

    2000-01-01

    In the last two decades an increasing number of probabilistic seismic risk assessments have been performed. The basic ideas of the procedure for performing a Probabilistic Safety Analysis (PSA) of critical structures (NUREG/CR-2300, 1983) could also be used for normal industrial and residential buildings, dams or other structures. The general formulation of the risk assessment procedure applied in this investigation is presented in Franzini, et al., 1984. The probability of failure of a structure over an expected lifetime (for example 50 years) can be obtained from the annual frequency of failure β_E, determined by the relation β_E = ∫ [dβ(x)/dx] P(f|x) dx, where β(x) is the annual frequency of exceedance of load level x (for example, the variable x may be peak ground acceleration) and P(f|x) is the conditional probability of structure failure at a given seismic load level x. The problem leads to the assessment of the seismic hazard β(x) and the fragility P(f|x). The seismic hazard curves are obtained by probabilistic seismic hazard analysis. The fragility curves are obtained after the response of the structure is described probabilistically and its capacity and the associated uncertainties are assessed. Finally, the fragility curves are combined with the seismic loading to estimate the frequency of failure for each critical scenario. The frequency of failure due to a seismic event is represented by the scenario with the highest frequency. The tools usually applied for probabilistic safety analyses of critical structures could relatively easily be adapted to ordinary structures. The key problems are the seismic hazard definitions and the fragility analyses. The fragility can be derived either by scaling procedures or by generation. Both approaches have been presented in the paper. After the seismic risk (in terms of failure probability) is assessed there are several approaches for risk reduction. Generally the methods could be classified in two groups. The
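
    A numerical version of the relation β_E = ∫ |dβ(x)/dx| P(f|x) dx can be written directly: combine a hazard curve with a fragility curve and integrate. The power-law hazard and lognormal fragility parameters below are assumed for illustration only.

        import numpy as np
        from scipy.stats import norm

        # Peak ground acceleration grid (in g) and an assumed hazard curve beta(x):
        # annual frequency of exceeding level x.
        x = np.linspace(0.01, 2.0, 2000)
        beta = 1e-3 * (0.1 / x) ** 2.5

        # Lognormal fragility P(f|x): median capacity 0.6 g, log-standard deviation 0.4 (assumed).
        p_fail = norm.cdf(np.log(x / 0.6) / 0.4)

        # beta_E = integral of |d beta(x)/dx| * P(f|x) dx, here as a simple Riemann sum.
        dx = x[1] - x[0]
        beta_E = np.sum(np.abs(np.gradient(beta, x)) * p_fail) * dx
        print(f"annual frequency of seismically induced failure ~ {beta_E:.2e}")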

  17. N-terminal pro-B-type natriuretic peptide measurement is useful in predicting left ventricular hypertrophy regression after aortic valve replacement in patients with severe aortic stenosis.

    Science.gov (United States)

    Lee, Mirae; Choi, Jin-Oh; Park, Sung-Ji; Kim, Eun Young; Park, PyoWon; Oh, Jae K; Jeon, Eun-Seok

    2015-01-01

    The predictive factors for early left ventricular hypertrophy (LVH) regression after aortic valve replacement (AVR) have not been fully elucidated. This study was conducted to investigate which preoperative parameters predict early LVH regression after AVR. 87 consecutive patients who underwent AVR due to isolated severe aortic stenosis (AS) were analysed. Patients with ejection fraction regression of LVH at the midterm follow-up was determined. In multivariate analysis, including preoperative echocardiographic parameters, only E/e' ratio was associated with midterm LVH regression (OR 1.11, 95% CI 1.01 to 1.22; p=0.035). When preoperative NT-proBNP was added to the analysis, logNT-proBNP was found to be the single significant predictor of midterm LVH regression (OR 2.00, 95% CI 1.08 to 3.71; p=0.028). By receiver operating characteristic curve analysis, a cut-off value of 440 pg/mL for NT-proBNP yielded a sensitivity of 72% and a specificity of 77% for the prediction of LVH regression after AVR. Preoperative NT-proBNP was an independent predictor for early LVH regression after AVR in patients with isolated severe AS.

  18. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
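
    A Python counterpart to the SPSS/R/Excel recipes is shown below: group long-format data by participant, fit one least-squares slope per participant with numpy.polyfit, and pass the slopes on to a second-level analysis. Column names and data are hypothetical.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        df = pd.DataFrame({
            "subject":   np.repeat(np.arange(1, 21), 30),   # 20 participants, 30 trials each
            "predictor": np.tile(np.arange(30), 20),
        })
        df["rt"] = 400 + 5 * df["predictor"] + rng.normal(0, 40, len(df))

        # One regression slope per participant (degree-1 polynomial fit).
        slopes = df.groupby("subject").apply(
            lambda d: np.polyfit(d["predictor"], d["rt"], deg=1)[0])

        # The individual slopes can then be analysed further, e.g. with a one-sample t-test.
        print(slopes.describe())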

  19. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  20. Logistic Regression in the Identification of Hazards in Construction

    Science.gov (United States)

    Drozd, Wojciech

    2017-10-01

    The construction site and its elements create circumstances that are conducive to the formation of risks to safety during the execution of works. Analysis indicates the critical importance of these factors in the set of characteristics that describe the causes of accidents in the construction industry. This article attempts to analyse the characteristics related to the construction site, in order to indicate their importance in defining the circumstances of accidents at work. The study includes sites inspected in 2014 - 2016 by the employees of the District Labour Inspectorate in Krakow (Poland). The analysed set of detailed (disaggregated) data includes both quantitative and qualitative characteristics. The substantive task focused on classification modelling in the identification of hazards in construction and on identifying those of the analysed characteristics that are important in an accident. In terms of methodology, the source data were analysed with statistical classifiers in the form of logistic regression.

  1. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion...... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...
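
    One simple way to see why subset choice matters is to compare the cross-validated performance of PLS models built on different variable subsets. The scikit-learn sketch below does this on synthetic data in which only the first five of fifty variables carry signal; it is not the selection procedure of the paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(80, 50))                     # e.g. 50 measured channels
        y = X[:, :5] @ rng.normal(size=5) + 0.5 * rng.normal(size=80)

        def cv_r2(columns):
            """Mean cross-validated R^2 of a 3-component PLS model on the given columns."""
            return cross_val_score(PLSRegression(n_components=3), X[:, columns], y, cv=5).mean()

        print("all 50 variables:", round(cv_r2(np.arange(50)), 3))
        print("informative 5   :", round(cv_r2(np.arange(5)), 3))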

  2. Applied Regression Modeling A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  3. Regression of oral lichenoid lesions after replacement of dental restorations.

    Science.gov (United States)

    Mårell, L; Tillberg, A; Widman, L; Bergdahl, J; Berglund, A

    2014-05-01

    The aim of the study was to determine the prognosis and to evaluate the regression of lichenoid contact reactions (LCR) and oral lichen planus (OLP) after replacement of dental restorative materials suspected as causing the lesions. Forty-four referred patients with oral lesions participated in a follow-up study that was initiated an average of 6 years after the first examination at the Department of Odontology, i.e. the baseline examination. The patients underwent odontological clinical examination and answered a questionnaire with questions regarding dental health, medical and psychological health, and treatments undertaken from baseline to follow-up. After exchange of dental materials, regression of oral lesions was significantly higher among patients with LCR than with OLP. As no cases with OLP regressed after an exchange of materials, a proper diagnosis has to be made to avoid unnecessary exchanges of intact restorations on patients with OLP.

  4. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
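
    The metamodeling step itself is a single OLS fit: standardize the PSA input draws, regress the simulated outcome on them, and read the intercept as the base-case estimate and the coefficients as per-standard-deviation sensitivity measures. The toy decision model below is invented for illustration and is not the cancer cure model of the paper.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 10_000                                         # PSA iterations
        p_cure  = rng.beta(20, 80, n)                      # hypothetical input distributions
        cost_tx = rng.gamma(50, 200, n)
        utility = rng.beta(60, 40, n)
        X = np.column_stack([p_cure, cost_tx, utility])

        # Stand-in for the decision model's net-benefit output at each PSA draw.
        net_benefit = 50_000 * p_cure - 0.5 * cost_tx + 20_000 * utility + rng.normal(0, 500, n)

        Z = (X - X.mean(axis=0)) / X.std(axis=0)           # standardized inputs
        meta = sm.OLS(net_benefit, sm.add_constant(Z)).fit()
        print("base case (intercept):", round(meta.params[0], 1))
        for name, coef in zip(["p_cure", "cost_tx", "utility"], meta.params[1:]):
            print(f"  {name:8s} per-SD effect: {coef:10.1f}")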

  5. Analyzing hospitalization data: potential limitations of Poisson regression.

    Science.gov (United States)

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
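
    The model check described above can be reproduced in outline with statsmodels: fit a Poisson GLM with a follow-up-time offset, inspect the Pearson dispersion, and refit with a negative binomial family when it is well above one. Zero-inflated variants need statsmodels' dedicated count models and are omitted here; the data are simulated, not the dialysis cohort.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 313
        modality = rng.integers(0, 2, n)                   # stand-in treatment groups (0/1)
        followup = rng.uniform(0.5, 3.0, n)                # years at risk
        mu = np.exp(0.2 + 0.05 * modality) * followup
        r = 0.6                                            # small r => overdispersed, zero-heavy counts
        days = rng.negative_binomial(r, r / (r + mu))

        X = sm.add_constant(modality)
        pois = sm.GLM(days, X, family=sm.families.Poisson(), offset=np.log(followup)).fit()
        nb   = sm.GLM(days, X, family=sm.families.NegativeBinomial(alpha=1.0),   # alpha fixed for brevity
                      offset=np.log(followup)).fit()

        print("Poisson Pearson dispersion:", round(float(pois.pearson_chi2 / pois.df_resid), 2))  # far above 1
        print("rate ratio  Poisson:", round(float(np.exp(pois.params[1])), 2),
              " NB:", round(float(np.exp(nb.params[1])), 2))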

  6. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  7. Genetics Home Reference: caudal regression syndrome

    Science.gov (United States)

    ... umbilical artery: Further support for a caudal regression-sirenomelia spectrum. Am J Med Genet A. 2007 Dec ... AK, Dickinson JE, Bower C. Caudal dysgenesis and sirenomelia-single centre experience suggests common pathogenic basis. Am ...

  8. Dynamic travel time estimation using regression trees.

    Science.gov (United States)

    2008-10-01

    This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...

  9. Two Paradoxes in Linear Regression Analysis

    Science.gov (United States)

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  10. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
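
    The ENLR models themselves are not reproduced here, but the elastic-net penalty they build on is available off the shelf. The scikit-learn sketch below fits a cross-validated elastic net to synthetic data with a sparse ground truth, illustrating the joint shrinkage and selection behaviour the abstract refers to.

        import numpy as np
        from sklearn.linear_model import ElasticNetCV
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        X = rng.normal(size=(200, 100))
        beta = np.zeros(100)
        beta[:10] = rng.normal(size=10)                    # only 10 informative features
        y = X @ beta + 0.5 * rng.normal(size=200)

        Xs = StandardScaler().fit_transform(X)
        enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(Xs, y)
        print("chosen l1_ratio:", enet.l1_ratio_, "  alpha:", round(enet.alpha_, 4))
        print("non-zero coefficients:", int(np.sum(enet.coef_ != 0)), "of", Xs.shape[1])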

  11. Fuzzy multiple linear regression: A computational approach

    Science.gov (United States)

    Juang, C. H.; Huang, X. H.; Fleming, J. W.

    1992-01-01

    This paper presents a new computational approach for performing fuzzy regression. In contrast to Bardossy's approach, the new approach, while dealing with fuzzy variables, closely follows the conventional regression technique. In this approach, treatment of fuzzy input is more 'computational' than 'symbolic.' The following sections first outline the formulation of the new approach, then deal with the implementation and computational scheme, and this is followed by examples to illustrate the new procedure.

  12. Computing multiple-output regression quantile regions

    Czech Academy of Sciences Publication Activity Database

    Paindaveine, D.; Šiman, Miroslav

    2012-01-01

    Roč. 56, č. 4 (2012), s. 840-853 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M06047 Institutional research plan: CEZ:AV0Z10750506 Keywords : halfspace depth * multiple-output regression * parametric linear programming * quantile regression Subject RIV: BA - General Mathematics Impact factor: 1.304, year: 2012 http://library.utia.cas.cz/separaty/2012/SI/siman-0376413.pdf

  13. There is No Quantum Regression Theorem

    International Nuclear Information System (INIS)

    Ford, G.W.; OConnell, R.F.

    1996-01-01

    The Onsager regression hypothesis states that the regression of fluctuations is governed by macroscopic equations describing the approach to equilibrium. It is here asserted that this hypothesis fails in the quantum case. This is shown first by explicit calculation for the example of quantum Brownian motion of an oscillator and then in general from the fluctuation-dissipation theorem. It is asserted that the correct generalization of the Onsager hypothesis is the fluctuation-dissipation theorem. copyright 1996 The American Physical Society

  14. Caudal regression syndrome : a case report

    International Nuclear Information System (INIS)

    Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun

    1998-01-01

    Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging

  15. Caudal regression syndrome : a case report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Joo; Kim, Hi Hye; Kim, Hyung Sik; Park, So Young; Han, Hye Young; Lee, Kwang Hun [Chungang Gil Hospital, Incheon (Korea, Republic of)

    1998-07-01

    Caudal regression syndrome is a rare congenital anomaly, which results from a developmental failure of the caudal mesoderm during the fetal period. We present a case of caudal regression syndrome composed of a spectrum of anomalies including sirenomelia, dysplasia of the lower lumbar vertebrae, sacrum, coccyx and pelvic bones, genitourinary and anorectal anomalies, and dysplasia of the lung, as seen during infantography and MR imaging.

  16. Spontaneous regression of metastatic Merkel cell carcinoma.

    LENUS (Irish Health Repository)

    Hassan, S J

    2010-01-01

    Merkel cell carcinoma is a rare aggressive neuroendocrine carcinoma of the skin predominantly affecting elderly Caucasians. It has a high rate of local recurrence and regional lymph node metastases. It is associated with a poor prognosis. Complete spontaneous regression of Merkel cell carcinoma has been reported but is a poorly understood phenomenon. Here we present a case of complete spontaneous regression of metastatic Merkel cell carcinoma demonstrating a markedly different pattern of events from those previously published.

  17. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

    The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. Robust linear autoregressive (RAR) and robust neural network (RNN) models are estimated to study the predictabil...

  18. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.

    2010-08-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  19. Marginal longitudinal semiparametric regression via penalized splines

    KAUST Repository

    Al Kadiri, M.; Carroll, R.J.; Wand, M.P.

    2010-01-01

    We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been proposed, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we then explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

  20. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the 63 Lorenz system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrarily to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
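
    The simplest scheme in that list, OLS post-processing, amounts to regressing past observations on past forecasts and applying the fitted correction to new forecasts. The sketch below does this on synthetic data; the TDTR, GM and EVMOS variants discussed in the paper are not implemented.

        import numpy as np

        rng = np.random.default_rng(8)
        truth = 10 + 3 * np.sin(np.linspace(0, 20, 500))
        forecast = 1.2 * truth - 2.0 + rng.normal(0, 1.0, truth.size)   # biased, mis-scaled forecast

        train, test = slice(0, 350), slice(350, 500)                    # fit period / verification period
        A = np.column_stack([np.ones(350), forecast[train]])
        a, b = np.linalg.lstsq(A, truth[train], rcond=None)[0]          # obs ~ a + b * forecast

        corrected = a + b * forecast[test]
        rmse = lambda err: float(np.sqrt(np.mean(err ** 2)))
        print("RMSE raw:      ", round(rmse(forecast[test] - truth[test]), 2))
        print("RMSE corrected:", round(rmse(corrected - truth[test]), 2))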

  1. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the 63 Lorenz system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrarily to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  2. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoretical predictive equation by suggesting a data generating process, where returns are generated as linear functions of a lagged latent I(0) risk process. The observed predictor is a function of this latent I(0) process, but it is corrupted by a fractionally integrated noise. Such a process may arise due to aggregation or unexpected level shifts. In this setup, the practitioner estimates a misspecified, unbalanced, and endogenous predictive regression. We show that the OLS estimate of this regression is inconsistent, but standard inference is possible. To obtain a consistent slope estimate, we then suggest...

  3. Determination of Habitat Requirements For Birds in Suburban Areas

    Science.gov (United States)

    Jack Ward Thomas; Richard M. DeGraaf; Joseph C. Mawson

    1977-01-01

    Songbird populations can be related to habitat components by a method that allows the simultaneous determination of habitat requirements for a variety of species. Through correlation and multiple-regression analyses, 10 bird species were studied in a suburban habitat, which was stratified according to human density. Variables used to account for bird distribution...

  4. Macroeconomic determinants of remittance flows from russia to tajikistan

    OpenAIRE

    Mirzosaid Sultonov

    2012-01-01

    In this paper, we assess the macroeconomic determinants of remittance flows from Russia to Tajikistan. Applying quarterly time series and an econometric model with regression analyses, we find that Russia's economic growth and Tajikistan's inflation have positive and statistically significant effects on remittances, and Russia's unemployment has negative and statistically significant effects.

  5. Partaking in cycling, at what cost? : determinants of cycling expenses

    NARCIS (Netherlands)

    Thibaut, E.; Vos, S.B.; Lagae, W.; Van Puyenbroeck, T.; Scheerder, J.

    2016-01-01

    This study analyses the determinants of cycling expenditure by means of a Tobit regression analysis, based on a dataset of 5,157 cyclists. Using a heterodox economic framework, 23 different variables are combined into two commonly used variable groups (socio-demographics, sports intensity variables)

  6. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
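
    The article's worked examples are in R with MCMC; as a self-contained stand-in, the sketch below computes the exact posterior for a Bayesian linear regression of simulated ERN amplitudes on trait anxiety under a Gaussian prior and a known residual variance (a deliberately simplified conjugate setup, not the authors' code).

        import numpy as np

        rng = np.random.default_rng(9)
        n = 120
        anxiety = rng.normal(0, 1, n)                      # standardized trait anxiety (simulated)
        ern = -0.4 * anxiety + rng.normal(0, 1.0, n)       # simulated ERN amplitudes

        X = np.column_stack([np.ones(n), anxiety])
        sigma2 = 1.0                                       # residual variance assumed known
        prior_mean = np.zeros(2)
        prior_prec = np.eye(2) / 10.0                      # weakly informative N(0, 10 I) prior

        # Conjugate update: the posterior over (intercept, slope) is Gaussian.
        post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma2)
        post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ ern / sigma2)

        sd = np.sqrt(post_cov[1, 1])
        print(f"posterior slope {post_mean[1]:.2f}, 95% interval "
              f"({post_mean[1] - 1.96 * sd:.2f}, {post_mean[1] + 1.96 * sd:.2f})")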

  7. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

    Science.gov (United States)

    Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified. Unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438

  8. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS 1

    Directory of Open Access Journals (Sweden)

    Liviu GEAMBAŞU

    2010-12-01

    One of the most important preoccupations of financial market participants was, and still is, determining the trend of financial asset prices more precisely. Many scientific papers have been written and many mathematical and statistical models developed to better determine this trend. Although simple linear models have long been popular because they are easy to use, the financial crisis that hit the world economy starting in 2008 highlighted the need to adapt mathematical models to changing economic conditions. A model that is simple to use yet adapted to economic reality is spline linear regression. This type of regression preserves the continuity of the regression function but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as well as the evolution of the market across the intervals, resulting in reduced standard errors. The first objective of the article is a theoretical presentation of spline linear regression, with reference to national and international scientific papers on the subject. The second objective is applying the theoretical model to data from the Bucharest Stock Exchange
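
    A continuous piecewise-linear ("spline linear") fit can be obtained by adding hinge terms max(0, t - knot) to an ordinary least-squares design, so each interval gets its own slope while the fitted line stays continuous at the knots. The price series and knot locations below are invented for illustration; they are not Bucharest Stock Exchange data.

        import numpy as np

        rng = np.random.default_rng(10)
        t = np.linspace(0, 3, 300)                              # e.g. time in years
        price = (100 + 10 * t - 25 * np.maximum(t - 1, 0)
                 + 20 * np.maximum(t - 2, 0) + rng.normal(0, 2, t.size))

        knots = [1.0, 2.0]
        X = np.column_stack([np.ones_like(t), t] + [np.maximum(t - k, 0) for k in knots])
        coef = np.linalg.lstsq(X, price, rcond=None)[0]

        # coef[1] is the slope of the first interval; later coefficients are slope changes.
        print("estimated slope per interval:", np.round(np.cumsum(coef[1:]), 2))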

  9. Economic Analyses of Ware Yam Production in Orlu Agricultural ...

    African Journals Online (AJOL)

    Economic Analyses of Ware Yam Production in Orlu Agricultural Zone of Imo State. ... International Journal of Agriculture and Rural Development ... statistics, gross margin analysis, marginal analysis and multiple regression analysis. Results ...

  10. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses.
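
    A bare-bones PVR pipeline is short: double-centre the squared phylogenetic distance matrix, extract the leading principal coordinates, and use them as regressors for the trait. The distance matrix and trait below are random stand-ins, and PEM's Ornstein-Uhlenbeck warping is not implemented.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 30                                              # number of species (synthetic)
        A = rng.uniform(0.1, 1.0, size=(n, n))
        D = (A + A.T) / 2                                   # stand-in symmetric distance matrix
        np.fill_diagonal(D, 0.0)

        # Principal Coordinate Analysis: eigendecompose the double-centred -D^2/2 matrix.
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (D ** 2) @ J
        eigval, eigvec = np.linalg.eigh(B)
        top = np.argsort(eigval)[::-1][:5]
        vectors = eigvec[:, top] * np.sqrt(np.maximum(eigval[top], 0))

        # PVR step: regress the trait on the leading phylogenetic eigenvectors.
        trait = 2.0 * vectors[:, 0] + rng.normal(0, 0.5, n)
        X = np.column_stack([np.ones(n), vectors])
        coef, ssr, *_ = np.linalg.lstsq(X, trait, rcond=None)
        r2 = 1 - ssr[0] / np.sum((trait - trait.mean()) ** 2)
        print("phylogenetic signal captured (R^2):", round(float(r2), 2))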

  11. Temporal trends in sperm count: a systematic review and meta-regression analysis.

    Science.gov (United States)

    Levine, Hagai; Jørgensen, Niels; Martino-Andrade, Anderson; Mendiola, Jaime; Weksler-Derri, Dan; Mindlis, Irina; Pinotti, Rachel; Swan, Shanna H

    2017-11-01

    Reported declines in sperm counts remain controversial today and recent trends are unknown. A definitive meta-analysis is critical given the predictive value of sperm count for fertility, morbidity and mortality. To provide a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. PubMed/MEDLINE and EMBASE were searched for English language studies of human SC published in 1981-2013. Following a predefined protocol 7518 abstracts were screened and 2510 full articles reporting primary data on SC were reviewed. A total of 244 estimates of SC and TSC from 185 studies of 42 935 men who provided semen samples in 1973-2011 were extracted for meta-regression analysis, as well as information on years of sample collection and covariates [fertility group ('Unselected by fertility' versus 'Fertile'), geographic group ('Western', including North America, Europe Australia and New Zealand versus 'Other', including South America, Asia and Africa), age, ejaculation abstinence time, semen collection method, method of measuring SC and semen volume, exclusion criteria and indicators of completeness of covariate data]. The slopes of SC and TSC were estimated as functions of sample collection year using both simple linear regression and weighted meta-regression models and the latter were adjusted for pre-determined covariates and modification by fertility and geographic group. Assumptions were examined using multiple sensitivity analyses and nonlinear models. SC declined significantly between 1973 and 2011 (slope in unadjusted simple regression models -0.70 million/ml/year; 95% CI: -0.72 to -0.69; P regression analysis reports a significant decline in sperm counts (as measured by SC and TSC) between 1973 and 2011, driven by a 50-60% decline among men unselected by fertility from North America, Europe, Australia and New Zealand. Because
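
    The central computation in such a meta-regression is an inverse-variance weighted regression of study-level estimates on covariates such as sampling year and fertility/geographic group. The statsmodels sketch below uses synthetic study estimates; it is not the published dataset or model.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(12)
        k = 120                                            # synthetic study estimates
        year = rng.integers(1973, 2012, k)
        western = rng.integers(0, 2, k)                    # 1 = Western country group
        se = rng.uniform(2.0, 10.0, k)                     # standard error of each estimate
        sc = 100 - 0.6 * (year - 1973) - 5 * western + rng.normal(0, se)

        X = sm.add_constant(pd.DataFrame({"year": year - 1973, "western": western}))
        meta = sm.WLS(sc, X, weights=1.0 / se**2).fit()    # inverse-variance weighted meta-regression
        print(meta.params.round(2))                        # 'year' coefficient ~ decline per year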

  12. External Tank Liquid Hydrogen (LH2) Prepress Regression Analysis Independent Review Technical Consultation Report

    Science.gov (United States)

    Parsons, Vickie s.

    2009-01-01

    The request to conduct an independent review of regression models, developed for determining the expected Launch Commit Criteria (LCC) External Tank (ET)-04 cycle count for the Space Shuttle ET tanking process, was submitted to the NASA Engineering and Safety Center (NESC) on September 20, 2005. The NESC team performed an independent review of regression models documented in Prepress Regression Analysis, Tom Clark and Angela Krenn, 10/27/05. This consultation consisted of a peer review by statistical experts of the proposed regression models provided in the Prepress Regression Analysis. This document is the consultation's final report.

  13. Introduction to statistical modelling: linear regression.

    Science.gov (United States)

    Lunt, Mark

    2015-07-01

    In many studies we wish to assess how a range of variables are associated with a particular outcome and also determine the strength of such relationships so that we can begin to understand how these factors relate to each other at a population level. Ultimately, we may also be interested in predicting the outcome from a series of predictive factors available at, say, a routine clinic visit. In a recent article in Rheumatology, Desai et al. did precisely that when they studied the prediction of hip and spine BMD from hand BMD and various demographic, lifestyle, disease and therapy variables in patients with RA. This article aims to introduce the statistical methodology that can be used in such a situation and explain the meaning of some of the terms employed. It will also outline some common pitfalls encountered when performing such analyses. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Neighborhood social capital and crime victimization: comparison of spatial regression analysis and hierarchical regression analysis.

    Science.gov (United States)

    Takagi, Daisuke; Ikeda, Ken'ichi; Kawachi, Ichiro

    2012-11-01

    Crime is an important determinant of public health outcomes, including quality of life, mental well-being, and health behavior. A body of research has documented the association between community social capital and crime victimization. The association between social capital and crime victimization has been examined at multiple levels of spatial aggregation, ranging from entire countries, to states, metropolitan areas, counties, and neighborhoods. In multilevel analysis, the spatial boundaries at level 2 are most often drawn from administrative boundaries (e.g., Census tracts in the U.S.). One problem with adopting administrative definitions of neighborhoods is that it ignores spatial spillover. We conducted a study of social capital and crime victimization in one ward of Tokyo city, using a spatial Durbin model with an inverse-distance weighting matrix that assigned each respondent a unique level of "exposure" to social capital based on all other residents' perceptions. The study is based on a postal questionnaire sent to 20-69 years old residents of Arakawa Ward, Tokyo. The response rate was 43.7%. We examined the contextual influence of generalized trust, perceptions of reciprocity, two types of social network variables, as well as two principal components of social capital (constructed from the above four variables). Our outcome measure was self-reported crime victimization in the last five years. In the spatial Durbin model, we found that neighborhood generalized trust, reciprocity, supportive networks and two principal components of social capital were each inversely associated with crime victimization. By contrast, a multilevel regression performed with the same data (using administrative neighborhood boundaries) found generally null associations between neighborhood social capital and crime. Spatial regression methods may be more appropriate for investigating the contextual influence of social capital in homogeneous cultural settings such as Japan. Copyright
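
    The key construction, giving each respondent an inverse-distance-weighted exposure to everyone else's trust before relating it to victimization, can be sketched as follows with simulated coordinates and outcomes. This illustrates only the exposure idea; the spatial Durbin estimation used in the paper is not reproduced.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(13)
        n = 500
        coords = rng.uniform(0, 10, size=(n, 2))           # respondent locations (synthetic)
        trust = rng.normal(0, 1, n)                        # individual generalized trust

        # Inverse-distance weights, zero diagonal, rows normalized to sum to one.
        dist = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
        W = 1.0 / (dist + np.eye(n))
        np.fill_diagonal(W, 0.0)
        W /= W.sum(axis=1, keepdims=True)

        exposure = W @ trust                               # neighbourhood trust "seen" by each person
        p = 1 / (1 + np.exp(1.0 + 0.8 * exposure))         # higher contextual trust -> less victimization
        victim = rng.binomial(1, p)

        fit = sm.Logit(victim, sm.add_constant(np.column_stack([trust, exposure]))).fit(disp=0)
        print("coefficient on neighbourhood trust:", round(float(fit.params[2]), 2))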

  15. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
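
    A minimal version of such a time-series regression is a Poisson GLM of weekly counts on a lagged weather term plus annual sine/cosine terms for seasonality, with the Pearson dispersion checked afterwards as a cue to move to quasi-Poisson or negative binomial models. The data are simulated, and the distributed-lag non-linear models mentioned above are not implemented.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(14)
        weeks = np.arange(520)                             # ten years of weekly data (synthetic)
        temp = 15 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 2, weeks.size)

        df = pd.DataFrame({"temp": temp,
                           "sin52": np.sin(2 * np.pi * weeks / 52),
                           "cos52": np.cos(2 * np.pi * weeks / 52)})
        df["temp_lag2"] = df["temp"].shift(2)              # cases respond to temperature two weeks earlier
        df = df.dropna().copy()
        df["cases"] = rng.poisson(np.exp(1.5 + 0.04 * df["temp_lag2"]))

        X = sm.add_constant(df[["temp_lag2", "sin52", "cos52"]])
        fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
        print(fit.params.round(3))
        print("Pearson dispersion:", round(float(fit.pearson_chi2 / fit.df_resid), 2))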

  16. Regression analysis using dependent Polya trees.

    Science.gov (United States)

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Is past life regression therapy ethical?

    Science.gov (United States)

    Andrade, Gabriel

    2017-01-01

    Past life regression therapy is used by some physicians in cases with some mental diseases. Anxiety disorders, mood disorders, and gender dysphoria have all been treated using life regression therapy by some doctors on the assumption that they reflect problems in past lives. Although it is not supported by psychiatric associations, few medical associations have actually condemned it as unethical. In this article, I argue that past life regression therapy is unethical for two basic reasons. First, it is not evidence-based. Past life regression is based on the reincarnation hypothesis, but this hypothesis is not supported by evidence, and in fact, it faces some insurmountable conceptual problems. If patients are not fully informed about these problems, they cannot provide an informed consent, and hence, the principle of autonomy is violated. Second, past life regression therapy has the great risk of implanting false memories in patients, and thus, causing significant harm. This is a violation of the principle of non-malfeasance, which is surely the most important principle in medical ethics.

  18. Interpret with caution: multicollinearity in multiple regression of cognitive data.

    Science.gov (United States)

    Morrison, Catriona M

    2003-08-01

    Shibihara and Kondo in 2002 reported a reanalysis of the 1997 Kanji picture-naming data of Yamazaki, Ellis, Morrison, and Lambon-Ralph in which independent variables were highly correlated. Their addition of the variable visual familiarity altered the previously reported pattern of results, indicating that visual familiarity, but not age of acquisition, was important in predicting Kanji naming speed. The present paper argues that caution should be taken when drawing conclusions from multiple regression analyses in which the independent variables are so highly correlated, as such multicollinearity can lead to unreliable output.
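
    A standard way to check the concern raised here is to compute variance inflation factors before interpreting individual coefficients; values far above roughly 5-10 signal that the slopes are unstable. The predictor names below are hypothetical, echoing the kind of correlated lexical variables discussed.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(15)
        n = 200
        aoa = rng.normal(0, 1, n)                          # age of acquisition (standardized)
        freq = -0.8 * aoa + 0.3 * rng.normal(0, 1, n)      # word frequency, strongly correlated with AoA
        vis_fam = 0.7 * freq + 0.4 * rng.normal(0, 1, n)   # visual familiarity, correlated with frequency

        X = sm.add_constant(pd.DataFrame({"aoa": aoa, "freq": freq, "vis_fam": vis_fam}))
        for i, name in enumerate(X.columns):
            if name != "const":
                print(f"VIF {name:8s}: {variance_inflation_factor(X.values, i):.1f}")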

  19. Preference learning with evolutionary Multivariate Adaptive Regression Spline model

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Shaker, Noor; Christensen, Mads Græsbøll

    2015-01-01

    This paper introduces a novel approach for pairwise preference learning through combining an evolutionary method with Multivariate Adaptive Regression Spline (MARS). Collecting users' feedback through pairwise preferences is recommended over other ranking approaches as this method is more appealing...... for function approximation as well as being relatively easy to interpret. MARS models are evolved based on their efficiency in learning pairwise data. The method is tested on two datasets that collectively provide pairwise preference data of five cognitive states expressed by users. The method is analysed...

  20. The determination of Fe, Mn and Ca in sintered iron and blast-furnace slag by X-ray fluorescent analyses of energy and wave dispersion-comparison of results

    International Nuclear Information System (INIS)

    Dworak, B.; Gajek, Sz.

    1980-01-01

    The results of examinations of sintered iron and blast-furnace slag obtained by energy-dispersive and wavelength-dispersive X-ray fluorescence analyses are compared. They show that the two methods are comparable for elements such as Ca and Fe, whereas for Mn (in sinter) the wavelength-dispersive X-ray fluorescence analysis is less precise. (author)