WorldWideScience

Sample records for regression analyses evaluating

  1. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

    1994-01-01

    Multiple regression modeling of monitored building energy use data is often faulted as a means of predicting energy use on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected, and synthetic data sequences representative of daily energy use in large institutional buildings were generated in each location using a linear model with outdoor temperature, outdoor specific humidity and solar radiation as the three regression variables. MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified, and preliminary recommendations on the use of either modeling approach were formulated. (orig.)
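
    The contrast between the two approaches can be sketched in a few lines of Python. This is a minimal illustration, not the study's model: the synthetic data, coefficients and noise levels below are invented, with humidity made deliberately collinear with temperature.

      # Sketch: multiple regression (MRA) vs. PCA-based regression on
      # synthetic daily energy-use data with collinear regressors.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 365
      temp = rng.normal(15, 10, n)                    # outdoor temperature
      hum = 0.8 * temp + rng.normal(0, 2, n)          # humidity, collinear with temp
      solar = rng.normal(200, 50, n)                  # solar radiation
      X = np.column_stack([temp, hum, solar])
      energy = 3.0 * temp + 1.5 * hum + 0.02 * solar + rng.normal(0, 5, n)

      mra = LinearRegression().fit(X, energy)         # coefficients may be unstable
      pca = PCA(n_components=2).fit(X)                # drop the weakest component
      scores = pca.transform(X)
      pcr = LinearRegression().fit(scores, energy)

      print("MRA R^2:", mra.score(X, energy))
      print("PCR R^2:", pcr.score(scores, energy))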

  2. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
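
    Both statistics are short enough to sketch directly. The following Python snippet is a minimal illustration under the usual textbook formulas, not the authors' pipeline; the simulated "assay" values and the error-variance ratio lam = 1 in the Deming fit are assumptions.

      # Sketch: Bland-Altman limits of agreement and Deming regression for
      # comparing two quantitative assays (x = reference, y = test).
      import numpy as np

      def bland_altman(x, y):
          diff = y - x
          bias = diff.mean()                       # constant error (bias)
          loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
          return bias, (bias - loa, bias + loa)

      def deming(x, y, lam=1.0):                   # lam = error-variance ratio
          sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                   + 4 * lam * sxy ** 2)) / (2 * sxy)
          return slope, y.mean() - slope * x.mean()  # proportional, constant error

      rng = np.random.default_rng(1)
      truth = rng.uniform(5, 50, 40)                 # e.g. variant allele fractions
      x = truth + rng.normal(0, 1, 40)
      y = 1.05 * truth + 0.5 + rng.normal(0, 1, 40)  # proportional + constant error
      print(bland_altman(x, y), deming(x, y))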

  3. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.

  4. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
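
    The logistic-model setup can be illustrated in a few lines of Python. This is a toy sketch, not the SURFACE or OUTBREAK specification: the predictors, coefficients and data below are invented stand-ins.

      # Sketch: a logistic model for persistent contrail occurrence from
      # meteorological analysis variables; all values are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(13)
      n = 1000
      temp = rng.normal(-55, 8, n)        # upper-tropospheric temperature (C)
      rhi = rng.normal(95, 20, n)         # relative humidity w.r.t. ice (%)
      logit = -0.2 * (temp + 50) + 0.08 * (rhi - 100)
      contrail = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([temp, rhi])
      model = LogisticRegression().fit(X, contrail)
      print("mean accuracy:", model.score(X, contrail))   # cf. the ~75% reported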

  5. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water quality, hydrological and meteorological ones. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection: typically, water quality variables are sampled fortnightly, whereas the rain data are sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
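
    The mixed-frequency idea can be sketched compactly. The Python snippet below is a minimal illustration rather than the authors' specification: it regresses an invented fortnightly outcome on 14 daily rainfall lags whose weights follow an exponential Almon polynomial, a common parsimonious MIDAS weighting scheme.

      # Sketch of a MIDAS regression: fortnightly water quality regressed on
      # daily rainfall via an exponential Almon lag polynomial.
      import numpy as np
      from scipy.optimize import curve_fit

      K = 14                                    # 14 daily lags per fortnight

      def midas(rain_lags, b0, b1, t1, t2):
          k = np.arange(K)
          w = np.exp(t1 * k + t2 * k ** 2)      # exponential Almon weights
          w /= w.sum()                          # normalise to sum to one
          return b0 + b1 * rain_lags @ w

      rng = np.random.default_rng(2)
      rain = rng.gamma(2.0, 3.0, 14 * 120)                     # daily rain series
      lags = np.array([rain[i * 14:(i + 1) * 14][::-1] for i in range(120)])
      true_w = np.exp(-0.3 * np.arange(K)); true_w /= true_w.sum()
      wq = 10 + 0.8 * lags @ true_w + rng.normal(0, 0.5, 120)  # fortnightly outcome

      params, _ = curve_fit(midas, lags, wq, p0=[0, 1, -0.1, 0.0])
      print("beta0, beta1, theta1, theta2:", params)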

  6. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies.

    Science.gov (United States)

    Vatcheva, Kristina P; Lee, MinJae; McCormick, Joseph B; Rahbar, Mohammad H

    2016-04-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in the analysis of data from epidemiologic studies. We used simulated datasets and real-life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and encourage researchers to consider diagnostics for multicollinearity as one of the steps in regression analysis.
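
    A standard diagnostic the paper encourages is the variance inflation factor (VIF). The Python sketch below is illustrative only; the variables are invented, and the VIF > 10 (sometimes > 5) cutoff is a common rule of thumb rather than the authors' threshold.

      # Sketch: VIF as a multicollinearity diagnostic before regression.
      import numpy as np
      import pandas as pd
      from statsmodels.stats.outliers_influence import variance_inflation_factor
      from statsmodels.tools import add_constant

      rng = np.random.default_rng(3)
      bmi = rng.normal(30, 5, 200)
      weight = 2.5 * bmi + rng.normal(0, 3, 200)      # strongly collinear with BMI
      age = rng.normal(45, 12, 200)
      X = add_constant(pd.DataFrame({"bmi": bmi, "weight": weight, "age": age}))

      for i, name in enumerate(X.columns):            # large VIFs flag collinearity
          print(name, variance_inflation_factor(X.values, i))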

  7. Statistical and regression analyses of detected extrasolar systems

    Czech Academy of Sciences Publication Activity Database

    Pintr, Pavel; Peřinová, V.; Lukš, A.; Pathak, A.

    2013-01-01

    Vol. 75, No. 1 (2013), pp. 37-45. ISSN 0032-0633. Institutional support: RVO:61389021. Keywords: Exoplanets; Kepler candidates; Regression analysis. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 1.630, year: 2013. http://www.sciencedirect.com/science/article/pii/S0032063312003066

  8. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies

    OpenAIRE

    Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.

    2016-01-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in the analysis of data from epidemiologic studies.

  9. Analysing inequalities in Germany: a structured additive distributional regression approach

    CERN Document Server

    Silbersdorff, Alexander

    2017-01-01

    This book seeks new perspectives on the growing inequalities that our societies face, putting forward Structured Additive Distributional Regression as a means of statistical analysis that circumvents the common problem of analytical reduction to simple point estimators. This new approach allows the observed discrepancy between the individuals' realities and the abstract representation of those realities that arises when using the arithmetic mean alone to be explicitly taken into consideration. In turn, the method is applied to the question of economic inequality in Germany.

  10. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to that of a conventional myoelectric prosthesis control method using intramuscular EMG (parallel dual-site control), an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to the performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected the prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.

  11. How to deal with continuous and dichotomic outcomes in epidemiological research: linear and logistic regression analyses

    NARCIS (Netherlands)

    Tripepi, Giovanni; Jager, Kitty J.; Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine

    2011-01-01

    Because of some limitations of stratification methods, epidemiologists frequently use multiple linear and logistic regression analyses to address specific epidemiological questions. If the dependent variable is a continuous one (for example, systolic pressure or serum creatinine), linear regression analysis is applied; if the dependent variable is dichotomous (for example, death), logistic regression is used.

  12. Reducing Inter-Laboratory Differences between Semen Analyses Using Z Score and Regression Transformations

    Directory of Open Access Journals (Sweden)

    Esther Leushuis

    2016-12-01

    Background: Standardization of the semen analysis may improve reproducibility. We assessed variability between laboratories in semen analyses and evaluated whether a transformation using Z scores and regression statistics was able to reduce this variability. Materials and Methods: We performed a retrospective cohort study. We calculated between-laboratory coefficients of variation (CVB) for sperm concentration and for morphology. Subsequently, we standardized the semen analysis results by calculating laboratory-specific Z scores, and by using regression. We used analysis of variance for four semen parameters to assess systematic differences between laboratories before and after the transformations, both in the circulation samples and in the samples obtained in the prospective cohort study in the Netherlands between January 2002 and February 2004. Results: The mean CVB was 7% for sperm concentration (range 3 to 13%) and 32% for sperm morphology (range 18 to 51%). The differences between the laboratories were statistically significant for all semen parameters (all P<0.001). Standardization using Z scores did not reduce the differences in semen analysis results between the laboratories (all P<0.001). Conclusion: There exists large between-laboratory variability for sperm morphology and small, but statistically significant, between-laboratory variation for sperm concentration. Standardization using Z scores does not eliminate between-laboratory variability.
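
    The laboratory-specific Z-score transformation itself is simple; a minimal Python sketch follows, with invented laboratory names and values standing in for the study's data.

      # Sketch: within-laboratory Z scores, (x - lab mean) / lab SD; per the
      # study, this rescales but need not remove between-lab differences.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(4)
      labs = np.repeat(["A", "B", "C"], 50)
      conc = np.concatenate([rng.normal(60, 20, 50),      # lab A
                             rng.normal(70, 25, 50),      # lab B, reads higher
                             rng.normal(55, 15, 50)])     # lab C
      df = pd.DataFrame({"lab": labs, "concentration": conc})

      df["z"] = df.groupby("lab")["concentration"].transform(
          lambda x: (x - x.mean()) / x.std(ddof=1))
      print(df.groupby("lab")["z"].agg(["mean", "std"]))  # ~0 mean, unit SD per lab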

  13. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.

  14. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components and was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF and PAR and of CAF, PAR and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
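
    The CWT-then-PLS pipeline can be sketched in Python. This is a single-analyte toy under assumed choices (Mexican-hat wavelet, one scale, Gaussian bands), whereas the authors tuned wavelet families and modelled four analytes.

      # Sketch: CWT coefficients of spectra as PLS pre-processing.
      import numpy as np
      import pywt
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      wavelengths = np.linspace(200, 400, 201)
      conc = rng.uniform(0.5, 5.0, 25)                     # toy calibration set
      spectra = (conc[:, None] * np.exp(-((wavelengths - 270) / 25) ** 2)
                 + rng.normal(0, 0.01, (25, 201)))         # overlapped bands

      # CWT of each spectrum; keep coefficients at one assumed scale
      coeffs = np.array([pywt.cwt(s, scales=[20], wavelet="mexh")[0][0]
                         for s in spectra])

      pls = PLSRegression(n_components=2).fit(coeffs, conc)
      print("calibration R^2:", pls.score(coeffs, conc))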

  15. Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.

    Directory of Open Access Journals (Sweden)

    David S Boukal

    Temperature drives development in insects and other ectotherms because their metabolic rate and growth depend directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread, and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.

  16. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
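
    The Legendre-polynomial building block of such models is easy to demonstrate. The Python sketch below fits only the fixed-effect curve over weeks of production (a full random regression model would add per-animal genetic and permanent environmental curves); the data are invented.

      # Sketch: second-order Legendre polynomial over the egg-production
      # trajectory; ages must first be mapped to [-1, 1].
      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(6)
      weeks = np.tile(np.arange(2, 7), 30)                    # weeks 2-6, 30 birds
      egg_weight = (9 + 0.6 * weeks - 0.04 * weeks ** 2
                    + rng.normal(0, 0.3, weeks.size))

      t = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1
      coef = legendre.legfit(t, egg_weight, deg=2)            # LP order 2
      print("fitted curve, weeks 2-6:", legendre.legval(np.unique(t), coef))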

  17. Regression equations to evaluate the energy values of wheat grain and its by-products for broiler chickens from chemical analyses

    Directory of Open Access Journals (Sweden)

    F.M.O. Borges

    2003-12-01

    One experiment was run with broiler chickens to obtain prediction equations for metabolizable energy (ME) based on feedstuff chemical analyses and the determined ME of wheat grain and its by-products, using four different methodologies. Seven wheat grain by-products were used in five treatments: wheat grain, wheat germ, white wheat flour, dark wheat flour, wheat bran for human use, wheat bran for animal use and rough wheat bran. Based on chemical analyses of crude fiber (CF), ether extract (EE), crude protein (CP), ash (AS) and starch (ST) of the feeds and the determined values of apparent energy (MEA), true energy (MEV), apparent corrected energy (MEAn) and true energy corrected by nitrogen balance (MEVn) in five treatments, prediction equations were obtained using the stepwise procedure. CF showed the best relationship with metabolizable energy values; however, this variable alone was not enough for a good estimate of the energy values (R² below 0.80). When EE and CP were included in the equations, R² increased to 0.90 or higher in most estimates. When the equations were calculated with all treatments, the equation for MEA was less precise and R² decreased. When ME data of the traditional or force-feeding methods were used separately, the precision of the equations increased (R² higher than 0.85). For MEV and MEVn values, the best multiple linear equations included CF, EE and CP (R² > 0.90), independently of using all experimental data or separating by methodology. The estimates of MEVn values showed high precision, and the linear coefficients (a) of the equations were similar for all treatments or methodologies, which indicates the small influence of the different methodologies on this parameter. NDF was not a better predictor of ME than CF.

  18. Logistic regression and multiple classification analyses to explore risk factors of under-5 mortality in Bangladesh

    International Nuclear Information System (INIS)

    Bhowmik, K.R.; Islam, S.

    2016-01-01

    Logistic regression (LR) analysis is the most common statistical methodology for finding the determinants of childhood mortality. However, the significant predictors cannot be ranked according to their influence on the response variable. Multiple classification (MC) analysis can be applied to identify the significant predictors with a priority index which helps to rank the predictors. The main objective of the study is to find the socio-demographic determinants of childhood mortality at the neonatal, post-neonatal, and post-infant periods by fitting an LR model, as well as to rank them through MC analysis. The study is conducted using the data of the Bangladesh Demographic and Health Survey 2007, where birth and death information of children was collected from their mothers. Three dichotomous response variables were constructed from children's age at death to fit the LR and MC models. Socio-economic and demographic variables significantly associated with the response variables were separately considered in the LR and MC analyses. Both the LR and MC models identified the same significant predictors for specific childhood mortality. For both neonatal and child mortality, biological factors of children, regional settings, and parents' socio-economic status were found to be the 1st, 2nd, and 3rd most significant groups of predictors, respectively. Mother's education and household environment were detected as major significant predictors of post-neonatal mortality. This study shows that MC analysis, with or without LR analysis, can be applied to detect determinants with rank, which helps policy makers take initiatives on a priority basis. (author)

  19. Evaluation of periodic safety status analyses

    International Nuclear Information System (INIS)

    Faber, C.; Staub, G.

    1997-01-01

    In order to carry out the evaluation of safety status analyses by the safety assessor within the periodic safety reviews of nuclear power plants, safety-goal-oriented requirements have been formulated together with complementary evaluation criteria. Their application in an interdisciplinary cooperation covering the subject areas involved facilitates a complete safety-goal-oriented assessment of the plant status. The procedure is outlined briefly by an example for the safety goal 'reactivity control' for BWRs. (orig.) [de]

  20. The number of subjects per variable required in linear regression analyses

    NARCIS (Netherlands)

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model.

  1. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R², although adjusted R² estimates behaved well. The bias in estimating the model R² statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
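
    A simulation in this spirit takes only a few lines of Python. This sketch is not the authors' code; the choice of 10 predictors with true coefficients of one is an invented configuration.

      # Sketch: Monte Carlo check of subjects-per-variable (SPV) effects on
      # coefficient bias and on R^2 vs. adjusted R^2.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      p, spv, reps = 10, 2, 2000                 # 10 predictors, SPV = 2
      n = p * spv
      coef_bias, r2s, adj_r2s = [], [], []
      for _ in range(reps):
          X = rng.normal(size=(n, p))
          y = X @ np.ones(p) + rng.normal(size=n)    # true coefficients all 1
          fit = LinearRegression().fit(X, y)
          coef_bias.append(fit.coef_.mean() - 1.0)
          r2 = fit.score(X, y)
          r2s.append(r2)
          adj_r2s.append(1 - (1 - r2) * (n - 1) / (n - p - 1))
      print("mean coef bias:", np.mean(coef_bias))
      print("mean R^2:", np.mean(r2s), "mean adjusted R^2:", np.mean(adj_r2s))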

  2. An evaluation of the Olympus "Quickrate" analyser.

    Science.gov (United States)

    Williams, D G; Wood, R J; Worth, H G

    1979-02-01

    The Olympus "Quickrate", a photometer built for both kinetic and end point analysis was evaluated in this laboratory. Aspartate transaminase, lactate dehydrogenase, hydroxybutyrate dehydrogenase, creatine kinase, alkaline phosphatase and gamma glutamyl transpeptidase were measured in the kinetic mode and glucose, urea, total protein, albumin, bilirubin, calcium and iron in the end point mode. Overall, good correlation was observed with routine methodologies and the precision of the methods was acceptable. An electrical evaluation was also performed. In our hands, the instrument proved to be simple to use and gave no trouble. It should prove useful for paediatric and emergency work, and as a back up for other analysers.

  3. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered for breeding evaluation processes by random regression models.

  4. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample.

  5. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  6. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

    Science.gov (United States)

    Wu, Dane W.

    2002-01-01

    The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly, due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

  7. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assorted among two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  8. Check-all-that-apply data analysed by Partial Least Squares regression

    DEFF Research Database (Denmark)

    Rinnan, Åsmund; Giacalone, Davide; Frøst, Michael Bom

    2015-01-01

    are analysed by multivariate techniques. CATA data can be analysed both by setting the CATA as the X and the Y. The former is the PLS-Discriminant Analysis (PLS-DA) version, while the latter is the ANOVA-PLS (A-PLS) version. We investigated the difference between these two approaches, concluding...

  9. Differential item functioning (DIF) analyses of health-related quality of life instruments using logistic regression

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2010-01-01

    Differential item functioning (DIF) methods can be used to determine whether different subgroups respond differently to particular items within a health-related quality of life (HRQoL) subscale, after allowing for overall subgroup differences in that scale. This article reviews issues that arise when testing for DIF in HRQoL instruments. We focus on logistic regression methods, which are often used because of their efficiency, simplicity and ease of application.

  10. Correlation and regression analyses of genetic effects for different types of cells in mammals under radiation and chemical treatment

    International Nuclear Information System (INIS)

    Slutskaya, N.G.; Mosseh, I.B.

    2006-01-01

    Data about genetic mutations under radiation and chemical treatment for different types of cells have been analyzed with correlation and regression analyses. Linear correlations between different genetic effects in sex cells and somatic cells have been found. The results may be extrapolated to the sex cells of humans and other mammals. (authors)

  11. Evaluation of the Olympus AU-510 analyser.

    Science.gov (United States)

    Farré, C; Velasco, J; Ramón, F

    1991-01-01

    The selective multitest Olympus AU-510 analyser was evaluated according to the recommendations of the Comision de Instrumentacion de la Sociedad Española de Quimica Clinica and the European Committee for Clinical Laboratory Standards. The evaluation was carried out in two stages: an examination of the analytical units and then an evaluation in routine work conditions. The operational characteristics of the system were also studied. The first stage included a photometric study: depending on the absorbance, the inaccuracy varies between +0.5% and -0.6% at 405 nm and from -5.6% to 10.6% at 340 nm; the imprecision ranges between -0.22% and 0.56% at 405 nm and between 0.09% and 2.74% at 340 nm. Linearity was acceptable, apart from a very low absorbance for NADH at 340 nm, and the imprecision of the serum sample pipetter was satisfactory. Twelve serum analytes were studied under routine conditions: glucose, urea, urate, cholesterol, triglycerides, total bilirubin, creatinine, phosphate, iron, aspartate aminotransferase, alanine aminotransferase and gamma-glutamyl transferase. The within-run imprecision (CV%) ranged from 0.67% for phosphate to 2.89% for iron, and the between-run imprecision from 0.97% for total bilirubin to 7.06% for iron. There was no carryover in a study of the serum sample pipetter. Carry-over studies with the reagent and sample pipetters showed some cross-contamination in the iron assay.

  12. Evaluating the Performance of Polynomial Regression Method with Different Parameters during Color Characterization

    Directory of Open Access Journals (Sweden)

    Bangyong Sun

    2014-01-01

    The polynomial regression method is employed to calculate the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. Firstly, the polynomial equation for color conversion is established and the computation of polynomial coefficients is analysed. Then different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK data, and the corresponding color errors are compared. At last, an optimal polynomial expression is obtained by analysing several related parameters during color conversion, including the number of polynomial terms, the degree of the polynomial, the selection of CIE visual spaces, and the linearization.
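
    The basic RGB-to-CIE polynomial mapping can be sketched in Python. This is an illustrative toy: the RGB-to-XYZ matrix, gamma value and degree-3 polynomial below are invented stand-ins, not the paper's optimal expression.

      # Sketch: polynomial regression from device RGB to CIE XYZ.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(8)
      rgb = rng.uniform(0, 1, (100, 3))                     # training patches
      M = np.array([[0.41, 0.36, 0.18],                     # nominal RGB->XYZ
                    [0.21, 0.72, 0.07],                     # matrix, illustrative
                    [0.02, 0.12, 0.95]])
      xyz = (rgb ** 2.2) @ M.T + rng.normal(0, 0.002, (100, 3))

      poly = PolynomialFeatures(degree=3, include_bias=True)
      model = LinearRegression().fit(poly.fit_transform(rgb), xyz)
      pred = model.predict(poly.transform(rgb))
      print("mean abs XYZ error:", np.abs(pred - xyz).mean())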

  13. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  14. Correlation, Regression and Path Analyses of Seed Yield Components in Crambe abyssinica, a Promising Industrial Oil Crop

    OpenAIRE

    Huang, Banglian; Yang, Yiming; Luo, Tingting; Wu, S.; Du, Xuezhu; Cai, Detian; van Loo, E.N.; Huang, Bangquan

    2013-01-01

    In the present study, correlation, regression and path analyses were carried out to determine correlations among the agronomic traits and their contributions to seed yield per plant in Crambe abyssinica. Partial correlation analysis indicated that plant height (X1) was significantly correlated with branching height and the number of first branches (P < 0.01); branching height (X2) was significantly correlated with pod number of primary inflorescence (P < 0.01) and number of secondary branch...

  15. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS

    Directory of Open Access Journals (Sweden)

    Liviu GEAMBAŞU

    2010-12-01

    One of the most important preoccupations of financial market participants was and still is the problem of determining more precisely the trend of financial asset prices. To solve this problem, many scientific papers were written and many mathematical and statistical models were developed in order to better determine the trend of financial asset prices. While simple linear models were until recently largely used because they are easy to apply, the financial crisis that affected the world economy starting in 2008 highlighted the necessity of adapting mathematical models to the variation of the economy. A model that is simple to use but adapted to the realities of economic life is spline linear regression. This type of regression keeps the continuity of the regression function but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as well as the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is the theoretical presentation of spline linear regression, with reference to national and international scientific papers related to this subject. The second objective is applying the theoretical model to data from the Bucharest Stock Exchange.
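
    A continuous piecewise-linear fit of this kind reduces to ordinary least squares on a hinge basis. The Python sketch below uses an invented price series with one assumed knot; knot placement is the key modelling choice in practice.

      # Sketch: spline (piecewise) linear regression, continuous at the knots.
      import numpy as np

      def linear_spline_basis(x, knots):
          # columns: 1, x, and hinge terms max(0, x - k) for each knot
          cols = [np.ones_like(x), x] + [np.maximum(0, x - k) for k in knots]
          return np.column_stack(cols)

      rng = np.random.default_rng(9)
      t = np.arange(250, dtype=float)                    # trading days
      price = np.where(t < 120, 50 + 0.10 * t, 62 - 0.15 * (t - 120))
      price += rng.normal(0, 0.8, t.size)

      X = linear_spline_basis(t, knots=[120])            # assumed regime break
      beta, *_ = np.linalg.lstsq(X, price, rcond=None)
      print("slope before knot:", beta[1], "after knot:", beta[1] + beta[2])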

  16. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  17. Transpiration of glasshouse rose crops: evaluation of regression models

    NARCIS (Netherlands)

    Baas, R.; Rijssel, van E.

    2006-01-01

    Regression models of transpiration (T) based on global radiation inside the greenhouse (G), with or without energy input from heating pipes (Eh) and/or vapor pressure deficit (VPD) were parameterized. Therefore, data on T, G, temperatures from air, canopy and heating pipes, and VPD from both a

  18. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines occupational accident records from the opencast coal mine of the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used in this study for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. The social facilities area of surface installations, workshops and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on such reported accidents that occurred in 2012 for the ELI in Soma and correctly estimated the probability of exposure to accidents with lost workdays in 70% of cases.

  19. Nonlinear regression analysis for evaluating tracer binding parameters using the programmable K1003 desk computer

    International Nuclear Information System (INIS)

    Sarrach, D.; Strohner, P.

    1986-01-01

    The Gauss-Newton algorithm has been used to evaluate tracer binding parameters of RIA by nonlinear regression analysis. The calculations were carried out on the K1003 desk computer. Equations for simple binding models and their derivatives are presented. The advantages of nonlinear regression analysis over linear regression are demonstrated.
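
    The same kind of fit is a one-liner in modern tooling. The Python sketch below fits a one-site binding model with invented data; note that scipy's default algorithm is Levenberg-Marquardt, a damped variant of the Gauss-Newton method used in the paper.

      # Sketch: nonlinear regression of a one-site tracer binding model,
      # B = Bmax * F / (Kd + F), with synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      def binding(free, bmax, kd):
          return bmax * free / (kd + free)

      free = np.array([0.5, 1, 2, 4, 8, 16, 32.0])       # free tracer (nmol/L)
      rng = np.random.default_rng(10)
      bound = binding(free, 100.0, 4.0) + rng.normal(0, 2, free.size)

      (bmax, kd), cov = curve_fit(binding, free, bound, p0=[80, 2])
      print("Bmax:", bmax, "Kd:", kd, "SEs:", np.sqrt(np.diag(cov)))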

  20. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD-Level Meta-Regression Analyses

    Directory of Open Access Journals (Sweden)

    Kevin D. Cashman

    2017-05-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake.

  1. Clinical evaluation of a novel population-based regression analysis for detecting glaucomatous visual field progression.

    Science.gov (United States)

    Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C

    2011-04-01

    The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included into this study. They were independently classified by two experienced investigators. The results of such a classification served as a reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). In regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis. Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF

  2. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    Science.gov (United States)

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259

  3. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).

  4. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I² statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects.
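
    The core computations are short. The Python sketch below runs MR-Egger as a weighted regression of SNP-outcome on SNP-exposure estimates and computes a Cochran-Q-based I²GX; the summary statistics are simulated, and this is an illustration of the idea rather than the paper's software.

      # Sketch: MR-Egger regression with inverse-variance weights, plus the
      # I^2_GX statistic (values near 1 indicate little NOME dilution).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      k = 30
      bx = rng.normal(0.1, 0.03, k)          # SNP-exposure estimates
      sx = np.full(k, 0.01)                  # their standard errors
      by = 0.5 * bx + rng.normal(0, 0.02, k) # SNP-outcome estimates
      sy = np.full(k, 0.02)

      egger = sm.WLS(by, sm.add_constant(bx), weights=1 / sy ** 2).fit()
      print("intercept (pleiotropy test):", egger.params[0])
      print("causal slope:", egger.params[1])

      # I^2_GX from a Cochran-type Q over the SNP-exposure estimates
      w = 1 / sx ** 2
      q = np.sum(w * (bx - np.average(bx, weights=w)) ** 2)
      print("I2_GX:", max(0.0, (q - (k - 1)) / q))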

  5. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

    Science.gov (United States)

    Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

    2017-01-01

    Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while 29 articles containing 3,204 participants were included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p < 0.05), retention performance (d = 0.27, 95% CI = [0.08, 0.46], p < 0.01), and transfer performance (d = 0.34, 95% CI = [0.12, 0.56], p < 0.01). The subsequent meta-regression analyses showed that d_SCL for cueing significantly predicted d_retention for cueing (β = -0.70, 95% CI = [-1.02, -0.38], p < 0.001), as well as d_transfer for cueing (β = -0.60, 95% CI = [-0.92, -0.28], p < 0.001). Thus, in line with CLT, adding cues in multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better the retention and transfer of multimedia learning.

  6. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is a function of the foreign trade balance on the one hand and of credit cards, i.e. indebtedness of the population on this basis, on the other hand, in the USA (from 1999 to 2008). We used the extended application model, which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended for the study of the compatibility of data and model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model is carried out with the use of the Statistics PASW 17 program.

  7. Regression analysis utilizing subjective evaluation of emotional experience in PET studies on emotions.

    Science.gov (United States)

    Aalto, Sargo; Wallius, Esa; Näätänen, Petri; Hiltunen, Jaana; Metsähonkala, Liisa; Sipilä, Hannu; Karlsson, Hasse

    2005-09-01

    A methodological study on subject-specific regression analysis (SSRA), exploring the correlation between the neural response and the subjective evaluation of emotional experience in eleven healthy females, is presented. The target emotions, i.e., amusement and sadness, were induced using validated film clips, regional cerebral blood flow (rCBF) was measured using positron emission tomography (PET), and the subjective intensity of the emotional experience during the PET scanning was measured using a category ratio (CR-10) scale. Reliability analysis of the rating data indicated that the subjects rated the intensity of their emotional experience fairly consistently on the CR-10 scale (Cronbach alphas 0.70-0.97). A two-phase random-effects analysis was performed to ensure the generalizability and inter-study comparability of the SSRA results. Random-effects SSRAs using Statistical non-Parametric Mapping 99 (SnPM99) showed that rCBF correlated with the self-rated intensity of the emotional experience mainly in the brain regions that were identified in the random-effects subtraction analyses using the same imaging data. Our results give preliminary evidence of a linear association between the neural responses related to amusement and sadness and the self-evaluated intensity of the emotional experience in several regions involved in the emotional response. SSRA utilizing subjective evaluation of emotional experience turned out to be a feasible and promising method of analysis. It allows versatile exploration of the neurobiology of emotions and the neural correlates of actual and individual emotional experience. Thus, SSRA might be able to capture the idiosyncratic aspects of the emotional response better than traditional subtraction analysis.

  8. Evaluation of syngas production unit cost of bio-gasification facility using regression analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Yangyang; Parajuli, Prem B.

    2011-08-10

    Evaluation of the economic feasibility of a bio-gasification facility needs understanding of its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1800 Nm³/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary result of this study showed that the reciprocal regression technique had the best-fit curve between unit cost and production capacity, with a sum of error squares (SES) lower than 0.001 and a coefficient of determination (R²) of 0.996. The regression analysis techniques determined the minimum unit cost of syngas production for micro-scale bio-gasification facilities of $0.052/Nm³, under the capacity of 2,880 Nm³/h. The results of this study suggest that to reduce cost, facilities should run at a high production capacity. In addition, the contribution of this technique could be a new categorical criterion to evaluate micro-scale bio-gasification facilities from the perspective of economic analysis.
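
    A minimal sketch of the three functional forms compared above, fitted to synthetic cost-capacity data (the generating parameters are made up): each form is estimated by OLS after the appropriate transformation and compared by SES and R².

```python
# Minimal sketch: fit simple, reciprocal and log-log forms by OLS and
# compare by sum of error squares (SES) and R^2 (synthetic data).
import numpy as np
import statsmodels.api as sm

cap = np.linspace(60, 1800, 30)                       # capacity, Nm^3/h
cost = 0.05 + 8.0 / cap + np.random.default_rng(3).normal(0, 0.002, 30)

for name, X, y in [("simple",     cap,         cost),
                   ("reciprocal", 1 / cap,     cost),
                   ("log-log",    np.log(cap), np.log(cost))]:
    res = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{name:10s} SES={np.sum(res.resid**2):.6f} R2={res.rsquared:.3f}")
```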

  9. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

    This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m³/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using the LR and MARS models and tested using a receiver operating characteristic curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of the GPMs created using the MARS and LR models was 0.867 and 0.838, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.
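
    A sketch of the logistic-regression half of this workflow with placeholder data (the 16 features and the 70/30 split mirror the abstract; MARS itself would require a separate package such as py-earth and is omitted here):

```python
# Minimal sketch: LR model with a 70/30 split, validated by ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(1224, 16))        # 16 groundwater influence factors
y = (X[:, 0] - 0.5 * X[:, 5] + rng.normal(size=1224) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("success-rate AUC:   ", roc_auc_score(y_tr, lr.predict_proba(X_tr)[:, 1]))
print("prediction-rate AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
```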

  10. Prognostic value of tumor regression evaluated after first course of radiotherapy for anal canal cancer

    International Nuclear Information System (INIS)

    Chapet, Olivier; Gerard, Jean-Pierre; Riche, Benjamin; Alessio, Annunziato; Mornex, Francoise; Romestaing, Pascale

    2005-01-01

    Purpose: To evaluate whether the tumor response after an initial course of irradiation predicts for colostomy-free survival and overall survival in patients with anal canal cancer. Methods and Materials: Between 1980 and 1998, 252 patients were treated by pelvic external-beam radiotherapy (EBRT) followed by a brachytherapy boost in 218 or EBRT in 34. EBRT was combined with chemotherapy in 168 patients. An evaluation of tumor regression, before the boost, was available for 221 patients. They were divided into four groups according to the degree of tumor regression; outcomes were then compared for regression >80% vs. ≤80%. The group with a T3-T4 lesion and tumor regression ≤80% had the poorest overall (52.8% ± 12.3%), disease-free (19.9% ± 9.9%), and colostomy-free survival (24.8% ± 11.2%) rates. Conclusion: The amount of tumor regression before the EBRT or brachytherapy boost is a strong prognostic factor for disease control without colostomy. When regression is ≤80% in patients with an initial T3-T4 lesion, the use of conservative RT should be carefully evaluated because of the very poor disease-free and colostomy-free survival.

  11. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

    Science.gov (United States)

    Laszlo, Sarah; Federmeier, Kara D.

    2010-01-01

    Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

  12. Regression Discontinuity in Prospective Evaluations: The Case of the FFVP Evaluation

    Science.gov (United States)

    Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan

    2015-01-01

    While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…

  13. [Evaluation of estimation of prevalence ratio using Bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point estimation and interval estimation of the PR of medical care-seeking relative to caregivers' recognition of risk signs of diarrhea, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age, based on model 2), between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression models, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has more advantages in application compared with the conventional log-binomial regression model.
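
    A minimal frequentist counterpart to the model above, on hypothetical data: a log-binomial model is a binomial GLM with a log link, and the exponentiated coefficient is the PR. A Bayesian fit, as in the paper, would use OpenBUGS or a probabilistic-programming library instead; log-binomial GLMs are also the ones prone to the convergence failures the abstract discusses.

```python
# Minimal sketch: log-binomial GLM; exp(coefficient) = prevalence ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 800
recognizes = rng.integers(0, 2, n)       # caregiver recognizes risk signs
p = 0.55 * 1.13**recognizes              # true PR = 1.13
seeks_care = rng.binomial(1, p)

X = sm.add_constant(recognizes)
fit = sm.GLM(seeks_care, X,
             family=sm.families.Binomial(link=sm.families.links.Log())).fit()
pr = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])
print(f"PR = {pr:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# Note: log-binomial GLMs can fail to converge when fitted probabilities
# approach 1 -- one motivation for the Bayesian (or COPY) approach above.
```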

  14. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Directory of Open Access Journals (Sweden)

    C. Wu

    2018-03-01

    Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.

  15. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Science.gov (United States)

    Wu, Cheng; Yu, Jian Zhen

    2018-03-01

    Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
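
    A sketch of Deming regression with an explicit weighting parameter λ (here the ratio of y-error variance to x-error variance), on synthetic errors-in-variables data; varying λ away from its true value illustrates the slope and intercept bias discussed above. York regression would additionally use per-point uncertainties.

```python
# Minimal sketch of Deming regression with weighting parameter lam.
import numpy as np

rng = np.random.default_rng(6)
x_true = np.linspace(0, 10, 100)
x = x_true + rng.normal(0, 0.5, 100)          # X measured with error
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.5, 100)

def deming(x, y, lam=1.0):
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx +
             np.sqrt((syy - lam * sxx)**2 + 4 * lam * sxy**2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

for lam in (0.25, 1.0, 4.0):                  # wrong lam biases both estimates
    print(lam, deming(x, y, lam))
```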

  16. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Directory of Open Access Journals (Sweden)

    Abdelfattah M. Selim

    2018-03-01

    Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between BVDV seropositivity and all risk factors except species of animal. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes, which was 2.237. Likelihood ratio tests showed a significant drop of the -2LL from the univariable to the multivariable LR models. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was shown to provide more information for association and prediction purposes relative to univariable LR models and Chi-square tests when more than one predictor is available.
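
    A minimal sketch of the univariable-versus-multivariable comparison via the drop in -2LL (a likelihood-ratio test); the covariates and coefficients below are hypothetical stand-ins for the study's risk factors.

```python
# Minimal sketch: nested logistic models compared by likelihood-ratio test.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)
n = 740
species = rng.integers(0, 2, n)               # 1 = cattle, 0 = buffalo
age = rng.integers(0, 3, n)                   # age category
p = 1 / (1 + np.exp(-(-1.2 + 0.8 * species + 0.1 * age)))
seropos = rng.binomial(1, p)

uni = sm.Logit(seropos, sm.add_constant(species)).fit(disp=0)
multi = sm.Logit(seropos,
                 sm.add_constant(np.column_stack([species, age]))).fit(disp=0)

print("OR (species, univariable):", np.exp(uni.params[1]))
lr_stat = 2 * (multi.llf - uni.llf)           # drop in -2LL
print("LR test p =", stats.chi2.sf(lr_stat, df=1))
```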

  17. Structural vascular disease in Africans: performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: the SABPA study

    OpenAIRE

    Botha, J.; De Ridder, J.H.; Potgieter, J.C.; Steyn, H.S.; Malan, L.

    2013-01-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fa...

  18. Evaluation of the comprehensive palatability of Japanese sake paired with dishes by multiple regression analysis based on subdomains.

    Science.gov (United States)

    Nakamura, Ryo; Nakano, Kumiko; Tamura, Hiroyasu; Mizunuma, Masaki; Fushiki, Tohru; Hirata, Dai

    2017-08-01

    Many factors contribute to palatability. In order to evaluate the palatability of Japanese sake paired with certain dishes by integrating multiple factors, here we applied an evaluation method previously reported for the palatability of cheese, using multiple regression analysis based on 3 subdomain factors (rewarding, cultural, and informational). We asked 94 Japanese participants/subjects to evaluate the palatability of sake (1st evaluation/E1 for the first cup; 2nd/E2 and 3rd/E3 for the palatability with the aftertaste/afterglow of certain dishes) and to respond to a questionnaire related to the 3 subdomains. In E1, 3 factors were extracted by a factor analysis, and the subsequent multiple regression analyses indicated that the palatability of sake was interpreted mainly by the rewarding factor. Further, the results of attribution-dissections in E1 indicated that 2 factors (rewarding and informational) contributed to the palatability. Finally, our results indicated that the palatability of sake was influenced by the dish eaten just before drinking.

  19. Evaluation for Long Term PM10 Concentration Forecasting using Multi Linear Regression (MLR) and Principal Component Regression (PCR) Models

    Directory of Open Access Journals (Sweden)

    Samsuri Abdullah

    2016-07-01

    Air pollution in Peninsular Malaysia is dominated by particulate matter, which is demonstrated by having the highest Air Pollution Index (API) value compared to the other pollutants in most parts of the country. Particulate Matter (PM10) forecasting model development is crucial because it allows the authority and citizens of a community to take necessary actions to limit their exposure to harmful levels of particulate pollution and implement protection measures to significantly improve air quality in designated locations. This study aims at improving the ability of MLR by using PC inputs for PM10 concentration forecasting. Daily observations of PM10 in Kuala Terengganu, Malaysia from January 2003 to December 2011 were utilized to forecast PM10 concentration levels. MLR and PCR (using PC inputs) models were developed and their performance was evaluated using RMSE, NAE and IA. Results revealed that PCR performed better than MLR due to the implementation of PCA, which reduces complexity and eliminates multi-collinearity in the data.

  20. An evaluation of an operating BWR piping system damping during earthquake by applying auto regressive analysis

    International Nuclear Information System (INIS)

    Kitada, Y.; Makiguchi, M.; Komori, A.; Ichiki, T.

    1985-01-01

    The records of three earthquakes which had induced significant responses in the piping system were obtained with the earthquake observation system. In the present paper, first, the eigenvalue analysis results for the piping system, based on the piping support (boundary) conditions, are described, and second, the frequency and damping factor evaluation results for each vibrational mode are described. In the present study, the Auto Regressive (AR) analysis method is used in the evaluation of natural frequencies and damping factors. The AR analysis applied here is capable of evaluating natural frequencies and damping factors directly from earthquake records observed on a piping system, without any information on the input motions to the system. (orig./HP)
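
    A hedged sketch of the AR idea on a synthetic response record: the poles of a fitted AR model map to continuous-time modal poles, from which natural frequency and damping follow without knowledge of the input motion. The sampling rate, model order and mode parameters below are illustrative.

```python
# Minimal sketch: AR fit of a response record; AR poles -> frequency, damping.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

dt = 0.01                                     # 100 Hz sampling
t = np.arange(0, 20, dt)
f0, zeta = 5.0, 0.02                          # 5 Hz mode, 2% damping
wn = 2 * np.pi * f0
x = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)
x += np.random.default_rng(16).normal(0, 0.01, len(t))

ar = AutoReg(x, lags=8).fit()
poles = np.roots(np.r_[1, -ar.params[1:]])    # roots of the AR polynomial
s = np.log(poles[np.imag(poles) > 0]) / dt    # continuous-time poles
freq = np.abs(s) / (2 * np.pi)
damp = -np.real(s) / np.abs(s)
for f, z in zip(freq, damp):
    if 0.5 < f < 40:                          # keep physically plausible modes
        print(f"mode: {f:.2f} Hz, damping {100 * z:.1f}%")
```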

  1. Evaluating disease management programme effectiveness: an introduction to the regression discontinuity design.

    Science.gov (United States)

    Linden, Ariel; Adams, John L; Roberts, Nancy

    2006-04-01

    Although disease management (DM) has been in existence for over a decade, there is still much uncertainty as to its effectiveness in improving health status and reducing medical cost. The main reason is that most programme evaluations typically follow weak observational study designs that are subject to bias, most notably selection bias and regression to the mean. The regression discontinuity (RD) design may be the best alternative to randomized studies for evaluating DM programme effectiveness. The most crucial element of the RD design is its use of a 'cut-off' score on a pre-test measure to determine assignment to intervention or control. A valuable feature of this technique is that the pre-test measure does not have to be the same as the outcome measure, thus maximizing the programme's ability to use research-based practice guidelines, survey instruments and other tools to identify those individuals in greatest need of the programme intervention. Similarly, the cut-off score can be based on clinical understanding of the disease process, empirically derived, or resource-based. In the RD design, programme effectiveness is determined by a change in the pre-post relationship at the cut-off point. While the RD design is uniquely suitable for DM programme evaluation, its success will depend, in large part, on fundamental changes being made in the way DM programmes identify and assign individuals to the programme intervention.
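
    A minimal sketch of the RD logic on synthetic data: assignment is determined by a cut-off on a pre-test score, and the programme effect is the shift in the pre-post relationship at the cut-off (a simple linear specification; applied work would also check functional form and bandwidth).

```python
# Minimal sketch: regression discontinuity with a pre-test cut-off.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
pretest = rng.normal(50, 10, n)
treated = (pretest >= 55).astype(int)         # cut-off assigns the intervention
posttest = 20 + 0.6 * pretest - 3.0 * treated + rng.normal(0, 4, n)

centered = pretest - 55                       # centre the assignment variable
X = sm.add_constant(np.column_stack([centered, treated]))
fit = sm.OLS(posttest, X).fit()
print("estimated programme effect at the cut-off:", fit.params[2])
```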

  2. The N-shaped environmental Kuznets curve: an empirical evaluation using a panel quantile regression approach.

    Science.gov (United States)

    Allard, Alexandra; Takman, Johanna; Uddin, Gazi Salah; Ahmed, Ali

    2018-02-01

    We evaluate the N-shaped environmental Kuznets curve (EKC) using panel quantile regression analysis. We investigate the relationship between CO2 emissions and GDP per capita for 74 countries over the period 1994-2012. We include additional explanatory variables, such as renewable energy consumption, technological development, trade, and institutional quality. We find evidence for the N-shaped EKC in all income groups, except for the upper-middle-income countries. Heterogeneous characteristics are, however, observed over the N-shaped EKC. Finally, we find a negative relationship between renewable energy consumption and CO2 emissions, which highlights the importance of promoting greener energy in order to combat global warming.
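
    A sketch of the core specification on pooled synthetic data (the real analysis is a panel with additional covariates): a cubic in GDP per capita estimated at several quantiles, where coefficient signs of +, -, + on the linear, squared and cubic terms indicate an N shape.

```python
# Minimal sketch: cubic-in-GDP quantile regression for an N-shaped EKC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
gdp = rng.uniform(1, 10, 1400)                # GDP per capita (rescaled)
co2 = 0.9 * gdp - 0.18 * gdp**2 + 0.012 * gdp**3 + rng.normal(0, 0.1, 1400)
df = pd.DataFrame({"co2": co2, "gdp": gdp})

for q in (0.25, 0.5, 0.75):                   # estimate at several quantiles
    fit = smf.quantreg("co2 ~ gdp + I(gdp**2) + I(gdp**3)", df).fit(q=q)
    print(q, fit.params.values.round(3))
```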

  3. Multicentre evaluation of the new ORTHO VISION® analyser.

    Science.gov (United States)

    Lazarova, E; Scott, Y; van den Bos, A; Wantzin, P; Atugonza, R; Solkar, S; Carpio, N

    2017-10-01

    Implementation of fully automated analysers has become a crucial security step in the blood bank; it reduces human errors, allows standardisation and improves turnaround time (TAT). We aimed at evaluating the ease of use and the efficiency of the ORTHO VISION® Analyser (VISION) in comparison to the ORTHO AutoVue® Innova System (AutoVue) in six different laboratories. After initial training and system configuration, VISION was used in parallel with AutoVue following the daily workload, both instruments being based on ORTHO BioVue® System column agglutination technology. Each participating laboratory provided data and scored the training, system configuration, quality control, maintenance and system efficiency. A total of 1049 individual samples were run: 266 forward and reverse groupings and antibody screens with 10 urgent samples, 473 ABD forward groupings and antibody screens with 22 urgent samples, 160 ABD forward groupings, 42 antibody screens and a series of 108 specific case profiles. The VISION instrument was more rapid than the AutoVue, with a mean test time of 27.9 min compared to 36 min; for the various test-type comparisons, the TAT data obtained from VISION were shorter than those from AutoVue. Moreover, VISION analysed urgent STAT samples faster. Regarding ease of use, VISION was intuitive and user friendly. VISION is a robust, reproducible system performing most types of analytical determinations needed for pre-transfusion testing today, thus accommodating a wide range of clinical needs. VISION brings appreciated new features that could further secure blood transfusions. © 2017 The Authors. Transfusion Medicine published by John Wiley & Sons Ltd on behalf of British Blood Transfusion Society.

  4. Methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    Mazumdar, M.; Marshall, J.A.; Chay, S.C.; Gay, R.

    1976-07-01

    In February 1975, Westinghouse Electric Corporation, under contract to the Electric Power Research Institute, started a one-year program to develop methodology for the statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors, and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates, first for a simple thermal-hydraulic problem whose output variable of interest is given in a closed-form relationship of the input variables, and then for a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of this report is to document the results of the investigations completed under these tasks, giving the rationale for the choices of techniques and problems, and to present interim conclusions.

  5. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

    Science.gov (United States)

    Dyer, Betsey D.; Kahn, Michael J.; LeBlanc, Mark D.

    2008-01-01

    Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results. PMID:19054742
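
    A minimal sketch of the CART step with toy data: a decision tree classifying temperature optimum from tetranucleotide frequencies. The four tetramer columns and the rule generating the labels are invented for illustration.

```python
# Minimal sketch: decision tree on tetranucleotide frequencies.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(10)
n = 195
freq = rng.dirichlet(np.ones(4), n)           # toy frequencies of 4 tetramers
tetramers = ["GAGA", "AGGA", "ACGT", "TTAA"]
thermophile = (freq[:, 0] + freq[:, 1] > 0.55).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(freq, thermophile)
print(export_text(tree, feature_names=tetramers))
```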

  6. Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.

    Science.gov (United States)

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in cases where there are both multicollinearity and outliers in the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set.

  7. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

    Science.gov (United States)

    Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

    2015-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R² yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R² yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the SEs. Land use regression models performed better in chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.

  8. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy

    International Nuclear Information System (INIS)

    Shuke, Noriyuki

    1991-01-01

    In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters like the hepatic extraction or excretion rate, has been used for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), which requires iterative calculation and initial estimates for the unknown parameters. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without initial estimates, is proposed and tested for its applicability to the analysis of hepatobiliary scintigraphy. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a 1-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values which were used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight to minimize the error. When using this weight, the DILS method could give parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method could provide reliable parameter estimates which are relatively insensitive to data noise. In conclusion, the DILS method could be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
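
    A sketch of the DILS idea for a 1-compartment model dL/dt = K1·B(t) - k2·L(t): integrating both sides gives L(t) = K1·∫B - k2·∫L, which is linear in (K1, k2), so a single weighted linear least-squares solve recovers the parameters with no initial estimates. The input curve, parameters and 1/t weighting below follow the abstract, but the numbers are hypothetical.

```python
# Minimal sketch: direct integral linear least squares (DILS).
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.1, 30, 120)                 # minutes
B = np.exp(-0.15 * t)                         # hypothetical blood input curve
K1_true, k2_true = 0.5, 0.08

L = np.zeros_like(t)                          # simulate the liver curve
for i in range(1, len(t)):
    L[i] = L[i-1] + (t[i] - t[i-1]) * (K1_true * B[i-1] - k2_true * L[i-1])
L += np.random.default_rng(11).normal(0, 0.01, len(t))

intB = cumulative_trapezoid(B, t, initial=0)
intL = cumulative_trapezoid(L, t, initial=0)
w = np.sqrt(1 / t)                            # 1/t weighting, as found best
A = np.column_stack([intB, -intL]) * w[:, None]
K1, k2 = np.linalg.lstsq(A, L * w, rcond=None)[0]
print(f"K1 = {K1:.3f}, k2 = {k2:.3f}")        # no initial estimates needed
```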

  9. Evaluation of an optoacoustic based gas analysing device

    Science.gov (United States)

    Markmann, Janine; Lange, Birgit; Theisen-Kunde, Dirk; Danicke, Veit; Mayorov, Fedor; Eckert, Sebastian; Kettmann, Pascal; Brinkmann, Ralf

    2017-07-01

    The relative occurrence of volatile organic compounds in human respiratory gas is disease-specific (ppb range). A prototype of a gas analysing device using two tuneable laser systems, an OPO laser (2.5 to 10 μm) and a CO2 laser (9 to 11 μm), and an optoacoustic measurement cell was developed to detect concentrations in the ppb range. The sensitivity and resolution of the system were determined by test gas measurements, measuring ethylene and sulfur hexafluoride with the CO2 laser and butane with the OPO laser. System sensitivity was found to be 13 ppb for sulfur hexafluoride and 17 ppb for ethylene. Respiratory gas samples of 8 healthy volunteers were investigated by irradiation with 17 laser lines of the CO2 laser. Several of those lines overlap with strong absorption bands of ammonia. As it is known that ammonia concentration increases with age, a separation of people under and over 35 years was striven for. To evaluate the data, the first seven gas samples were used to train a discriminant analysis algorithm. The eighth subject was then assigned correctly to the group >35 years with the age of 49 years.

  10. An attempt to evaluate some regression models used for radiometric ash determination in the brown coal

    International Nuclear Information System (INIS)

    Karamuz, S.; Urbanski, P.; Antoniak, W.; Wagner, D.

    1984-01-01

    Five different regression models for determination of the ash as well as iron and calcium contents in brown coal using fluorescence and scattering of X-rays have been evaluated. Calculations were done using experimental results obtained from natural brown coal samples to which appropriate quantities of iron, calcium and silicon oxides were added. The secondary radiation was excited by a Pu-238 source and detected by an argon-filled X-ray proportional counter. The investigation has shown the superiority of the multiparametric models over radiometric ash determination in pit-coal applying an aluminium filter for the correction of the influence of iron content on the intensity of scattered radiation. The standard error of estimation for the best algorithm is about three times smaller than that for the algorithm simulating application of the aluminium filter. Statistical parameters of the considered algorithms were reviewed and discussed. (author)

  11. Secure and Efficient Regression Analysis Using a Hybrid Cryptographic Framework: Development and Evaluation.

    Science.gov (United States)

    Sadat, Md Nazmus; Jiang, Xiaoqian; Aziz, Md Momin Al; Wang, Shuang; Mohammed, Noman

    2018-03-05

    Machine learning is an effective data-driven tool that is being widely used to extract valuable patterns and insights from data. Specifically, predictive machine learning models are very important in health care for clinical data analysis. The machine learning algorithms that generate predictive models often require pooling data from different sources to discover statistical patterns or correlations among different attributes of the input data. The primary challenge is to fulfill one major objective: preserving the privacy of individuals while discovering knowledge from data. Our objective was to develop a hybrid cryptographic framework for performing regression analysis over distributed data in a secure and efficient way. Existing secure computation schemes are not suitable for processing the large-scale data that are used in cutting-edge machine learning applications. We designed, developed, and evaluated a hybrid cryptographic framework, which can securely perform regression analysis, a fundamental machine learning algorithm, using somewhat homomorphic encryption and a newly introduced secure hardware component, Intel Software Guard Extensions (Intel SGX), to ensure both privacy and efficiency at the same time. Experimental results demonstrate that our proposed method provides a better trade-off in terms of security and efficiency than solely secure hardware-based methods. Moreover, there is no approximation error: computed model parameters are identical to the plaintext results. To the best of our knowledge, this kind of secure computation model using a hybrid cryptographic framework, which leverages both somewhat homomorphic encryption and Intel SGX, has not been proposed or evaluated to date. Our proposed framework ensures data security and computational efficiency at the same time. ©Md Nazmus Sadat, Xiaoqian Jiang, Md Momin Al Aziz, Shuang Wang, Noman Mohammed. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 05.03.2018.

  12. Coefficient shifts in geographical ecology: an empirical evaluation of spatial and non-spatial regression

    DEFF Research Database (Denmark)

    Bini, L. M.; Diniz-Filho, J. A. F.; Rangel, T. F. L. V. B.

    2009-01-01

    A major focus of geographical ecology and macroecology is to understand the causes of spatially structured ecological patterns. However, achieving this understanding can be complicated when using multiple regression, because the relative importance of explanatory variables, as measured by regress...

  13. Predictors of success of external cephalic version and cephalic presentation at birth among 1253 women with non-cephalic presentation using logistic regression and classification tree analyses.

    Science.gov (United States)

    Hutton, Eileen K; Simioni, Julia C; Thabane, Lehana

    2017-08-01

    Among women with a fetus with a non-cephalic presentation, external cephalic version (ECV) has been shown to reduce the rate of breech presentation at birth and cesarean birth. Compared with ECV at term, beginning ECV prior to 37 weeks' gestation decreases the number of infants in a non-cephalic presentation at birth. The purpose of this secondary analysis was to investigate factors associated with a successful ECV procedure and to present this in a clinically useful format. Data were collected as part of the Early ECV Pilot and Early ECV2 Trials, which randomized 1776 women with a fetus in breech presentation to either early ECV (34-36 weeks' gestation) or delayed ECV (at or after 37 weeks). The outcome of interest was successful ECV, defined as the fetus being in a cephalic presentation immediately following the procedure, as well as at the time of birth. The importance of several factors in predicting successful ECV was investigated using two statistical methods: logistic regression and classification and regression tree (CART) analyses. Among nulliparas, non-engagement of the presenting part and an easily palpable fetal head were independently associated with success. Among multiparas, non-engagement of the presenting part, gestation less than 37 weeks and an easily palpable fetal head were found to be independent predictors of success. These findings were consistent with results of the CART analyses. Regardless of parity, descent of the presenting part was the most discriminating factor in predicting successful ECV and cephalic presentation at birth. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  14. Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.

    Science.gov (United States)

    Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia

    2016-01-01

    To determine the sensitivity, specificity and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both mean defect (MD) and pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods; Quasar, GPA II and four experts. The sensitivity, specificity and agreement (kappa) for each method was calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields of 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, sensitivity and specificity of Quasar, GPA II and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6% respectively. When suspected cases of progression were considered as progressing, sensitivity and specificity of Quasar, GPA II and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6% respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28 respectively. The degree of agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The degree of agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.
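
    A minimal sketch of the Quasar-style criterion on a hypothetical series: linear regression of both MD and PSD against follow-up time, flagging progression when the MD slope worsens significantly (the real program combines both indices and defines 'suspected' categories).

```python
# Minimal sketch: trend regression of visual field indices over time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
years = np.arange(8) * 0.75                   # ~8 fields per eye
md = -3.0 - 0.4 * years + rng.normal(0, 0.3, 8)
psd = 4.0 + 0.3 * years + rng.normal(0, 0.3, 8)

md_fit = stats.linregress(years, md)
psd_fit = stats.linregress(years, psd)
progressing = md_fit.slope < 0 and md_fit.pvalue < 0.05
print(f"MD slope {md_fit.slope:.2f} dB/yr (p={md_fit.pvalue:.3f}), "
      f"PSD slope {psd_fit.slope:.2f} dB/yr -> progressing: {progressing}")
```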

  15. Evaluation of Regression and Neuro_Fuzzy Models in Estimating Saturated Hydraulic Conductivity

    Directory of Open Access Journals (Sweden)

    J. Behmanesh

    2015-06-01

    Study of soil hydraulic properties such as saturated and unsaturated hydraulic conductivity is required in environmental investigations. Despite numerous studies, measuring saturated hydraulic conductivity by direct methods is still costly, time consuming and demands expertise. Therefore, estimating saturated hydraulic conductivity with acceptable accuracy using rapid and low-cost methods such as pedo-transfer functions has been developed. The purpose of this research was to compare and evaluate 11 pedo-transfer functions and an Adaptive Neuro-Fuzzy Inference System (ANFIS) for estimating the saturated hydraulic conductivity of soil. To this end, saturated hydraulic conductivity and physical properties were measured at 40 points in Urmia. The excavated soil was used in the lab to determine its easily accessible parameters. The results showed that, among the existing models, the Aimrun et al. model gave the best estimate of soil saturated hydraulic conductivity. For the mentioned model, the Root Mean Square Error and Mean Absolute Error were 0.174 and 0.028 m/day, respectively. The results of the present research emphasise the importance of effective porosity as an important, easily accessible parameter for the accuracy of pedo-transfer functions. Sand and silt percent, bulk density and soil particle density were selected as inputs for 561 ANFIS models. In the training phase of the best ANFIS model, R² and RMSE were 1 and 1.2×10⁻⁷, respectively. These values in the test phase were 0.98 and 0.0006, respectively. Comparison of the regression and ANFIS models showed that the ANFIS model gave better results than the regression functions. The Neuro-Fuzzy Inference System was also capable of estimating with high accuracy in various soil textures.

  16. Evaluating effects of developmental education for college students using a regression discontinuity design.

    Science.gov (United States)

    Moss, Brian G; Yeaton, William H

    2013-10-01

    Annually, American colleges and universities provide developmental education (DE) to millions of underprepared students; however, evaluation estimates of DE benefits have been mixed. Using a prototypic exemplar of DE, our primary objective was to investigate the utility of a replicative evaluative framework for assessing program effectiveness. Within the context of the regression discontinuity (RD) design, this research examined the effectiveness of a DE program for five, sequential cohorts of first-time college students. Discontinuity estimates were generated for individual terms and cumulatively, across terms. Participants were 3,589 first-time community college students. DE program effects were measured by contrasting both college-level English grades and a dichotomous measure of pass/fail, for DE and non-DE students. Parametric and nonparametric estimates of overall effect were positive for continuous and dichotomous measures of achievement (grade and pass/fail). The variability of program effects over time was determined by tracking results within individual terms and cumulatively, across terms. Applying this replication strategy, DE's overall impact was modest (an effect size of approximately .20) but quite consistent, based on parametric and nonparametric estimation approaches. A meta-analysis of five RD results yielded virtually the same estimate as the overall, parametric findings. Subset analysis, though tentative, suggested that males benefited more than females, while academic gains were comparable for different ethnicities. The cumulative, within-study comparison, replication approach offers considerable potential for the evaluation of new and existing policies, particularly when effects are relatively small, as is often the case in applied settings.

  17. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    Science.gov (United States)

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions.
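
    A sketch of the pretreatment-plus-PLS pipeline on synthetic spectra: Savitzky-Golay second-derivative filtering followed by PLS regression, judged by cross-validated residual error rather than R² alone, as the abstract recommends. Band positions, noise levels and component counts are invented.

```python
# Minimal sketch: Savitzky-Golay second derivative, then PLS regression.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(14)
wavelengths = np.linspace(1100, 2500, 700)
protein = rng.uniform(9, 16, 120)             # reference protein values
band = np.exp(-((wavelengths - 2050) / 60)**2)
spectra = protein[:, None] * band + rng.normal(0, 0.05, (120, 700))

d2 = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)
pred = cross_val_predict(PLSRegression(n_components=6), d2, protein, cv=10)
rmsecv = np.sqrt(np.mean((pred.ravel() - protein)**2))
print(f"RMSECV = {rmsecv:.2f}")               # judge by residual error, not R^2
```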

  18. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    International Nuclear Information System (INIS)

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and the results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  19. Installation and performance evaluation of an indigenous surface area analyser

    International Nuclear Information System (INIS)

    Pillai, S.N.; Solapurkar, M.N.; Venkatesan, V.; Prakash, A.; Khan, K.B.; Kumar, Arun; Prasad, R.S.

    2014-01-01

    An indigenously available surface area analyser was installed inside a glove box and checked for its performance by analysing uranium oxide and thorium oxide powders at RMD. The unit has been made ready for analysis of plutonium oxide powders after incorporating several important features. (author)

  20. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis.

    Science.gov (United States)

    Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O

    2017-03-05

    The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in the pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach for the analysis of a PAH (anthracene) and chiral-PAH analogue derivatives (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and the enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-squares (PLS) regression analysis of the emission spectra of the Me-β-CD guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD guest-host inclusion complex samples. The values of the calculated Kb and the negative ΔG suggest the thermodynamic favorability of the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, the anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity behavior and thermodynamic properties. The PLS regression analysis resulted in square correlation coefficients of 0.997530 or better and low LODs of 3.81×10⁻⁷ M for anthracene and 3.48×10⁻⁸ M for the TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with an average low error of 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of the high sensitivity and accuracy for the analysis of PAH and chiral-PAH analogue derivatives without the need for an expensive chiral column or enantiomeric resolution.

  1. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    Science.gov (United States)

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical, widely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of the rodent offspring of females orally exposed to bisphenol-A while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time that they employed to manifest an attitude was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A). Only bisphenol-A-exposed males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, 95% C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P < 0.05). Overall, these findings indicate that low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  2. A Rubric for Evaluating Student Analyses of Business Cases

    Science.gov (United States)

    Riddle, Emma Jane; Smith, Marilyn; Frankforter, Steven A.

    2016-01-01

    This article presents a rubric for evaluating student performance on written case assignments that require qualitative analysis. This rubric is designed for three purposes. First, it informs students of the criteria on which their work will be evaluated. Second, it provides instructors with a reliable instrument for accurately measuring and…

  3. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed by applying the regression equations to published Cmax data. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. The Cmax versus AUCinf relationship exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a low RMSE; with external data, the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy of using Cmax (i.e. at the end of the 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
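
    A minimal sketch of the single-point Cmax-to-AUC regressions on synthetic pairs (power and log-log coincide in this parameterization, so three fits stand in for the paper's four), evaluated by RMSE and the observed/predicted fold difference:

```python
# Minimal sketch: Cmax -> AUC regressions compared by RMSE and fold difference.
import numpy as np

rng = np.random.default_rng(15)
cmax = rng.uniform(200, 400, 21)              # 21 subject pairs
auc = 60 * cmax**0.9 * rng.lognormal(0, 0.05, 21)

models = {
    "linear":        (cmax, auc, lambda a, b, x: a + b * x),
    "log-linear":    (cmax, np.log(auc), lambda a, b, x: np.exp(a + b * x)),
    "power/log-log": (np.log(cmax), np.log(auc),
                      lambda a, b, x: np.exp(a + b * np.log(x))),
}
for name, (X, Y, predict) in models.items():
    b, a = np.polyfit(X, Y, 1)                # slope, intercept
    pred = predict(a, b, cmax)
    fold = auc / pred
    rmse = np.sqrt(np.mean((auc - pred)**2))
    print(f"{name:13s} RMSE={rmse:9.1f} fold {fold.min():.2f}-{fold.max():.2f}")
```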

  4. EVALUATION OF A VACUUM DISTILLER FOR PERFORMING METHOD 8261 ANALYSES

    Science.gov (United States)

    Vacuum distillation uses a specialized apparatus. This apparatus has been developed and patented by the EPA. Through the Federal Technology Transfer Act this invention has been made available for commercialization. Available vendors for this instrumentation are being evaluated. ...

  5. Exploring reasons for the observed inconsistent trial reports on intra-articular injections with hyaluronic acid in the treatment of osteoarthritis: Meta-regression analyses of randomized trials.

    Science.gov (United States)

    Johansen, Mette; Bahrt, Henriette; Altman, Roy D; Bartels, Else M; Juhl, Carsten B; Bliddal, Henning; Lund, Hans; Christensen, Robin

    2016-08-01

    The aim was to identify factors explaining inconsistent observations concerning the efficacy of intra-articular hyaluronic acid compared to intra-articular sham/control, or non-intervention control, in patients with symptomatic osteoarthritis, based on randomized clinical trials (RCTs). A systematic review and meta-regression analyses of available randomized trials were conducted. The outcome, pain, was assessed according to a pre-specified hierarchy of potentially available outcomes. Hedges's standardized mean difference [SMD (95% CI)] served as the effect size. REstricted Maximum Likelihood (REML) mixed-effects models were used to combine study results, and heterogeneity was calculated and interpreted as Tau-squared and I-squared, respectively. Overall, 99 studies (14,804 patients) met the inclusion criteria; of these, only 71 studies (72%), including 85 comparisons (11,216 patients), had adequate data available for inclusion in the primary meta-analysis. Overall, compared with placebo, intra-articular hyaluronic acid reduced pain with an effect size of -0.39 [-0.47 to -0.31]. Based on available trial data, intra-articular hyaluronic acid showed a better effect than intra-articular saline on pain reduction in osteoarthritis. Publication bias and the risk of selective outcome reporting suggest only a small clinical effect compared to saline. Copyright © 2016 Elsevier Inc. All rights reserved.
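
    The review pools Hedges' SMDs with REML mixed-effects models. As a hedged illustration of the pooling step only, the sketch below uses the simpler DerSimonian-Laird estimator (not the paper's REML procedure) on invented effect sizes:

```python
import numpy as np

def dersimonian_laird(smd, se):
    """Pool standardized mean differences under a random-effects model,
    using the DerSimonian-Laird tau^2 estimator as a simple stand-in
    for the REML fit described in the abstract."""
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1 / se**2                                   # fixed-effect weights
    pooled_fe = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - pooled_fe) ** 2)          # Cochran's Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    w_re = 1 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se_p = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled - 1.96 * se_p, pooled + 1.96 * se_p, tau2, i2

# Illustrative SMDs and standard errors -- not the 85 comparisons of the review.
est, lo, hi, tau2, i2 = dersimonian_laird(
    smd=[-0.50, -0.30, -0.45, -0.20], se=[0.12, 0.10, 0.15, 0.08])
print(f"SMD {est:.2f} [{lo:.2f}; {hi:.2f}], tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```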

  6. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  7. Comparative evaluation of left ventricular mass regression after aortic valve replacement: a prospective randomized analysis

    Directory of Open Access Journals (Sweden)

    Kiessling Arndt H

    2011-10-01

    Full Text Available Abstract Background We assessed the hemodynamic performance of various prostheses and the clinical outcomes after aortic valve replacement in different age groups. Methods One hundred and twenty patients with isolated aortic valve stenosis were included in this prospective randomized trial and allocated in three age groups to receive either a pulmonary autograft (PA, n = 20) or a mechanical prosthesis (MP, Edwards Mira, n = 20) in group 1 (age < 60), a stentless bioprosthesis or a MP in group 2 (age 60-75), and a stentless or a stented bioprosthesis in group 3 (age > 75). Clinical outcomes and hemodynamic performance were evaluated at discharge, six months and one year. Results In group 1, patients with PA had significantly lower mean gradients than those with the MP (2.6 vs. 10.9 mmHg, p = 0.0005) with comparable left ventricular mass regression (LVMR). Morbidity included 1 stroke in the PA population and 1 gastrointestinal bleeding in the MP subgroup. In group 2, mean gradients did not differ significantly between the two populations (7.0 vs. 8.9 mmHg, p = 0.81). The rate of LVMR and EF were comparable at 12 months; each group had one mortality. Morbidity included 1 stroke and 1 gastrointestinal bleeding in the stentless group and 3 bleeding complications in the MP group. In group 3, mean gradients did not differ significantly (7.8 vs. 6.5 mmHg, p = 0.06). Postoperative EF and LVMR were comparable. There were 3 deaths in the stented group and no mortality in the stentless group. Morbidity included 1 endocarditis and 1 stroke in the stentless group compared to 1 endocarditis, 1 stroke and one pulmonary embolism in the stented group. Conclusions Clinical outcomes justify valve replacement with either valve substitute in the respective age groups. The PA hemodynamically outperformed the MPs. Stentless valves, however, did not demonstrate significantly superior hemodynamics or outcomes in comparison to stented bioprostheses or MPs.

  8. Testing contingency hypotheses in budgetary research: An evaluation of the use of moderated regression analysis

    NARCIS (Netherlands)

    Hartmann, Frank G.H.; Moers, Frank

    1999-01-01

    In the contingency literature on the behavioral and organizational effects of budgeting, use of the Moderated Regression Analysis (MRA) technique is prevalent. This technique is used to test contingency hypotheses that predict interaction effects between budgetary and contextual variables. …

  9. An evaluation of regression methods to estimate nutritional condition of canvasbacks and other water birds

    Science.gov (United States)

    Sparling, D.W.; Barzen, J.A.; Lovvorn, J.R.; Serie, J.R.

    1992-01-01

    Regression equations that use mensural data to estimate body condition have been developed for several water birds. These equations often have been based on data that represent different sexes, age classes, or seasons, without being adequately tested for intergroup differences. We used proximate carcass analysis of 538 adult and juvenile canvasbacks (Aythya valisineria) collected during fall migration, winter, and spring migration in 1975-76 and 1982-85 to test regression methods for estimating body condition.

  10. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    Science.gov (United States)

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
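
    A minimal sketch of the Firth-penalized logistic regression that the abstract calls the FL method: a NumPy-only Newton iteration with the Jeffreys-prior bias correction. The data are simulated here, not the Huntington disease data of the study:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior penalty).
    Minimal Newton-iteration sketch; X must include an intercept column."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-(X @ beta)))        # fitted probabilities
        W = mu * (1 - mu)
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        Xw = X * np.sqrt(W)[:, None]
        h = np.einsum('ij,jk,ik->i', Xw, XtWX_inv, Xw)  # leverages of hat matrix
        # Firth-adjusted score: the h*(1/2 - mu) term corrects small-sample bias
        score = X.T @ (y - mu + h * (0.5 - mu))
        step = XtWX_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Simulated example: one gene's expression vs case/control status.
rng = np.random.default_rng(1)
expr = rng.normal(size=60)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * expr))))
X = np.column_stack([np.ones(60), expr])
print(firth_logistic(X, y))   # [intercept, log-odds per unit expression]
```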

  11. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

    Science.gov (United States)

    Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

    2017-03-28

    This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥40 years, mean BMI ≥30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤6 months) and long term (≥12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long-term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short-term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour, predicted the effect at short and long term. The total number of BCTs in both intervention arms and use of the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy-supportive and person-centred methods, as in Motivational Interviewing, the BCT goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between-study variation in effects at long term. There are similarities, but also differences, in effective BCTs promoting change in healthy eating and physical activity…
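
    As a hedged sketch of the meta-regression step described above, effect sizes can be regressed on a moderator (here an invented "number of BCTs" covariate) using inverse-variance weights that include a between-study variance term; a fixed tau² stands in for the REML estimate a full analysis would use:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative effect sizes, within-study variances and a moderator (number
# of BCTs unique to the intervention) -- not the review's 82 outcome reports.
es = np.array([0.10, 0.25, 0.30, 0.45, 0.20, 0.55])
var = np.array([0.020, 0.030, 0.015, 0.040, 0.025, 0.030])
n_bcts = np.array([2, 4, 5, 8, 3, 9])

tau2 = 0.01                       # between-study variance; assumed, not estimated
X = sm.add_constant(n_bcts)
fit = sm.WLS(es, X, weights=1 / (var + tau2)).fit()
print(fit.params)                 # slope: change in ES per additional BCT
print(fit.pvalues)
```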

  12. Evaluating Non-Linear Regression Models in Analysis of Persian Walnut Fruit Growth

    Directory of Open Access Journals (Sweden)

    I. Karamatlou

    2016-02-01

    Full Text Available Introduction: Persian walnut (Juglans regia L.) is a large, wind-pollinated, monoecious, dichogamous, long-lived, perennial tree cultivated for its high-quality wood and nuts throughout the temperate regions of the world. Growth model methodology has been widely used in the modeling of plant growth. Mathematical models are important tools to study plant growth and agricultural systems, and can be applied for decision-making and designing management procedures in horticulture. Through growth analysis, planning for planting systems, fertilization, pruning operations and harvest time, as well as obtaining economical yield, can be made more accessible. Non-linear models are more difficult to specify and estimate than linear models. This research aimed to study non-linear regression models based on data obtained from fruit weight, length and width; selecting the best models to explain the inherent fruit growth pattern of Persian walnut was a further goal of this study. Materials and Methods: The experimental material comprised 14 Persian walnut genotypes propagated by seed, collected from a walnut orchard in Golestan province, Minoudasht region, Iran (latitude 37°04'N, longitude 55°32'E, altitude 1060 m), in a silt loam soil type. These genotypes were selected as a representative sample of the many walnut genotypes available throughout northeastern Iran. The age range of the walnut trees was 30 to 50 years. The annual mean temperature at the location is 16.3°C, with an annual mean rainfall of 690 mm. The data used here are the averages of walnut fresh fruit measurements, in gram/millimeter/day, taken in 2011. According to the data distribution pattern, several equations have been proposed to describe sigmoidal growth patterns. Here, we used double-sigmoid and logistic-monomolecular models to evaluate fruit growth based on fruit weight, and four different regression models including Richards, Gompertz, Logistic and Exponential growth for evaluation…
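
    A hedged sketch of fitting two of the named growth forms (Logistic and Gompertz) with non-linear least squares; the fruit-weight series below is synthetic, not the measured walnut data, and the parameterizations are common textbook forms that may differ from the authors' exact equations:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, k):
    """Logistic growth: a / (1 + exp(-k*(t - b)))."""
    return a / (1 + np.exp(-k * (t - b)))

def gompertz(t, a, b, k):
    """Gompertz growth: a * exp(-b * exp(-k*t))."""
    return a * np.exp(-b * np.exp(-k * t))

# Illustrative fruit weights (g) over days after flowering -- synthetic.
t = np.arange(0, 120, 10.0)
w = gompertz(t, 14, 5, 0.06) + np.random.default_rng(2).normal(0, 0.2, t.size)

for name, f, p0 in [("logistic", logistic, (14, 60, 0.10)),
                    ("gompertz", gompertz, (14, 5, 0.05))]:
    popt, _ = curve_fit(f, t, w, p0=p0, maxfev=10_000)
    rss = np.sum((w - f(t, *popt)) ** 2)     # residual sum of squares
    print(name, popt.round(3), f"RSS = {rss:.2f}")
```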

  13. Evaluation of the divided attention condition during functional analyses.

    Science.gov (United States)

    Fahmie, Tara A; Iwata, Brian A; Harper, Jill M; Querim, Angie C

    2013-01-01

    A common condition included in most functional analyses (FAs) is the attention condition, in which the therapist ignores the client by engaging in a solitary activity (antecedent event) but delivers attention to the client contingent on problem behavior (consequent event). The divided attention condition is similar, except that the antecedent event consists of the therapist conversing with an adult confederate. We compared the typical and divided attention conditions to determine whether behavior in general (Study 1) and problem behavior in particular (Study 2) were more sensitive to one of the test conditions. Results showed that the divided attention condition resulted in faster acquisition or more efficient FA results for 2 of 9 subjects, suggesting that the divided attention condition could be considered a preferred condition when resources are available. © Society for the Experimental Analysis of Behavior.

  14. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    Science.gov (United States)

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
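
    A minimal, hypothetical example of the univariable linear regression the article introduces; the data and variable names are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic example: systolic blood pressure (mmHg) vs age (years).
rng = np.random.default_rng(3)
age = rng.uniform(30, 70, 100)
sbp = 100 + 0.8 * age + rng.normal(0, 8, 100)

X = sm.add_constant(age)              # intercept + slope model
fit = sm.OLS(sbp, X).fit()
print(fit.params)                     # slope: mmHg change per year of age
print(fit.conf_int(alpha=0.05))       # 95% CIs, as reported in publications
print(f"R^2 = {fit.rsquared:.2f}")
```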

  15. Dialogues with social robots: enablements, analyses, and evaluation

    CERN Document Server

    Wilcock, Graham

    2017-01-01

    This book explores novel aspects of social robotics, spoken dialogue systems, human-robot interaction, spoken language understanding, multimodal communication, and system evaluation. It offers a variety of perspectives on and solutions to the most important questions about advanced techniques for social robots and chat systems. Chapters by leading researchers address key research and development topics in the field of spoken dialogue systems, focusing in particular on three special themes: dialogue state tracking, evaluation of human-robot dialogue in social robotics, and socio-cognitive language processing. The book offers a valuable resource for researchers and practitioners in both academia and industry whose work involves advanced interaction technology and who are seeking an up-to-date overview of the key topics. It also provides supplementary educational material for courses on state-of-the-art dialogue system technologies, social robotics, and related research fields.

  16. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been a controversial issue among many studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results of total suspended solids (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R² and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  17. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    …the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios, these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
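
    A hedged sketch of the Poisson regression baseline the authors compare against: new infections per sampling interval are modeled with a log link and the classic S*I/N exposure term as an offset, so the exponentiated intercept estimates the transmission rate. The counts below are invented:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative counts of new infections per sampling interval (synthetic),
# with susceptible (S) and infectious (I) counts in a population of N.
new_cases = np.array([2, 5, 9, 14, 11, 6, 3])
S = np.array([98, 95, 89, 78, 65, 55, 50])
I = np.array([2, 5, 10, 20, 28, 30, 28])
N, dt = 100.0, 1.0

exposure = S * I / N * dt                       # expected-contact exposure
model = sm.GLM(new_cases, np.ones((len(new_cases), 1)),
               family=sm.families.Poisson(), offset=np.log(exposure))
fit = model.fit()
# With log(E[cases]) = beta0 + log(exposure), exp(beta0) is the rate.
print(f"estimated transmission rate = {np.exp(fit.params[0]):.3f}")
```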

  18. Object-oriented fault tree evaluation program for quantitative analyses

    Science.gov (United States)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.

  19. Introduction: Jahrbuch Medienpädagogik 5. Evaluation and Analysis

    Directory of Open Access Journals (Sweden)

    Ben Bachmair

    2017-09-01

    Full Text Available Evaluation methods for e-learning, research methods for media use, and the analysis of media cultures are the subject of this Medienpädagogik yearbook. The media covered range from television and video to the Internet. The choice of this thematic focus is no accident. On the one hand, scholarly media research has a long tradition and is established in many disciplines. Research methods have provided media-pedagogical practice with empirically grounded results and instruments. Research designs and methodologies have been developed to explain media use behaviour and media effects, and evaluation methods have been employed to assess learning processes. On the other hand, quality assurance and performance measurement are becoming ever more important throughout the education system. The results of current evaluations reflect efforts to ensure the quality of educational and learning processes. Empirical research has the task of verifying and monitoring the effectiveness and goal attainment of media education. Quantitative research methods make it possible to determine the media use behaviour of particular target groups in interaction with programme structures and content. Patterns of media use, especially among children and adolescents, are also important for the development of media-pedagogical approaches. Qualitative media research serves as a planning instrument for media-pedagogical conceptions and for media-pedagogical practice, and helps in developing new offerings oriented more strongly than before towards content-related criteria. One of the central questions was, and remains, which cognitive and aesthetic forms of presentation best communicate the content of a programme. This is at the same time one of the fundamental questions regarding the use of media in pedagogically oriented learning processes (cf. Dichanz 1998).

  20. Synthetic biology analysed: tools for discussion and evaluation

    CERN Document Server

    2016-01-01

    Synthetic biology is a dynamic, young, ambitious, attractive, and heterogeneous scientific discipline. It is constantly developing and changing, which makes societal evaluation of this emerging new science a challenging task, prone to misunderstandings. Synthetic biology is difficult to capture, and confusion arises not only regarding which part of synthetic biology the discussion is about, but also with respect to the underlying concepts in use. This book offers a useful toolbox to approach this complex and fragmented field. It provides a biological access to the discussion using a 'layer' model that describes the connectivity of synthetic or semisynthetic organisms and cells to the realm of natural organisms derived by evolution. Instead of directly reviewing the field as a whole, firstly our book addresses the characteristic features of synthetic biology that are relevant to the societal discussion. Some of these features apply only to parts of synthetic biology, whereas others are relevant to synthetic bi...

  1. STACE: source term analyses for containment evaluations of transport casks

    International Nuclear Information System (INIS)

    Seager, K.D.; Gianoulakis, S.E.; Barrett, P.R.; Rashid, Y.R.; Reardon, P.C.

    1993-01-01

    STACE evaluates the calculated fuel rod response against failure criteria based on the cladding residual ductility and fracture properties as functions of irradiation and thermal environments. The fuel rod gap inventory contains three forms of releasable RAM: (1) gaseous, e.g., 85Kr, (2) volatiles, e.g., 134Cs and 137Cs, and (3) actinides associated with fuel fines. The quantities of these products are limited to that contained within the fuel-cladding gap region and associated interconnected voids. Cladding pinhole failure will also result in the ejection of about 0.003 percent of the fuel, in the form of fines, into the cask cavity. Significant attenuation of the aerosol concentration in the transport cask can occur, depending upon the residence time of the aerosol in the cask compared with its rate of escape from the cask into the environment. (J.P.N.)

  2. STACE: Source Term Analyses for Containment Evaluations of transport casks

    International Nuclear Information System (INIS)

    Seager, K.D.; Gianoulakis, S.E.; Barrett, P.R.; Rashid, Y.R.; Reardon, P.C.

    1992-01-01

    Following the guidance of ANSI N14.5, the STACE methodology provides a technically defensible means for estimating maximum permissible leakage rates. These containment criteria attempt to reflect the true radiological hazard by performing a detailed examination of the spent fuel, CRUD, and residual contamination contributions to the releasable source term. The evaluation of the spent fuel contribution to the source term has been modeled fairly accurately using the STACE methodology. The structural model predicts the cask drop load history, the mechanical response of the fuel assembly, and the probability of cladding breach. These data are then used to predict the amount of fission gas, volatile species, and fuel fines that are releasable from the cask. There are some areas where data are sparse or lacking (e.g., the quantity and size distribution of fuel rod breaches) in which experimental validation is planned. The CRUD spallation fraction is the major area where no quantitative data has been found; therefore, this also requires experimental validation. In the interim, STACE conservatively assumes a 100% spallation fraction for computing the releasable activity. The source term methodology also conservatively assumes that there is 1 Ci of residual contamination available for release in the transport cask. However, residual contamination is still by far the smallest contributor to the source term activity

  3. Economic evaluation of algae biodiesel based on meta-analyses

    Science.gov (United States)

    Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.

    2017-08-01

    The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from the original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors in making algae-derived biodiesel economically viable.

  4. A regression technique for evaluation and quantification for water quality parameters from remote sensing data

    International Nuclear Information System (INIS)

    Whitlock, C.H.; Kuo, C.Y.

    1979-01-01

    The paper attempts to define the optical physics and/or environmental conditions under which linear multiple regression should be applicable. Investigation of the signal response shows that the exact solution for a number of optical physics conditions is of the same form as a linearized multiple-regression equation, even if nonlinear contributions from surface reflections, atmospheric constituents, or other water pollutants are included. Limitations on achieving this type of solution are defined. Laboratory data are used to demonstrate that the technique is applicable to water mixtures which contain constituents with both linear and nonlinear radiance gradients. Finally, it is concluded that instrument noise, ground-truth placement, and time lapse between remote sensor overpass and water sampling operations are serious barriers to successful use of the technique

  5. Regression analysis: An evaluation of the influences behind the pricing of beer

    OpenAIRE

    Eriksson, Sara; Häggmark, Jonas

    2017-01-01

    This bachelor thesis in applied mathematics is an analysis of which factors affect the pricing of beer on the Swedish market. A multiple linear regression model is created with the statistical programming language R through a study of the influence of several explanatory variables. For example, these variables include country of origin, beer style, volume sold and a Bayesian weighted mean rating from RateBeer, a popular website for beer enthusiasts. The main goal of the project is to find si...

  6. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  7. The Crash Intensity Evaluation Using General Centrality Criterions and a Geographically Weighted Regression

    Science.gov (United States)

    Ghadiriyan Arani, M.; Pahlavani, P.; Effati, M.; Noori Alamooti, F.

    2017-09-01

    Today, one of the social problems influencing the lives of many people is road traffic crashes, especially highway crashes. In this regard, this paper focuses on the highways of Atlanta, the capital and most populous city in the U.S. state of Georgia and the ninth largest metropolitan area in the United States. Geographically weighted regression and general centrality criteria are the aspects of traffic used in this article. In the first step, in order to estimate crash intensity, the dual graph must be extracted from the layout of streets and highways so that general centrality criteria can be applied. With the help of the graph produced, the criteria are: degree, PageRank, random walk, eccentricity, closeness, betweenness, clustering coefficient, eigenvector, and straightness. The crash intensity is computed for every highway by dividing the number of crashes on that highway by the total number of crashes. The criteria and crash intensities were then normalized, and the correlation between them was calculated to determine the criteria that are not dependent on each other. The proposed hybrid approach is well suited to regression problems because these effective measures lead to a more desirable output. R² values for geographically weighted regression were 0.539 using the Gaussian kernel and 0.684 using a tricube kernel. The results showed that the tricube kernel is better for modeling crash intensity.
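
    A hedged, minimal sketch of geographically weighted regression with a Gaussian kernel, as named above: one weighted least-squares fit per location, with weights decaying by distance. Coordinates, centrality values and the bandwidth are all synthetic placeholders:

```python
import numpy as np

def gwr_gaussian(coords, X, y, bandwidth):
    """Geographically weighted regression: a local WLS fit at each location,
    with Gaussian distance-decay weights. Minimal illustrative sketch."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])          # add intercept
    betas = np.empty((n, X1.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
        XtW = X1.T * w
        betas[i] = np.linalg.solve(XtW @ X1, XtW @ y)
    return betas                                   # local coefficients per site

# Illustrative data: highway centroids, two centrality measures, crash intensity.
rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, (50, 2))
centrality = rng.uniform(0, 1, (50, 2))            # e.g. betweenness, degree
y = 0.3 * centrality[:, 0] + 0.5 * centrality[:, 1] + rng.normal(0, 0.05, 50)
local_coefs = gwr_gaussian(coords, centrality, y, bandwidth=2.0)
print(local_coefs[:3])
```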

  8. THE CRASH INTENSITY EVALUATION USING GENERAL CENTRALITY CRITERIONS AND A GEOGRAPHICALLY WEIGHTED REGRESSION

    Directory of Open Access Journals (Sweden)

    M. Ghadiriyan Arani

    2017-09-01

    Full Text Available Today, one of the social problems influencing the lives of many people is road traffic crashes, especially highway crashes. In this regard, this paper focuses on the highways of Atlanta, the capital and most populous city in the U.S. state of Georgia and the ninth largest metropolitan area in the United States. Geographically weighted regression and general centrality criteria are the aspects of traffic used in this article. In the first step, in order to estimate crash intensity, the dual graph must be extracted from the layout of streets and highways so that general centrality criteria can be applied. With the help of the graph produced, the criteria are: degree, PageRank, random walk, eccentricity, closeness, betweenness, clustering coefficient, eigenvector, and straightness. The crash intensity is computed for every highway by dividing the number of crashes on that highway by the total number of crashes. The criteria and crash intensities were then normalized, and the correlation between them was calculated to determine the criteria that are not dependent on each other. The proposed hybrid approach is well suited to regression problems because these effective measures lead to a more desirable output. R² values for geographically weighted regression were 0.539 using the Gaussian kernel and 0.684 using a tricube kernel. The results showed that the tricube kernel is better for modeling crash intensity.

  9. Personal, social, and game-related correlates of active and non-active gaming among Dutch gaming adolescents: survey-based multivariable, multilevel logistic regression analyses.

    Science.gov (United States)

    Simons, Monique; de Vet, Emely; Chinapaw, Mai Jm; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

    2014-04-04

    Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games (active games) seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1) and a lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035) and a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3). Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a correlate of both active and non-active gaming.

  10. Personal, Social, and Game-Related Correlates of Active and Non-Active Gaming Among Dutch Gaming Adolescents: Survey-Based Multivariable, Multilevel Logistic Regression Analyses

    Science.gov (United States)

    de Vet, Emely; Chinapaw, Mai JM; de Boer, Michiel; Seidell, Jacob C; Brug, Johannes

    2014-01-01

    Background Playing video games contributes substantially to sedentary behavior in youth. A new generation of video games (active games) seems to be a promising alternative to sedentary games to promote physical activity and reduce sedentary behavior. At this time, little is known about correlates of active and non-active gaming among adolescents. Objective The objective of this study was to examine potential personal, social, and game-related correlates of both active and non-active gaming in adolescents. Methods A survey assessing game behavior and potential personal, social, and game-related correlates was conducted among adolescents (12-16 years, N=353) recruited via schools. Multivariable, multilevel logistic regression analyses, adjusted for demographics (age, sex and educational level of adolescents), were conducted to examine personal, social, and game-related correlates of active gaming ≥1 hour per week (h/wk) and non-active gaming >7 h/wk. Results Active gaming ≥1 h/wk was significantly associated with a more positive attitude toward active gaming (OR 5.3, CI 2.4-11.8), a less positive attitude toward non-active games (OR 0.30, CI 0.1-0.6; P=.002), a higher score on habit strength regarding gaming (OR 1.9, CI 1.2-3.2; P=.008), having brothers/sisters (OR 6.7, CI 2.6-17.1) and a lower score on game engagement (OR 0.95, CI 0.91-0.997; P=.04). Non-active gaming >7 h/wk was significantly associated with a more positive attitude toward non-active gaming (OR 2.6, CI 1.1-6.3; P=.035) and a stronger habit regarding gaming (OR 3.0, CI 1.7-5.3). Active gaming is most strongly (negatively) associated with attitude with respect to non-active games, followed by observed active game behavior of brothers and sisters and attitude with respect to active gaming (positive associations). On the other hand, non-active gaming is most strongly associated with observed non-active game behavior of friends, habit strength regarding gaming and attitude toward non-active gaming (positive associations). Habit strength was a…

  11. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.

  12. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.

  13. An evaluation of bias in propensity score-adjusted non-linear regression models.

    Science.gov (United States)

    Wan, Fei; Mitra, Nandita

    2018-03-01

    Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
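
    The non-collapsibility argument above can be made concrete with a small simulation: even with a randomized treatment and no confounding at all, the marginal odds ratio differs from the conditional one, so omitting a prognostic covariate (or the remainder term the authors describe) shifts the estimated OR. A hedged sketch with invented parameters:

```python
import numpy as np
import statsmodels.api as sm

# No confounding: treatment is randomized and independent of x, yet the
# marginal OR for treatment differs from the conditional OR (non-collapsibility).
rng = np.random.default_rng(5)
n = 200_000
treat = rng.binomial(1, 0.5, n)          # randomized treatment
x = rng.normal(size=n)                   # prognostic covariate, not a confounder
p = 1 / (1 + np.exp(-(-1 + 1.0 * treat + 1.5 * x)))
y = rng.binomial(1, p)

def treatment_or(exog):
    fit = sm.Logit(y, sm.add_constant(exog)).fit(disp=0)
    return np.exp(fit.params[1])         # OR for the treatment column

print("conditional OR:", treatment_or(np.column_stack([treat, x])))  # ~exp(1.0)
print("marginal OR:   ", treatment_or(treat))                        # attenuated
```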

  14. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    Energy Technology Data Exchange (ETDEWEB)

    Bramer, L. M.; Rounds, J.; Burleyson, C. D.; Fortin, D.; Hathaway, J.; Rice, J.; Kraucunas, I.

    2017-11-01

    Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions is examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and datasets were examined. A penalized logistic regression model fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at different time scales were examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. The methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.
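
    A rough sketch of a penalized (L1) logistic regression of grid-stress days on weather variables, in the spirit of the model described above; the features, coefficients and threshold behaviour are synthetic inventions, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative daily weather features for one operation zone (synthetic).
rng = np.random.default_rng(6)
n = 1000
tmax = rng.normal(30, 6, n)        # daily max temperature, deg C
precip = rng.exponential(2, n)     # daily precipitation, mm
wind = rng.normal(4, 1, n)         # mean wind speed, m/s
X = np.column_stack([tmax, precip, wind])
stress = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * (tmax - 33) - 0.2 * precip))))

# The L1 (lasso) penalty shrinks uninformative weather variables toward zero,
# which is how zone-specific variable importance can emerge.
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xs, stress)
print(dict(zip(["tmax", "precip", "wind"], clf.coef_[0].round(3))))
```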

  15. Logistic regression function for detection of suspicious performance during baseline evaluations using concussion vital signs.

    Science.gov (United States)

    Hill, Benjamin David; Womble, Melissa N; Rohling, Martin L

    2015-01-01

    This study utilized logistic regression to determine whether performance patterns on Concussion Vital Signs (CVS) could differentiate known groups with either genuine or feigned performance. For the embedded measure development group (n = 174), clinical patients and undergraduate students categorized as feigning obtained significantly lower scores on the overall test battery mean for the CVS, Shipley-2 composite score, and California Verbal Learning Test-Second Edition subtests than did genuinely performing individuals. The final full model of 3 predictor variables (Verbal Memory immediate hits, Verbal Memory immediate correct passes, and Stroop Test complex reaction time correct) was significant and correctly classified individuals in their known group 83% of the time (sensitivity = .65; specificity = .97) in a mixed sample of young-adult clinical cases and simulators. The CVS logistic regression function was applied to a separate undergraduate college group (n = 378) that was asked to perform genuinely and identified 5% as having possibly feigned performance indicating a low false-positive rate. The failure rate was 11% and 16% at baseline cognitive testing in samples of high school and college athletes, respectively. These findings have particular relevance given the increasing use of computerized test batteries for baseline cognitive testing and return-to-play decisions after concussion.

  16. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Engineering estimates versus impact evaluation of energy efficiency projects: Regression discontinuity evidence from a case study

    International Nuclear Information System (INIS)

    Lang, Corey; Siler, Matthew

    2013-01-01

    Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
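
    A hedged sketch of a sharp regression discontinuity estimate of the kind described above: within a bandwidth around the cutoff, consumption is regressed on a treatment indicator, the centered running variable, and their interaction, so the treatment coefficient estimates the jump at the threshold. All numbers are simulated:

```python
import numpy as np
import statsmodels.api as sm

# Simulated sharp RD: treatment switches on when the running variable
# (e.g. days relative to project completion) crosses zero.
rng = np.random.default_rng(7)
run = rng.uniform(-100, 100, 2000)           # running variable, centered at cutoff
treated = (run >= 0).astype(float)
use = 500 + 0.3 * run - 40 * treated + rng.normal(0, 15, 2000)   # kWh

bw = 50                                       # local bandwidth around the cutoff
m = np.abs(run) <= bw
X = sm.add_constant(np.column_stack([treated[m], run[m], treated[m] * run[m]]))
fit = sm.OLS(use[m], X).fit()
print(f"estimated savings at cutoff: {-fit.params[1]:.1f} kWh")  # true value: 40
```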

  18. Evaluation of Land Use Regression Models for Nitrogen Dioxide and Benzene in Four US Cities

    Directory of Open Access Journals (Sweden)

    Shaibal Mukerjee

    2012-01-01

    Full Text Available Spatial analysis studies have included the application of land use regression models (LURs) for health and air quality assessments. Recent LUR studies have collected nitrogen dioxide (NO2) and volatile organic compounds (VOCs) using passive samplers at urban air monitoring networks in El Paso and Dallas, TX, Detroit, MI, and Cleveland, OH to assess spatial variability and source influences. LURs were successfully developed to estimate pollutant concentrations throughout the study areas. Comparisons of the development and predictive capabilities of LURs from these four cities are presented to address the issue of uniform application of LURs across study areas. Traffic and other urban variables were important predictors in the LURs, although city-specific influences (such as border crossings) were also important. In addition, transferability of variables or LURs from one city to another may be problematic due to intercity differences and data availability or comparability. Thus, developing common predictors in future LURs may be difficult.

  19. Regression analysis of the structure function for reliability evaluation of continuous-state system

    International Nuclear Information System (INIS)

    Gamiz, M.L.; Martinez Miranda, M.D.

    2010-01-01

    Technical systems are designed to perform an intended task with an admissible range of efficiency. According to this idea, it is permissible that the system runs among different levels of performance, in addition to complete failure and the perfect functioning one. As a consequence, reliability theory has evolved from binary-state systems to the most general case of continuous-state system, in which the state of the system changes over time through some interval on the real number line. In this context, obtaining an expression for the structure function becomes difficult, compared to the discrete case, with difficulty increasing as the number of components of the system increases. In this work, we propose a method to build a structure function for a continuum system by using multivariate nonparametric regression techniques, in which certain analytical restrictions on the variable of interest must be taken into account. Once the structure function is obtained, some reliability indices of the system are estimated. We illustrate our method via several numerical examples.

  20. Regression analysis of the structure function for reliability evaluation of continuous-state system

    Energy Technology Data Exchange (ETDEWEB)

    Gamiz, M.L., E-mail: mgamiz@ugr.e [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain); Martinez Miranda, M.D. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)

    2010-02-15

    Technical systems are designed to perform an intended task with an admissible range of efficiency. According to this idea, it is permissible that the system runs among different levels of performance, in addition to complete failure and the perfect functioning one. As a consequence, reliability theory has evolved from binary-state systems to the most general case of continuous-state system, in which the state of the system changes over time through some interval on the real number line. In this context, obtaining an expression for the structure function becomes difficult, compared to the discrete case, with difficulty increasing as the number of components of the system increases. In this work, we propose a method to build a structure function for a continuum system by using multivariate nonparametric regression techniques, in which certain analytical restrictions on the variable of interest must be taken into account. Once the structure function is obtained, some reliability indices of the system are estimated. We illustrate our method via several numerical examples.

  1. A linear regression approach to evaluate the green supply chain management impact on industrial organizational performance.

    Science.gov (United States)

    Mumtaz, Ubaidullah; Ali, Yousaf; Petrillo, Antonella

    2018-05-15

    The increase in environmental pollution is one of the most important topics in today's world. In this context, industrial activities can pose a significant threat to the environment. Several methods, techniques and approaches have been developed to manage the problems associated with industrial activities. Green supply chain management (GSCM) is considered one of the most important environmental management approaches. In developing countries such as Pakistan, the implementation of GSCM practices is still in its initial stages. Lack of knowledge about their effects on economic performance is the reason why industries fear implementing these practices. The aim of this research is to assess the effects of GSCM practices on organizational performance in Pakistan. The GSCM practices considered in this research are: internal practices, external practices, investment recovery and eco-design. The performance parameters considered are: environmental pollution, operational cost and organizational flexibility. A set of hypotheses proposes the effect of each GSCM practice on the performance parameters. Factor analysis and linear regression are used to analyze the survey data of Pakistani industries, in order to test these hypotheses. The findings of this research indicate a decrease in environmental pollution and operational cost with the implementation of GSCM practices, whereas organizational flexibility has not improved for Pakistani industries. These results aim to help managers with their decisions on implementing GSCM practices in the industrial sector of Pakistan. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    International Nuclear Information System (INIS)

    Fang, Tingting; Lahdelma, Risto

    2016-01-01

    Highlights: • A social factor is considered in the linear regression models besides weather variables. • All coefficients of the linear regression models are optimized simultaneously. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both the linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model in which the hourly outdoor temperature and wind speed forecast the heat demand. A weekly rhythm of heat consumption, as a social component, is added to the model to significantly improve the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, which takes weather factors as exogenous inputs and the historical heat consumption data as the dependent variable. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated based on real-life heat demand data for the city of Espoo in Finland, by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models for the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption measurements.
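
    A hedged sketch of a SARIMA model with exogenous weather regressors, using statsmodels' SARIMAX; the series is synthetic, and a 24-hour seasonal order stands in for the full weekly (168-h) structure purely to keep the example fast:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly stand-ins for heat demand, outdoor temperature and wind;
# a weekday effect echoes the social component of the T168h idea.
idx = pd.date_range("2015-01-01", periods=24 * 7 * 8, freq="H")
rng = np.random.default_rng(8)
temp = 5 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 1, len(idx))
wind = np.abs(rng.normal(4, 2, len(idx)))
demand = 200 - 6 * temp + 2 * wind + 15 * (idx.dayofweek < 5) + rng.normal(0, 5, len(idx))

exog = np.column_stack([temp, wind])
model = SARIMAX(demand, exog=exog, order=(1, 0, 1),
                seasonal_order=(1, 0, 0, 24))   # 168 would be truer but slow
fit = model.fit(disp=False)
print(fit.params[:4])                           # exog betas + ARMA terms

# Forecast the next day; future exog values are required (last rows reused
# here purely as illustrative stand-ins for a weather forecast).
forecast = fit.forecast(steps=24, exog=exog[-24:])
```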

  3. Bibliometric analyses on repository contents as a library service for the evaluation of research

    NARCIS (Netherlands)

    Veller, van M.G.P.; Gerritsma, W.

    2010-01-01

    For the last two decades, the library of Wageningen University and Research (Wageningen UR) has been involved in several bibliometric analyses for the evaluation of the scientific output of staff, chair groups, research institutes and graduate schools. In these bibliometric analyses several

  4. Bibliometric analyses on repository contents for the evaluation of research at Wageningen UR

    NARCIS (Netherlands)

    Veller, van M.G.P.; Gerritsma, W.; Togt, van der P.L.; Leon, C.D.; Zeist, van C.M.

    2010-01-01

    For the last two decades, Wageningen UR Library has been involved in bibliometric analyses for the evaluation of the scientific output of staff, chair groups and research institutes of Wageningen UR. In these advanced bibliometric analyses several indicator scores, such as the number of publications,

  5. Evaluating an Organizational-Level Occupational Health Intervention in a Combined Regression Discontinuity and Randomized Control Design.

    Science.gov (United States)

    Sørensen, Ole H.

    2016-10-01

    Organizational-level occupational health interventions have great potential to improve employees' health and well-being. However, they often compare unfavourably to individual-level interventions. This calls for improving methods for designing, implementing and evaluating organizational interventions. This paper presents and discusses the regression discontinuity design because, like the randomized control trial, it is a strong summative experimental design, but it typically fits organizational-level interventions better. The paper explores advantages and disadvantages of a regression discontinuity design with an embedded randomized control trial. It provides an example from an intervention study focusing on reducing sickness absence in 196 preschools. The paper demonstrates that such a design fits the organizational context, because it allows management to focus on organizations or workgroups with the most salient problems. In addition, organizations may accept an embedded randomized design because the organizations or groups with the most salient needs receive obligatory treatment as part of the regression discontinuity design. Copyright © 2016 John Wiley & Sons, Ltd.
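
    The core of a sharp regression discontinuity estimate can be sketched as a local linear model with a treatment jump at the cutoff. Everything below (running variable, cutoff, effect size) is simulated for illustration; it is not the preschool study's data or model.

```python
# Sketch: sharp regression discontinuity, assuming workgroups are treated
# when a baseline sickness-absence score exceeds a hypothetical cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
score = rng.normal(0, 1, 500)              # running variable (baseline absence)
treated = (score >= 0.0).astype(float)     # sharp assignment at the cutoff
outcome = 1.0 * score - 0.5 * treated + rng.normal(0, 1, 500)

# Linear model with separate slopes on each side of the cutoff.
X = sm.add_constant(np.column_stack([treated, score, treated * score]))
res = sm.OLS(outcome, X).fit()
print(res.params[1])   # estimated jump at the cutoff (the treatment effect)
```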

  6. A haplotype regression approach for genetic evaluation using sequences from the 1000 bull genomes Project

    International Nuclear Information System (INIS)

    Lakhssassi, K.; González-Recio, O.

    2017-01-01

    Haplotypes from sequencing data may improve the prediction accuracy in genomic evaluations, as haplotypes are in stronger linkage disequilibrium with quantitative trait loci than markers from SNP chips. This study focuses first on the creation of haplotypes in a population sample of 450 Holstein animals with full-sequence data from the 1000 bull genomes project, and second on incorporating them into the whole genome prediction model. In total, 38,319,258 SNPs (and indels) from Next Generation Sequencing were included in the analysis. After filtering variants with minor allele frequency (MAF < 0.025), 13,912,326 SNPs were available for haplotype extraction with findhap.f90. The number of SNPs per haploblock was on average 924 (166,552 bp). Unique haplotypes made up around 97% in all chromosomes and were ignored, leaving 153,428 haplotypes. Estimated haplotypes had a large contribution to the total variance of genomic estimated breeding values for kilograms of protein, Global Type Index, Somatic Cell Score and Days Open (between 32 and 99.9%). Haploblocks containing haplotypes with large effects were selected by filtering, for each trait, haplotypes whose effect was larger or smaller than the mean plus or minus 3 standard deviations (SD), and haplotypes more than 1 SD above the mean of the haplotype effect distribution. Results showed that filtering by 3 SD would not be enough to capture a large proportion of the genetic variance, whereas filtering by 1 SD could be useful but model convergence should be considered. Additionally, sequence haplotypes were able to capture genetic variance additional to the polygenic effect for traits undergoing lower selection intensity, like fertility and health traits.
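
    The SD-based filtering rule described above reduces to a few lines of code. The sketch below uses simulated effects; the effect distribution is hypothetical.

```python
# Sketch: keep haploblocks whose estimated effect lies more than k standard
# deviations from the mean of the effect distribution (effects simulated).
import numpy as np

rng = np.random.default_rng(3)
effects = rng.normal(0.0, 0.02, 153_428)   # one effect per retained haplotype

def select_haplotypes(effects: np.ndarray, k: float) -> np.ndarray:
    mu, sd = effects.mean(), effects.std()
    return np.flatnonzero(np.abs(effects - mu) > k * sd)

print(len(select_haplotypes(effects, 3.0)))  # strict: captures little variance
print(len(select_haplotypes(effects, 1.0)))  # looser: more variance retained
```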

  7. A haplotype regression approach for genetic evaluation using sequences from the 1000 bull genomes Project

    Energy Technology Data Exchange (ETDEWEB)

    Lakhssassi, K.; González-Recio, O.

    2017-07-01

    Haplotypes from sequencing data may improve the prediction accuracy in genomic evaluations, as haplotypes are in stronger linkage disequilibrium with quantitative trait loci than markers from SNP chips. This study focuses first on the creation of haplotypes in a population sample of 450 Holstein animals with full-sequence data from the 1000 bull genomes project, and second on incorporating them into the whole genome prediction model. In total, 38,319,258 SNPs (and indels) from Next Generation Sequencing were included in the analysis. After filtering variants with minor allele frequency (MAF < 0.025), 13,912,326 SNPs were available for haplotype extraction with findhap.f90. The number of SNPs per haploblock was on average 924 (166,552 bp). Unique haplotypes made up around 97% in all chromosomes and were ignored, leaving 153,428 haplotypes. Estimated haplotypes had a large contribution to the total variance of genomic estimated breeding values for kilograms of protein, Global Type Index, Somatic Cell Score and Days Open (between 32 and 99.9%). Haploblocks containing haplotypes with large effects were selected by filtering, for each trait, haplotypes whose effect was larger or smaller than the mean plus or minus 3 standard deviations (SD), and haplotypes more than 1 SD above the mean of the haplotype effect distribution. Results showed that filtering by 3 SD would not be enough to capture a large proportion of the genetic variance, whereas filtering by 1 SD could be useful but model convergence should be considered. Additionally, sequence haplotypes were able to capture genetic variance additional to the polygenic effect for traits undergoing lower selection intensity, like fertility and health traits.

  8. Development of the evaluation methods in reactor safety analyses and core characteristics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    In order to support the safety reviews by the NRA of reactor safety design, including phenomena with multiple failures, computer codes are developed and safety evaluations with analyses are performed in the areas of thermal hydraulics and core characteristics evaluation. In the code preparation for safety analyses, the TRACE and RELAP5 codes were prepared to conduct the safety analyses of LOCA and beyond-design-basis accidents with multiple failures. In the core physics code preparation, sensitivity and uncertainty analysis functions were incorporated into the lattice physics code CASMO-4. The verification of the improved CASMO-4/SIMULATE-3 was continued by using core physics data. (author)

  9. U.S. Army Armament Research, Development and Engineering Center Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries

    Science.gov (United States)

    2017-10-01

    Report front matter only; the available record text repeats the report title and the distribution statement ("Approved for public release; distribution is unlimited") and contains no abstract.

  10. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  11. Heterogeneity index evaluated by slope of linear regression on 18F-FDG PET/CT as a prognostic marker for predicting tumor recurrence in pancreatic ductal adenocarcinoma

    International Nuclear Information System (INIS)

    Kim, Yong-il; Kim, Yong Joong; Paeng, Jin Chul; Cheon, Gi Jeong; Lee, Dong Soo; Chung, June-Key; Kang, Keon Wook

    2017-01-01

    18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) has been investigated as a method to predict pancreatic cancer recurrence after pancreatic surgery. We evaluated the recently introduced heterogeneity indices of 18F-FDG PET/CT for predicting pancreatic cancer recurrence after surgery and compared them with current clinicopathologic and 18F-FDG PET/CT parameters. A total of 93 pancreatic ductal adenocarcinoma patients (M:F = 60:33, mean age = 64.2 ± 9.1 years) who underwent preoperative 18F-FDG PET/CT followed by pancreatic surgery were retrospectively enrolled. The standardized uptake values (SUVs) and tumor-to-background ratios (TBR) were measured on each 18F-FDG PET/CT as metabolic parameters. Metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were examined as volumetric parameters. The coefficient of variance (heterogeneity index-1; the standard deviation divided by SUVmean) and linear regression slopes (heterogeneity index-2) of the MTV, according to SUV thresholds of 2.0, 2.5 and 3.0, were evaluated as heterogeneity indices. Predictive values of clinicopathologic and 18F-FDG PET/CT parameters and heterogeneity indices were compared in terms of pancreatic cancer recurrence. Seventy patients (75.3%) showed recurrence after pancreatic cancer surgery (mean time to recurrence = 9.4 ± 8.4 months). Comparing the recurrence and no-recurrence patients, all of the 18F-FDG PET/CT parameters and heterogeneity indices demonstrated significant differences. In univariate Cox regression analyses, MTV (P = 0.013), TLG (P = 0.007), and heterogeneity index-2 (P = 0.027) were significant. Among the clinicopathologic parameters, CA19-9 (P = 0.025) and venous invasion (P = 0.002) were selected as significant parameters. In multivariate Cox regression analyses, MTV (P = 0.005), TLG (P = 0.004), and heterogeneity index-2 (P = 0.016) with venous invasion (P < 0.001, 0.001, and 0.001, respectively) demonstrated significant results
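
    Heterogeneity index-2, as described, is the slope of a linear fit of MTV against the SUV threshold used to define it. A minimal sketch with invented MTV values:

```python
# Sketch: slope-based heterogeneity index from MTVs at three SUV thresholds.
import numpy as np

suv_thresholds = np.array([2.0, 2.5, 3.0])
mtv = np.array([34.1, 21.7, 14.9])   # MTV (cm^3) at each threshold, invented

slope, intercept = np.polyfit(suv_thresholds, mtv, deg=1)
print(f"heterogeneity index-2 (slope): {slope:.1f} cm^3 per SUV unit")
```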

  12. Univariate and multiple linear regression analyses for 23 single nucleotide polymorphisms in 14 genes predisposing to chronic glomerular diseases and IgA nephropathy in Han Chinese.

    Science.gov (United States)

    Wang, Hui; Sui, Weiguo; Xue, Wen; Wu, Junyong; Chen, Jiejing; Dai, Yong

    2014-09-01

    Immunoglobulin A nephropathy (IgAN) is a complex trait regulated by the interaction among multiple physiologic regulatory systems and probably involving numerous genes, which leads to inconsistent findings in genetic studies. One possible reason for the failure to replicate some single-locus results is that the underlying genetics of IgAN is based on multiple genes with minor effects. To examine the association between 23 single nucleotide polymorphisms (SNPs) in 14 genes predisposing to chronic glomerular diseases and IgAN in Han males, the genotypes of the 23 SNPs in 21 Han males were detected and analyzed with a BaiO gene chip, and their associations were analyzed with univariate analysis and multiple linear regression analysis. The analysis showed that CTLA4 rs231726 and CR2 rs1048971 revealed a significant association with IgAN. These findings support the multi-gene nature of the etiology of IgAN and propose a potential gene-gene interactive model for future studies.

  13. Using Poisson regression to compare rates of unsatisfactory pap smears among gynecologists and to evaluate a performance improvement plan.

    Science.gov (United States)

    Wachtel, Mitchell S; Hatley, Warren G; de Riese, Cornelia

    2009-01-01

    To evaluate the impact of a performance improvement (PI) plan implemented after an initial analysis comparing 7 gynecologists working in 2 clinics. From January to October 2005, unsatisfactory rates for gynecologists and clinics were calculated. A PI plan was instituted at the end of the first quarter of 2006. Unsatisfactory rates for each quarter of 2006 and the first quarter of 2007 were calculated. Poisson regression was used to analyze the results. A total of 100 ThinPrep Pap smears initially deemed unsatisfactory underwent reprocessing and re-evaluation. The study's first part evaluated 2890 smears. Clinic unsatisfactory rates, 2.7% and 2.6%, were similar (p > 0.05). Gynecologists' unsatisfactory rates ranged from 4.8% to 0.6%; the differences between each of the 2 highest rates and the lowest rate were significant (p < 0.05), and rates after institution of the PI plan showed improvement. Reprocessing ThinPrep smears is an important means of reducing unsatisfactory rates but should not be a substitute for attention to quality in sampling.
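
    Rate comparisons of this kind are commonly fitted as a Poisson regression with the number of smears read as the exposure. The sketch below uses simulated per-quarter counts and hypothetical per-gynecologist rates, not the study's data:

```python
# Sketch: Poisson regression of unsatisfactory-smear counts with exposure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
gyns, quarters = list("ABCDEFG"), 5
df = pd.DataFrame({
    "gyn": np.repeat(gyns, quarters),
    "total": rng.integers(80, 120, quarters * len(gyns)),  # smears read
})
rate = dict(zip(gyns, [0.048, 0.040, 0.030, 0.020, 0.015, 0.010, 0.006]))
df["unsat"] = rng.poisson(df["total"] * df["gyn"].map(rate))

# Coefficients are log rate ratios relative to gynecologist A.
fit = smf.glm("unsat ~ gyn", data=df, exposure=df["total"],
              family=sm.families.Poisson()).fit()
print(fit.summary())
```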

  14. Meta-regression analyses to explain statistical heterogeneity in a systematic review of strategies for guideline implementation in primary health care.

    Directory of Open Access Journals (Sweden)

    Susanne Unverzagt

    This study is an in-depth analysis to explain statistical heterogeneity in a systematic review of implementation strategies to improve guideline adherence of primary care physicians in the treatment of patients with cardiovascular diseases. The systematic review included randomized controlled trials from a systematic search in MEDLINE, EMBASE, CENTRAL, conference proceedings and registers of ongoing studies. Implementation strategies were shown to be effective, with substantial heterogeneity of treatment effects across all investigated strategies. The primary aim of this study was to explain the different effects of eligible trials and to identify methodological and clinical effect modifiers. Random effects meta-regression models were used to simultaneously assess the influence of multimodal implementation strategies and effect modifiers on physician adherence. Effect modifiers included the staff responsible for implementation, level of prevention and definition of the primary outcome, unit of randomization, duration of follow-up and risk of bias. Six clinical and methodological factors were investigated as potential effect modifiers of the efficacy of different implementation strategies on guideline adherence in primary care practices, on the basis of information from 75 eligible trials. Five effect modifiers were able to explain a substantial amount of the statistical heterogeneity. Physician adherence was improved by 62% (95% confidence interval (95% CI) 29 to 104%) or 29% (95% CI 5 to 60%) in trials where other non-medical professionals or nurses, respectively, were included in the implementation process. Improvement of physician adherence was more successful in primary and secondary prevention of cardiovascular diseases, by around 30% (30%, 95% CI -2 to 71%, and 31%, 95% CI 9 to 57%, respectively), compared to tertiary prevention. This study aimed to identify effect modifiers of implementation strategies on physician adherence. Especially the cooperation of different health
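
    A meta-regression of study effect sizes on a moderator can be sketched as inverse-variance weighted least squares. The study above uses random effects models; adding an estimated between-study variance tau² to each study variance would give that analogue. All numbers below are invented:

```python
# Sketch: fixed-effect meta-regression via inverse-variance weighted OLS.
import numpy as np
import statsmodels.api as sm

log_rr = np.array([0.48, 0.26, 0.10, 0.35, 0.05])   # per-study log risk ratios
var = np.array([0.04, 0.02, 0.05, 0.03, 0.06])      # their variances
nurse_involved = np.array([1, 1, 0, 1, 0])          # moderator: nurses in team

X = sm.add_constant(nurse_involved)
fit = sm.WLS(log_rr, X, weights=1.0 / var).fit()
print(fit.params)  # slope: change in log adherence ratio with nurse involvement
```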

  15. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC. Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a

  16. The Use of a Poisson Regression to Evaluate Antihistamines and Fatal Aircraft Mishaps in Instrument Meteorological Conditions.

    Science.gov (United States)

    Gildea, Kevin M; Hileman, Christy R; Rogers, Paul; Salazar, Guillermo J; Paskoff, Lawrence N

    2018-04-01

    Research indicates that first-generation antihistamine usage may impair pilot performance by increasing the likelihood of vestibular illusions, spatial disorientation, and/or cognitive impairment. Second- and third-generation antihistamines generally have fewer impairing side effects and are approved for pilot use. We hypothesized that toxicological findings positive for second- and third-generation antihistamines are less likely to be associated with pilots involved in fatal mishaps than findings positive for first-generation antihistamines. The evaluated population consisted of 1475 U.S. civil pilots fatally injured between September 30, 2008, and October 1, 2014. Mishap factors evaluated included year, weather conditions, airman rating, recent airman flight time, quarter of year, and time of day. Due to the low prevalence of positive antihistamine findings, a count-based model was selected, which can account for rare outcomes. The means and variances were close for both regression models, supporting the assumption that the data follow a Poisson distribution: first-generation antihistamine mishap airmen (N = 582, M = 0.17, S2 = 0.17) and second- and third-generation antihistamine mishap airmen (N = 116, M = 0.20, S2 = 0.18). The data indicate that fewer airmen with second- and third-generation antihistamines than with first-generation antihistamines in their system are fatally injured while flying in instrument meteorological conditions (IMC). Whether the lower incidence is a factor of greater usage of first-generation antihistamines versus second- and third-generation antihistamines by the pilot population, or of fewer deleterious side effects with second- and third-generation antihistamines, is unclear. These results engender cautious optimism, but additional research is necessary to determine why these differences exist. Gildea KM, Hileman CR, Rogers P, Salazar GJ, Paskoff LN. The use of a Poisson regression to evaluate antihistamines and fatal aircraft mishaps in instrument meteorological conditions. Aerosp Med Hum Perform

  17. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    Science.gov (United States)

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. Using the database from PTClinResNet, we applied 2 separate secondary statistical approaches, mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials, to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations. Variables included in the statistical models were limited to those

  18. Analysing and evaluating the task of automatic tweet generation: Knowledge to business

    OpenAIRE

    Lloret, Elena; Palomar, Manuel

    2016-01-01

    In this paper a study concerning the evaluation and analysis of natural language tweets is presented. Based on our experience in text summarisation, we carry out a deep analysis of users' perception through the evaluation of tweets manually and automatically generated from news. Specifically, we consider two key aspects of a tweet: its informativeness and its interestingness. Therefore, we analyse: (1) do users perceive manual and automatic tweets equally?; (2) what linguistic features a good tw...

  19. Evaluation of Induced Settlements of Piled Rafts in the Coupled Static-Dynamic Loads Using Neural Networks and Evolutionary Polynomial Regression

    Directory of Open Access Journals (Sweden)

    Ali Ghorbani

    2017-01-01

    Coupled Piled Raft Foundations (CPRFs) are broadly applied to share the heavy loads of superstructures between piles and rafts and to reduce total and differential settlements. Settlements induced by static/coupled static-dynamic loads are one of the main concerns of engineers in designing CPRFs. Evaluation of the induced settlements of CPRFs has commonly been carried out using three-dimensional finite element/finite difference modeling or through expensive real-scale/prototype model tests. Since the analyses, especially in the case of coupled static-dynamic loads, are not simple to conduct, this paper presents two practical methods to obtain the values of settlement. First, different nonlinear finite difference models under different static and coupled static-dynamic loads are developed to calculate the exerted settlements. Analyses are performed with respect to different axial loads and pile configurations, numbers, lengths, diameters, and spacings for both loading cases. Based on the results of well-validated three-dimensional finite difference modeling, artificial neural networks and evolutionary polynomial regressions are then applied and introduced as capable methods to accurately estimate both static and coupled static-dynamic settlements. Also, using a sensitivity analysis based on the Cosine Amplitude Method, axial load is identified as the most influential parameter, while the ratio l/d is reported as the least effective parameter for the settlements of CPRFs.

  20. A histological evaluation and in vivo assessment of intratumoral near infrared photothermal nanotherapy-induced tumor regression

    Directory of Open Access Journals (Sweden)

    Green HN

    2014-11-01

    Hadiyah N Green, Stephanie D Crockett, Dmitry V Martyshkin, Karan P Singh, William E Grizzle, Eben L Rosenthal, Sergey B Mirov (The University of Alabama at Birmingham, Birmingham, AL, USA). Purpose: Nanoparticle (NP)-enabled near infrared (NIR) photothermal therapy has realized limited success in in vivo studies as a potential localized cancer therapy. This is primarily due to a lack of successful methods that can prevent NP uptake by the reticuloendothelial system, especially the liver and kidney, and deliver sufficient quantities of intravenously injected NPs to the tumor site. Histological evaluation of photothermal therapy-induced tumor regression is also neglected in the current literature. This report demonstrates and histologically evaluates the in vivo potential of NIR photothermal therapy by circumventing the challenges of intravenous NP delivery and tumor targeting found in other photothermal therapy studies. Methods: Subcutaneous Cal 27 squamous cell carcinoma xenografts received photothermal nanotherapy treatments: radial injections of polyethylene glycol (PEG)-ylated gold nanorods and one NIR 785 nm laser irradiation for 10 minutes at 9.5 W/cm2. Tumor response was measured for 10-15 days, gross changes in tumor size were evaluated, and the remaining tumors or scar tissues were excised and histologically analyzed. Results: The single treatment of intratumoral nanorod injections followed by a 10 minute NIR laser treatment, also known as photothermal nanotherapy, resulted in ~100% tumor regression in ~90% of treated tumors, which was statistically significant in a

  1. Evaluating the performance of different predictor strategies in regression-based downscaling with a focus on glacierized mountain environments

    Science.gov (United States)

    Hofer, Marlis; Nemec, Johanna

    2016-04-01

    This study presents first steps towards verifying the hypothesis that uncertainty in global and regional glacier mass simulations can be reduced considerably by reducing the uncertainty in the high-resolution atmospheric input data. To this aim, we systematically explore the potential of different predictor strategies for improving the performance of regression-based downscaling approaches. The investigated local-scale target variables are precipitation, air temperature, wind speed, relative humidity and global radiation, all at a daily time scale. Observations of these target variables are assessed from three sites in geo-environmentally and climatologically very distinct settings, all within highly complex topography and in the close proximity to mountain glaciers: (1) the Vernagtbach station in the Northern European Alps (VERNAGT), (2) the Artesonraju measuring site in the tropical South American Andes (ARTESON), and (3) the Brewster measuring site in the Southern Alps of New Zealand (BREWSTER). As the large-scale predictors, ERA interim reanalysis data are used. In the applied downscaling model training and evaluation procedures, particular emphasis is put on appropriately accounting for the pitfalls of limited and/or patchy observation records that are usually the only (if at all) available data from the glacierized mountain sites. Generalized linear models and beta regression are investigated as alternatives to ordinary least squares regression for the non-Gaussian target variables. By analyzing results for the three different sites, five predictands and for different times of the year, we look for systematic improvements in the downscaling models' skill specifically obtained by (i) using predictor data at the optimum scale rather than the minimum scale of the reanalysis data, (ii) identifying the optimum predictor allocation in the vertical, and (iii) considering multiple (variable, level and/or grid point) predictor options combined with state

  2. Hong Kong Hospital Authority resource efficiency evaluation: Via a novel DEA-Malmquist model and Tobit regression model.

    Science.gov (United States)

    Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung

    2017-01-01

    The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, the Hong Kong Hospital Authority (HKHA) has been making efforts to improve its healthcare services, but there still exist some problems, like unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is the low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, the HKHA also suffers from the conflict between limited resources and growing demand. An effective evaluation of the HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency for reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency with a Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that a public hospital serving a richer district has a relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic conditions prefer receiving higher quality service from the private hospitals.
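
    The efficiency-measurement phase rests on data envelopment analysis. Below is a minimal sketch of the classic input-oriented CCR model solved as a linear program with scipy; the Malmquist index then compares such scores across years. The hospital inputs and outputs are invented, and the undesirable factors of the paper's novel model are omitted.

```python
# Sketch: input-oriented CCR DEA efficiency scores via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[120, 300], [90, 260], [150, 400], [80, 200]], float).T  # inputs
Y = np.array([[5000], [4800], [5200], [3900]], float).T                # outputs
m, n = X.shape            # m inputs, n DMUs (hospitals)
s = Y.shape[0]            # s outputs

def ccr_efficiency(j0: int) -> float:
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[:, [j0]], X]           # sum(lambda*x) <= theta * x_j0
    A_out = np.c_[np.zeros((s, 1)), -Y]    # sum(lambda*y) >= y_j0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for j in range(n):
    print(f"hospital {j}: efficiency = {ccr_efficiency(j):.3f}")
```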

  3. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  4. Regression Analysis

    CERN Document Server

    Freund, Rudolf J; Sa, Ping

    2006-01-01

    The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow them to determine, at least to some degree, the correct type of statistical analysis to be performed in a given situation, and to have some appreciation of what constitutes good experimental design

  5. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    Science.gov (United States)

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
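
    The two change metrics can be sketched as small functions. The normative constants in the example (practice effect, SD, test-retest reliability) are invented; in practice they would come from the study's published tables.

```python
# Sketch: practice-adjusted RCI and SRB change scores with invented norms.
import math

def practice_adjusted_rci(score_t1, score_t2, practice_effect, sd_t1, r_tt):
    """Observed change minus the mean practice effect, divided by the
    standard error of the difference."""
    se_diff = sd_t1 * math.sqrt(2 * (1 - r_tt))
    return (score_t2 - score_t1 - practice_effect) / se_diff

def srb_z(score_t2, predicted_t2, se_estimate):
    """Retest score compared with the retest predicted from baseline (and
    covariates) by a regression on the normative sample."""
    return (score_t2 - predicted_t2) / se_estimate

# Example: baseline 95, retest 103, practice effect +4, SD 15, r_tt = 0.80.
print(practice_adjusted_rci(95, 103, 4, 15, 0.80))  # ~0.42: not reliable change
```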

  6. Reliable Change Indices and Standardized Regression-Based Change Score Norms for Evaluating Neuropsychological Change in Children with Epilepsy

    Science.gov (United States)

    Busch, Robyn M.; Lineweaver, Tara T.; Ferguson, Lisa; Haut, Jennifer S.

    2015-01-01

    Reliable change index scores (RCIs) and standardized regression-based change score norms (SRBs) permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRBs for use in children with epilepsy. Sixty-three children with epilepsy (age range 6–16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice-adjusted RCIs and SRBs were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children’s Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. RCIs and SRBs for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRBs for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers. PMID:26043163

  7. Evaluation of heat transfer mathematical models and multiple linear regression to predict the inside variables in semi-solar greenhouse

    Directory of Open Access Journals (Sweden)

    M Taki

    2017-05-01

    Introduction: Controlling the greenhouse microclimate not only influences the growth of plants, but is also critical for the spread of diseases inside the greenhouse. The microclimate parameters are inside air, greenhouse roof and soil temperature, relative humidity and solar radiation intensity. Predicting the microclimate conditions inside a greenhouse and enabling the use of automatic control systems are the two main objectives of a greenhouse climate model. The microclimate inside a greenhouse can be predicted by conducting experiments or by simulation. Static and dynamic models are used for this purpose as a function of the meteorological conditions and the parameters of the greenhouse components. Several studies up to 2015 simulated and predicted the inside variables in different greenhouse structures. Simulation, however, often struggles to predict the inside climate of a greenhouse, and the reported errors in the literature are high. The main objective of this paper is a comparison between heat transfer and regression models for predicting inside air and roof temperature in a semi-solar greenhouse at Tabriz University. Materials and Methods: In this study, a semi-solar greenhouse was designed and constructed in the North-West of Iran in Azerbaijan Province (38°10′ N, 46°18′ E, elevation 1364 m above sea level). The shape and orientation of the greenhouse were selected from common greenhouse shapes so as to receive maximum solar radiation throughout the year. An internal thermal screen and a cement north wall were used to store heat and prevent heat loss during the cold period of the year; hence we call this structure a 'semi-solar' greenhouse. It was covered with glass (4 mm thickness) and occupies a surface area of approximately 15.36 m2 with a volume of 26.4 m3. The orientation of this greenhouse was East-West, perpendicular to the prevailing wind direction
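
    The regression alternative evaluated in the paper can be sketched as an ordinary multiple linear regression of inside temperature on outside weather variables. The data, coefficients and column names below are simulated assumptions, not the Tabriz measurements:

```python
# Sketch: multiple linear regression predicting inside air temperature.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "t_out": rng.normal(15, 8, 500),       # outside air temperature (C)
    "radiation": rng.gamma(2, 150, 500),   # solar radiation (W/m^2)
    "wind": rng.gamma(2, 1.5, 500),        # wind speed (m/s)
})
df["t_in"] = (5 + 0.8 * df["t_out"] + 0.01 * df["radiation"]
              - 0.3 * df["wind"] + rng.normal(0, 1, 500))

Xtr, Xte, ytr, yte = train_test_split(df[["t_out", "radiation", "wind"]],
                                      df["t_in"], random_state=0)
model = LinearRegression().fit(Xtr, ytr)
print("MAE (C):", mean_absolute_error(yte, model.predict(Xte)))
```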

  8. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  9. Risk factors for subclinical intramammary infection in dairy goats in two longitudinal field studies evaluated by Bayesian logistic regression

    DEFF Research Database (Denmark)

    Koop, Gerrit; Collar, Carol A.; Toft, Nils

    2013-01-01

    Identification of risk factors for subclinical intramammary infections (IMI) in dairy goats should contribute to improved udder health. Intramammary infection may be diagnosed by bacteriological culture or by somatic cell count (SCC) of a milk sample. Both bacteriological culture and SCC are imperfect tests, particularly lacking sensitivity, which leads to misclassification and thus to biased estimates of odds ratios in risk factor studies. The objective of this study was to evaluate risk factors for the true (latent) IMI status of major pathogens in dairy goats. We used Bayesian logistic regression models that accounted for imperfect measurement of IMI by both culture and SCC. Udder half milk samples were collected from 530 Dutch and 438 California dairy goats in 10 herds on 3 occasions during lactation. Udder halves were classified as positive or negative for isolation of a major pathogen
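
    A minimal sketch, in PyMC, of a logistic regression for a latent infection status measured by an imperfect test: observed positivity mixes the true infection probability with the test's sensitivity (Se) and specificity (Sp). The covariate, priors and simulated data are illustrative assumptions, not the study's model.

```python
# Sketch: misclassification-adjusted Bayesian logistic regression.
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
parity = rng.integers(1, 6, 300).astype(float)        # hypothetical covariate
true_imi = rng.random(300) < 1 / (1 + np.exp(-(-2.0 + 0.4 * parity)))
culture = np.where(true_imi,
                   rng.random(300) < 0.70,            # Se = 0.70
                   rng.random(300) < 0.05).astype(int)  # 1 - Sp = 0.05

with pm.Model():
    b0 = pm.Normal("b0", 0.0, 2.0)
    b1 = pm.Normal("b1", 0.0, 1.0)
    se = pm.Beta("Se", 14, 6)        # prior centred near 0.70
    sp = pm.Beta("Sp", 19, 1)        # prior centred near 0.95
    p_true = pm.math.invlogit(b0 + b1 * parity)
    p_obs = se * p_true + (1 - sp) * (1 - p_true)  # misclassification layer
    pm.Bernoulli("culture_pos", p=p_obs, observed=culture)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
```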

  10. Evaluation of fracture mechanics analyses used in RPV integrity assessment regarding brittle fracture

    International Nuclear Information System (INIS)

    Moinereau, D.; Faidy, C.; Valeta, M.P.; Bhandari, S.; Guichard, D.

    1997-01-01

    Electricite de France has conducted in recent years experimental and numerical research programmes in order to evaluate the fracture mechanics analyses used in nuclear reactor pressure vessel structural integrity assessment regarding the risk of brittle fracture. These programmes included cleavage fracture tests on large-scale cladded specimens containing subclad flaws, with their interpretations by 2D and 3D numerical computations, and validation of finite element codes for pressurized thermal shock analyses. Four cladded specimens made of ferritic steel A508 C13 with stainless steel cladding, and containing shallow subclad flaws, were tested in four-point bending at very low temperature in order to obtain cleavage failure. Specimen failure was obtained in each case in the base metal by cleavage fracture. These tests have been interpreted by two-dimensional and three-dimensional finite element computations using different fracture mechanics approaches (elastic analysis with specific plasticity corrections, elastic-plastic analysis, local approach to cleavage fracture). The failure of the specimens is conservatively predicted by the different analyses. The comparison between the elastic analyses and elastic-plastic analyses shows the conservatism of the specific plasticity corrections used in French RPV elastic analyses. Numerous finite element calculations have also been performed between EDF, CEA and Framatome in order to compare and validate several fracture mechanics post-processors implemented in the finite element programmes used in pressurized thermal shock analyses. This work includes two-dimensional numerical computations on specimens with different geometries and loadings. The comparisons show rather good agreement on the main results, allowing validation of the finite element codes and their post-processors. (author). 11 refs, 24 figs, 3 tabs

  11. Evaluation of fracture mechanics analyses used in RPV integrity assessment regarding brittle fracture

    Energy Technology Data Exchange (ETDEWEB)

    Moinereau, D [Electricite de France, Dept. MTC, Moret-sur-Loing (France); Faidy, C [Electricite de France, SEPTEN, Villeurbanne (France); Valeta, M P [Commissariat a l'Energie Atomique, Dept. DMT, Gif-sur-Yvette (France); Bhandari, S; Guichard, D [Societe Franco-Americaine de Constructions Atomiques (FRAMATOME), 92 - Paris-La-Defense (France)]

    1997-09-01

    Electricite de France has conducted in recent years experimental and numerical research programmes in order to evaluate the fracture mechanics analyses used in nuclear reactor pressure vessel structural integrity assessment regarding the risk of brittle fracture. These programmes included cleavage fracture tests on large-scale cladded specimens containing subclad flaws, with their interpretations by 2D and 3D numerical computations, and validation of finite element codes for pressurized thermal shock analyses. Four cladded specimens made of ferritic steel A508 C13 with stainless steel cladding, and containing shallow subclad flaws, were tested in four-point bending at very low temperature in order to obtain cleavage failure. Specimen failure was obtained in each case in the base metal by cleavage fracture. These tests have been interpreted by two-dimensional and three-dimensional finite element computations using different fracture mechanics approaches (elastic analysis with specific plasticity corrections, elastic-plastic analysis, local approach to cleavage fracture). The failure of the specimens is conservatively predicted by the different analyses. The comparison between the elastic analyses and elastic-plastic analyses shows the conservatism of the specific plasticity corrections used in French RPV elastic analyses. Numerous finite element calculations have also been performed between EDF, CEA and Framatome in order to compare and validate several fracture mechanics post-processors implemented in the finite element programmes used in pressurized thermal shock analyses. This work includes two-dimensional numerical computations on specimens with different geometries and loadings. The comparisons show rather good agreement on the main results, allowing validation of the finite element codes and their post-processors. (author). 11 refs, 24 figs, 3 tabs.

  12. Development and evaluation of a regression-based model to predict cesium-137 concentration ratios for saltwater fish

    International Nuclear Information System (INIS)

    Pinder, John E.; Rowan, David J.; Smith, Jim T.

    2016-01-01

    Data from published studies and World Wide Web sources were combined to develop a regression model to predict 137Cs concentration ratios for saltwater fish. Predictions were developed from 1) numeric trophic levels, computed primarily from random resampling of known food items, and 2) K concentrations in saltwater, for 65 samplings of 41 different species from both the Atlantic and Pacific Oceans. A number of different models were initially developed and evaluated for accuracy, which was assessed as the ratio of independently measured concentration ratios to those predicted by the model. In contrast to freshwater systems, where K concentrations are highly variable and are an important factor affecting fish concentration ratios, the less variable K concentrations in saltwater were relatively unimportant in affecting concentration ratios. As a result, the simplest model, which used only trophic level as a predictor, had accuracy comparable to more complex models that also included K concentrations. A test of model accuracy involving comparisons of 56 published concentration ratios from 51 species of marine fish to those predicted by the model indicated that 52 of the predicted concentration ratios were within a factor of 2 of the observed concentration ratios. - Highlights: • We developed a model to predict concentration ratios (Cr) for saltwater fish. • The model requires only a single input variable to predict Cr. • That variable is a mean numeric trophic level available at fishbase.org. • The K concentrations in seawater were not an important predictor variable. • The median predicted-to-observed ratio for 56 independently measured Cr values was 0.83.
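
    The resulting model is essentially a one-predictor regression of log concentration ratio on trophic level. A sketch with invented numbers, including the factor-of-2 accuracy check used above:

```python
# Sketch: log-linear regression of 137Cs concentration ratio on trophic level.
import numpy as np
import statsmodels.api as sm

trophic = np.array([2.8, 3.1, 3.4, 3.7, 4.0, 4.3, 4.5])        # invented
log_cr = np.array([1.55, 1.70, 1.78, 1.92, 2.05, 2.16, 2.21])  # log10(Cr)

fit = sm.OLS(log_cr, sm.add_constant(trophic)).fit()
pred_cr = 10 ** fit.predict(np.array([[1.0, 3.9]]))  # [const, trophic level]
print(f"predicted Cr at trophic level 3.9: {pred_cr[0]:.0f}")
# "Within a factor of 2" accuracy check: 0.5 <= observed / predicted <= 2.
```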

  13. Applying spatial regression to evaluate risk factors for microbiological contamination of urban groundwater sources in Juba, South Sudan

    Science.gov (United States)

    Engström, Emma; Mörtberg, Ulla; Karlström, Anders; Mangold, Mikael

    2017-06-01

    This study developed methodology for statistically assessing groundwater contamination mechanisms, focusing on microbial water pollution in low-income regions. Risk factors for faecal contamination of groundwater-fed drinking-water sources were evaluated in a case study in Juba, South Sudan. The study was based on counts of thermotolerant coliforms in water samples from 129 sources, collected by the humanitarian aid organisation Médecins Sans Frontières in 2010. The factors included hydrogeological settings, land use and socio-economic characteristics. The results showed that the residuals of a conventional probit regression model had a significant positive spatial autocorrelation (Moran's I = 3.05, I-stat = 9.28); therefore, a spatial model was developed that had better goodness-of-fit to the observations. The most significant factor in this model (p-value 0.005) was the distance from a water source to the nearest Tukul area, an area with informal settlements that lack sanitation services. It is thus recommended that future remediation and monitoring efforts in the city be concentrated in such low-income regions. The spatial model differed from the conventional approach: in contrast with the latter, lowland topography was not significant at the 5% level, as the p-value was 0.074 in the spatial model and 0.040 in the traditional model. This study showed that statistical risk-factor assessments of groundwater contamination need to consider spatial interactions when the water sources are located close to each other. Future studies might further investigate the cut-off distance that reflects spatial autocorrelation. In particular, these results motivate further research on urban groundwater quality.
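
    The residual diagnostic that motivated the spatial model can be sketched directly: compute Moran's I of the regression residuals under a spatial weights matrix. The coordinates, residuals and k-nearest-neighbour weights below are invented for illustration:

```python
# Sketch: Moran's I of model residuals with row-standardized kNN weights.
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 10, (129, 2))   # water-source locations (invented)
resid = rng.normal(0, 1, 129)           # probit-model residuals (invented)

# Binary 5-nearest-neighbour weights, then row-standardize.
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros_like(d)
knn = np.argsort(d, axis=1)[:, :5]
W[np.repeat(np.arange(129), 5), knn.ravel()] = 1.0
W /= W.sum(axis=1, keepdims=True)

# Moran's I = (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z centred.
z = resid - resid.mean()
moran_i = (len(z) / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I of residuals: {moran_i:.3f}")
```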

  14. Total-Factor Energy Efficiency (TFEE) Evaluation on Thermal Power Industry with DEA, Malmquist and Multiple Regression Techniques

    Directory of Open Access Journals (Sweden)

    Jin-Peng Liu

    2017-07-01

    Under the background of a new round of power market reform, realizing the goals of energy saving and emission reduction, reducing coal consumption and ensuring sustainable development are the key issues for the thermal power industry. With the biggest economy and energy consumption scales in the world, China should promote the energy efficiency of its thermal power industry to solve these problems. Therefore, the factors influencing the energy efficiency of the thermal power industry were identified from multiple perspectives. Based on economic, social and environmental factors, a combination model of Data Envelopment Analysis (DEA) and the Malmquist index was constructed to evaluate the total-factor energy efficiency (TFEE) of the thermal power industry. In empirical studies at the national and provincial levels, the TFEE index was factorized into the technical efficiency index (TECH), the technical progress index (TPCH), the pure efficiency index (PECH) and the scale efficiency index (SECH). The analysis showed that the TFEE was mainly determined by TECH and PECH. Meanwhile, in a panel data regression model, unit coal consumption, talents and government supervision were selected as important indexes with positive effects on TFEE in the thermal power industry. In addition, negative indexes, such as energy price and installed capacity, were also analyzed to control their undesired effects. Finally, considering the analysis results, measures for improving the energy efficiency of the thermal power industry were discussed, such as strengthening technology research and design (R&D), enforcing pollutant and emission reduction, distributing capital and labor rationally and improving government supervision. The study results and suggestions can provide references for the Chinese government and enterprises to enhance energy efficiency.

  15. To analyse a trace or not? Evaluating the decision-making process in the criminal investigation.

    Science.gov (United States)

    Bitzer, Sonja; Ribaux, Olivier; Albertini, Nicola; Delémont, Olivier

    2016-05-01

    In order to broaden our knowledge and understanding of the decision steps in the criminal investigation process, we started by evaluating the decision to analyse a trace and the factors involved in this decision step. This step is embedded in the complete criminal investigation process, which involves multiple decision and triaging steps. Considering robbery cases occurring in a geographic region during a 2-year period, we studied the factors influencing the decision to submit biological traces, sampled directly at the scene of the robbery or on collected objects, for analysis. The factors were categorised into five knowledge dimensions: strategic, immediate, physical, criminal and utility, and a decision tree analysis was carried out. Factors in each category played a role in the decision to analyse a biological trace. Interestingly, factors involving information available prior to the analysis are of importance, such as the fact that a positive result (a profile suitable for comparison) is already available in the case, or that a suspect has been identified through traditional police work before analysis. One factor that was taken into account, but was not significant, is the matrix of the trace; hence, the decision to analyse a trace is not influenced by this variable. The decision to analyse a trace is thus very complex, and many of the tested variables were taken into account. Decisions are often made on a case-by-case basis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Emergency response guide-B ECCS guideline evaluation analyses for N reactor

    International Nuclear Information System (INIS)

    Chapman, J.C.; Callow, R.A.

    1989-07-01

    INEL conducted two ECCS analyses for Westinghouse Hanford. Both analyses will assist in the evaluation of proposed changes to the N Reactor Emergency Response Guide-B (ERG-B) Emergency Core Cooling System (ECCS) guideline. The analyses were a sensitivity study for reduced-ECCS flow rates and a mechanistically determined confinement steam source for a delayed-ECCS LOCA sequence. The reduced-ECCS sensitivity study established the maximum allowable reduction in ECCS flow as a function of time after core refill for a large-break loss-of-coolant accident (LOCA) sequence in the N Reactor. The maximum allowable ECCS flow reduction is defined as the maximum flow reduction for which ECCS continues to provide adequate core cooling. The delayed-ECCS analysis established the liquid and steam break flows and enthalpies during the reflood of a hot core following a delayed ECCS injection LOCA sequence. A simulation of a large hot leg manifold break with a seven-minute ECCS injection delay was used as a representative LOCA sequence. Both analyses were performed using the RELAP5/MOD2.5 transient computer code. 13 refs., 17 figs., 3 tabs

  17. EVALUATION OF ANAEMIA USING RED CELL AND RETICULOCYTE PARAMETERS USING AUTOMATED HAEMATOLOGY ANALYSER

    Directory of Open Access Journals (Sweden)

    Vidyadhar Rao

    2016-06-01

    Use of current models of automated haematology analysers helps in calculating the haemoglobin content of mature red cells and reticulocytes and the percentages of microcytic and hypochromic red cells. This has helped clinicians reach early diagnosis and management of different haemopoietic disorders like iron deficiency anaemia, thalassaemia and anaemia of chronic disease. AIM: This study was conducted using an automated haematology analyser to evaluate anaemia using red cell and reticulocyte parameters. Three types of anaemia were evaluated: iron deficiency anaemia, anaemia of long duration and anaemia associated with chronic disease and iron deficiency. MATERIALS AND METHODS: Blood samples were collected from 287 adult patients with anaemia, differentiated according to their iron status, haemoglobinopathies and inflammatory activity: iron deficiency anaemia (n=132), anaemia of long duration (ACD, n=97) and anaemia associated with chronic disease and iron deficiency (ACD Combi, n=58). The percentages of microcytic and hypochromic red cells and the haemoglobin levels in reticulocytes and mature RBCs were calculated. The accuracy of the parameters in differentiating between the types of anaemia was analysed using receiver operating characteristic analysis. OBSERVATIONS AND RESULTS: There was no difference in parameters between the iron deficiency group and the group with anaemia of chronic disease and iron deficiency. The hypochromic red cell percentage was the best parameter for differentiating anaemia of chronic disease with or without absolute iron deficiency, with a sensitivity of 72.7% and a specificity of 70.4%. CONCLUSIONS: The red cell and reticulocyte parameters were reasonably good indicators for identifying absolute iron deficiency in anaemia of chronic disease.

  18. Assessing the validity of road safety evaluation studies by analysing causal chains.

    Science.gov (United States)

    Elvik, Rune

    2003-09-01

    This paper discusses how the validity of road safety evaluation studies can be assessed by analysing causal chains. A causal chain denotes the path through which a road safety measure influences the number of accidents. Two cases are examined. One involves chemical de-icing of roads (salting). The intended causal chain of this measure is: spread of salt --> removal of snow and ice from the road surface --> improved friction --> shorter stopping distance --> fewer accidents. A Norwegian study that evaluated the effects of salting on accident rate provides information that describes this causal chain. This information indicates that the study overestimated the effect of salting on accident rate, and suggests that this estimate is influenced by confounding variables the study did not control for. The other case involves a traffic club for children. The intended causal chain in this study was: join the club --> improve knowledge --> improve behaviour --> reduce accident rate. In this case, results are rather messy, which suggests that the observed difference in accident rate between members and non-members of the traffic club is not primarily attributable to membership in the club. The two cases show that by analysing causal chains, one may uncover confounding factors that were not adequately controlled in a study. Lack of control for confounding factors remains the most serious threat to the validity of road safety evaluation studies.

  19. An evaluation system for electronic retrospective analyses in radiation oncology: implemented exemplarily for pancreatic cancer

    Science.gov (United States)

    Kessel, Kerstin A.; Jäger, Andreas; Bohn, Christian; Habermehl, Daniel; Zhang, Lanlan; Engelmann, Uwe; Bougatf, Nina; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E.

    2013-03-01

    To date, conducting retrospective clinical analyses is rather difficult and time consuming. Especially in radiation oncology, handling voluminous datasets from various information systems and different documentation styles efficiently is crucial for patient care and research. With the example of patients with pancreatic cancer treated with radio-chemotherapy, we performed a therapy evaluation by using analysis tools connected with a documentation system. A total of 783 patients were documented in a professional, web-based documentation system, and information about radiation therapy, diagnostic images and dose distributions was imported. For patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the DVH (dose-volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence. All results are stored in the database and included in statistical calculations. The main goal of using an automatic evaluation system is to reduce the time and effort of conducting clinical analyses, especially with large patient groups. We have shown a first approach using some existing tools; however, manual interaction is still necessary. Further steps need to be taken to enhance automation. Already, it has become apparent that the benefits of digital data management and analysis lie in the central storage of data and the reusability of results. We therefore intend to adapt the evaluation system to other tumour types in radiation oncology.
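
    The DVH step lends itself to a compact illustration. A minimal sketch, assuming a synthetic dose grid and recurrence mask in place of the registered plan and segmented follow-up data:

    ```python
    # Cumulative dose-volume histogram for a segmented recurrence volume.
    import numpy as np

    rng = np.random.default_rng(1)
    dose = rng.gamma(shape=9.0, scale=5.0, size=(40, 40, 40))  # Gy, synthetic
    recurrence_mask = np.zeros_like(dose, dtype=bool)
    recurrence_mask[15:25, 15:25, 15:25] = True                # segmented volume

    doses_in_volume = dose[recurrence_mask]
    bins = np.arange(0.0, doses_in_volume.max() + 1.0, 1.0)    # 1 Gy bins
    # Cumulative DVH: percent of the volume receiving at least each dose level
    dvh = np.array([(doses_in_volume >= d).mean() * 100 for d in bins])

    d95 = bins[np.searchsorted(-dvh, -95.0)]   # dose covering 95% of the volume
    print(f"D95 ~ {d95:.0f} Gy; mean dose {doses_in_volume.mean():.1f} Gy")
    ```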

  20. Trace substances in landfill gases. Evaluation and meaningful analysis. Spurenstoffe in Deponiegasen. Bewertung und sinnvolle Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Eisenmann, R [Karlsruhe Univ. (T.H.) (Germany, F.R.). Engler-Bunte-Institut]

    1989-06-01

    Many of the innumerable substances which may occur in landfill gases are to be considered as potentially dangerous; they lead to environmental problems through their malodour or noxious combustion products. With respect to the evaluation of trace substances there is great uncertainty, and extreme requirements as to the volume and quality of gas analyses often have to be met. Generally, hazards do emanate from landfill gases, but in comparison with other risks they are not excessive. This contribution is intended to help clarify these questions and to furnish a basis for practice-oriented, appropriate landfill gas analysis. (orig.).

  1. The importance of probabilistic evaluations in connection with risk analyses according to technical safety laws

    International Nuclear Information System (INIS)

    Mathiak, E.

    1984-01-01

    The nuclear energy sector exemplifies the essential importance to be attached to the practical application of probabilistic evaluations (e.g. probabilistic reliability analyses) in connection with the legal risk assessment of technical systems and installations. The study makes use of a triad risk analysis and tries to reconcile the natural-science and legal points of view. Without changing the definitions of 'risk' and 'hazard' in their legal sense, the publication discusses their reconciliation with the laws of natural science and their interpretation and application in view of the latter. (HSCH) [de

  2. Systematic review of economic evaluation analyses of available vaccines in Spain from 1990 to 2012.

    Science.gov (United States)

    Cortés, Isabel; Pérez-Camarero, Santiago; Del Llano, Juan; Peña, Luz María; Hidalgo-Vega, Alvaro

    2013-08-02

    The objective of this survey was to describe the evolution of economic evaluation studies on vaccines available in Spain. We conducted a systematic review of the economic evaluations published by Spanish researchers in major bibliographic databases available online from 1990 to 2012. Of all the references identified, we retained only full economic evaluations of Spanish vaccine programs. The following variables were analyzed: type of study, year of publication, vaccine evaluated, herd immunity and the main methodological aspects proposed by international guidelines. The vaccines studied were against hepatitis A and B, rotavirus, influenza, varicella, tetanus, measles, human papillomavirus, Streptococcus pneumoniae infection and Neisseria meningitidis serogroup C infection. A total of 34 references were included in the study. The number of economic evaluations increased over the years by 86%. For many of the vaccines there were no economic evaluations, while others, such as the vaccine against S. pneumoniae infection, took up most of the studies. Comparison with a non-vaccinated population was the most frequently used strategy. The cost-effectiveness model was selected in 60% of cases. The most common health outcome was "cost per case prevented", and 82% of the studies did not consider herd immunity. The results showed a cost-effectiveness ratio below the breakeven point. There is clearly a huge gap in this kind of work compared with other countries. Although the quality of the work discussed here was significant, we found many areas which could be improved. The reviewed literature exposed the great benefit of vaccination for society by analysing the health outcomes achieved over the decades since its implementation. However, the evidence on the efficiency and effectiveness of vaccination is not very strong, and there are few economic evaluation studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. A regression model using sediment chemistry for the evaluation of marine environmental impacts associated with salmon aquaculture cage wastes

    International Nuclear Information System (INIS)

    Chou, C.L.; Haya, K.; Paon, L.A.; Moffatt, J.D.

    2004-01-01

    This study was undertaken to develop an approach for modelling changes of sediment chemistry related to the accumulation of aquaculture waste. Metal composition of sediment (Al, Cu, Fe, Li, Mn, and Zn), organic carbon and […] R² = 0.945, compared to R² = 0.653 for the regression model using unadjusted EMP, for assessing the environmental conditions

  4. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    Science.gov (United States)

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset, a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
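
    The rank-correlation trend test underlying the evaluation system can be sketched as follows; the yearly litter counts are synthetic and stand in for one monitored beach:

    ```python
    # A significant Spearman rank correlation between survey year and litter
    # abundance marks a temporal trend for one beach.
    import numpy as np
    from scipy.stats import spearmanr

    years = np.arange(2001, 2013)
    items_per_100m = np.array([310, 295, 330, 280, 260, 270,
                               240, 250, 225, 230, 210, 200])

    rho, p = spearmanr(years, items_per_100m)
    trend = "decreasing" if rho < 0 else "increasing"
    print(f"rho={rho:.2f}, p={p:.3f} -> significant {trend} trend" if p < 0.05
          else f"rho={rho:.2f}, p={p:.3f} -> no significant trend")
    ```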

  5. Secondary Data Analyses of Subjective Outcome Evaluation Data Based on Nine Databases

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2012-01-01

    Full Text Available The purpose of this study was to evaluate the effectiveness of the Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong by analyzing 1,327 school-based program reports submitted by program implementers. In each report, program implementers were invited to write down five conclusions based on an integration of the subjective outcome evaluation data collected from the program participants and program implementers. Secondary data analyses were carried out by aggregating nine databases, with 14,390 meaningful units extracted from 6,618 conclusions. Results showed that most of the conclusions were positive in nature. The findings generally showed that the workers perceived the program and program implementers to be positive, and they also pointed out that the program could promote holistic development of the program participants in societal, familial, interpersonal, and personal aspects. However, difficulties encountered during program implementation (2.15%) and recommendations for improvement (16.26%) were also reported. In conjunction with the evaluation findings based on other strategies, the present study suggests that the Tier 1 Program of the Project P.A.T.H.S. is beneficial to the holistic development of the program participants.

  6. Orbitrap mass analyser for in situ characterisation of planetary environments: Performance evaluation of a laboratory prototype

    Science.gov (United States)

    Briois, Christelle; Thissen, Roland; Thirkell, Laurent; Aradj, Kenzi; Bouabdellah, Abdel; Boukrara, Amirouche; Carrasco, Nathalie; Chalumeau, Gilles; Chapelon, Olivier; Colin, Fabrice; Coll, Patrice; Cottin, Hervé; Engrand, Cécile; Grand, Noel; Lebreton, Jean-Pierre; Orthous-Daunay, François-Régis; Pennanech, Cyril; Szopa, Cyril; Vuitton, Véronique; Zapf, Pascal; Makarov, Alexander

    2016-10-01

    For decades of space exploration, mass spectrometry has proven to be a reliable instrument for characterising the nature and energy of ionic and neutral, atomic and molecular species in the interplanetary medium and upper planetary atmospheres. It has been used as well to analyse the chemical composition of planetary and small-body environments. The chemical complexity of these environments calls for a new generation of mass spectrometers with significantly increased mass resolving power. The recently developed Orbitrap™ mass analyser, with its ultra-high resolution, shows promising adaptability to space instrumentation, offering improved performance for in situ measurements. In this article, we report on our project named "Cosmorbitrap", which aims at demonstrating the adaptability of the Orbitrap technology to in situ space exploration. We present the prototype that was developed in the laboratory to demonstrate both technical feasibility and analytical capabilities. A set of samples containing elements with masses ranging from 9 to 208 u has been used to evaluate the performance of the analyser in terms of mass resolving power (reaching 474,000 at m/z 9), ability to discriminate between isobaric interferences, accuracy of mass measurement (below 15 ppm) and determination of relative isotopic abundances (below 5%) for various samples. We observe good agreement between the results obtained with the prototype and those of a commercial instrument. As the background pressure is a key parameter for the in situ exploration of planetary atmospheres, we studied the effect of background gas on the performance of the Cosmorbitrap prototype, finding an upper limit for N2 in our set-up of 10⁻⁸ mbar. The results demonstrate the strong potential of adapting this technology to space exploration.

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  9. Development and initial evaluation of a spectral microdensitometer for analysing radiochromic films

    International Nuclear Information System (INIS)

    Lee, K Y; Fung, K L; Kwok, C S

    2004-01-01

    Radiation dose deposited on a radiochromic film is considered as a dose image. A precise image extraction system with commensurate capabilities is required to measure the transmittance of the image and translate it to radiation dose. This paper describes the development of a spectral microdensitometer which has been designed to achieve this goal under the conditions of (a) the linearity and sensitivity of the dose response curve of the radiochromic film being highly dependent on the wavelength of the analysing light, and (b) the inherent high spatial resolution of the film. The microdensitometer consists of a monochromator which provides an analysing light of variable wavelength, a film tray on a high-precision scanning stage, a transmission microscope coupled to a thermoelectrically cooled CCD camera, a microcomputer and corresponding interfaces. The measurement of the transmittance of the radiochromic film is made at the two absorption peaks with maximum sensitivities. The high spatial resolution of the instrument, of the order of micrometres, is achieved through the use of the microscope combined with a measure-and-step technique to cover the whole film. The performance of the instrument in regard to the positional accuracy, system reproducibility and dual-peak film calibration was evaluated. The results show that the instrument fulfils the design objective of providing a precise image extraction system for radiochromic films with micrometre spatial resolution and sensitive dose response

  10. Development and initial evaluation of a spectral microdensitometer for analysing radiochromic films

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K Y [Department of Optometry and Radiography, Hong Kong Polytechnic University, Hong Kong (China); Fung, K L [Department of Optometry and Radiography, Hong Kong Polytechnic University, Hong Kong (China); Kwok, C S [Department of Radioimmunotherapy, City of Hope National Medical Centre, Duarte, CA 91010 (United States)

    2004-11-21

    Radiation dose deposited on a radiochromic film is considered as a dose image. A precise image extraction system with commensurate capabilities is required to measure the transmittance of the image and translate it to radiation dose. This paper describes the development of a spectral microdensitometer which has been designed to achieve this goal under the conditions of (a) the linearity and sensitivity of the dose response curve of the radiochromic film being highly dependent on the wavelength of the analysing light, and (b) the inherent high spatial resolution of the film. The microdensitometer consists of a monochromator which provides an analysing light of variable wavelength, a film tray on a high-precision scanning stage, a transmission microscope coupled to a thermoelectrically cooled CCD camera, a microcomputer and corresponding interfaces. The measurement of the transmittance of the radiochromic film is made at the two absorption peaks with maximum sensitivities. The high spatial resolution of the instrument, of the order of micrometres, is achieved through the use of the microscope combined with a measure-and-step technique to cover the whole film. The performance of the instrument in regard to the positional accuracy, system reproducibility and dual-peak film calibration was evaluated. The results show that the instrument fulfils the design objective of providing a precise image extraction system for radiochromic films with micrometre spatial resolution and sensitive dose response.

  11. Evaluation of fire hazard analyses for nuclear power plants. A publication within the NUSS programme

    International Nuclear Information System (INIS)

    1995-01-01

    The present publication has been developed with the help of experts from regulatory, operating and engineering organizations, all with practical experience in the field of fire safety of nuclear power plants. The publication supplements the broad concepts of Safety Series No. 50-SG-D2 (Rev.1), Fire Protection in Nuclear Power Plants, by providing a detailed list of the issues, and some of the limitations, to be considered when evaluating the adequacy and effectiveness of the fire hazard analysis of a nuclear power plant. The publication is intended for assessors of fire hazard analyses, including regulators, independent assessors or plant assessors, and gives a broad description of the methodology to be used by operators in preparing a fire hazard analysis for their own plant. 1 fig

  12. Random regression models for growth evaluation of meat-type quail hens

    Directory of Open Access Journals (Sweden)

    Bruno Bastos Teixeira

    2012-09-01

    Full Text Available The objective of this study was to compare random regression models based on Legendre polynomial functions of different orders, in order to evaluate which best fits the genetic study of the growth curve of meat-type quails. Data from 2,136 meat-type quail hens were evaluated, of which 1,026 belonged to genetic group UFV1 and 1,110 to group UFV2. The quails were weighed at 1, 7, 14, 21, 28, 35, 42, 77, 112 and 147 days of age, and their weights were used for the analysis. Two possible models of heterogeneous residual variance were tested, with records grouped into 3 and 5 age classes. The random regression model best suited to the growth curve of the quails was then studied. Models were compared using the Akaike information criterion (AIC), the Schwarz Bayesian information criterion (BIC), the logarithm of the likelihood function (Log e L) and the likelihood ratio test (LRT) at the 1% level. The model considering heterogeneous residual variance in 3 classes (CL3) was adequate for line UFV1, and the CL5 model for line UFV2. A Legendre polynomial function of order 5 for the direct additive genetic effect and order 5 for the animal permanent environmental effect should be used in the genetic evaluation of the growth curve for line UFV1, and of order 3 for the direct additive genetic effect and order 5 for the animal permanent environmental effect for line UFV2.
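
    The Legendre basis used by such random regression models is easy to construct. A minimal sketch for the weighing ages above (the mixed-model fit itself, with additive-genetic and permanent-environment terms, is not shown):

    ```python
    # Build a Legendre polynomial basis over the weighing ages.
    import numpy as np
    from numpy.polynomial import legendre

    ages = np.array([1, 7, 14, 21, 28, 35, 42, 77, 112, 147], dtype=float)
    # Standardise ages to [-1, 1], the domain of Legendre polynomials
    t = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1

    order = 5
    # Column j holds the j-th Legendre polynomial evaluated at each age
    Phi = legendre.legvander(t, order)
    print(Phi.shape)  # (10, 6): 10 ages x (order + 1) basis functions
    ```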

  13. System Evaluations and Life-Cycle Cost Analyses for High-Temperature Electrolysis Hydrogen Production Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Edwin A. Harvego; James E. O'Brien; Michael G. McKellar

    2012-05-01

    This report presents results of system evaluations and lifecycle cost analyses performed for several different commercial-scale high-temperature electrolysis (HTE) hydrogen production concepts. The concepts presented in this report rely on grid electricity and non-nuclear high-temperature process heat sources for the required energy inputs. The HYSYS process analysis software was used to evaluate both central plant designs for large-scale hydrogen production (50,000 kg/day or larger) and forecourt plant designs for distributed production and delivery at about 1,500 kg/day. The HYSYS software inherently ensures mass and energy balances across all components and it includes thermodynamic data for all chemical species. The optimized designs described in this report are based on analyses of process flow diagrams that included realistic representations of fluid conditions and component efficiencies and operating parameters for each of the HTE hydrogen production configurations analyzed. As with previous HTE system analyses performed at the INL, a custom electrolyzer model was incorporated into the overall process flow sheet. This electrolyzer model allows for the determination of the average Nernst potential, cell operating voltage, gas outlet temperatures, and electrolyzer efficiency for any specified inlet steam, hydrogen, and sweep-gas flow rates, current density, cell active area, and external heat loss or gain. The lifecycle cost analyses were performed using the H2A analysis methodology developed by the Department of Energy (DOE) Hydrogen Program. This methodology utilizes spreadsheet analysis tools that require detailed plant performance information (obtained from HYSYS), along with financial and cost information to calculate lifecycle costs. There are standard default sets of assumptions that the methodology uses to ensure consistency when comparing the cost of different production or plant design options. However, these assumptions may also be varied within the
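
    The Nernst-potential calculation mentioned above can be sketched compactly. The linear fit for the standard potential and the gas composition below are illustrative assumptions, not the report's actual correlations or inputs:

    ```python
    # Open-cell (Nernst) potential for steam electrolysis, H2O -> H2 + 1/2 O2.
    import math

    R = 8.314      # J/(mol K)
    F = 96485.0    # C/mol

    def nernst_potential(T, p_h2, p_o2, p_h2o):
        """Nernst potential in volts at temperature T (K); pressures in atm.

        E0(T) uses an assumed linear fit to the standard potential.
        """
        e0 = 1.253 - 2.4516e-4 * T   # assumed linear fit, V
        return e0 + (R * T) / (2 * F) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)

    # Typical high-temperature electrolysis inlet conditions (assumed)
    print(f"{nernst_potential(T=1100.0, p_h2=0.1, p_o2=0.21, p_h2o=0.9):.3f} V")
    ```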

  14. Evaluating transient performance of servo mechanisms by analysing stator current of PMSM

    Science.gov (United States)

    Zhang, Qing; Tan, Luyao; Xu, Guanghua

    2018-02-01

    Smooth running and rapid response are the desired performance goals for the transient motions of servo mechanisms. Because of the uncertain and unobservable transient behaviour of servo mechanisms, it is difficult to evaluate their transient performance. Under the effects of electromechanical coupling, the stator current signals of a permanent-magnet synchronous motor (PMSM) potentially contain the performance information regarding servo mechanisms in use. In this paper, a novel method based on analysing the stator current of the PMSM is proposed for quantifying the transient performance. First, a vector control model is constructed to simulate the stator current behaviour in the transient processes of consecutive speed changes, consecutive load changes, and intermittent start-stops. It is discovered that the amplitude and frequency of the stator current are modulated by the transient load torque and motor speed, respectively. The stator currents under different performance conditions are also simulated and compared. Then, the stator current is processed using a local mean decomposition (LMD) algorithm to extract the instantaneous amplitude and instantaneous frequency. The sample entropy of the instantaneous amplitude, which reflects the complexity of the load torque variation, is calculated as a performance indicator of smooth running. The peak-to-peak value of the instantaneous frequency, which defines the range of the motor speed variation, is set as a performance indicator of rapid response. The proposed method is applied to both simulated data in an intermittent start-stop process and experimental data measured for a batch of servo turrets for turning lathes. The results show that the performance evaluations agree with the actual performance.
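
    The feature extraction can be illustrated with the Hilbert transform standing in for the paper's LMD algorithm; the synthetic current mimics load-torque (amplitude) and speed (frequency) modulation:

    ```python
    # Instantaneous amplitude/frequency of a synthetic stator current.
    import numpy as np
    from scipy.signal import hilbert

    fs = 5000.0
    t = np.arange(0, 2.0, 1 / fs)
    amp = 1.0 + 0.2 * np.sin(2 * np.pi * 1.5 * t)        # load-torque modulation
    phase = 2 * np.pi * (50 * t + 2.0 * np.sin(2 * np.pi * 0.5 * t))  # speed ripple
    current = amp * np.cos(phase)

    analytic = hilbert(current)
    inst_amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

    # Rapid-response indicator: peak-to-peak instantaneous frequency
    core = inst_freq[100:-100]                           # trim transform edges
    print(f"freq peak-to-peak: {core.max() - core.min():.1f} Hz")
    # Smooth-running indicator would be the sample entropy of inst_amp (not shown)
    ```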

  15. Update and evaluation of decay data for spent nuclear fuel analyses

    Directory of Open Access Journals (Sweden)

    Simeonov Teodosi

    2017-01-01

    Full Text Available Studsvik’s approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.

  16. Update and evaluation of decay data for spent nuclear fuel analyses

    Science.gov (United States)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.

  17. Evaluation of crack interaction effect for in-plane surface cracks using elastic finite element analyses

    International Nuclear Information System (INIS)

    Huh, Nam Su; Choi, Suhn; Park, Keun Bae; Kim, Jong Min; Choi, Jae Boong; Kim, Young Jin

    2008-01-01

    The crack-tip stress fields and fracture mechanics assessment parameters, such as the elastic stress intensity factor and the elastic-plastic J-integral, for a surface crack can be significantly affected by adjacent cracks. Such a crack interaction effect due to multiple cracks can magnify the fracture mechanics assessment parameters. There are many factors to be considered, for instance the relative distance between adjacent cracks, crack shape and loading condition, to quantify a crack interaction effect on the fracture mechanics assessment parameters. Thus, the current guidance on a crack interaction effect (crack combination rule), including ASME Sec. XI, BS7910, British Energy R6 and API RP579, provides different rules for combining multiple surface cracks into a single surface crack. The present paper investigates a crack interaction effect by evaluating the elastic stress intensity factor of adjacent surface cracks in a plate along the crack front through detailed 3-dimensional elastic finite element analyses. The effects of the geometric parameters, the relative distance between cracks and the crack shape, on the stress intensity factor are systematically investigated. As for the loading condition, only axial tension is considered. Based on the elastic finite element results, the acceptability of the crack combination rules provided in the existing guidance was investigated, and the relevant recommendations on crack interaction for in-plane surface cracks in a plate were discussed
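
    The quantity being compared can be sketched with the textbook relation K_I = Y·σ·√(πa); the geometry factors Y below are illustrative placeholders for values that the study obtains from 3-dimensional elastic finite element analyses:

    ```python
    # Mode-I stress intensity factor and an interaction magnification ratio.
    import math

    def k_i(sigma_mpa, a_m, y):
        """Mode-I stress intensity factor, MPa*sqrt(m)."""
        return y * sigma_mpa * math.sqrt(math.pi * a_m)

    sigma = 200.0   # axial tension, MPa (assumed)
    a = 0.005       # crack depth, m (assumed)

    k_single = k_i(sigma, a, y=0.72)   # isolated crack (illustrative Y)
    k_paired = k_i(sigma, a, y=0.80)   # closely spaced cracks (illustrative Y)
    print(f"interaction factor = {k_paired / k_single:.2f}")
    ```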

  18. Evaluation of Pre- and Post-Redevelopment Groundwater Chemical Analyses from LM Monitoring Wells

    International Nuclear Information System (INIS)

    Kamp, Susan; Dayvault, Jalena

    2016-01-01

    This report documents the efforts and analyses conducted for the Applied Studies and Technology (AS&T) Ancillary Work Plan (AWP) project titled Evaluation of Pre- and Post-Redevelopment Groundwater Sample Laboratory Analyses from Selected LM Groundwater Monitoring Wells. This effort entailed compiling an inventory of nearly 500 previous well redevelopment events at 16 U.S. Department of Energy Office of Legacy Management (LM) sites, searching the literature for impacts of well redevelopment on groundwater sample quality, and, the focus of this report, evaluating the impacts of well redevelopment on field measurements and sample analytical results. Study Catalyst Monitoring well redevelopment, the surging or high-volume pumping of a well to loosen and remove accumulated sediment and biological build-up from a well, is considered an element of monitoring well maintenance that is implemented periodically during the lifetime of the well to mitigate its gradual deterioration. Well redevelopment has been conducted fairly routinely at a few LM sites in the western United States (e.g., the Grand Junction office site and the Gunnison processing site in Colorado), but at most other sites in this region it is not a routine practice. Also, until recently (2014-2015), there had been no specific criteria for implementing well redevelopment, and documentation of redevelopment events has been inconsistent. A catalyst for this evaluation was the self-identification of these inconsistencies by the Legacy Management Support contractor. As a result, in early 2015 Environmental Monitoring Operations (EMO) staff began collecting and documenting additional field measurements during well redevelopment events. In late 2015, AS&T staff undertook an independent internal evaluation of EMO's well redevelopment records and corresponding pre- and post-well-redevelopment groundwater analytical results. Study Findings Although literature discussions parallel the prevailing industry

  19. Evaluation of Pre- and Post-Redevelopment Groundwater Chemical Analyses from LM Monitoring Wells

    Energy Technology Data Exchange (ETDEWEB)

    Kamp, Susan [Navarro Research and Engineering, Oak Ridge, TN (United States); Dayvault, Jalena [US Department of Energy, Washington, DC (United States). Office of Legacy Management]

    2016-05-01

    This report documents the efforts and analyses conducted for the Applied Studies and Technology (AS&T) Ancillary Work Plan (AWP) project titled Evaluation of Pre- and Post- Redevelopment Groundwater Sample Laboratory Analyses from Selected LM Groundwater Monitoring Wells. This effort entailed compiling an inventory of nearly 500 previous well redevelopment events at 16 U.S. Department of Energy Office of Legacy Management (LM) sites, searching the literature for impacts of well redevelopment on groundwater sample quality, and—the focus of this report—evaluating the impacts of well redevelopment on field measurements and sample analytical results. Study Catalyst Monitoring well redevelopment, the surging or high-volume pumping of a well to loosen and remove accumulated sediment and biological build-up from a well, is considered an element of monitoring well maintenance that is implemented periodically during the lifetime of the well to mitigate its gradual deterioration. Well redevelopment has been conducted fairly routinely at a few LM sites in the western United States (e.g., the Grand Junction office site and the Gunnison processing site in Colorado), but at most other sites in this region it is not a routine practice. Also, until recently (2014–2015), there had been no specific criteria for implementing well redevelopment, and documentation of redevelopment events has been inconsistent. A catalyst for this evaluation was the self-identification of these inconsistencies by the Legacy Management Support contractor. As a result, in early 2015 Environmental Monitoring Operations (EMO) staff began collecting and documenting additional field measurements during well redevelopment events. In late 2015, AS&T staff undertook an independent internal evaluation of EMO's well redevelopment records and corresponding pre- and post-well-redevelopment groundwater analytical results. Study Findings Although literature discussions parallel the prevailing industry

  20. Geometrical quality evaluation in laser cutting of Inconel-718 sheet by using Taguchi based regression analysis and particle swarm optimization

    Science.gov (United States)

    Shrivastava, Prashant Kumar; Pandey, Arun Kumar

    2018-03-01

    Inconel-718 is one of the most in-demand advanced engineering materials because of its superior properties. Conventional machining techniques face many problems in cutting intricate profiles in this material because of its low thermal conductivity, low elasticity and high chemical affinity at elevated temperatures. Laser beam cutting is an advanced cutting method that can achieve greater geometrical accuracy through suitable management of the input process parameters. In this work, an experimental investigation of pulsed Nd:YAG laser cutting of Inconel-718 has been carried out. The experiments were conducted using a well-planned L27 orthogonal array. The experimentally measured values of different quality characteristics have been used to develop second-order regression models of bottom kerf deviation (KD), bottom kerf width (KW) and kerf taper (KT). The developed models of the different quality characteristics have been used as quality functions for single-objective optimization using the particle swarm optimization (PSO) method. The optimum results obtained by the proposed hybrid methodology have been compared with the experimental results; the comparison shows individual improvements of 75%, 12.67% and 33.70% in bottom kerf deviation, bottom kerf width and kerf taper, respectively. The parametric effects of the most significant input process parameters on the quality characteristics are also discussed.
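
    A particle swarm optimiser of the kind used here fits in a few lines. In this sketch the second-order regression model is replaced by an invented quadratic surrogate of kerf taper over two coded parameters; bounds and coefficients are illustrative only:

    ```python
    # Minimise a hypothetical quadratic kerf-taper surrogate with PSO.
    import numpy as np

    rng = np.random.default_rng(7)

    def kerf_taper(x):                      # invented surrogate model
        p, s = x[..., 0], x[..., 1]         # coded pulse power, cutting speed
        return (0.40 - 0.08 * p + 0.05 * s
                + 0.06 * p**2 + 0.04 * s**2 - 0.03 * p * s)

    lo, hi = -1.0, 1.0                      # coded-variable bounds
    n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5

    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), kerf_taper(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = kerf_taper(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print(f"optimum at {gbest.round(3)}, kerf taper {kerf_taper(gbest):.4f}")
    ```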

  1. Evaluation of modulation transfer function of optical lens system by support vector regression methodologies - A comparative study

    Science.gov (United States)

    Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, polynomial and radial basis function (RBF) kernels are applied in Support Vector Regression (SVR) to estimate and predict the MTF value of an actual optical system from experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared with the SVR_poly soft computing methodology.
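
    The kernel comparison can be reproduced with standard tools. A minimal sketch on a synthetic MTF curve (the data and hyperparameters are illustrative, not the study's measurements):

    ```python
    # SVR with RBF vs polynomial kernels on a synthetic MTF curve.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(3)
    freq = np.linspace(0, 1, 80)[:, None]     # normalised spatial frequency
    mtf = np.exp(-3.0 * freq.ravel()) + rng.normal(0, 0.02, 80)  # synthetic MTF

    svr_rbf = SVR(kernel="rbf", C=10.0, gamma=2.0).fit(freq, mtf)
    svr_poly = SVR(kernel="poly", degree=3, C=10.0).fit(freq, mtf)

    for name, model in [("rbf", svr_rbf), ("poly", svr_poly)]:
        mse = mean_squared_error(mtf, model.predict(freq))
        print(f"SVR_{name}: training MSE = {mse:.5f}")
    ```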

  2. Relative accuracy of spatial predictive models for lynx Lynx canadensis derived using logistic regression-AIC, multiple criteria evaluation and Bayesian approaches

    Directory of Open Access Journals (Sweden)

    Shelley M. ALEXANDER

    2009-02-01

    Full Text Available We compared probability surfaces derived using one set of environmental variables in three Geographic Information Systems (GIS)-based approaches: logistic regression with Akaike's Information Criterion (AIC), Multiple Criteria Evaluation (MCE), and Bayesian Analysis (specifically Dempster-Shafer theory). We used lynx Lynx canadensis as our focal species, and developed our environment relationship model using track data collected in Banff National Park, Alberta, Canada, during winters from 1997 to 2000. The accuracy of the three spatial models was compared using a contingency table method. We determined the percentage of cases in which both presence and absence points were correctly classified (overall accuracy), the failure to predict a species where it occurred (omission error) and the prediction of presence where there was absence (commission error). Our overall accuracy showed the logistic regression approach was the most accurate (74.51%). The multiple criteria evaluation was intermediate (39.22%), while the Dempster-Shafer (D-S) theory model was the poorest (29.90%). However, omission and commission error tell us a different story: logistic regression had the lowest commission error, while D-S theory produced the lowest omission error. Our results provide evidence that habitat modellers should evaluate all three error measures when ascribing confidence to their model. We suggest that for our study area at least, the logistic regression model is optimal. However, where sample size is small or the species is very rare, it may also be useful to explore and/or use a more ecologically cautious modelling approach (e.g. Dempster-Shafer) that would over-predict, protect more sites, and thereby minimize the risk of missing critical habitat in conservation plans [Current Zoology 55(1): 28-40, 2009].
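
    The three error measures are simple functions of the presence/absence contingency table. A minimal sketch with synthetic predictions standing in for the model surfaces:

    ```python
    # Overall accuracy, omission error and commission error from a
    # presence/absence contingency table.
    import numpy as np

    observed = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # 1 = lynx present
    predicted = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])

    tp = np.sum((observed == 1) & (predicted == 1))
    tn = np.sum((observed == 0) & (predicted == 0))
    fn = np.sum((observed == 1) & (predicted == 0))        # missed presences
    fp = np.sum((observed == 0) & (predicted == 1))        # false presences

    print(f"overall accuracy = {(tp + tn) / observed.size:.0%}")
    print(f"omission error   = {fn / (tp + fn):.0%}")
    print(f"commission error = {fp / (tp + fp):.0%}")  # predicted presence, absent
    ```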

  3. Development and evaluation of a regression-based model to predict cesium concentration ratios for freshwater fish

    International Nuclear Information System (INIS)

    Pinder, John E.; Rowan, David J.; Rasmussen, Joseph B.; Smith, Jim T.; Hinton, Thomas G.; Whicker, F.W.

    2014-01-01

    Data from published studies and World Wide Web sources were combined to produce and test a regression model to predict Cs concentration ratios for freshwater fish species. The accuracies of predicted concentration ratios, which were computed using 1) species trophic levels obtained from random resampling of known food items and 2) K concentrations in the water, for 207 fish from 44 species and 43 locations, were tested against independent observations of ratios for 57 fish from 17 species from 25 locations. Accuracy was assessed as the percent of observed to predicted ratios within factors of 2 or 3. Conservatism, expressed as the lack of under-prediction, was assessed as the percent of observed to predicted ratios that were less than 2 or less than 3. The model's median observed to predicted ratio was 1.26, which was not significantly different from 1, and 50% of the ratios were between 0.73 and 1.85. The percentages of ratios within factors of 2 or 3 were 67 and 82%, respectively. The percentages of ratios that were <2 or <3 were 79 and 88%, respectively. An example for Perca fluviatilis demonstrated that increased prediction accuracy could be obtained when more detailed knowledge of diet was available to estimate trophic level. - Highlights: • We developed a model to predict Cs concentration ratios for freshwater fish species. • The model uses only two variables to predict a species CR for any location. • One variable is the K concentration in the freshwater. • The other is a species mean trophic level measure easily obtained from fishbase.org.
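
    The two-variable structure of the model can be sketched as follows; the functional form and coefficients below are invented for illustration, not the fitted values from the paper:

    ```python
    # Predict a Cs concentration ratio from trophic level and water K.
    import math

    def predict_cr(trophic_level, k_mg_per_l, a=1.2, b=0.65, c=1.0):
        """log10(CR) = a + b * trophic_level - c * log10(K). Coefficients assumed."""
        return 10 ** (a + b * trophic_level - c * math.log10(k_mg_per_l))

    cr_pred = predict_cr(trophic_level=3.2, k_mg_per_l=1.5)
    cr_obs = 2600.0                                    # hypothetical observation
    print(f"predicted CR = {cr_pred:.0f}, obs/pred = {cr_obs / cr_pred:.2f}")
    ```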

  4. Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-12-17

    In September 2008 a large-scale testing operation (referred to as the INL-2 test) was performed within a two-story building (PBF-632) at the Idaho National Laboratory (INL). The report “Operational Observations on the INL-2 Experiment” defines the seven objectives for this test and discusses the results and conclusions. This is further discussed in the introduction of this report. The INL-2 test consisted of five tests (events) in which a floor (level) of the building was contaminated with the harmless biological warfare agent simulant Bg and samples were taken in most, if not all, of the rooms on the contaminated floor. After the sampling, the building was decontaminated, and the next test performed. Judgmental samples and probabilistic samples were determined and taken during each test. Vacuum, wipe, and swab samples were taken within each room. The purpose of this report is to study an additional four topics that were not within the scope of the original report. These topics are: 1) assess the quantitative assumptions about the data being normally or log-normally distributed; 2) evaluate differences and quantify the sample-to-sample variability within a room and across the rooms; 3) perform geostatistical analyses to study spatial correlations; and 4) quantify the differences observed between surface types and sampling methods for each scenario and study the consistency across the scenarios. The following four paragraphs summarize the results of each of the four additional analyses. All samples after decontamination came back negative. Because of this, it was not appropriate to determine if these clearance samples were normally distributed. As Table 1 shows, the characterization data consists of values between and inclusive of 0 and 100 CFU/cm² (100 was the value assigned when colonies were too numerous to count). The 100 values are generally much bigger than the rest of the data, causing the data to be right skewed. There are also a significant

  5. Lifetime evaluation of first wall and divertor plate by crack analyses during plasma disruptions

    International Nuclear Information System (INIS)

    Ohmori, Junji; Kobayashi, Takeshi; Yamada, Masao; Iida, Hiromasa

    1988-05-01

    The first wall and divertor armor in fusion devices are subjected to high heat and particle fluxes. In particular, disruption heating is an intense thermal shock which may cause melting or vaporization of the armor surfaces. The behavior of the armor materials is one of the major factors limiting the lifetime of these components. Generally, the surface temperature of the armor due to a disruption gets so high that the surface may become cracked. However, even if the surface of the armor is cracked, the function of the armor will not be lost as long as the damage is limited to within a small depth of the surface. In this study, the lifetime of the armor is evaluated in two stages: the crack initiation life and the crack propagation life, which are related to the fatigue life and the energy release rate, respectively. The materials are graphite and C/C composite (carbon fiber reinforced carbon composite) for the first wall, and tungsten for the divertor. For the disruption conditions of the Fusion Experimental Reactor, the fatigue life and the energy release rates are calculated by thermal and stress analyses. Results show that crack initiation is expected after only a few disruptions, and that the energy release rate as a function of crack length reaches its maximum at a small crack length and decreases as the crack grows. This decrease means that the crack propagation rate falls. An unstable fracture does not occur if the maximum energy release rate does not exceed the critical energy release rate, which can be obtained from the fracture toughness. (author)

  6. Comparing the analytical performances of Micro-NIR and FT-NIR spectrometers in the evaluation of acerola fruit quality, using PLS and SVM regression algorithms.

    Science.gov (United States)

    Malegori, Cristina; Nascimento Marques, Emanuel José; de Freitas, Sergio Tonetto; Pimentel, Maria Fernanda; Pasquini, Celio; Casiraghi, Ernestina

    2017-04-01

    The main goal of this study was to investigate the analytical performance of a state-of-the-art device, one of the smallest dispersive NIR spectrometers on the market (MicroNIR 1700), making a critical comparison with a benchtop FT-NIR spectrometer in terms of prediction accuracy. In particular, the aim was to estimate, in a non-destructive manner, titratable acidity and ascorbic acid content in acerola fruit during ripening, with a view to the direct in-field applicability of this new miniaturised handheld device. Acerola (Malpighia emarginata DC.) is a super-fruit characterised by a considerable amount of ascorbic acid, ranging from 1.0% to 4.5%. However, during ripening, acerola colour changes and the fruit may lose as much as half of its ascorbic acid content. Because the variability of the chemical parameters followed a non-strictly linear profile, two different regression algorithms were compared: PLS and SVM. Regression models obtained from Micro-NIR spectra give better results using the SVM algorithm, for both ascorbic acid and titratable acidity estimation. FT-NIR data give comparable results with both SVM and PLS algorithms, with lower errors for SVM regression. The prediction ability of the two instruments was statistically compared using the Passing-Bablok regression algorithm; the outcomes are critically discussed together with the regression models, showing the suitability of the portable Micro-NIR for in-field monitoring of chemical parameters of interest in acerola fruits. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Using ordinal logistic regression to evaluate the performance of laser-Doppler predictions of burn-healing time

    Directory of Open Access Journals (Sweden)

    Pape Sarah A

    2009-02-01

    Full Text Available Abstract Background Laser-Doppler imaging (LDI) of cutaneous blood flow is beginning to be used by burn surgeons to predict the healing time of burn wounds; predicted healing time is used to determine wound treatment as either dressings or surgery. In this paper, we do a statistical analysis of the performance of the technique. Methods We used data from a study carried out by five burn centers: LDI was done once between days 2 and 5 post-burn, and healing was assessed at both 14 days and 21 days post-burn. Random-effects ordinal logistic regression and other models such as the continuation ratio model were used to model healing time as a function of the LDI data, and of demographic and wound history variables. Statistical methods were also used to study the false-color palette, which enables the laser-Doppler imager to be used by clinicians as a decision-support tool. Results Overall performance was good: over 90% of diagnoses were correct. Related questions addressed were what was the best blood flow summary statistic and whether, given the blood flow measurements, demographic and observational variables had any additional predictive power (age, sex, race, % total body surface area burned (%TBSA), site and cause of burn, day of LDI scan, burn center). It was found that mean laser-Doppler flux over a wound area was the best statistic, and that, given the same mean flux, women recover slightly more slowly than men. Further, the likely degradation in predictive performance on moving to a patient group with larger %TBSA than those in the data sample was studied, and shown to be small. Conclusion Modeling healing time is a complex statistical problem, with random effects due to multiple burn areas per individual, and censoring caused by patients missing hospital visits and undergoing surgery. This analysis applies state-of-the-art statistical methods such as the bootstrap and permutation tests to a medical problem of topical interest. New medical findings are
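
    Ordinal logistic regression of this kind is available in standard libraries. A minimal sketch on synthetic data (statsmodels' OrderedModel; the study's random effects for multiple wounds per patient are omitted):

    ```python
    # Three-level healing-time outcome modelled from mean LDI flux and sex.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(5)
    n = 200
    flux = rng.normal(300.0, 120.0, n).clip(20.0)  # mean LDI flux, arbitrary units
    female = rng.integers(0, 2, n)

    # Assumed latent tendency to heal slowly: lower flux -> slower healing
    latent = -0.01 * flux + 0.3 * female + rng.logistic(size=n)
    healing = pd.Series(pd.cut(latent, [-np.inf, -2.5, -1.0, np.inf],
                               labels=["<=14 days", "15-21 days", ">21 days"]))

    exog = pd.DataFrame({"flux": flux, "female": female})
    result = OrderedModel(healing, exog, distr="logit").fit(method="bfgs",
                                                            disp=False)
    print(result.params)  # flux and sex effects plus the threshold parameters
    ```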

  8. The use of artificial neural network analysis and multiple regression for trap quality evaluation: a case study of the Northern Kuqa Depression of Tarim Basin in western China

    Energy Technology Data Exchange (ETDEWEB)

    Guangren Shi; Xingxi Zhou; Guangya Zhang; Xiaofeng Shi; Honghui Li [Research Institute of Petroleum Exploration and Development, Beijing (China)]

    2004-03-01

    Artificial neural network analysis is found to be far superior to multiple regression when applied to the evaluation of trap quality in the Northern Kuqa Depression, a gas-rich depression of the Tarim Basin in western China. This is because the technique can capture the complex and non-linear relationship between trap quality and related geological factors, whereas multiple regression can only describe a linear relationship. However, multiple regression can serve as an auxiliary tool, as it is suited to high-speed calculations and can indicate the degree of dependence between trap quality and the related geological factors, which artificial neural network analysis cannot. For illustration, we investigated 30 traps in the Northern Kuqa Depression. For each of the traps, the values of 14 selected geological factors were known. Geologists were also able to assign individual trap quality values to 27 traps, but were less certain about the values for the other three. Multiple regression and artificial neural network analysis were therefore used to ascertain these values, with data for the 27 traps used as known samples and the three traps used as prediction candidates. Predictions from artificial neural network analysis are found to agree with exploration results: where the simulation predicted high trap quality, commercial-quality flows were afterwards found, and where low trap quality was indicated, no such discoveries have yet been made. The multiple regression results, on the other hand, indicate the order of dependence of trap quality on the geological factors, which accords with what geologists have commonly recognized. We can conclude, therefore, that the application of artificial neural network analysis, with the aid of multiple regression, to trap evaluation in the Northern Kuqa Depression has been quite successful. To ensure the precision of the above-mentioned geological factors and their related parameters for each
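
    The comparison can be illustrated with off-the-shelf models. A minimal sketch with random stand-ins for the 27 known traps, 14 factors and 3 prediction candidates; no real geology is encoded here:

    ```python
    # Small neural network vs multiple linear regression for trap quality.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.random((27, 14))                      # 27 traps x 14 factors
    # Synthetic non-linear "trap quality" target
    y = X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2]) + 0.1 * rng.standard_normal(27)

    linear = LinearRegression().fit(X, y)
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, y)

    X_new = rng.random((3, 14))                   # the 3 prediction candidates
    print("linear:", linear.predict(X_new).round(2))
    print("ann:   ", ann.predict(X_new).round(2))
    # The regression coefficients indicate each factor's degree of dependence
    print("top factor by |coef|:", int(np.argmax(np.abs(linear.coef_))))
    ```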

  9. Evaluating the effect of a third-party implementation of resolution recovery on the quality of SPECT bone scan imaging using visual grading regression.

    Science.gov (United States)

    Hay, Peter D; Smith, Julie; O'Connor, Richard A

    2016-02-01

    The aim of this study was to evaluate the benefits to SPECT bone scan image quality of applying resolution recovery (RR) during image reconstruction using software provided by a third-party supplier. Bone SPECT data from 90 clinical studies were reconstructed retrospectively using software supplied independently of the gamma camera manufacturer. The current clinical datasets contain 120×10 s projections and are reconstructed using an iterative method with a Butterworth postfilter. Five further reconstructions were created with the following characteristics: 10 s projections with a Butterworth postfilter (to assess intraobserver variation); 10 s projections with a Gaussian postfilter with and without RR; and 5 s projections with a Gaussian postfilter with and without RR. Two expert observers were asked to rate image quality on a five-point scale relative to our current clinical reconstruction. Datasets were anonymized and presented in random order. The benefits of RR on image scores were evaluated using ordinal logistic regression (visual grading regression). The application of RR during reconstruction increased the probability of both observers scoring image quality as better than the current clinical reconstruction, even where the dataset contained half the normal counts. Type of reconstruction and observer were both statistically significant variables in the ordinal logistic regression model. Visual grading regression was found to be a useful method for validating the local introduction of technological developments in nuclear medicine imaging. RR, as implemented by the independent software supplier, improved bone SPECT image quality when applied during image reconstruction. In the majority of clinical cases, acquisition times for bone SPECT intended for localization purposes can safely be halved (from 10 s projections to 5 s) when RR is applied.

  10. Climatological Downscaling and Evaluation of AGRMET Precipitation Analyses Over the Continental U.S.

    Science.gov (United States)

    Garcia, M.; Peters-Lidard, C. D.; Eylander, J. B.; Daly, C.; Tian, Y.; Zeng, J.

    2007-05-01

    …near-real-time simulations in regions of interest. This work focuses on the value added to the AGRMET precipitation product by the inclusion of high-quality climatological information on a monthly time scale. The AGRMET method uses microwave-based satellite precipitation estimates from various polar-orbiting platforms (NOAA POES and DMSP), infrared-based estimates from geostationary platforms (GOES, METEOSAT, etc.), related cloud analysis products, and surface gauge observations in a complex and hierarchical blending process. Results from processing the legacy AGRMET precipitation products over the U.S. using LIS-based methods for downscaling, both with and without climatological factors, are evaluated against high-resolution monthly analyses produced with the PRISM knowledge-based method (Daly et al. 2002). It is demonstrated that incorporating climatological information in a downscaling procedure can significantly enhance the accuracy, and potential utility, of AFWA precipitation products for military and civilian customer applications.
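
    A toy illustration of monthly climatological downscaling in the spirit described above: a coarse precipitation field is redistributed using the ratio of a high-resolution monthly climatology (PRISM-like) to that same climatology aggregated to the coarse grid. The grid sizes and block mapping are assumptions for illustration, not the LIS/AGRMET implementation.

```python
# Sketch: climatological downscaling of a coarse monthly precipitation
# field using a high-resolution climatology (PRISM-like). Toy grids only.
import numpy as np

coarse = np.array([[40.0, 60.0],
                   [80.0, 20.0]])               # coarse monthly totals (mm)
clim_hi = np.abs(np.random.default_rng(2).normal(50, 15, size=(4, 4)))

# Aggregate the high-res climatology to the coarse grid (2x2 block means).
clim_coarse = clim_hi.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Replicate each coarse cell onto its 2x2 block, then scale by the
# climatology ratio so sub-grid spatial structure is imposed.
coarse_rep = np.kron(coarse, np.ones((2, 2)))
clim_coarse_rep = np.kron(clim_coarse, np.ones((2, 2)))
downscaled = coarse_rep * (clim_hi / clim_coarse_rep)

# Mass is conserved within each coarse cell (block means match the input).
print(downscaled.reshape(2, 2, 2, 2).mean(axis=(1, 3)))
print(coarse)
```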

  11. Status of science and technology with respect of preparation and evaluation of accident analyses and the use of analysis simulators

    International Nuclear Information System (INIS)

    Pointner, Winfried; Cuesta Morales, Alejandra; Draeger, Peer; Hartung, Juergen; Jakubowski, Zygmunt; Meyer, Gerhard; Palazzo, Simone; Moner, Guim Pallas; Perin, Yann; Pasichnyk, Ihor

    2014-07-01

    The scope of the work was to elaborate the prerequisites for short-term accident analyses, including recommendations for the application of new methodologies and computational procedures and technical aspects of safety evaluation. The following work packages were performed: knowledge base for best-estimate accident analyses; analytical studies on PWR plant behavior in case of multiple safety system failures; and extension and maintenance of the database for plant-specific analysis simulators.

  12. Meta-regression analysis to evaluate relationships between maternal blood levels of placentation biomarkers and low delivery weight.

    Science.gov (United States)

    Goto, Eita

    2018-05-03

    Caution is required for women at increased risk of low neonatal delivery weight. The aim was to evaluate relationships between maternal placentation biomarkers and the odds of low delivery weight. Databases including PubMed/MEDLINE were searched up to May 2017 using keywords involving biomarker names and "low birthweight." English-language studies providing true- and false-positive, and true- and false-negative results of low delivery weight classified by maternal blood levels of placentation biomarkers (in units of multiple of the mean [MoM]) were included. Coefficients representing changes in log odds ratio for low delivery weight per 1 MoM increase in maternal blood placentation biomarkers, and those adjusted for race, sampling period, and/or study quality, were calculated. Adjusted coefficients representing changes in log odds ratio for low delivery weight per 1 MoM increase in maternal blood levels of α-fetoprotein (AFP) and β-human chorionic gonadotropin (β-hCG) were significantly greater than 0 (both P …), i.e., higher maternal AFP and β-hCG levels were associated with greater odds of low delivery weight. This article is protected by copyright. All rights reserved.
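
    A hedged sketch of the meta-regression step the abstract describes: study-level log odds ratios are regressed on a biomarker covariate with inverse-variance weights. The study values below are invented for illustration; a weighted least-squares fit stands in for the review's actual (adjusted) models.

```python
# Sketch: meta-regression of study-level log odds ratios for low delivery
# weight on mean biomarker level (MoM), weighted by inverse variance.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.35, 0.60, 0.90, 1.10, 1.40])  # per-study log ORs (invented)
se     = np.array([0.20, 0.15, 0.25, 0.30, 0.20])  # their standard errors
mom    = np.array([1.0, 1.3, 1.6, 2.0, 2.4])       # mean biomarker MoM per study

X = sm.add_constant(mom)
fit = sm.WLS(log_or, X, weights=1.0 / se**2).fit()

slope, slope_se = fit.params[1], fit.bse[1]
print(f"change in log OR per 1 MoM: {slope:.3f} (SE {slope_se:.3f})")
# slope > 0 -> higher biomarker levels associated with higher odds
```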

  13. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  14. Heterogeneity index evaluated by slope of linear regression on {sup 18}F-FDG PET/CT as a prognostic marker for predicting tumor recurrence in pancreatic ductal adenocarcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong-il [CHA University, Department of Nuclear Medicine, CHA Bundang Medical Center, Seongnam (Korea, Republic of); Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Kim, Yong Joong [Veterans Health Service Medical Center, Seoul (Korea, Republic of); Paeng, Jin Chul; Cheon, Gi Jeong; Lee, Dong Soo [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Chung, June-Key [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Kang, Keon Wook [Seoul National University Hospital, Department of Nuclear Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Biomedical Sciences, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Nuclear Medicine, Seoul (Korea, Republic of)

    2017-11-15

    {sup 18}F-Fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) has been investigated as a method to predict pancreatic cancer recurrence after pancreatic surgery. We evaluated the recently introduced heterogeneity indices of {sup 18}F-FDG PET/CT for predicting pancreatic cancer recurrence after surgery and compared them with current clinicopathologic and {sup 18}F-FDG PET/CT parameters. A total of 93 pancreatic ductal adenocarcinoma patients (M:F = 60:33, mean age = 64.2 ± 9.1 years) who underwent preoperative {sup 18}F-FDG PET/CT followed by pancreatic surgery were retrospectively enrolled. The standardized uptake values (SUVs) and tumor-to-background ratios (TBR) were measured on each {sup 18}F-FDG PET/CT as metabolic parameters. Metabolic tumor volume (MTV) and total lesion glycolysis (TLG) were examined as volumetric parameters. The coefficient of variance (heterogeneity index-1; SUVmean divided by the standard deviation) and the linear regression slopes (heterogeneity index-2) of the MTV, according to SUV thresholds of 2.0, 2.5 and 3.0, were evaluated as heterogeneity indices. Predictive values of clinicopathologic and {sup 18}F-FDG PET/CT parameters and heterogeneity indices were compared in terms of pancreatic cancer recurrence. Seventy patients (75.3%) showed recurrence after pancreatic cancer surgery (mean time to recurrence = 9.4 ± 8.4 months). Comparing the recurrence and no-recurrence patients, all of the {sup 18}F-FDG PET/CT parameters and heterogeneity indices demonstrated significant differences. In univariate Cox-regression analyses, MTV (P = 0.013), TLG (P = 0.007), and heterogeneity index-2 (P = 0.027) were significant. Among the clinicopathologic parameters, CA19-9 (P = 0.025) and venous invasion (P = 0.002) were selected as significant parameters. In multivariate Cox-regression analyses, MTV (P = 0.005), TLG (P = 0.004), and heterogeneity index-2 (P = 0.016) with venous invasion (P < 0.001, 0.001, and 0…
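
    Heterogeneity index-2 is described above as the slope of a linear regression of MTV against the SUV thresholds 2.0, 2.5 and 3.0. The sketch below computes that slope for one hypothetical tumour; the MTV values are invented, and the prognostic interpretation of the slope is established in the study, not here.

```python
# Sketch: heterogeneity index-2 = slope of the linear regression of
# metabolic tumor volume (MTV) on SUV threshold (2.0, 2.5, 3.0).
import numpy as np

suv_thresholds = np.array([2.0, 2.5, 3.0])
mtv = np.array([45.0, 31.0, 22.0])  # cm^3, hypothetical; shrinks as threshold rises

slope, intercept = np.polyfit(suv_thresholds, mtv, deg=1)
print(f"heterogeneity index-2 (slope): {slope:.1f} cm^3 per SUV unit")
# The slope is the index itself; how it relates to recurrence risk is a
# finding of the study, not something this toy calculation can show.
```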

  15. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding, the two numbers used to represent the two possible states of the variables, might differ. Common coding choices are (0, 1) and (−1, +1). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
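
    A small numeric sketch of the coding point made above: under the standard (uncentered) autologistic model, the full-conditional probability of the "high" state takes a different functional form under (0, 1) versus (−1, +1) coding, so the same parameters define different models. The parameter values and neighbour pattern below are arbitrary.

```python
# Sketch: the full-conditional of the standard autologistic model under two
# variable codings. The same (alpha, lambda) gives different probabilities,
# illustrating that coding changes produce distinct models.
import math

def p_high_binary(alpha, lam, neighbors):        # coding {0, 1}
    # P(z_i = 1 | rest) = sigmoid(alpha + lambda * sum of neighbors)
    eta = alpha + lam * sum(neighbors)
    return 1.0 / (1.0 + math.exp(-eta))

def p_high_pm(alpha, lam, neighbors):            # coding {-1, +1}
    # P(+1) = e^{a+ls} / (e^{a+ls} + e^{-(a+ls)}) = sigmoid(2 * (a + l*s))
    eta = alpha + lam * sum(neighbors)
    return 1.0 / (1.0 + math.exp(-2.0 * eta))

alpha, lam = 0.2, 0.5
print(p_high_binary(alpha, lam, [1, 1, 0, 0]))   # neighbors coded in {0, 1}
print(p_high_pm(alpha, lam, [1, 1, -1, -1]))     # same pattern coded in {-1, +1}
# The two probabilities differ: the models are not reparameterizations
# of one another, as the paper argues.
```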

  16. Systematic literature reviews and meta-analyses: part 6 of a series on evaluation of scientific publications.

    Science.gov (United States)

    Ressing, Meike; Blettner, Maria; Klug, Stefanie J

    2009-07-01

    Because of the rising number of scientific publications, it is important to have a means of jointly summarizing and assessing different studies on a single topic. Systematic literature reviews, meta-analyses of published data, and meta-analyses of individual data (pooled reanalyses) are now being published with increasing frequency. We here describe the essential features of these methods and discuss their strengths and weaknesses. This article is based on a selective literature search. The different types of review and meta-analysis are described, the methods used in each are outlined so that they can be evaluated, and a checklist is given for the assessment of reviews and meta-analyses of scientific articles. Systematic literature reviews provide an overview of the state of research on a given topic and enable an assessment of the quality of individual studies. They also allow the results of different studies to be evaluated together when these are inconsistent. Meta-analyses additionally allow calculation of pooled estimates of an effect. The different types of review and meta-analysis are discussed with examples from the literature on one particular topic. Systematic literature reviews and meta-analyses enable the research findings and treatment effects obtained in different individual studies to be summed up and evaluated.

  17. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    International Nuclear Information System (INIS)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C.; Schaarschmidt, Frank; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido

    2017-01-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. (orig.)

  18. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    Energy Technology Data Exchange (ETDEWEB)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C. [University Hospital of Cologne, Department of Radiology, Cologne (Germany); Schaarschmidt, Frank [Leibniz Universitaet Hannover, Institute of Biostatistics, Faculty of Natural Sciences, Hannover (Germany); Stehning, Christian [Philips Research, Hamburg (Germany); Schnackenburg, Bernhard [Philips, Healthcare Germany, Hamburg (Germany); Michels, Guido [University Hospital of Cologne, Department III of Internal Medicine, Heart Centre, Cologne (Germany)

    2017-12-15

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. (orig.)

  19. Evaluation Framework and Analyses for Thermal Energy Storage Integrated with Packaged Air Conditioning

    Energy Technology Data Exchange (ETDEWEB)

    Kung, F.; Deru, M.; Bonnema, E.

    2013-10-01

    Few third-party guidance documents or tools are available for evaluating thermal energy storage (TES) integrated with packaged air conditioning (AC), as this type of TES is relatively new compared to TES integrated with chillers or hot water systems. To address this gap, researchers at the National Renewable Energy Laboratory conducted a project to improve the ability of potential technology adopters to evaluate TES technologies. Major project outcomes included: development of an evaluation framework to describe key metrics, methodologies, and issues to consider when assessing the performance of TES systems integrated with packaged AC; application of multiple concepts from the evaluation framework to analyze performance data from four demonstration sites; and production of a new simulation capability that enables modeling of TES integrated with packaged AC in EnergyPlus. This report includes the evaluation framework and analysis results from the project.

  20. Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures

    International Nuclear Information System (INIS)

    Liberatore, Laura; Tocci, Cesare; Masiani, Renato

    2008-01-01

    In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of nonlinear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between masonry and r.c. elements, the local and global behavior of the structure, the failure of the connections and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.

  1. Evaluation Of Plutonium Oxide Destructive Chemical Analyses For Validity Of Original 3013 Container Binning

    International Nuclear Information System (INIS)

    Mcclard, J.; Kessinger, G.

    2010-01-01

    The surveillance program for 3013 containers is based, in part, on the separation of containers into various bins related to potential container failure mechanisms. The containers are assigned to bins based on moisture content and pre-storage estimates of content chemistry. While moisture content is measured during the packaging of each container, chemistry estimates are made by using a combination of process knowledge, packaging data and prompt gamma analyses to establish the moisture and chloride/fluoride content of the materials. Packages with high moisture and chloride/fluoride contents receive more detailed surveillances than packages with less chloride/fluoride and/or moisture. Moisture verification measurements and chemical analyses performed during the surveillance program provided an opportunity to validate the binning process. Validation results demonstrated that the binning effort was generally successful in placing the containers in the appropriate bin for surveillance and analysis.

  2. Using a binary logistic regression method and GIS for evaluating and mapping the groundwater spring potential in the Sultan Mountains (Aksehir, Turkey)

    Science.gov (United States)

    Ozdemir, Adnan

    2011-07-01

    The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of the springs (440 springs) were determined in the study area. In this study, 17 spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density map. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated based on the observed springs, by calculating the relative operating characteristic. The area value of the relative operating characteristic curve for the model was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. The interpretations of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach had not previously been used to delineate groundwater potential zones; in this study, it was used to locate potential zones for groundwater springs in the Sultan Mountains. The evolved model …
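
    A minimal sketch of the mapping workflow described above, assuming synthetic rasters in place of the study's 17 GIS layers: predictor layers are flattened into a feature matrix, a binary logistic regression is fitted on spring presence/absence, and the predicted probability is reshaped back into a potential map. Layer names and coefficients are invented.

```python
# Sketch: binary logistic regression for spring-potential mapping from
# raster predictor layers. Synthetic 20x20 rasters stand in for the 17
# spring-related factors used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
shape = (20, 20)
elevation = rng.normal(size=shape)
slope_deg = rng.normal(size=shape)
dist_fault = rng.normal(size=shape)

X = np.column_stack([a.ravel() for a in (elevation, slope_deg, dist_fault)])
# Synthetic spring presence/absence, loosely tied to the predictors.
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1] - 0.5 * X[:, 2])))
y = rng.binomial(1, p_true)

model = LogisticRegression().fit(X, y)
potential_map = model.predict_proba(X)[:, 1].reshape(shape)

# The study reported a ROC area of 0.82 for its model; here the same
# metric is simply computed on the synthetic data.
print("ROC area:", roc_auc_score(y, potential_map.ravel()))
```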

  3. Structural brain alterations of Down's syndrome in early childhood evaluation by DTI and volumetric analyses

    International Nuclear Information System (INIS)

    Gunbey, Hediye Pinar; Bilgici, Meltem Ceyhan; Aslan, Kerim; Incesu, Lutfi; Has, Arzu Ceylan; Ogur, Methiye Gonul; Alhan, Aslihan

    2017-01-01

    To provide an initial assessment of white matter (WM) integrity with diffusion tensor imaging (DTI) and the accompanying volumetric changes in WM and grey matter (GM) through volumetric analyses of young children with Down's syndrome (DS). Ten children with DS and eight healthy control subjects were included in the study. Tract-based spatial statistics (TBSS) were used in the DTI study for whole-brain voxelwise analysis of fractional anisotropy (FA) and mean diffusivity (MD) of WM. Volumetric analyses were performed with an automated segmentation method to obtain regional measurements of cortical volumes. Children with DS showed significantly reduced FA in association tracts of the fronto-temporo-occipital regions as well as the corpus callosum (CC) and anterior limb of the internal capsule (p < 0.05). Volumetric reductions included total cortical GM, cerebellar GM and WM volume, basal ganglia, thalamus, brainstem and CC in DS compared with controls (p < 0.05). These preliminary results suggest that DTI and volumetric analyses may reflect the earliest complementary changes of the neurodevelopmental delay in children with DS and can serve as surrogate biomarkers of the specific elements of WM and GM integrity for cognitive development. (orig.)

  4. The evaluation method of soil-spring for the analyses of foundation structures on layered bedsoil

    International Nuclear Information System (INIS)

    Satoh, S.; Sasaki, F.

    1985-01-01

    When performing finite element analyses of foundation structures, such as the mat slabs of reactor buildings and turbine buildings, it is very important to evaluate and model the soil-spring mechanism between the foundation and the soil correctly. This paper presents a method in which the soil-spring mechanism is evaluated from the theoretical solution, in which the semi-infinite elastic solid is assumed to consist of multi-layered soil systems. From the analytical example, it is concluded that the stress analysis of foundation structures on multi-layered soil systems cannot be performed adequately by conventional methods. (orig.)

  5. Evaluation of the computer code system RADHEAT-V4 by analysing benchmark problems on radiation shielding

    International Nuclear Information System (INIS)

    Sakamoto, Yukio; Naito, Yoshitaka

    1990-11-01

    A computer code system, RADHEAT-V4, has been developed for safety evaluation of radiation shielding at nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The radiations evaluated are neutrons and gamma-rays, and the benchmark problems cover penetration, streaming and skyshine. The computed results are more accurate than those obtained with the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. However, RADHEAT-V4 requires a large core memory and frequent I/O. (author)

  6. Evaluating risk factors for endemic human Salmonella Enteritidis infections with different phage types in Ontario, Canada using multinomial logistic regression and a case-case study approach

    Directory of Open Access Journals (Sweden)

    Varga Csaba

    2012-10-01

    Full Text Available Background: Identifying risk factors for Salmonella Enteritidis (SE) infections in Ontario will assist public health authorities to design effective control and prevention programs to reduce the burden of SE infections. Our research objective was to identify risk factors for acquiring SE infections with various phage types (PT) in Ontario, Canada. We hypothesized that certain PTs (e.g., PT8 and PT13a) have specific risk factors for infection. Methods: Our study included endemic SE cases with various PTs whose isolates were submitted to the Public Health Laboratory-Toronto from January 20th to August 12th, 2011. Cases were interviewed using a standardized questionnaire that included questions pertaining to demographics, travel history, clinical symptoms, contact with animals, and food exposures. A multinomial logistic regression method using the Generalized Linear Latent and Mixed Model procedure and a case-case study design were used to identify risk factors for acquiring SE infections with various PTs in Ontario, Canada. In the multinomial logistic regression model, the outcome variable had three categories representing human infections caused by SE PT8, PT13a, and all other SE PTs (i.e., non-PT8/non-PT13a) as a referent category to which the other two categories were compared. Results: In the multivariable model, SE PT8 was positively associated with contact with dogs (OR=2.17, 95% CI 1.01-4.68) and negatively associated with pepper consumption (OR=0.35, 95% CI 0.13-0.94), after adjusting for age categories and gender, and using exposure periods and health regions as random effects to account for clustering. Conclusions: Our study findings offer interesting hypotheses about the role of phage type-specific risk factors. Multinomial logistic regression analysis and the case-case study approach are novel methodologies to evaluate associations among SE infections with different PTs and various risk factors.
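
    A hedged sketch of the multinomial setup described above, with the non-PT8/non-PT13a category as the referent outcome. The exposure data are invented, and the GLLAMM-style random effects for exposure period and health region are omitted for brevity; a plain MNLogit stands in for the study's mixed model.

```python
# Sketch: multinomial logistic regression with non-PT8/non-PT13a as the
# referent outcome, as in the case-case design. Synthetic exposures only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "dog_contact": rng.integers(0, 2, n),
    "pepper":      rng.integers(0, 2, n),
})
# Outcome: 0 = other SE PTs (referent), 1 = PT8, 2 = PT13a.
logits = np.column_stack([
    np.zeros(n),
    0.8 * df["dog_contact"] - 1.0 * df["pepper"],  # PT8 vs referent
    0.1 * df["dog_contact"] + 0.2 * df["pepper"],  # PT13a vs referent
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["pt"] = [rng.choice(3, p=p) for p in probs]

X = sm.add_constant(df[["dog_contact", "pepper"]])
fit = sm.MNLogit(df["pt"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios for PT8 and PT13a vs the referent
```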

  7. An evaluation of high-resolution interferometer soundings and their use in mesoscale analyses

    Science.gov (United States)

    Bradshaw, John T.; Fuelberg, Henry E.

    1993-01-01

    An examination is made of temperature and dewpoint soundings obtained by an airborne prototype of the High-resolution Interferometer Sounder (HIS) on two flight days, to ascertain their error characteristics and their utility in mesoscale analyses. Crude estimates of Bowen ratio were obtained from HIS data using a mixing-line approach; the HIS retrievals indicated that areas of thunderstorm formation were the regions of greatest instability. HIS soundings were also able to detect some of the landscape variability and temperature and humidity fluctuations present.

  8. Evaluating the democratic accountability of governance networks: Analysing two Nordic Megaprojects

    DEFF Research Database (Denmark)

    Aarsæther, Nils; Bjørnå, Hilde; Fotel, Trine

    2009-01-01

    There is currently a need to analyse and measure the democratic accountability of governance networks. This kind of analysis and measurement calls for the development of an interactive conceptualisation of democratic accountability that makes it possible to measure the level of democratic accountability of concrete governance networks with reference to the extent to which they interact with (1) relevant politicians appointed through the institutions of representative democracy, (2) the relevant and affected stakeholders, and (3) the wider citizenry. A case study of two governance networks involved in two Nordic megaprojects illustrates how this measurement device can be brought into use and what insights can be gained from it.

  9. Defining and systematic analyses of aggregation indices to evaluate degree of calcium oxalate crystal aggregation

    Science.gov (United States)

    Chaiyarit, Sakdithep; Thongboonkerd, Visith

    2017-12-01

    Crystal aggregation is one of the most crucial steps in kidney stone pathogenesis. However, studies of crystal aggregation have rarely been done, and quantitative analysis of the degree of aggregation has been handicapped by the lack of a standard measurement. We thus performed an in vitro assay to generate aggregation of calcium oxalate monohydrate (COM) crystals at various concentrations (25-800 µg/ml) in saturated aggregation buffer. The crystal aggregates were analyzed by microscopic examination, UV-visible spectrophotometry, and GraphPad Prism6 software to define a total of 12 aggregation indices (including number of aggregates, aggregated mass index, optical density, aggregation coefficient, span, number of aggregates at plateau time-point, aggregated area index, aggregated diameter index, aggregated symmetry index, time constant, half-life, and rate constant). The data showed linear correlation between crystal concentration and almost all of these indices, except only for rate constant. Among these, number of aggregates provided the greatest regression coefficient (r=0.997; p<0.001), followed by further indices with r=0.993, −0.993 and 0.991 (p<0.001 for all). These five indices are thus recommended as the most appropriate indices for quantitative analysis of COM crystal aggregation in vitro.

  10. Analyses of fixed effects for genetic evaluation of dairy cattle using test day records in Indonesia

    Directory of Open Access Journals (Sweden)

    Asep Anang

    2010-06-01

    Full Text Available Season, rainfall, day of rain, temperature, humidity, year and farm are fixed effects which have been reported to influence milk yield. These factors are often linked together and contribute to the variation in milk production. This research addresses the fixed-effect factors, including the lactation curve, that should be considered in the genetic evaluation of milk yield based on test-day records of dairy cattle. The data were taken from four different farms: PT. Taurus Dairy Farm, BPPT Cikole, Bandang Dairy Farm, and BBPTU Baturraden. In total, 16,806 test-day records were evaluated, consisting of 9,302 from first and 7,504 from second lactations. The results indicated that the fixed effects were very specific and their influences had different patterns on each farm. Consequently, in a genetic evaluation, factors such as lactation, temperature, year, day of rain, and humidity need to be evaluated first. The Ali-Schaeffer curve was the most appropriate curve to use in the genetic evaluation of dairy cattle in Indonesia.
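
    The Ali-Schaeffer lactation curve is commonly written as y(t) = a + b(t/305) + c(t/305)² + d·ln(305/t) + e·ln²(305/t), which is linear in its parameters. Assuming that this is the form the authors used, the sketch below fits it by ordinary least squares to made-up test-day yields.

```python
# Sketch: least-squares fit of the Ali-Schaeffer lactation curve
#   y(t) = a + b*(t/305) + c*(t/305)^2 + d*ln(305/t) + e*ln(305/t)^2
# to hypothetical test-day milk yields (kg) at days-in-milk t.
import numpy as np

t = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285], float)
y = np.array([22, 27, 26, 25, 23, 21, 19, 17, 15, 13], float)

u = t / 305.0
v = np.log(305.0 / t)
X = np.column_stack([np.ones_like(t), u, u**2, v, v**2])  # design matrix

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("a, b, c, d, e =", np.round(coef, 3))
print("fitted yields:", np.round(X @ coef, 1))
```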

  11. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S.; Frutuoso e Melo, P.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    We discuss in this paper a computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. This code constructs and quantifies the event trees in the fashion just discussed, by receiving as input the construction and quantification dependencies defined in the dependency matrix. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. This calculation is made in a pointwise fashion. Extensions of this code are being developed in order to perform uncertainty analyses on the dominant sequences and also to compute risk importance measures of the safety systems involved. (orig.)

  12. Evaluation of European Schizophrenia GWAS Loci in Asian Populations via Comprehensive Meta-Analyses.

    Science.gov (United States)

    Xiao, Xiao; Luo, Xiong-Jian; Chang, Hong; Liu, Zichao; Li, Ming

    2017-08-01

    Schizophrenia is a severe and highly heritable neuropsychiatric disorder. Recent genetic analyses including genome-wide association studies (GWAS) have implicated multiple genome-wide significant variants for schizophrenia among European populations. However, many of these risk variants have not been broadly validated in populations of different ancestry, such as Asians. To validate whether these European GWAS significant loci are associated with schizophrenia in Asian populations, we conducted a systematic literature search and meta-analyses on 19 single nucleotide polymorphisms (SNPs) in Asian populations by combining all available case-control and family-based samples, including up to 30,000 individuals. We employed classical fixed (or random) effects inverse variance weighted methods to calculate summary odds ratios (ORs) and 95% confidence intervals (CIs). Among the 19 GWAS loci, we replicated the risk associations of nine markers (e.g., SNPs at VRK2, ITIH3/4, NDST3, NOTCH4) surpassing the significance level (two-tailed P …). The risk effects were consistent between the Asian replication samples and the initial European GWAS findings, and the successful replications of these GWAS loci in a different ethnic group provide stronger evidence for their clinical associations with schizophrenia. Further studies focusing on the molecular mechanisms of these GWAS-significant loci will become increasingly important for understanding the pathogenesis of schizophrenia.
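
    A minimal sketch of the fixed-effect inverse-variance weighting named above: per-study log odds ratios are pooled with weights 1/SE², giving a summary OR and 95% CI. The study values are invented for illustration, not taken from the meta-analyses.

```python
# Sketch: classical fixed-effect inverse-variance pooling of per-study
# odds ratios into a summary OR with a 95% CI.
import numpy as np

or_i = np.array([1.20, 1.35, 1.10, 1.28])    # per-study odds ratios (invented)
se_log = np.array([0.10, 0.12, 0.15, 0.08])  # SEs of the log ORs

w = 1.0 / se_log**2                          # inverse-variance weights
log_pooled = np.sum(w * np.log(or_i)) / np.sum(w)
se_pooled = 1.0 / np.sqrt(np.sum(w))

lo = np.exp(log_pooled - 1.96 * se_pooled)
hi = np.exp(log_pooled + 1.96 * se_pooled)
print(f"summary OR = {np.exp(log_pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```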

  13. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included: cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  14. Behavior of underclad cracks in reactor pressure vessels - evaluation of mechanical analyses with tests on cladded mock-ups

    International Nuclear Information System (INIS)

    Moinereau, D.; Rousselier, G.; Bethmont, M.

    1993-01-01

    The innocuity of underclad flaws in reactor pressure vessels must be demonstrated in the French safety analyses, particularly in the case of a severe transient at the end of the pressure vessel lifetime, because of the radiation embrittlement of the vessel material. Safety analyses are usually performed with elastic and elasto-plastic analyses taking into account the effect of the stainless steel cladding. EDF has started a program including experiments on large-size cladded specimens and their interpretations. The purpose of this program is to evaluate the different methods of fracture analysis used in safety studies. Several specimens made of ferritic steel A508 Cl 3 with stainless steel cladding, containing small artificial defects, are loaded in four-point bending. Experiments are performed at very low temperature to simulate radiation embrittlement and to obtain crack instability by cleavage fracture. Three tests have been performed on mock-ups containing a small underclad crack (depth about 5 mm) and a fourth test has been performed on one mock-up with a larger crack (depth about 13 mm). In each case, crack instability occurred by cleavage fracture in the base metal, without crack arrest, at a temperature of about -170 deg C. Each test is interpreted using linear elastic analysis and elastic-plastic analysis by two-dimensional finite element computations. The fractures are conservatively predicted: the stress intensity factors deduced from the computations (K{sub cp} or K{sub J}) are always greater than the base metal toughness. The comparison between the elastic analyses (including two plasticity corrections) and the elastic-plastic analyses shows that the elastic analyses are often conservative. The beneficial effect of the cladding in the analyses is also shown: the analyses are too conservative if the cladding effect is not taken into account. (authors). 9 figs., 6 tabs., 10 refs

  15. Basic data generation and pressure loss coefficient evaluation for HANARO core thermal-hydraulic analyses

    International Nuclear Information System (INIS)

    Chae, Hee Taek; Lee, Kye Hong

    1999-06-01

    MATRA-h, a HANARO subchannel analysis computer code, is used to evaluate the thermal margin of the HANARO fuel. Its capabilities include assessments of CHF, ONB margin, and fuel temperature. In this report, the basic input data and core design parameters required to perform subchannel analyses with the MATRA-h code are collected. These data include the subchannel geometric data, thermal-hydraulic correlations, empirical constants and material properties. The friction and form loss coefficients of the fuel assemblies were determined based on the results of the pressure drop test. In addition, different form loss coefficients at the end plates and spacers are evaluated for the various subchannels. Adequate correlations are applied to the evaluation of the form loss coefficients for the various subchannels, and these are corrected with measured values so as to give the same pressure drop in each flow channel. The basic input data and design parameters described in this report will be useful for evaluating the thermal margin of the HANARO fuel. (author). 11 refs., 13 tabs., 11 figs
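
    A hedged sketch of how a lumped form loss coefficient can be backed out of a measured channel pressure drop, using the generic decomposition ΔP = (f·L/D + K)·ρv²/2 with a Blasius friction factor. All numbers and the friction correlation are illustrative assumptions, not the HANARO geometry or the correlations actually used in MATRA-h.

```python
# Sketch: backing out a form loss coefficient K from a measured channel
# pressure drop, using dP = (f*L/D + K) * rho * v^2 / 2.
import math

rho = 998.0             # kg/m^3, coolant density (assumed)
v = 6.0                 # m/s, channel velocity (assumed)
L = 0.7                 # m, channel length (assumed)
D_h = 0.008             # m, hydraulic diameter (assumed)
mu = 1.0e-3             # Pa*s, viscosity (assumed)
dP_measured = 42_000.0  # Pa, from a pressure drop test (assumed value)

Re = rho * v * D_h / mu
f = 0.316 * Re ** -0.25          # Blasius smooth-tube friction factor
dyn = 0.5 * rho * v ** 2         # dynamic pressure

# Lumped form loss (end plates, spacers) left after removing friction.
K = dP_measured / dyn - f * L / D_h
print(f"Re = {Re:.3e}, f = {f:.4f}, K = {K:.2f}")
```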

  16. An Evaluation of Emulsions in Wear-Metal-in-Oil Analyses

    African Journals Online (AJOL)

    The oil samples were treated with acid and emulsified in water (1% w/w) using tetralin as a solvent and Triton X-100 as a surfactant. The performance characteristics (detection limits, accuracy, precision and spike recovery) of the emulsion methodology were evaluated. The calibration for the emulsion method compared ...

  17. Evaluation of ECMWF's soil moisture analyses using observations on the Tibetan Plateau

    NARCIS (Netherlands)

    Su, Zhongbo; de Rosnay, P.; Wen, J.; Wang, Lichun; Zeng, Yijian

    2013-01-01

    An analysis is carried out for two hydrologically contrasting but thermodynamically similar areas on the Tibetan Plateau, to evaluate soil moisture analyses based on the European Centre for Medium-Range Weather Forecasts (ECMWF) previous optimum interpolation scheme and the current point-wise …

  18. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Frutuoso e Melo, P.F.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    A computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments is discussed. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them into core-degradation classes as specified by the user. (Author) [pt

  19. Integration of logistic regression and multicriteria land evaluation to simulation establishment of sustainable paddy field zone in Indramayu Regency, West Java Province, Indonesia

    Science.gov (United States)

    Nahib, Irmadi; Suryanta, Jaka; Niedyawati; Kardono, Priyadi; Turmudi; Lestari, Sri; Windiastuti, Rizka

    2018-05-01

    The Ministry of Agriculture has targeted production of 1.718 million tons of dry grain harvest during the period 2016-2021 to achieve food self-sufficiency, through optimization of special commodities including paddy, soybean and corn. This research was conducted to develop a sustainable paddy field zone delineation model using logistic regression and multicriteria land evaluation in Indramayu Regency. A model was built on the characteristics of local land-function conversion by considering the concept of sustainable development. A spatial data overlay was constructed using the available data, and the model was then built upon the occurrence of paddy fields between 1998 and 2015. The equation obtained for the model of paddy field changes was: logit (paddy field conversion) = -2.3048 + 0.0032*X1 - 0.0027*X2 + 0.0081*X3 + 0.0025*X4 + 0.0026*X5 + 0.0128*X6 - 0.0093*X7 + 0.0032*X8 + 0.0071*X9 - 0.0046*X10, where X1 to X10 are the variables that determine the occurrence of changes in paddy fields, with a relative operating characteristic (ROC) value of 0.8262. The weakest variable in influencing the change of paddy field function was X7 (paddy field price), while the most influential factor was X1 (distance from river). The result of the logistic regression was used as a weight for the multicriteria land evaluation, which recommended three scenarios of paddy field protection policy: standard, protective, and permissive. As a result of this modelling, the priority paddy fields for the protective scenario were obtained, as well as the buffer zones for the surrounding paddy fields.
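
    A short sketch of evaluating the fitted logit equation quoted above for one map cell and converting the result to a conversion probability. The coefficients are those reported in the abstract; the X values (and their units) are hypothetical, since the abstract does not list them.

```python
# Sketch: evaluating the published logit model for one map cell and
# converting to a paddy-field conversion probability.
import math

coef = [0.0032, -0.0027, 0.0081, 0.0025, 0.0026,
        0.0128, -0.0093, 0.0032, 0.0071, -0.0046]  # X1..X10, from the abstract
intercept = -2.3048

x = [250, 120, 40, 300, 80, 15, 55, 60, 35, 90]    # hypothetical X1..X10 values

logit = intercept + sum(c * xi for c, xi in zip(coef, x))
p_conversion = 1.0 / (1.0 + math.exp(-logit))
print(f"logit = {logit:.3f}, P(paddy field conversion) = {p_conversion:.3f}")
```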

  20. Performance of Aspergillus niger Cultivation in Geometrically Dissimilar Bioreactors Evaluated on the Basis of Morphological Analyses

    Directory of Open Access Journals (Sweden)

    M. A. Priede

    2002-01-01

    Full Text Available The growth of Aspergillus niger, citric acid production and changes in mycelial morphology were compared under different mixing conditions in bioreactors with two types of stirrers: Rushton turbine stirrers (RTS1 or RTS2) and axial counterflow stirrers (ACS1 or ACS2). The characteristics of growth, productivity and morphology varied with the mixing system and the applied agitation regime. In the first series of experiments, the flow characteristics of Aspergillus niger broth under different mixing conditions were analysed in a model bioreactor using RTS1 and ACS1. The kinetic energy E of flow fluctuations was measured in gassed and ungassed water and fermentation broth systems using a stirring intensity measuring device (SIMD-f1). The difference in energy E values at different points was more pronounced in the bioreactor with RTS1 than with ACS1. Highly viscous A. niger broths gave higher energy E values in comparison with water. It was observed that the Aspergillus niger growth rate and citric acid synthesis rate decreased at very high energy E values, this behaviour evidently being connected with the influence of irreversible shear stress on the mycelial morphology. In the second series of experiments, a higher citric acid yield was achieved with ACS2 at a power input approximately twice as low as with RTS2. Morphological characterization of A. niger pellets was carried out by the image analysis method. ACS2 promoted the development of a morphology in which pellets and cores had larger area, perimeter and diameter, and the annular region of the pellets was looser and more »hairy« than with RTS2. The pellets from the fermentation with RTS2 were smaller and denser, with shorter hyphae in the annular region, and the broth was characterized by a higher percentage of diffuse mycelia. Power input studies of RTS2 and ACS2 were made at different agitator rotation speeds and gas flow rates using water …

  1. The importance of immunohistochemical analyses in evaluating the phenotype of Kv channel knockout mice.

    Science.gov (United States)

    Menegola, Milena; Clark, Eliana; Trimmer, James S

    2012-06-01

    To gain insights into the phenotype of voltage-gated potassium (Kv)1.1 and Kv4.2 knockout mice, we used immunohistochemistry to analyze the expression of component principal or α subunits and auxiliary subunits of neuronal Kv channels in knockout mouse brains. Genetic ablation of the Kv1.1 α subunit did not result in compensatory changes in the expression levels or subcellular distribution of related ion channel subunits in hippocampal medial perforant path and mossy fiber nerve terminals, where high levels of Kv1.1 are normally expressed. Genetic ablation of the Kv4.2 α subunit did not result in altered neuronal cytoarchitecture of the hippocampus. Although Kv4.2 knockout mice did not exhibit compensatory changes in the expression levels or subcellular distribution of the related Kv4.3 α subunit, we found dramatic decreases in the cellular and subcellular expression of specific Kv channel interacting proteins (KChIPs) that reflected their degree of association and colocalization with Kv4.2 in wild-type mouse and rat brains. These studies highlight the insights that can be gained by performing detailed immunohistochemical analyses of Kv channel knockout mouse brains. Wiley Periodicals, Inc. © 2012 International League Against Epilepsy.

  2. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January-December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM into the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of relative humidity for the three models were larger under more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  3. Succession of microbial communities during a biostimulation process as evaluated by DGGE and clone library analyses

    International Nuclear Information System (INIS)

    Ogino, A.; Nakahara, T.

    2001-01-01

    Aims: The objective of this study was to investigate the changes in the indigenous bacterial community structure for assessing the impact of biostimulation on spilled oil. Methods and Results: Changes in the bacterial community structure were monitored by denaturing gradient gel electrophoresis (DGGE) and clone library methods based on 16S rRNA gene (rDNA) sequences. The results of DGGE, coupled with the use of the Shannon index and principal component analysis (PCA) and clone library analyses, were consistent. In the treated (fertilized) area, one operational taxonomic unit (OTU) became dominant during the fertilization period, and it was most closely related to Pseudomonas putida. Conclusions: The bacterial community structure in the treated area was markedly different from that in the control (non-fertilized) area during the fertilization period, but in the two areas it became similar at 14 weeks after the end of fertilization. Significance and Impact of the Study: The results suggest that the bacterial community structure was disrupted by the biostimulation treatment, but that it recovered immediately after the end of fertilization. (Author)

  4. Shielding analysis method applied to nuclear ship 'MUTSU' and its evaluation based on experimental analyses

    International Nuclear Information System (INIS)

    Yamaji, Akio; Miyakoshi, Jun-ichi; Iwao, Yoshiaki; Tsubosaka, Akira; Saito, Tetsuo; Fujii, Takayoshi; Okumura, Yoshihiro; Suzuoki, Zenro; Kawakita, Takashi.

    1984-01-01

    Procedures of shielding analysis are described which were used for the shielding modification design of the Nuclear Ship ''MUTSU''. The calculations of the radiation distribution on board were made using the Sn codes ANISN and TWOTRAN, a point kernel code QAD and a Monte Carlo code MORSE. The accuracies of these calculations were investigated through the analysis of various shielding experiments: the shield tank experiment of the Nuclear Ship ''Otto Hahn'', the shielding mock-up experiment for ''MUTSU'' performed in JRR-4, the shielding benchmark experiment using the {sup 16}N radiation facility of AERE Harwell and the shielding effect experiment of the ship structure performed in the training ship ''Shintoku-Maru''. The values calculated by ANISN agree with the data measured at ''Otto Hahn'' within a factor of 2 for fast neutrons and within a factor of 3 for epithermal and thermal neutrons. The γ-ray dose rates calculated by QAD agree with the measured values within 30% for the analysis of the experiment in JRR-4. The design values for ''MUTSU'' were determined in consequence of these experimental analyses. (author)

  5. Evaluating cultural competence among Japanese clinical nurses: Analyses of a translated scale.

    Science.gov (United States)

    Noji, Ariko; Mochizuki, Yuki; Nosaki, Akiko; Glaser, Dale; Gonzales, Lucia; Mizobe, Akiko; Kanda, Katsuya

    2017-06-01

    This paper describes the factor analysis testing and construct validation of the Japanese version of the Caffrey Cultural Competence Health Services scale (J-CCCHS). The inventory, composed of 28 items, was translated using language and subject-matter experts. Psychometric testing (exploratory factor, alpha reliability, and confirmatory factor analyses) was undertaken with nurses (N = 7494, 92% female, mean age 32.6 years) from 19 hospitals across Japan. Principal components extraction with varimax rotation yielded a 5-factor solution (62.31% variance explained) whose factors were labeled: knowledge, comfort-proximal, comfort-distal, awareness, and awareness of national policy. Cronbach α for the subscales ranged from 0.756 to 0.892. In confirmatory factor analysis using the robust maximum likelihood estimator, the chi-square test gave χ2(340) = 14604.44, P …, and there were differences in J-CCCHS subscale scores between predefined groups. Taking into consideration that this is the first foray into construct validation for this instrument, that fit improved when a subsequent data-driven model was tested, and that the scale can distinguish between known groups expected to differ in cultural competence, the instrument can be of value to clinicians and educators alike. © 2017 John Wiley & Sons Australia, Ltd.

  6. Development and preliminary analyses of material balance evaluation model in nuclear fuel cycle

    International Nuclear Information System (INIS)

    Matsumura, Tetsuo

    1994-01-01

    A material balance evaluation model for the nuclear fuel cycle has been developed using the ORIGEN-2 code as its basic engine. The model's features are: it can treat more than 1000 nuclides, including minor actinides and fission products, and it offers flexible modeling and graph output on an engineering workstation. A preliminary calculation was made of the effect of LWR fuel high burnup (reloading fuel average burnup of 60 GWd/t) on the nuclear fuel cycle. The preliminary calculation shows that LWR fuel high burnup has a large effect on the Japanese Pu balance problem. (author)

  7. Evaluation of the General Atomic codes TAP and RECA for HTGR accident analyses

    International Nuclear Information System (INIS)

    Ball, S.J.; Cleveland, J.C.; Sanders, J.P.

    1978-01-01

    The General Atomic codes TAP (Transient Analysis Program) and RECA (Reactor Emergency Cooling Analysis) are evaluated with respect to their capability for predicting the dynamic behavior of high-temperature gas-cooled reactors (HTGRs) for postulated accident conditions. Several apparent modeling problems are noted, and the susceptibility of the codes to misuse and input errors is discussed. A critique of code verification plans is also included. The several cases where direct comparisons could be made between TAP/RECA calculations and those based on other independently developed codes indicated generally good agreement, thus contributing to the credibility of the codes

  8. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  9. Candidate Genes for Testicular Cancer Evaluated by In Situ Protein Expression Analyses on Tissue Microarrays

    Directory of Open Access Journals (Sweden)

    Rolf I. Skotheim

    2003-09-01

    Full Text Available With the use of high-throughput molecular technologies, the number of genes and proteins potentially relevant to testicular germ cell tumor (TGCT) and other diseases will increase rapidly. In a recent transcriptional profiling, we demonstrated the overexpression of GRB7 and JUP in TGCTs and confirmed the reported overexpression of CCND2. We also have recent evidence for frequent genetic alterations of FHIT and epigenetic alterations of MGMT. To evaluate whether the expression of these genes is related to any clinicopathological variables, we constructed a tissue microarray with 510 testicular tissue cores from 279 patients diagnosed with TGCT, covering various histological subgroups and clinical stages. By immunohistochemistry, we found that JUP, GRB7 and CCND2 proteins were rarely present in normal testis but frequently expressed at high levels in TGCT. Additionally, all premalignant intratubular germ cell neoplasias were JUP-immunopositive. MGMT and FHIT were expressed by normal testicular tissues, but at significantly lower frequencies in TGCT. Except for CCND2, the expression of all markers was significantly associated with various TGCT subtypes. In summary, we have developed a high-throughput tool for the evaluation of TGCT markers and utilized it to validate five candidate genes whose protein expression is indeed deregulated in TGCT.

  10. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable, so that reliable analyses of 'microshards' (defined here as very small glass shards) are now possible … glass compositions can be compared using multivariate parametric tests (e.g., the Hotelling two-sample T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using the Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major-element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases tend to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high…
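
    A sketch of the Mahalanobis distance plus Hotelling two-sample T² procedure named above, applied to two invented sets of glass-shard major-element analyses. The oxide means, covariances, and sample sizes are assumptions; only the test statistics follow the standard formulas.

```python
# Sketch: comparing two sets of glass-shard major-element analyses with the
# Hotelling two-sample T^2 test via the Mahalanobis distance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
p = 3                                   # e.g., SiO2, Al2O3, FeO (wt%)
a = rng.multivariate_normal([74.0, 12.5, 1.8], np.diag([0.4, 0.1, 0.05]), 25)
b = rng.multivariate_normal([73.6, 12.8, 1.9], np.diag([0.4, 0.1, 0.05]), 20)

n1, n2 = len(a), len(b)
diff = a.mean(axis=0) - b.mean(axis=0)
S_pooled = ((n1 - 1) * np.cov(a.T) + (n2 - 1) * np.cov(b.T)) / (n1 + n2 - 2)

d2 = diff @ np.linalg.solve(S_pooled, diff)      # squared Mahalanobis distance
T2 = (n1 * n2) / (n1 + n2) * d2                  # Hotelling two-sample T^2
F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2 # exact F transformation
p_value = stats.f.sf(F, p, n1 + n2 - p - 1)

print(f"Mahalanobis D^2 = {d2:.3f}, T^2 = {T2:.2f}, p = {p_value:.4f}")
# Small p -> the glass compositions differ; the tephras are unlikely correlatives.
```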

  11. Process evaluation for complex interventions in health services research: analysing context, text trajectories and disruptions.

    Science.gov (United States)

    Murdoch, Jamie

    2016-08-19

    Process evaluations assess the implementation and sustainability of complex healthcare interventions within clinical trials, with well-established theoretical models available for evaluating intervention delivery within specific contexts. However, there is a need to translate conceptualisations of context into analytical tools which enable the dynamic relationship between context and intervention implementation to be captured and understood. In this paper I propose an alternative approach to the design, implementation and analysis of process evaluations for complex health interventions through a consideration of trial protocols as textual documents, distributed and enacted at multiple contextual levels. As an example, I conduct a retrospective analysis of a sample of field notes and transcripts collected during the ESTEEM study - a cluster randomised controlled trial of primary care telephone triage. I draw on theoretical perspectives associated with Linguistic Ethnography to examine the delivery of ESTEEM through staff orientation to different texts. In doing so I consider what can be learned from examining the flow and enactment of protocols for notions of implementation and theoretical fidelity (i.e. whether the intervention was delivered as intended and whether it was congruent with the intervention theory). Implementation of the triage intervention required staff to integrate essential elements of the protocol within everyday practice, seen through the adoption and use of different texts that were distributed across staff and within specific events. Staff were observed deploying texts in diverse ways (e.g. reinterpreting scripts, deviating from standard operating procedures, struggling to complete the decision support software), providing numerous instances of disruption to maintaining intervention fidelity. Such observations exposed tensions between different contextual features in which the trial was implemented, offering theoretical explanations for the main trial findings. The value of […]

  12. Extra-regulatory impact tests and analyses of the structural evaluation test unit

    International Nuclear Information System (INIS)

    Ludwigsen, J.S.; Ammerman, D.J.

    1995-01-01

    The structural evaluation test unit is roughly equivalent to a 1/3-scale model of a high-level waste rail cask. The test unit was designed to just meet the requirements of NRC Regulatory Guide 7.6 when subjected to a 9 m (30 ft) free drop, resulting in an impact velocity of 13.4 m/s (30 mph), onto an unyielding target in the end-on orientation. The test unit was then subjected to impacts at higher velocities to determine the amount of built-in conservatism in this design approach. Test impacts at 13.4, 20.1 and 26.8 m/s (30, 45, and 60 mph) were performed. This paper describes the design, the testing, and the comparison of measured strains and deformations to the equivalent analytical predictions.

  13. Machinability evaluation of titanium alloys (Part 2)--Analyses of cutting force and spindle motor current.

    Science.gov (United States)

    Kikuchi, Masafumi; Okuno, Osamu

    2004-12-01

    To establish a method of determining the machinability of dental materials for CAD/CAM systems, the machinability of titanium, two titanium alloys (Ti-6Al-4V and Ti-6Al-7Nb), and free-cutting brass was evaluated through cutting force and spindle motor current. The metals were slotted using a milling machine and square end mills under four cutting conditions. Both the static and dynamic components of the cutting force represented the machinability of the tested metals well: the machinability of Ti-6Al-4V and Ti-6Al-7Nb was worse than that of titanium, while that of free-cutting brass was better. On the other hand, the results indicated that the spindle motor current was not sensitive enough to detect the differences among titanium and its alloys.

  14. Extra virgin olive oil bitterness evaluation by sensory and chemical analyses.

    Science.gov (United States)

    Favati, Fabio; Condelli, Nicola; Galgano, Fernanda; Caruso, Marisa Carmela

    2013-08-15

    An experimental investigation was performed on blended extra virgin olive oils (EVOOs) from different cultivars and EVOOs from different olive monovarieties (Coratina, Leccino, Maiatica, Ogliarola) with the aim of evaluating the possibility of estimating the perceived bitterness intensity by using chemical indices, such as the total phenol content and the compounds responsible for oil bitterness measured spectrophotometrically at 225 nm (K225 value), as bitterness predictors in different EVOOs. A bitterness prediction model, based on the relationship between the perceived bitterness intensity of the selected stimuli and the chosen chemical parameters, was built and validated. The results indicated that oil bitterness intensity can be satisfactorily predicted using the K225 values of oil samples. Copyright © 2013 Elsevier Ltd. All rights reserved.
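
    At its simplest, the kind of predictive model described here is an ordinary least-squares fit of panel bitterness against K225. A minimal sketch with invented calibration values (the numbers below are illustrative, not the paper's data):

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: K225 absorbance vs. panel bitterness score
k225 = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.47, 0.55])
bitterness = np.array([1.5, 2.2, 3.0, 3.6, 4.5, 5.1, 5.8])

# Fit bitterness = a + b * K225 by ordinary least squares
res = stats.linregress(k225, bitterness)
print(f"intercept={res.intercept:.2f}, slope={res.slope:.2f}, r={res.rvalue:.3f}")

# Predict the bitterness of a new oil from its K225 value
k_new = 0.35
print("predicted bitterness:", res.intercept + res.slope * k_new)
```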

  15. EVALUATION OF A BUFFERED SOLID PHASE DISPERSION PROCEDURE ADAPTED FOR PESTICIDE ANALYSES IN THE SOIL MATRIX

    Directory of Open Access Journals (Sweden)

    Ana María Domínguez

    2015-08-01

    An evaluation of the pesticides extracted from the soil matrix was conducted using a citrate-buffered solid phase dispersion sample preparation method (QuEChERS). The identification and quantitation of pesticide compounds were performed using gas chromatography-mass spectrometry. Because a matrix effect occurred for 87% of the analyzed pesticides, quantification was performed using matrix-matched calibration. The method's quantification limits were between 0.01 and 0.5 mg kg-1. Repeatability and intermediate precision, expressed as a relative standard deviation percentage, were less than 20%. The recoveries in general ranged between 62% and 99%, with a relative standard deviation < 20%. All the responses were linear, with a correlation coefficient (r) ≥ 0.99.

  16. Characterization of solar thermal concepts for electricity generation: Volume 1, Analyses and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Williams, T.A.; Dirks, J.A.; Brown, D.R.; Drost, M.K.; Antoniac, Z.A.; Ross, B.A.

    1987-03-01

    This study is aimed at providing a relative comparison of the thermodynamic and economic performance, in electric applications, of several concepts that have been studied and developed in the DOE solar thermal program. Since the completion of earlier systems comparison studies in the late 1970s, there have been a number of years of progress in solar thermal technology. This progress has included development of new solar components, improvements in component and system design detail, construction of working systems, and collection of operating data on the systems. This study provides an update of the expected performance and cost of the major components and the overall system energy cost for the concepts evaluated. The projections in this study are for the late-1990s time frame, based on the capabilities that could be expected to be achieved with further technology development.

  17. Exhaust gas purification with sodium bicarbonate. Analysis and evaluation; Abgasreinigung mit Natriumhydrogencarbonat. Analyse und Bewertung

    Energy Technology Data Exchange (ETDEWEB)

    Quicker, Peter; Rotheut, Martin; Schulten, Marc [RWTH Aachen Univ. (Germany). Lehr- und Forschungsgebiet Technologie der Energierohstoffe (TEER); Athmann, Uwe [dezentec ingenieurgesellschaft mbH, Essen (Germany)

    2013-03-01

    Dry exhaust gas cleaning uses sodium bicarbonate to absorb acid components of exhaust gases such as sulphur dioxide and hydrochloric acid. Recently, sodium- and calcium-based adsorbents have been compared with respect to their economic and ecological merits. None of these investigations, however, explicitly considered practical experience from system operation, such as differences in handling, availability, and personnel and maintenance expenditure. Against this background, the authors of the contribution under consideration report on exhaust gas cleaning systems using sodium carbonate as well as lime adsorbents. The operators of these exhaust gas cleaning systems were questioned about their experiences, and all relevant operational data (additive consumption, energy consumption, emissions, downtime, maintenance effort) were recorded and evaluated at a very detailed level.

  18. Evaluation of the prediction precision capability of partial least squares regression approach for analysis of high alloy steel by laser induced breakdown spectroscopy

    Science.gov (United States)

    Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.

    2015-06-01

    Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present in hundreds of ppm were determined with relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
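
    A hedged sketch of the approach described above: a norm pre-treatment of the spectra followed by PLSR, with the number of components chosen by cross-validated prediction error. The spectra and concentrations below are synthetic stand-ins for real LIBS data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import normalize

# Hypothetical LIBS data: rows are accumulated spectra, columns are
# intensities at each wavelength; y is the known Cr concentration (wt%).
rng = np.random.default_rng(1)
X = rng.random((40, 500))
y = 15 + X[:, 100] * 5 + rng.normal(0, 0.1, 40)   # synthetic relationship

# "NORM"-style pre-treatment: scale each spectrum to unit norm
X_norm = normalize(X)

# Choose the number of PLS components by cross-validated prediction error
for n in range(1, 11):
    pls = PLSRegression(n_components=n)
    score = cross_val_score(pls, X_norm, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{n} components: CV RMSE = {-score:.3f}")
```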

  19. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined. Median percentage prediction error and median absolute prediction error for the prediction of full MPA AUC0-12 were […] multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
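
    The reference AUC0-12 used here is obtained with the linear trapezoidal rule, and bias/imprecision are summarized as median (absolute) percentage prediction errors. A small illustration, with invented concentrations and LSS predictions:

```python
import numpy as np

def auc_trapezoidal(times, concs):
    """AUC by the linear trapezoidal rule (e.g., MPA AUC0-12 in mg*h/L)."""
    return np.sum((concs[1:] + concs[:-1]) / 2 * np.diff(times))

# Hypothetical full concentration-time profile on the second occasion
t = np.array([0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, 12.0])
c = np.array([1.8, 6.5, 12.0, 9.4, 7.1, 5.5, 4.6, 3.5, 2.6, 2.2, 1.7, 1.3, 1.0])
auc_full = auc_trapezoidal(t, c)

# Predicted AUCs from a few (hypothetical) limited sampling strategies
auc_pred = np.array([52.1, 47.3, 58.9])
pe = 100 * (auc_pred - auc_full) / auc_full      # percentage prediction errors
print(f"full AUC0-12 = {auc_full:.1f}")
print(f"bias (median PE) = {np.median(pe):.1f}%")
print(f"imprecision (median |PE|) = {np.median(np.abs(pe)):.1f}%")
```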

  20. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    International Nuclear Information System (INIS)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-01-01

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered) approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. in detail, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single […]

  2. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
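
    Waller's closed-form construction is not reproduced here, but the idea of fungible weights can be illustrated with a naive numerical search: sample perturbations of the maximum-likelihood weights and keep those whose log-likelihood is nearly as good. A sketch with synthetic data (this brute-force filter illustrates the concept only; it is not the authors' method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 2 predictors, binary outcome
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = (rng.random(500) < p_true).astype(int)

def loglik(b, X, y):
    """Bernoulli log-likelihood of a logistic model with weights b."""
    eta = X @ b
    return np.sum(y * eta - np.log1p(np.exp(eta)))

# Maximum-likelihood weights (intercept omitted; large C ~ unpenalized fit)
b_mle = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y).coef_.ravel()
ll_mle = loglik(b_mle, X, y)

# Keep random perturbations whose log-likelihood is within a small tolerance
fungible = []
for _ in range(20000):
    b = b_mle + rng.normal(scale=0.3, size=2)
    if ll_mle - loglik(b, X, y) < 1.0:     # tolerance in log-likelihood units
        fungible.append(b)
fungible = np.array(fungible)
print("MLE:", np.round(b_mle, 2))
print(f"fungible w1 ranges from {fungible[:, 0].min():.2f} "
      f"to {fungible[:, 0].max():.2f}")
```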

  3. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model, over a domain of four kilometers east-west and six kilometers north-south. Moreover, to evaluate how the uncertainties in the hydrogeological structure models and in the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated with newly acquired information and knowledge for several hydrogeological modeling and groundwater flow analysis techniques. The findings are as follows. When the models were updated with parameters and structures reflecting the previous year's conditions, no large differences arose between the modeling methods. Model calibration was performed by matching numerical simulations to the observed pressure responses caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, the models adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, the phenomena sometimes cannot be explained by adjusting parameters alone; in such cases, further investigation may be required to clarify the hydrogeological structure in more detail. Comparison of the research from its beginning to this year leads to the following conclusions about investigation: (1) transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure; (2) effective porosity for calculating pore water velocity of […]
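
    As a much-simplified analogue of the calibration step described above, the sketch below fits two hydrogeological parameters (transmissivity and storativity of a Theis-type pressure response) to a synthetic record by minimizing the residual sum of squares; the site models in the study are of course far more elaborate:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import exp1

def theis_drawdown(t, T, S, Q=1e-3, r=30.0):
    """Theis drawdown at distance r (m) and time t (s) for pumping rate Q."""
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)

# Hypothetical observed pressure (drawdown) response
t_obs = np.logspace(2, 5, 20)                       # seconds
s_obs = theis_drawdown(t_obs, T=2e-4, S=1e-5) \
        + np.random.default_rng(3).normal(0, 2e-3, 20)

# Calibrate log10(T) and log10(S) by least squares on the residuals
def residuals(p):
    return theis_drawdown(t_obs, 10**p[0], 10**p[1]) - s_obs

fit = least_squares(residuals, x0=[-3.5, -4.0])
print("calibrated T, S:", 10**fit.x[0], 10**fit.x[1])
```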

  4. Vanadium NMR Chemical Shifts of (Imido)vanadium(V) Dichloride Complexes with Imidazolin-2-iminato and Imidazolidin-2-iminato Ligands: Cooperation with Quantum-Chemical Calculations and Multiple Linear Regression Analyses.

    Science.gov (United States)

    Yi, Jun; Yang, Wenhong; Sun, Wen-Hua; Nomura, Kotohiro; Hada, Masahiko

    2017-11-30

    The NMR chemical shifts of vanadium (51V) in (imido)vanadium(V) dichloride complexes with imidazolin-2-iminato and imidazolidin-2-iminato ligands were calculated by the density functional theory (DFT) method with GIAO. The calculated 51V NMR chemical shifts were analyzed by the multiple linear regression analysis (MLRA) method with a series of calculated molecular properties. Some of the calculated NMR chemical shifts were incorrect when the optimized molecular geometries of the X-ray structures were used. After the global minimum geometries of all of the molecules were determined, the trend of the observed chemical shifts was well reproduced by the present DFT method. The MLRA method was performed to investigate the correlation between the 51V NMR chemical shift and the natural charge, band energy gap, and Wiberg bond index of the V═N bond. The 51V NMR chemical shifts obtained with the present MLR model were well reproduced, with a correlation coefficient of 0.97.
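
    The MLRA step can be sketched as an ordinary multiple linear regression of the chemical shift on the three descriptors named above; the descriptor values and shifts below are invented placeholders, not the paper's computed values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical descriptors for a series of complexes: natural charge on V,
# band energy gap (eV) and Wiberg bond index of the V=N bond
X = np.array([
    [1.12, 3.9, 1.85],
    [1.08, 3.7, 1.90],
    [1.15, 4.1, 1.80],
    [1.05, 3.5, 1.95],
    [1.10, 3.8, 1.88],
    [1.18, 4.3, 1.78],
])
shift = np.array([-640, -610, -665, -585, -625, -690])  # ppm, hypothetical

model = LinearRegression().fit(X, shift)
pred = model.predict(X)
r = np.corrcoef(shift, pred)[0, 1]   # correlation between observed and fitted
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print(f"correlation coefficient r = {r:.3f}")
```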

  5. Analyses of component degradation to evaluate maintenance effectiveness and aging effects

    International Nuclear Information System (INIS)

    Samanta, P.K.; Hsu, F.; Subudhi, M.; Vesely, W.E.

    1991-01-01

    This paper describes degradation modeling, an approach for analyzing the degradation and failure of components in order to understand their aging process. As used in our study, degradation modeling is the analysis of information on component degradations to develop models of the degradation process and its implications. This modeling focuses on the analysis of the times of component degradations, to model how the rate of degradation changes with the age of the component. With this methodology we also determine the effectiveness of maintenance as applicable to aging evaluations. The specific applications performed yield quantitative models of component degradation rates and failure rates from plant-specific data. The statistical techniques allow aging trends to be identified in the degradation data and in the failure data. Initial estimates of the effectiveness of maintenance in limiting degradations from becoming failures are developed. These results are important first steps in degradation modeling, and show that degradation can be modeled to identify aging trends. 2 refs., 8 figs., 1 tab
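
    One simple way to model how a degradation rate changes with component age, in the spirit of the approach described, is a Poisson regression of degradation counts on age with an exposure term; the counts below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical counts of recorded degradations per age band for a component
age_mid = np.array([1, 3, 5, 7, 9, 11])          # years (band midpoints)
counts = np.array([2, 3, 4, 7, 9, 14])           # degradations observed
exposure = np.full(6, 100.0)                     # component-years per band

# Poisson regression of degradation rate on age: log(rate) = b0 + b1 * age
X = sm.add_constant(age_mid)
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             exposure=exposure).fit()
print(fit.params)        # b1 > 0 indicates an aging (increasing-rate) trend
print(fit.pvalues)
```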

  6. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    Science.gov (United States)

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, such as genotypes, whose frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely they may be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to lack of power and are non-ignorable, they are considered nuisance covariates, in that they have to be accounted for but, due to type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable, and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed-effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed-effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through the addition of a fixed-effect parameter.
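
    Calibrating the type I error of a Wald test for a rare covariate can be done by simulation. The skeleton below uses a simple linear model rather than the nonlinear mixed-effects setting of the study, so it only illustrates the mechanics of such a calibration; all values are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, freq, n_sim, alpha = 200, 0.02, 5000, 0.05
rejections, valid = 0, 0
for _ in range(n_sim):
    x = (rng.random(n) < freq).astype(float)   # rare covariate, no true effect
    if x.sum() < 2:                            # skip degenerate datasets
        continue
    y = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(beta[1] / se) > stats.norm.ppf(1 - alpha / 2)
    valid += 1
# Compare against the nominal alpha; repeat over a grid of frequencies
print("empirical type I error:", rejections / valid)
```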

  7. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    Science.gov (United States)

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purpose of this study was to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized, in which A represented baseline, B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.

  8. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression […] and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric […]
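
    A minimal example of the nonparametric building block used throughout the thesis, Nadaraya-Watson kernel regression with a Gaussian kernel, on synthetic production data (input, output and bandwidth are invented for illustration):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    # Pairwise kernel weights between evaluation points and observations
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

# Hypothetical production data: input (e.g., labor) vs. output
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 100))
y = 2 * np.log1p(x) + rng.normal(0, 0.3, 100)   # unknown "true" technology

x_grid = np.linspace(0, 10, 50)
y_hat = nadaraya_watson(x_grid, x, y, bandwidth=0.8)
print(np.round(y_hat[:5], 2))
```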

  9. Refinement and evaluation of crack-opening-area analyses for circumferential through-wall cracks in pipes

    International Nuclear Information System (INIS)

    Rahman, S.; Brust, F.; Ghadiali, N.; Krishnaswamy, P.; Wilkowski, G.; Choi, Y.H.; Moberg, F.; Brickstad, B.

    1995-04-01

    Leak-before-break (LBB) analyses for circumferentially cracked pipes are currently being conducted in the nuclear industry to justify elimination of pipe whip restraints and jet impingement shields, which are present because of the expected dynamic effects from pipe rupture. The application of the LBB methodology frequently requires calculation of leak rates. These leak rates depend on the crack-opening area of a through-wall crack in the pipe. In addition to LBB analyses, which assume a hypothetical flaw size, there is also interest in the integrity of actual leaking cracks corresponding to current leakage detection requirements in NRC Regulatory Guide 1.45, or for assessing temporary repair of Class 2 and 3 pipes that have leaks, as are being evaluated in ASME Section XI. This study was requested by the NRC to review, evaluate, and refine current analytical models for crack-opening-area analyses of pipes with circumferential through-wall cracks. Twenty-five pipe experiments were analyzed to determine the accuracy of the predictive models. Several practical aspects of crack opening, such as crack-face pressure, off-center cracks, restraint of pressure-induced bending, cracks in thickness transition regions, weld residual stresses, crack-morphology models, and thermal-hydraulic analysis, were also investigated. 140 refs., 105 figs., 41 tabs

  10. Age and sex as moderators of the placebo response – an evaluation of systematic reviews and meta-analyses across medicine.

    Science.gov (United States)

    Weimer, Katja; Colloca, Luana; Enck, Paul

    2015-01-01

    Predictors of the placebo response (PR) in randomized controlled trials (RCTs) have been searched for ever since RCTs became the standard for testing novel therapies, and age and gender are routinely documented data in all trials irrespective of the drug tested, its indication, and the primary and secondary end points chosen. To evaluate whether age and gender have been found to be reliable predictors of the PR across medical subspecialties, we extracted 75 systematic reviews, meta-analyses, and meta-regressions performed in major medical areas known for high PR rates (neurology, psychiatry, internal medicine). The literature database used contains approximately 2,500 papers on various aspects of the genuine PR. These 'meta-analyses' were screened for statistical predictors of the PR across multiple RCTs, including age and gender, but also other patient-based and design-based predictors of higher PR rates. Retrieved papers were sorted by area and disease category. Only 15 of the 75 analyses noted an effect of younger age being associated with higher PR, predominantly in psychiatric conditions (but not in depression) and in internal medicine (but not in gastroenterology). Female gender was associated with higher PR in only 3 analyses. Among the patient-based predictors, the most frequently noted factor was lower symptom severity at baseline; among the design-based factors, a randomization ratio that assigned more patients to drug than to placebo, more frequent study visits, and more recent trials were associated with higher PR rates. While younger age may contribute to the PR in some conditions, sex does not. There is currently no evidence that the PR is different in the elderly. PRs are, however, markedly influenced by the symptom severity at baseline and by the likelihood of receiving active treatment in placebo-controlled trials. © 2014 S. Karger AG, Basel.

  11. Prototype of a subsurface drip irrigation emitter: Manufacturing, hydraulic evaluation and experimental analyses

    Science.gov (United States)

    Souza, Wanderley De Jesus; Rodrigues Sinobas, Leonor; Sánchez, Raúl; Arriel Botrel, Tarlei; Duarte Coelho, Rubens

    2013-04-01

    Root and soil intrusion into conventional emitters is one of the major obstacles to obtaining good uniformity of water application in subsurface drip irrigation (SDI). In recent years, different approaches have been taken to reduce these problems, such as the impregnation of emitters with herbicide and the search for an emitter geometry that impairs the intrusion of small roots. In this context, the present study developed and evaluated an emitter model whose geometry has specific physical features to prevent emitter clogging. This work was developed at the Biosystems Engineering Department at ESALQ-USP/Brazil, and it is part of a research project in which an innovative emitter model for SDI has been developed to prevent root and soil particle intrusion. An emitter with a mechanical-hydraulic mechanism (opening and closing the water outlet) for SDI was developed and manufactured using a mechanical lathe process. It was composed of a silicone elastic membrane, a polyethylene tube and a polyvinyl chloride membrane protector system. In this study the performance of the developed prototype was assessed under laboratory and field conditions. In the laboratory, uniformity of water application was characterized by the water emission uniformity coefficient (CUE) and the manufacturer's coefficient of variation (CVm). In addition, the variation in the membrane diameter under internal pressures; the head losses along the membrane, using the energy equation; and the precision and accuracy of the equation model, analyzed by Pearson's correlation coefficient (r) and by Willmott's concordance index (d), were also calculated with samples of the developed emitters. In the field, the emitters were installed in pots with and without a sugar cane crop from October 2010 to January 2012. During this time, the flow rates of 20 emitters were measured periodically, and their clogging status was assessed at the end of the experiment. Emitter flow rates were measured quarterly to calculate […]
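
    The two agreement measures used to evaluate the equation model, Pearson's r and Willmott's concordance index d, are straightforward to compute directly; the flow-rate values below are invented for illustration:

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott's index of agreement d (1 = perfect agreement)."""
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1 - num / den

# Hypothetical observed vs. model-predicted emitter flow rates (L/h)
obs = np.array([1.02, 0.98, 1.05, 1.10, 0.95, 1.00])
pred = np.array([1.00, 0.97, 1.06, 1.08, 0.97, 1.01])

r = np.corrcoef(obs, pred)[0, 1]      # Pearson correlation (precision)
d = willmott_d(obs, pred)             # Willmott concordance (accuracy)
print(f"r = {r:.3f}, d = {d:.3f}")
```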

  12. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded […]

  13. The influence of uncertainties of measurements in laboratory performance evaluation by intercomparison program in radionuclide analyses of environmental samples

    International Nuclear Information System (INIS)

    Tauhata, L.; Vianna, M.E.; Oliveira, A.E. de; Clain, A.F.; Ferreira, A.C.M.; Bernardes, E.M.

    2000-01-01

    The accuracy and precision of radionuclide analyses in environmental samples are widely demanded internationally because of their consequences for decision processes coupled to the evaluation of environmental pollution, impact, and internal and external population exposure. These measurement characteristics of the laboratories can be shown clearly using intercomparison data, owing to the existence of a reference value and the requirement of three determinations for each analysis. In intercomparison studies, accuracy in radionuclide assays of low-level environmental samples has usually been the main focus of performance evaluation, and it can be estimated by taking into account the deviation between the experimental laboratory mean value and the reference value. The laboratory repeatability of measurements, or their standard deviation, is seldom included in performance evaluation. In order to show the influence of the uncertainties on performance evaluation of the laboratories, data from 22 intercomparison runs, which distributed 790 spiked environmental samples to 20 Brazilian participant laboratories, were compared using the U.S. EPA 'Normalised Standard Deviation' as the statistical criterion for performance evaluation. It mainly takes into account the laboratory accuracy; the performance evaluation was then repeated with the same data classified by the normalised standard deviation modified by a weight factor that includes the individual laboratory uncertainty. The results show a relative decrease in laboratory performance in each radionuclide assay: 1.8% for 65Zn, 2.8% for 40K, 3.4% for 60Co, 3.7% for 134Cs, 4.0% for 137Cs, 4.4% for Th and U(nat), 4.5% for 3H, 6.3% for 133Ba, 8.6% for 90Sr, 10.6% for gross alpha, 10.9% for 106Ru, 11.1% for 226Ra, 11.5% for gross beta and 13.6% for 228Ra. The changes in the parameters of the statistical distribution function were negligible and the distribution remained Gaussian for all radionuclides analysed. Data analyses in terms of […]

  14. Evaluation of biodegradable and nonbiodegradable liquid scintillation cocktails used for tritium-in-water and urine analyses

    International Nuclear Information System (INIS)

    Haddock, J.A.

    1992-11-01

    The performance of a number of liquid scintillation cocktails was evaluated for quench resistance, sample capacity, cost, waste reduction and limit of detection. Directed towards the specific applications of counting tritium in water and urine samples, this study illustrated the potential of the newer, biodegradable cocktails, which mostly exhibited comparable or superior counting performance to the traditional cocktails. Reduced cocktail volumes and the use of mini vials is recommended for medium-load cocktails used for routine urine or water analyses, since a significant decrease in the volume of waste generated in Canadian nuclear facilities would result. (Author) (13 refs., 5 tabs., 44 figs.)

  15. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  16. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  17. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based […]
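
    A short illustration of the point about differing effects across quantiles, using statsmodels' quantile regression on synthetic, heteroscedastic data (the wage setting is a generic stand-in for the labor market application mentioned):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wage data: log wage vs. years of experience
rng = np.random.default_rng(10)
n = 400
exper = rng.uniform(0, 30, n)
# Heteroscedastic wages: dispersion grows with experience, so the
# experience effect differs across the conditional distribution
logwage = 2.0 + 0.03 * exper + rng.normal(0, 0.1 + 0.01 * exper, n)
df = pd.DataFrame({"logwage": logwage, "exper": exper})

# Compare the experience effect at the 10th, 50th and 90th percentiles
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("logwage ~ exper", df).fit(q=q)
    print(f"q={q}: slope = {fit.params['exper']:.4f}")
```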

  18. Use of probabilistic weights to enhance linear regression myoelectric control.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2015-12-01

    Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p […] linear regression control. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
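
    A much-simplified single-DOF sketch of the idea: per-class Gaussian models of the EMG features yield a posterior probability of intended movement, which scales the linear regression output. The data, features and class definitions below are synthetic assumptions, not the study's intramuscular EMG protocol:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

# Hypothetical EMG features (2-D) and one DOF's velocity command
X = rng.normal(size=(300, 2))
y = 0.9 * X[:, 0] + rng.normal(0, 0.2, 300)

# Three movement classes for the DOF: -1 (one direction), 0 (rest), +1
labels = np.sign(np.round(y))

reg = LinearRegression().fit(X, y)

# Fit one Gaussian per movement class on the EMG features
classes = [-1.0, 0.0, 1.0]
gaussians = [multivariate_normal(X[labels == c].mean(axis=0),
                                 np.cov(X[labels == c], rowvar=False))
             for c in classes]

def weighted_output(x):
    """Linear regression output scaled by P(user intends movement)."""
    like = np.array([g.pdf(x) for g in gaussians])
    post = like / like.sum()          # class posteriors (equal priors)
    p_move = 1.0 - post[1]            # 1 - P(no movement)
    return p_move * reg.predict(x[None, :])[0]

x_test = rng.normal(size=2)
print("raw:", reg.predict(x_test[None, :])[0],
      "weighted:", weighted_output(x_test))
```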

  19. MEG time-frequency analyses for pre- and post-surgical evaluation of patients with epileptic rhythmic fast activity.

    Science.gov (United States)

    Sueda, Keitaro; Takeuchi, Fumiya; Shiraishi, Hideaki; Nakane, Shingo; Asahina, Naoko; Kohsaka, Shinobu; Nakama, Hideyuki; Otsuki, Taisuke; Sawamura, Yutaka; Saitoh, Shinji

    2010-02-01

    To evaluate the effectiveness of surgery for epilepsy, we analyzed rhythmic fast activity by magnetoencephalography (MEG) before and after surgery using time-frequency analysis. To assess reliability, the results obtained by pre-surgical MEG and intraoperative electrocorticography were compared. Four children with symptomatic localization-related epilepsy caused by a circumscribed cortical lesion were examined in the present study using 204-channel helmet-shaped MEG with a sampling rate of 600 Hz. One patient had a dysembryoplastic neuroepithelial tumor (DNT) and three patients had focal cortical dysplasia (FCD). Aberrant areas were superimposed onto reconstructed 3D MRI images and illustrated as moving images. In three patients, short-time Fourier transform (STFT) analyses of MEG showed rhythmic activities just above the lesion with FCD and in the vicinity of the DNT. In one patient with FCD in the medial temporal lobe, rhythmic activity appeared in the ipsilateral frontal lobe and the lateral aspect of the temporal lobe. These findings correlated well with the results obtained by intraoperative electrocorticography. After the surgery, three patients were relieved of their seizures, and the area of rhythmic MEG activity disappeared or became smaller. One patient had residual rhythmic MEG activity, and she suffered from seizure relapse. Time-frequency analyses using STFT successfully depicted MEG rhythmic fast activity and would provide valuable information for pre- and post-surgical evaluations to define surgical strategies for patients with epilepsy.
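
    The core signal-processing step, an STFT-based time-frequency decomposition at the study's 600 Hz sampling rate, can be sketched on a synthetic single channel containing a burst of rhythmic fast activity (all signal parameters below are invented):

```python
import numpy as np
from scipy.signal import stft

fs = 600.0                       # sampling rate (Hz), as in the study
t = np.arange(0, 10, 1 / fs)

# Synthetic channel: background noise plus a burst of rhythmic
# 25 Hz "fast activity" between 4 s and 6 s
rng = np.random.default_rng(7)
sig = rng.normal(0, 1.0, t.size)
burst = (t > 4) & (t < 6)
sig[burst] += 3 * np.sin(2 * np.pi * 25 * t[burst])

# Short-time Fourier transform: 0.5 s windows, 50% overlap
f, tt, Z = stft(sig, fs=fs, nperseg=300, noverlap=150)
power = np.abs(Z) ** 2

# Locate when 20-30 Hz power is maximal
band = (f >= 20) & (f <= 30)
print("peak fast-activity time (s):", tt[power[band].mean(axis=0).argmax()])
```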

  20. Evaluation of Two Surface Sampling Methods for Microbiological and Chemical Analyses To Assess the Presence of Biofilms in Food Companies.

    Science.gov (United States)

    Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De

    2017-12-01

    Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.

  1. Probabilistic evaluation of scenarios in long-term safety analyses. Results of the project ISIBEL; Probabilistische Bewertung von Szenarien in Langzeitsicherheitsanalysen. Ergebnisse des Vorhabens ISIBEL

    Energy Technology Data Exchange (ETDEWEB)

    Buhmann, Dieter; Becker, Dirk-Alexander; Laggiard, Eduardo; Ruebel, Andre; Spiessl, Sabine; Wolf, Jens

    2016-07-15

    In the frame of the project ISIBEL, deterministic analyses of the radiological consequences of several possible developments of the final repository were performed (VSG: preliminary safety analysis of the Gorleben site). The report describes the probabilistic evaluation of the VSG scenarios using uncertainty and sensitivity analyses. It was shown that probabilistic analyses are important for evaluating the influence of uncertainties. The transfer of the selected scenarios into computational cases and the modeling parameters used are discussed.

  2. An empirical tool to evaluate the safety of cyclists: Community based, macro-level collision prediction models using negative binomial regression.

    Science.gov (United States)

    Wei, Feng; Lovegrove, Gordon

    2013-12-01

    Today, North American governments are more willing to consider compact neighborhoods with increased use of sustainable transportation modes. Bicycling, one of the most effective modes for short trips with distances less than 5 km, is being encouraged. However, as vulnerable road users (VRUs), cyclists are more likely to be injured when involved in collisions. In order to create a safe road environment for them, evaluating cyclists' road safety at a macro level in a proactive way is necessary. In this paper, different generalized linear regression methods for collision prediction model (CPM) development are reviewed and previous studies on micro-level and macro-level bicycle-related CPMs are summarized. On the basis of insights gained in the exploration stage, this paper also reports on efforts to develop negative binomial models for bicycle-auto collisions at a community-based, macro level. Data came from the Central Okanagan Regional District (CORD), of British Columbia, Canada. The model results revealed two types of statistical associations between collisions and each explanatory variable: (1) an increase in bicycle-auto collisions is associated with an increase in total lane kilometers (TLKM), bicycle lane kilometers (BLKM), bus stops (BS), traffic signals (SIG), intersection density (INTD), and arterial-local intersection percentage (IALP); (2) a decrease in bicycle collisions was found to be associated with an increase in the number of drive commuters (DRIVE) and in the percentage of drive commuters (DRP). These results support our hypothesis that in North America, with its current low levels of bicycle use ([…] macro-level CPMs. Copyright © 2012. Published by Elsevier Ltd.
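
    A hedged sketch of fitting such a macro-level collision prediction model: negative binomial regression of zone-level collision counts on a few of the exposure variables named above, using synthetic data (variable values and effect sizes are invented):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical community-level data: bicycle-auto collisions per zone
rng = np.random.default_rng(8)
n_zones = 120
tlkm = rng.uniform(10, 80, n_zones)          # total lane kilometers
blkm = rng.uniform(0, 15, n_zones)           # bicycle lane kilometers
sig = rng.integers(1, 40, n_zones)           # traffic signals
mu = np.exp(0.5 + 0.02 * tlkm + 0.04 * blkm + 0.01 * sig)
# Overdispersed counts with mean mu (NB parameterized via n and p)
y = rng.negative_binomial(n=5, p=5 / (5 + mu))

X = sm.add_constant(np.column_stack([tlkm, blkm, sig]))
model = sm.NegativeBinomial(y, X).fit(disp=False)
print(model.summary())
```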

  3. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    Science.gov (United States)

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach to quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2 values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
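
    Reading the combined cut-off as a rule requiring both madSD > 1.8 ms and maxT2 > 68 ms (the exact combination rule, and the definition of madSD as the median absolute deviation of the segmental SDs, are assumptions here), a per-patient evaluation might look like:

```python
import numpy as np

def myocarditis_flag(seg_t2, seg_sd):
    """Combined cut-off: madSD > 1.8 ms AND maxT2 > 68 ms (assumed rule).

    seg_t2 / seg_sd: per-segment (16 AHA segments) mean T2 values and
    pixel-SDs in ms; madSD is taken as the median absolute deviation of
    the segmental SDs (an assumption, not the paper's exact definition)."""
    max_t2 = seg_t2.max()
    mad_sd = np.median(np.abs(seg_sd - np.median(seg_sd)))
    return (mad_sd > 1.8) and (max_t2 > 68.0)

# Hypothetical segmental values for one patient (ms)
seg_t2 = np.array([55, 58, 61, 70, 64, 59, 57, 60,
                   62, 66, 69, 71, 58, 57, 63, 65.0])
seg_sd = np.array([4.2, 5.1, 3.8, 9.5, 6.0, 4.4, 4.0, 5.2,
                   5.8, 7.9, 8.8, 9.1, 4.1, 4.3, 5.5, 6.2])
print("suggestive of myocarditis:", myocarditis_flag(seg_t2, seg_sd))
```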

  4. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain the odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples.
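
    A minimal worked example of the procedure described: fit a logistic model and exponentiate the coefficients to obtain adjusted odds ratios. The data are simulated and the variable names are illustrative:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: binary outcome with two explanatory variables
rng = np.random.default_rng(9)
n = 500
age = rng.uniform(20, 80, n)
smoker = rng.integers(0, 2, n)
logit = -4 + 0.05 * age + 0.9 * smoker
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, smoker]))
fit = sm.Logit(y, X).fit(disp=False)

# Odds ratios: exponentiated coefficients, each adjusted for the other
print("OR per year of age:", np.exp(fit.params[1]))
print("OR smoker vs non-smoker:", np.exp(fit.params[2]))
```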

  5. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is available.

  6. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "…an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models…highly recommend[ed]…for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s[…]

  7. Ventilation/perfusion SPECT/CT in patients with pulmonary emphysema. Evaluation of software-based analysing.

    Science.gov (United States)

    Schreiter, V; Steffen, I; Huebner, H; Bredow, J; Heimann, U; Kroencke, T J; Poellinger, A; Doellinger, F; Buchert, R; Hamm, B; Brenner, W; Schreiter, N F

    2015-01-01

    The purpose of this study was to evaluate the reproducibility of a new software-based analysis system for ventilation/perfusion single-photon emission computed tomography/computed tomography (V/P SPECT/CT) in patients with pulmonary emphysema and to compare it with visual interpretation. 19 patients (mean age: 68.1 years) with pulmonary emphysema who underwent V/P SPECT/CT were included. Data were analysed by two independent observers using visual interpretation (VI) and a software-based analysis system (SBAS). For SBAS, PMOD version 3.4 (PMOD Technologies Ltd, Zurich, Switzerland) was used to assess counts and volume per lung lobe and per lung, and to calculate the count density per lung lobe, the ratio of counts and the ratio of count density. VI was performed using a visual scale to assess the mean counts per lung lobe. Interobserver variability and association for SBAS and VI were analysed using Spearman's rho correlation coefficient. Interobserver agreement correlated highly in perfusion (rho: 0.982, 0.957, 0.90, 0.979) and ventilation (rho: 0.972, 0.924, 0.941, 0.936) for counts/count density per lobe and ratio of counts/count density in SBAS. Interobserver agreement correlated clearly for perfusion (rho: 0.655) and weakly for ventilation (rho: 0.458) in VI. SBAS provides more reproducible measures than VI for the relative tracer uptake in V/P SPECT/CT in patients with pulmonary emphysema. However, SBAS has to be improved for routine clinical use.

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  10. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code: MORSE-SGC for the SCALE system, HEATING 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  11. Landfill mining: Resource potential of Austrian landfills--Evaluation and quality assessment of recovered municipal solid waste by chemical analyses.

    Science.gov (United States)

    Wolfsberger, Tanja; Aldrian, Alexia; Sarc, Renato; Hermann, Robert; Höllen, Daniel; Budischowsky, Andreas; Zöscher, Andreas; Ragoßnig, Arne; Pomberger, Roland

    2015-11-01

    Since the need for raw materials in countries undergoing industrialisation (like China) is rising, the availability of metal and fossil fuel energy resources (like ores or coal) has changed in recent years. Landfill sites can contain considerable amounts of recyclable and energy-recoverable materials; landfill mining is therefore an option for exploiting dumped secondary raw materials and conserving primary resources. For the purposes of this article, two sanitary landfill sites were chosen to obtain actual data on the resource potential of Austrian landfills. To evaluate how pretreating waste before disposal affects the resource potential of landfills, the first landfill site was selected because it received untreated waste, whereas mechanically-biologically treated waste was dumped in the second. The scope of this investigation comprised: (1) waste characterisation by sorting analyses of recovered waste; and (2) chemical analyses of specific waste fractions to assess their quality for potential energy recovery as solid recovered fuel. The content of eight heavy metals and the net calorific values were determined in the chemical characterisation tests. © The Author(s) 2015.

  12. Big Data Analyses for Continuous Evaluation of Pharmacotherapy: A Proof of Principle with Doxapram in Preterm Infants.

    Science.gov (United States)

    Flint, Robert B; Weteringen, Willem van; Voller, Swantje; Poppe, Jarinda A; Koch, Birgit C P; de Groot, Ronald; Tibboel, Dick; Knibbe, Catherijne A J; Reiss, Irwin K M; Simons, Sinno H P; Dino Research Group

    2017-01-01

    Drug effect evaluation is often based on subjective interpretation of a selection of patient data. Continuous analyses of high-frequency patient monitor data are a valuable source for measuring drug effects, but they have not yet been fully explored in clinical care. We aim to evaluate the usefulness and applicability of high-frequency physiological data for analyses of pharmacotherapy. As a proof of principle, the effects of doxapram, a respiratory stimulant, on oxygenation in preterm infants were studied. Second-to-second physiological data were collected from 12 hours before until 36 hours after the start of a doxapram loading dose plus continuous maintenance dose in seven preterm infants. In addition to the physiological data, plasma concentrations of doxapram and keto-doxapram were measured. Arterial oxygen saturation (SpO2) increased after the start of doxapram treatment, alongside an increase in heart rate; the respiratory rate remained unaffected. The number of saturation dips and the time below a saturation of 80%, as well as the area under the 80%-saturation-time curve (AUC), were significantly lowered after the start of doxapram. The AUC under 90% saturation also improved significantly after the start of doxapram. Using high-frequency monitoring data, we showed the detailed effects of pharmacotherapy over time. We could objectively determine the respiratory condition and the effects of doxapram treatment in preterm infants. This type of analysis might help to develop individualized drug treatments with tailored dose adjustments based on a closed-loop algorithm. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
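
    The record above quantifies oxygenation with the time below an 80% saturation threshold and the area under the 80%-saturation-time curve. A minimal sketch of such a computation on second-to-second SpO2 data follows, with a simulated signal standing in for patient data; it is not the authors' pipeline.

    ```python
    # Minimal sketch: time below an SpO2 threshold and the area between the
    # threshold and the signal, from 1 Hz monitor data. Simulated signal.
    import numpy as np

    rng = np.random.default_rng(8)
    spo2 = np.clip(92 + rng.normal(0, 4, size=3600), 60, 100)  # one hour at 1 Hz

    threshold = 80.0
    below = spo2 < threshold
    time_below_s = int(below.sum())                      # seconds spent under 80%
    auc_below = float(np.sum(threshold - spo2[below]))   # %*s, 1-second rectangles
    print(time_below_s, auc_below)
    ```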

  13. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. In contrast, assumptions on the parametric model, the absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates through the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
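
    A small simulation in the spirit of the commentary is sketched below under stated assumptions (OLS via statsmodels, skewed mean-zero errors); it checks the coverage of the 95% confidence interval for the slope and is an illustration, not the authors' simulation design.

    ```python
    # Coverage check: with skewed (non-normal) errors, the 95% CI for the slope
    # still covers the true value close to 95% of the time in a large sample.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    true_slope, covered, n_sim = 0.5, 0, 1000

    for _ in range(n_sim):
        x = rng.normal(size=200)
        errors = rng.exponential(scale=1.0, size=200) - 1.0  # skewed, mean zero
        y = true_slope * x + errors
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        low, high = fit.conf_int()[1]                        # CI for the slope
        covered += int(low <= true_slope <= high)

    print(covered / n_sim)  # close to 0.95 despite the non-normal errors
    ```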

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, L.M.; Jordon, W.C. [Oak Ridge National Lab., TN (United States); Edwards, A.L. [Oak Ridge National Lab., TN (United States)]|[Lawrence Livermore National Lab., CA (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the documentation of the data libraries and subroutine libraries.

  16. Evaluation method of corrosive conditions in cooling systems of nuclear power plants by combined analyses of flow dynamics and corrosion

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, Shunsuke [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Atomic Energy Society of Japan (AESJ) (Japan). Research Committee on Water Chemistry Standard; Naitoh, Masanori [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Atomic Energy Society of Japan (AESJ) (Japan). Computational Science and Engineering Div.; Uehara, Yasushi; Okada, Hidetoshi [Nuclear Power Engineering Corporation (NUPEC), Tokyo (Japan); Hotta, Koji [ITOCHU Techno-Solutions Corporation (Japan); Ichikawa, Ryoko [Mizuho Information and Research Inst., Inc. (Japan); Koshizuka, Seiichi [Tokyo Univ. (Japan)

    2007-03-15

    Problems in major components and structural materials in nuclear power plants have often been caused by flow-induced vibration, corrosion, and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion, and to mitigate them before they become serious issues for plant operation. The analysis models are divided into two types. 1. Prediction models for future problems with structural materials: Distributions of oxidant concentrations along flow paths are obtained by solving water radiolysis reactions in the boiling water reactor (BWR) primary cooling water and hydrazine-oxygen reactions in the pressurized water reactor (PWR) secondary cooling water. Then, the electrochemical corrosion potential (ECP) at the point of interest is also obtained by the mixed potential model using oxidant concentration. Higher ECP enhances the possibility of intergranular stress corrosion cracking (IGSCC) in the BWR primary system, while lower ECP enhances flow accelerated corrosion (FAC) in the PWR secondary system. 2. Evaluation models of wall thinning caused by flow accelerated corrosion: The degree of wall thinning is evaluated at a location with a higher possibility of FAC occurrence, and lifetime is estimated for preventive maintenance. General features of models are reviewed in this paper and the prediction models for oxidant concentrations are briefly introduced. (orig.)
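
    For the wall-thinning part of the evaluation, the arithmetic reduces to a remaining-lifetime estimate once a thinning rate is known. The sketch below assumes a constant FAC thinning rate and illustrative values; it is not one of the models reviewed in the paper.

    ```python
    # Remaining-lifetime estimate for pipe wall thinning by flow accelerated
    # corrosion (FAC), assuming a constant thinning rate. Illustrative values.
    def remaining_lifetime_years(current_thickness_mm: float,
                                 minimum_allowable_mm: float,
                                 thinning_rate_mm_per_year: float) -> float:
        """Years until the wall reaches its minimum allowable thickness."""
        if thinning_rate_mm_per_year <= 0:
            raise ValueError("thinning rate must be positive")
        return (current_thickness_mm - minimum_allowable_mm) / thinning_rate_mm_per_year

    print(remaining_lifetime_years(8.0, 5.5, 0.1))  # -> 25.0 years
    ```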

  17. Evaluation method of corrosive conditions in cooling systems of nuclear power plants by combined analyses of flow dynamics and corrosion

    International Nuclear Information System (INIS)

    Uchida, Shunsuke; Hotta, Koji; Ichikawa, Ryoko; Koshizuka, Seiichi

    2007-01-01

    Problems in major components and structural materials in nuclear power plants have often been caused by flow-induced vibration, corrosion, and their overlapping effects. In order to establish safe and reliable plant operation, it is necessary to predict future problems for structural materials based on combined analyses of flow dynamics and corrosion, and to mitigate them before they become serious issues for plant operation. The analysis models are divided into two types. 1. Prediction models for future problems with structural materials: Distributions of oxidant concentrations along flow paths are obtained by solving water radiolysis reactions in the boiling water reactor (BWR) primary cooling water and hydrazine-oxygen reactions in the pressurized water reactor (PWR) secondary cooling water. Then, the electrochemical corrosion potential (ECP) at the point of interest is also obtained by the mixed potential model using oxidant concentration. Higher ECP enhances the possibility of intergranular stress corrosion cracking (IGSCC) in the BWR primary system, while lower ECP enhances flow accelerated corrosion (FAC) in the PWR secondary system. 2. Evaluation models of wall thinning caused by flow accelerated corrosion: The degree of wall thinning is evaluated at a location with a higher possibility of FAC occurrence, and lifetime is estimated for preventive maintenance. General features of models are reviewed in this paper and the prediction models for oxidant concentrations are briefly introduced. (orig.)

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Miscellaneous -- Volume 3, Revision 4

    International Nuclear Information System (INIS)

    Petrie, L.M.; Jordon, W.C.; Edwards, A.L.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the data libraries and subroutine libraries.

  19. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.; Knight, J.R.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice; (2) automate the data processing and coupling between modules; and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the documentation of the data libraries and subroutine libraries.

  20. Evaluation of bentonite alteration due to interactions with iron. Sensitivity analyses to identify the important factors for the bentonite alteration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Wilson, James; Sato, Tsutomu

    2013-01-01

    Performance assessment of geological disposal systems for high-level radioactive waste requires a consideration of long-term systems behaviour. It is possible that the alteration of swelling clay present in bentonite buffers might have an impact on buffer functions. In the present study, iron (as a candidate overpack material)-bentonite (I-B) interactions were evaluated as the main buffer alteration scenario. Existing knowledge on alteration of bentonite during I-B interactions was first reviewed, then the evaluation methodology was developed considering modeling techniques previously used overseas. A conceptual model for smectite alteration during I-B interactions was produced. The following reactions and processes were selected: 1) release of Fe²⁺ due to overpack corrosion; 2) diffusion of Fe²⁺ in compacted bentonite; 3) sorption of Fe²⁺ on smectite edges and ion exchange in interlayers; 4) dissolution of primary phases and formation of alteration products. Sensitivity analyses were performed to identify the most important factors for the alteration of bentonite by I-B interactions. (author)

  1. A Simulation Investigation of Principal Component Regression.

    Science.gov (United States)

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
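
    A minimal sketch of principal component regression with scikit-learn follows; the synthetic, nearly collinear predictors are assumptions for illustration and have no connection to the simulation in the record.

    ```python
    # Principal component regression (PCR): regress on leading principal
    # components instead of the raw, collinear predictors. Synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(scale=0.05, size=200)   # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = 2.0 * x1 + rng.normal(scale=0.5, size=200)

    pcr = make_pipeline(StandardScaler(), PCA(n_components=1), LinearRegression())
    pcr.fit(X, y)
    print(pcr.score(X, y))  # R^2 on the training data
    ```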

  2. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  3. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After a definition of the technique, the basic interpretation of the results is highlighted, and then some special issues are discussed.
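
    A brief, hedged example of the procedure described above follows, using statsmodels and fabricated data: exponentiating the fitted coefficients yields the odds ratios.

    ```python
    # Logistic regression with two explanatory variables; exponentiated
    # coefficients are odds ratios. Fabricated data for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    age = rng.normal(50, 10, size=500)
    exposure = rng.integers(0, 2, size=500)
    log_odds = -5.0 + 0.06 * age + 0.8 * exposure
    y = (rng.random(500) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

    X = sm.add_constant(np.column_stack([age, exposure]))
    fit = sm.Logit(y, X).fit(disp=False)
    print(np.exp(fit.params))  # odds ratios for intercept, age, exposure
    ```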

  4. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  5. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  6. Multilingual speaker age recognition: regression analyses on the Lwazi corpus

    CSIR Research Space (South Africa)

    Feld, M

    2009-12-01

    Full Text Available Multilinguality represents an area of significant opportunities for automatic speech-processing systems: whereas multilingual societies are commonplace, the majority of speechprocessing systems are developed with a single language in mind. As a step...

  7. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  9. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached count data either as if they were continuous and normally distributed or by dichotomizing the counts into the categories of occurred or did not occur. These outdated methods have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is well suited to analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including the addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, including regression modeling of comorbidity data. Copyright 2014, SLACK Incorporated.
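
    The modeling choices described above are sketched below with statsmodels on synthetic counts; the negative binomial refit is the alternative named in the record for overdispersed data, and none of the numbers relate to the ENSPIRE study.

    ```python
    # Poisson regression for counts, with a negative binomial alternative for
    # overdispersion. Synthetic data, illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    x = rng.normal(size=300)
    y = rng.poisson(np.exp(0.3 + 0.5 * x))

    X = sm.add_constant(x)
    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(poisson_fit.params)

    # If deviance is large relative to the residual degrees of freedom
    # (overdispersion), a negative binomial family is a common alternative.
    nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(nb_fit.params)
    ```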

  10. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  11. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    Science.gov (United States)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on, or are justified by, competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long-term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
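
    The NDVI measure discussed above is a simple band ratio; a minimal sketch with placeholder reflectance arrays follows.

    ```python
    # NDVI = (NIR - Red) / (NIR + Red); ranges from -1 to 1, higher = greener.
    # The two-by-two reflectance arrays are synthetic placeholders.
    import numpy as np

    red = np.array([[0.10, 0.12], [0.30, 0.25]])
    nir = np.array([[0.50, 0.48], [0.35, 0.30]])

    ndvi = (nir - red) / (nir + red)
    print(ndvi)
    ```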

  12. Development of a model performance-based sign sheeting specification based on the evaluation of nighttime traffic signs using legibility and eye-tracker data : data and analyses.

    Science.gov (United States)

    2010-09-01

    This report presents data and technical analyses for Texas Department of Transportation Project 0-5235. This : project focused on the evaluation of traffic sign sheeting performance in terms of meeting the nighttime : driver needs. The goal was to de...

  13. Prospective evaluation of IOTA logistic regression models LR1 and LR2 in comparison with subjective pattern recognition for diagnosis of ovarian cancer in an outpatient setting.

    Science.gov (United States)

    Nunes, N; Ambler, G; Foo, X; Widschwendter, M; Jurkovic, D

    2018-06-01

    To determine whether International Ovarian Tumor Analysis (IOTA) logistic regression models LR1 and LR2 developed for the preoperative diagnosis of ovarian cancer could also be used to differentiate between benign and malignant adnexal tumors in the population of women attending gynecology outpatient clinics. This was a single-center prospective observational study of consecutive women attending our gynecological diagnostic outpatient unit, recruited between May 2009 and January 2012. All the women were first examined by a Level-II ultrasound operator. In those diagnosed with adnexal tumors, the IOTA-LR1/2 protocol was used to evaluate the masses. The LR1 and LR2 models were then used to assess the risk of malignancy. Subsequently, the women were also examined by a Level-III examiner, who used pattern recognition to differentiate between benign and malignant tumors. Women with an ultrasound diagnosis of malignancy were offered surgery, while asymptomatic women with presumed benign lesions were offered conservative management with a minimum follow-up of 12 months. The initial diagnosis was compared with two reference standards: histological findings and/or a comparative assessment of tumor morphology on follow-up ultrasound scans. All women for whom the tumor classification on follow-up changed from benign to malignant were offered surgery. In the final analysis, 489 women who had either or both of the reference standards were included. Their mean age was 50 years (range, 16-91 years) and 45% were postmenopausal. Of the included women, 342/489 (69.9%) had surgery and 147/489 (30.1%) were managed conservatively. The malignancy rate was 137/489 (28.0%). Overall, sensitivities of LR1 and LR2 for the diagnosis of malignancy were 97.1% (95% CI, 92.7-99.2%) and 94.9% (95% CI, 89.8-97.9%) and specificities were 77.3% (95% CI, 72.5-81.5%) and 76.7% (95% CI, 71.9-81.0%), respectively (P > 0.05). In comparison with pattern recognition (sensitivity 94.2% (95% CI, 88
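
    For readers following the diagnostic-accuracy arithmetic, a hedged sketch below computes sensitivity and specificity with Wilson confidence intervals from a two-by-two table; the counts are invented, not taken from the study.

    ```python
    # Sensitivity/specificity with 95% Wilson intervals from a 2x2 table.
    # Invented counts, not taken from the study above.
    from statsmodels.stats.proportion import proportion_confint

    tp, fn = 133, 4    # malignant: correctly / incorrectly classified
    tn, fp = 272, 80   # benign: correctly / incorrectly classified

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
    print(sensitivity, sens_ci)
    print(specificity, spec_ci)
    ```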

  14. Quantitative analyses at baseline and interim PET evaluation for response assessment and outcome definition in patients with malignant pleural mesothelioma

    Energy Technology Data Exchange (ETDEWEB)

    Lopci, Egesta; Chiti, Arturo [Humanitas Research Hospital, Nuclear Medicine Department, Rozzano, Milan (Italy); Zucali, Paolo Andrea; Perrino, Matteo; Gianoncelli, Letizia; Lorenzi, Elena; Gemelli, Maria; Santoro, Armando [Humanitas Research Hospital, Oncology, Rozzano (Italy); Ceresoli, Giovanni Luca [Humanitas Gavazzeni, Oncology, Bergamo (Italy); Giordano, Laura [Humanitas Research Hospital, Biostatistics, Rozzano (Italy)

    2015-04-01

    Quantitative analyses on FDG PET for response assessment are increasingly used in clinical studies, particularly with respect to tumours in which radiological assessment is challenging and complete metabolic response is rarely achieved after treatment. A typical example is malignant pleural mesothelioma (MPM), an aggressive tumour originating from mesothelial cells of the pleura. We present our results concerning the use of semiquantitative and quantitative parameters, evaluated at the baseline and interim PET examinations, for the prediction of treatment response and disease outcome in patients with MPM. We retrospectively analysed data derived from 131 patients (88 men, 43 women; mean age 66 years) with MPM who were referred to our institution for treatment between May 2004 and July 2013. Patients were investigated using FDG PET at baseline and after two cycles of pemetrexed-based chemotherapy. Responses were determined using modified RECIST criteria based on the best CT response after treatment. Disease control rate, progression-free survival (PFS) and overall survival (OS) were calculated for the whole population and were correlated with semiquantitative and quantitative parameters evaluated at the baseline and interim PET examinations; these included SUVmax, total lesion glycolysis (TLG), percentage change in SUVmax (ΔSUVmax) and percentage change in TLG (ΔTLG). Disease control was achieved in 84.7 % of the patients, and median PFS and OS for the entire cohort were 7.2 and 14.3 months, respectively. The log-rank test showed a statistically significant difference in PFS between patients with radiological progression and those with partial response (PR) or stable disease (SD) (1.8 vs. 8.6 months, p < 0.001). Baseline SUVmax and TLG showed a statistically significant correlation with PFS and OS (p < 0.001). In the entire population, both ΔSUVmax and ΔTLG were correlated with disease control based on best CT response (p < 0

  15. Relative performances of artificial neural network and regression mapping tools in evaluation of spinal loads and muscle forces during static lifting.

    Science.gov (United States)

    Arjmand, N; Ekrami, O; Shirazi-Adl, A; Plamondon, A; Parnianpour, M

    2013-05-31

    Two artificial neural networks (ANNs) are constructed, trained, and tested to map inputs of a complex trunk finite element (FE) model to its outputs for spinal loads and muscle forces. Five input variables (thorax flexion angle, load magnitude, its anterior and lateral positions, and load handling technique, i.e., one- or two-handed static lifting) and four model outputs (L4-L5 and L5-S1 disc compression and anterior-posterior shear forces) for spinal loads and 76 model outputs (forces in individual trunk muscles) are considered. Moreover, full quadratic regression equations mapping inputs to outputs of the model, developed here for muscle forces and previously for spine loads, are used to compare the relative accuracy of these two mapping tools (ANN and regression equations). Results indicate that the ANNs are more accurate in mapping input-output relationships of the FE model (RMSE = 20.7 N for spinal loads and RMSE = 4.7 N for muscle forces) than the regression equations (RMSE = 120.4 N for spinal loads and RMSE = 43.2 N for muscle forces). Quadratic regression equations map up to second-order variations of outputs with inputs, while ANNs capture higher-order variations too. Despite satisfactory achievement in estimating overall muscle forces by the ANN, some inadequacies are noted, including assigning force to antagonistic muscles with no activity in the optimization algorithm of the FE model or predicting slightly different forces in bilateral muscle pairs in symmetric lifting activities. Using these user-friendly tools, spinal loads and trunk muscle forces during symmetric and asymmetric static lifts can be easily estimated. Copyright © 2013 Elsevier Ltd. All rights reserved.
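
    The comparison of the two mapping tools can be sketched generically: fit a full quadratic regression and a small neural network to the same input-output data and compare RMSE. The snippet below uses scikit-learn and a synthetic nonlinear function standing in for the finite element model; it is not the authors' setup.

    ```python
    # ANN vs. full quadratic regression as input-output mapping tools.
    # A synthetic nonlinear function stands in for the FE model outputs.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(500, 5))            # five input variables
    y = np.sin(X[:, 0]) * X[:, 1] + X[:, 2] ** 3     # nonlinear "model output"

    quad = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                       random_state=0).fit(X, y)

    for name, model in [("quadratic", quad), ("ANN", ann)]:
        rmse = mean_squared_error(y, model.predict(X)) ** 0.5
        print(name, rmse)  # the ANN captures higher-order variation
    ```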

  16. Evaluation of Ordinary Least Square (OLS) and Geographically Weighted Regression (GWR) for Water Quality Monitoring: A Case Study for the Estimation of Salinity

    Science.gov (United States)

    Nazeer, Majid; Bilal, Muhammad

    2018-04-01

    A Landsat-5 Thematic Mapper (TM) dataset was used to estimate salinity in the coastal area of Hong Kong. Four adjacent Landsat TM images were used in this study and were atmospherically corrected using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer code. The atmospherically corrected images were further used to develop models for salinity using Ordinary Least Square (OLS) regression and Geographically Weighted Regression (GWR) based on in situ data of October 2009. Results show that the coefficient of determination (R²) of 0.42 between the OLS-estimated and in situ measured salinity is much lower than that of the GWR model, which is two times higher (R² = 0.86). This indicates that the GWR model has more ability than the OLS regression model to predict salinity and better capture its spatial heterogeneity. Salinity was high in Deep Bay (north-western part of Hong Kong), which might be due to industrial waste disposal, whereas salinity was estimated to be constant (32 practical salinity units) towards the open sea.
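
    The idea behind GWR can be sketched in a few lines: at each location, fit weighted least squares with weights that decay with distance, so the coefficients vary over space. The snippet below is a simplified illustration on synthetic data, not the GWR implementation used in the study.

    ```python
    # Locally weighted least squares with a Gaussian distance kernel:
    # the slope is re-estimated at every location. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    coords = rng.uniform(0, 10, size=(n, 2))
    band = rng.uniform(0, 1, size=n)              # one reflectance band
    slope = 1 + coords[:, 0] / 10                 # spatially varying effect
    salinity = 30 + slope * band + rng.normal(0, 0.1, n)

    X = np.column_stack([np.ones(n), band])
    bandwidth = 2.0                               # kernel bandwidth (assumed)

    local_slopes = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)         # distance-decay weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ salinity)
        local_slopes[i] = beta[1]

    print(local_slopes.min(), local_slopes.max())  # slope varies over space
    ```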

  17. Pathological assessment of liver fibrosis regression

    Directory of Open Access Journals (Sweden)

    WANG Bingqiong

    2017-03-01

    Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.

  18. Evaluating the factor structure, item analyses, and internal consistency of hospital anxiety and depression scale in Iranian infertile patients

    Directory of Open Access Journals (Sweden)

    Payam Amini

    2017-09-01

    Full Text Available Background: The hospital anxiety and depression scale (HADS) is a common screening tool designed to measure the level of anxiety and depression in different factor structures and has been extensively used in non-psychiatric populations and individuals experiencing fertility problems. Objective: The aims of this study were to evaluate the factor structure, item analyses, and internal consistency of the HADS in Iranian infertile patients. Materials and Methods: This cross-sectional study included 651 infertile patients (248 men and 403 women) referred to a referral infertility center in Tehran, Iran between January 2014 and January 2015. Confirmatory factor analysis was used to determine the underlying factor structure of the HADS among one-, two-, and three-factor models. Several goodness-of-fit indices were utilized, such as the comparative, normed and goodness of fit indices, the Akaike information criterion, and the root mean squared error of approximation. In addition to the HADS, the Satisfaction with Life Scale (SWLS) questionnaire as well as demographic and clinical information were administered to all patients. Results: The goodness-of-fit indices from the CFAs showed that the three- and one-factor models provided the best and worst fit, respectively, to the total, male and female datasets compared with the other factor structure models. The Cronbach's alpha for the anxiety and depression subscales was 0.866 and 0.753, respectively. The HADS subscales correlated significantly with the SWLS, indicating acceptable convergent validity. Conclusion: The HADS was found to be a three-factor screening instrument in the field of infertility.

  19. Evaluation of a modified 16-item Readiness for Interprofessional Learning Scale (RIPLS): Exploratory and confirmatory factor analyses.

    Science.gov (United States)

    Yu, Tzu-Chieh; Jowsey, Tanisha; Henning, Marcus

    2018-04-18

    The Readiness for Interprofessional Learning Scale (RIPLS) was developed to assess undergraduate readiness for engaging in interprofessional education (IPE). It has become an accepted and commonly used instrument. To determine utility of a modified 16-item RIPLS instrument, exploratory and confirmatory factor analyses were performed. Data used were collected from a pre- and post-intervention study involving 360 New Zealand undergraduate students from one university. Just over half of the participants were enrolled in medicine (51%) while the remainder were in pharmacy (27%) and nursing (22%). The intervention was a two-day simulation-based IPE course focused on managing unplanned acute medical problems in hospital wards ("ward calls"). Immediately prior to the course, 288 RIPLS were collected and immediately afterwards, 322 (response rates 80% and 89%, respectively). Exploratory factor analysis involving principal axis factoring with an oblique rotation method was conducted using pre-course data. The scree plot suggested a three-factor solution over two- and four-factor solutions. Subsequent confirmatory factor analysis performed using post-course data demonstrated partial goodness-of-fit for this suggested three-factor model. Based on these findings, further robust psychometric testing of the RIPLS or modified versions of it is recommended before embarking on its use in evaluative research in various healthcare education settings.

  20. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
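
    The complex-number device described above can be sketched directly with NumPy, which solves least squares for complex arrays; the data below are synthetic and the single-regressor setup is an assumption for brevity.

    ```python
    # Two-dimensional vectors encoded as complex numbers; the regression
    # coefficient is itself complex (a rotation plus scaling). Synthetic data.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100
    x = rng.normal(size=n) + 1j * rng.normal(size=n)   # independent vectors
    beta_true = 0.5 + 0.8j                              # vector coefficient
    noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    y = beta_true * x + noise                           # dependent vectors

    A = x.reshape(-1, 1)
    beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)    # complex least squares
    print(beta_hat)                                     # close to 0.5 + 0.8j
    ```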

  1. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis it is expected that there is correlation between the response and the predictor(s), but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, such as historical data and experience. In the end, the selection of the most important predictors is a judgment left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
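
    A common diagnostic for the problem described above is the variance inflation factor; the hedged sketch below computes VIFs with statsmodels on synthetic predictors, with the usual VIF > 10 rule of thumb as a convention rather than a result from the paper.

    ```python
    # Variance inflation factors (VIF) to flag multicollinear predictors.
    # Synthetic data; x2 is nearly a copy of x1, so its VIF is large.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(6)
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.1, size=100)   # strongly correlated with x1
    x3 = rng.normal(size=100)
    X = sm.add_constant(np.column_stack([x1, x2, x3]))

    for i in range(1, X.shape[1]):              # skip the constant column
        print(f"VIF for predictor {i}:", variance_inflation_factor(X, i))
    ```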

  2. Use of probabilistic weights to enhance linear regression myoelectric control

    Science.gov (United States)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
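
    The weighting scheme described in the Approach can be sketched abstractly: Gaussian class models yield a probability of intended movement, which scales the regression output. The snippet below is entirely synthetic, with assumed class means, a shared identity covariance, and fixed regression weights; it is not the authors' implementation.

    ```python
    # Probability-weighted regression output for one DOF: scale the linear
    # regression estimate by P(movement | EMG features) from Gaussian models.
    import numpy as np
    from scipy.stats import multivariate_normal

    means = {"rest": np.zeros(2),                 # assumed class means
             "flex": np.array([2.0, 0.5]),
             "extend": np.array([-2.0, -0.5])}
    cov = np.eye(2)                               # assumed shared covariance

    def movement_probability(features):
        """P(any movement | features) under equal class priors."""
        lik = {c: multivariate_normal.pdf(features, m, cov)
               for c, m in means.items()}
        return (lik["flex"] + lik["extend"]) / sum(lik.values())

    w = np.array([0.9, 0.4])                      # fixed regression weights
    features = np.array([1.8, 0.6])               # current EMG feature vector
    raw = float(w @ features)
    weighted = movement_probability(features) * raw
    print(raw, weighted)
    ```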

  3. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed, and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax 'deviance function' and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  4. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks, a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory-optimized C++ functions with an R interface for predicting the covariate-specific absolute risks, their confidence intervals, and their confidence bands based on right-censored time-to-event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product ... functionals. The software presented here is implemented in the riskRegression package.

  5. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  6. Assessment of S(α, β) libraries for criticality safety evaluations of wet storage pools by refined trend analyses

    International Nuclear Information System (INIS)

    Kolbe, E.; Vasiliev, A.; Ferroukhi, H.

    2009-01-01

    In a recent criticality safety evaluation (CSE) of a commercial wet storage pool applying MCNPX-2.5.0 in combination with the ENDF/B-VII.0 and JEFF-3.1 continuous energy cross section libraries, the maximum permissible initial fuel-enrichment limit for water-reflected configurations was found to be dependent upon the applied neutron cross section library. More detailed investigations indicated that the difference is mainly caused by different sub-libraries for thermal neutron scattering based on parameterizations of the S(α, β) scattering matrix. Hence an analysis of trends was done with respect to the low energy neutron flux in order to assess the S(α, β) data sets. First, when performing the trend analysis based on the full set of 149 benchmarks that were employed for the validation, significant trends could not be found. However, by analyzing a selected subset of benchmarks, clear trends with respect to the low energy neutron flux could be detected. The results presented in this paper demonstrate the sensitivity of specific configurations to the parameterizations of the S(α, β) scattering matrix and thus may help to improve CSE of wet storage pools. Finally, in addition to the low energy neutron flux, we also refined the trend analyses with respect to other key (spectrum-related) parameters by performing them with various selected subsets of the full suite of 149 benchmarks. The corresponding outcomes using MCNPX 2.5.0 in combination with the ENDF/B-VII.0, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, and JENDL-3.3 neutron cross section libraries are presented and discussed. (authors)

  7. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

  8. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  9. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book plus outline solutions for the exercises. This revision has been extensively class-tested.

  10. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  11. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We… with the proposed explicit noise-model extension.

  13. Artificial Neural Network and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  14. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  15. Evaluating external nutrient and suspended-sediment loads to Upper Klamath Lake, Oregon, using surrogate regressions with real-time turbidity and acoustic backscatter data

    Science.gov (United States)

    Schenk, Liam N.; Anderson, Chauncey W.; Diaz, Paul; Stewart, Marc A.

    2016-12-22

    Executive Summary: Suspended-sediment and total phosphorus loads were computed for two sites in the Upper Klamath Basin on the Wood and Williamson Rivers, the two main tributaries to Upper Klamath Lake. High temporal resolution turbidity and acoustic backscatter data were used to develop surrogate regression models to compute instantaneous concentrations and loads on these rivers. Regression models for the Williamson River site showed strong correlations of turbidity with total phosphorus and suspended-sediment concentrations (adjusted coefficients of determination [Adj R2] = 0.73 and 0.95, respectively). Regression models for the Wood River site had relatively poor, although statistically significant, relations of turbidity with total phosphorus, and of turbidity and acoustic backscatter with suspended-sediment concentration, with high prediction uncertainty. Total phosphorus loads for the partial 2014 water year (excluding October and November 2013) were 39 and 28 metric tons for the Williamson and Wood Rivers, respectively. These values are within the low range of phosphorus loads computed for these rivers from prior studies using water-quality data collected by the Klamath Tribes. The 2014 partial-year total phosphorus loads on the Williamson and Wood Rivers are assumed to be biased low because of the absence of data from the first 2 months of water year 2014, and the drought conditions that were prevalent during that water year. Therefore, total phosphorus and suspended-sediment loads in this report should be considered representative of a low-water year for the two study sites. Comparing loads from the Williamson and Wood River monitoring sites for November 2013–September 2014 shows that the Williamson and Sprague Rivers combined, as measured at the Williamson River site, contributed substantially more suspended sediment to Upper Klamath Lake than the Wood River, with 4,360 and 1,450 metric tons measured, respectively. Surrogate techniques have proven useful at…
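
    A minimal sketch of the surrogate-regression idea, assuming a simple log-log relation between turbidity and suspended-sediment concentration (SSC) and 15-minute unit values; the functional form, variable names and synthetic data are illustrative assumptions, not the USGS models.

```python
# Illustrative sketch of a turbidity surrogate model (assumed log-log form,
# synthetic data, 15-minute unit values): regress ln(SSC) on ln(turbidity),
# then combine predicted concentrations with discharge to integrate loads.
import numpy as np

def fit_surrogate(turbidity, ssc):
    b1, b0 = np.polyfit(np.log(turbidity), np.log(ssc), 1)
    return b0, b1

def interval_load_tons(turbidity, discharge_m3s, b0, b1, dt_s=900.0):
    conc_mg_l = np.exp(b0 + b1 * np.log(turbidity))     # predicted SSC, mg/L
    grams_per_s = conc_mg_l * discharge_m3s             # mg/L * m3/s = g/s
    return grams_per_s * dt_s / 1e6                     # metric tons/interval

rng = np.random.default_rng(1)
turb = rng.uniform(2, 200, 500)                         # sensor turbidity
ssc = 1.8 * turb**0.9 * rng.lognormal(0, 0.2, 500)      # paired SSC samples
b0, b1 = fit_surrogate(turb, ssc)
loads = interval_load_tons(turb, rng.uniform(5, 50, 500), b0, b1)
print(f"total suspended-sediment load: {loads.sum():.1f} metric tons")
```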

  16. Japanese standard method for safety evaluation using best estimate code based on uncertainty and scaling analyses with statistical approach

    International Nuclear Information System (INIS)

    Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi

    2009-01-01

    Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best estimate codes for safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)

  17. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
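
    For readers who want the estimator itself, here is a minimal batch sketch of ridge regression under nearly collinear geometry; the thesis's linearized recursive ridge filter is not reproduced, and the data are synthetic.

```python
# Minimal sketch of the (batch) ridge estimator, beta = (X'X + kI)^-1 X'y,
# on a nearly collinear design analogous to a poor-geometry condition.
import numpy as np

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])  # near-collinear
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=100)
print("OLS   (k=0):", ridge(X, y, 0.0))   # unstable, inflated coefficients
print("ridge (k=1):", ridge(X, y, 1.0))   # shrunk toward a stable solution
```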

  18. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the second edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting…

  19. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  20. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  1. Comparison of Classical Linear Regression and Orthogonal Regression According to the Sum of Squares Perpendicular Distances

    OpenAIRE

    KELEŞ, Taliha; ALTUN, Murat

    2016-01-01

    Regression analysis is a statistical technique for investigating and modeling the relationship between variables. The purpose of this study was the trivial presentation of the equation for orthogonal regression (OR) and the comparison of classical linear regression (CLR) and OR techniques with respect to the sum of squared perpendicular distances. For that purpose, the analyses were shown by an example. It was found that the sum of squared perpendicular distances of OR is smaller. Thus, it wa...

  2. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser

    OpenAIRE

    Hamilton, Matthew T.; Finger, John W.; Winzeler, Megan E.; Tuberville, Tracey D.

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to d...

  3. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression… with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted, so the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal… (Annette J. Dobson, Biometrics…)

  4. Dynamic Regression Model for Evaluating the Association Between Atmospheric Conditions and Deaths due to Respiratory Diseases in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Ana Carla dos Santos Gomes

    The article reports the modeling of mortality due to respiratory diseases as a function of atmospheric conditions, capturing significant associations and verifying the ability of stochastic modeling to predict deaths arising from the relationship between weather conditions and air pollution. The statistical methods used in the analysis were cross-correlation and pre-whitening, in addition to dynamic regression modeling combining the dynamics of time series and the effect of explanatory variables. The results show significant associations between mortality and sulfur dioxide, air temperature, atmospheric pressure, relative humidity, and an autoregressive structure. The cross-correlations captured significant lags between atmospheric variables and deaths: two months for SO2 and relative humidity, eleven months for PM10, seven months for O3, and eight months for air temperature, while the cross-correlation with NO2 occurred without lag. For CO, precipitation and atmospheric pressure, no cross-correlations were detected. Stochastic modeling showed that deaths due to respiratory diseases can be predicted from the combination of meteorological and air pollution variables, especially considering the existing trend and seasonality.

  5. Steganalysis using logistic regression

    Science.gov (United States)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods (it estimates class probabilities as well as providing a simple classification) and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study comparing the accuracy and speed of SVM and LR classifiers in the detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, on three image sets.
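
    The comparison can be sketched with scikit-learn on synthetic stand-in features (the study itself used the 686-dimensional SPAM set on real images); the sketch also shows the practical point above, that logistic regression supplies class probabilities where a plain SVM supplies only a decision.

```python
# Hedged sketch of the LR-vs-SVM comparison on synthetic stand-in features
# (the study used the 686-dimensional SPAM set on real images).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
cover = rng.normal(0.00, 1.0, size=(500, 50))   # stand-in "cover" features
stego = rng.normal(0.15, 1.0, size=(500, 50))   # slightly shifted "stego"
X = np.vstack([cover, stego])
y = np.array([0] * 500 + [1] * 500)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
svm = LinearSVC().fit(Xtr, ytr)
print("LR  accuracy:", lr.score(Xte, yte))
print("SVM accuracy:", svm.score(Xte, yte))
# Unlike the SVM decision, LR also yields class probabilities:
print("P(stego) for first test item:", lr.predict_proba(Xte[:1])[0, 1])
```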

  6. SEPARATION PHENOMENA LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

    This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon, which biases the estimation, yields different interpretations of the estimates across statistical tests (Wald, Likelihood Ratio and Score), and yields different estimates across iterative methods (Newton-Raphson and Fisher Scoring). It also presents an example that demonstrates the direct implications for the validation of the model and of its variables, and for the estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we present, briefly, the Firth correction for circumventing the separation phenomenon.
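
    A small sketch of the separation phenomenon itself, assuming scikit-learn and toy data: as the L2 penalty is relaxed, the fitted slope grows without bound because the unpenalised maximum likelihood estimate does not exist under complete separation; Firth's penalised likelihood, mentioned above, is one remedy.

```python
# Toy demonstration of complete separation (assumes scikit-learn): as the
# L2 penalty is relaxed, the fitted slope diverges because the unpenalised
# maximum likelihood estimate does not exist.
import numpy as np
from sklearn.linear_model import LogisticRegression

x = np.arange(8, dtype=float).reshape(-1, 1)
y = (x.ravel() > 3.5).astype(int)          # outcome perfectly split at 3.5

for C in (1.0, 1e3, 1e6):                  # weaker and weaker penalty
    slope = LogisticRegression(C=C, max_iter=10000).fit(x, y).coef_[0, 0]
    print(f"C = {C:>9g}   slope = {slope:8.2f}")
# Firth's penalised likelihood, mentioned above, keeps the estimate finite.
```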

  7. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory-optimized C++ functions with an R interface. … As a by-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical…

  8. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms…

  9. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard…
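
    A simplified sketch of the idea in these two records: Nadaraya-Watson kernel regression with one bandwidth per input dimension, adapted by minimising a leave-one-out estimate of the generalisation error. The Gaussian kernel, the optimiser and the toy data are assumptions, not the authors' exact algorithm.

```python
# Simplified sketch: Nadaraya-Watson regression with per-dimension
# bandwidths tuned by leave-one-out error (Gaussian kernel, Nelder-Mead
# optimiser and toy data are assumptions, not the authors' algorithm).
import numpy as np
from scipy.optimize import minimize

def loo_error(log_h, X, y):
    h = np.exp(log_h)                                   # positive bandwidths
    d2 = (((X[:, None, :] - X[None, :, :]) / h) ** 2).sum(-1)
    w = np.exp(-0.5 * d2)
    np.fill_diagonal(w, 0.0)                            # leave one out
    pred = (w @ y) / (w.sum(axis=1) + 1e-12)
    return np.mean((y - pred) ** 2)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # dims 1-2 irrelevant
res = minimize(loo_error, np.zeros(3), args=(X, y), method="Nelder-Mead")
print("adapted bandwidths:", np.exp(res.x))   # irrelevant dimensions widen
print("LOO MSE before/after:", loo_error(np.zeros(3), X, y),
      loo_error(res.x, X, y))
```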

  10. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented, which is realized using standard software for regression analysis.
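
    One way to read the regularisation device mentioned above, adding a fictitious data set "obtained" from an orthogonal design, is as least squares on an augmented system; with identity rows and zero responses this coincides with ridge regression by data augmentation. The sketch below illustrates that interpretation, which is an assumption rather than the authors' exact recipe.

```python
# Sketch of regularisation by fictitious data: appending sqrt(k)*I rows
# with zero responses to the design reproduces ridge regression (one
# reading of the device above, not necessarily the authors' exact recipe).
import numpy as np

def augmented_lstsq(X, y, k):
    p = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(k) * np.eye(p)])   # fictitious orthogonal rows
    y_aug = np.concatenate([y, np.zeros(p)])         # fictitious zero responses
    return np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

rng = np.random.default_rng(5)
x1 = rng.normal(size=60)
X = np.column_stack([x1, x1 + 1e-4 * rng.normal(size=60)])  # confounded pair
y = X @ np.array([0.5, 0.5]) + 0.05 * rng.normal(size=60)
print("plain     :", np.linalg.lstsq(X, y, rcond=None)[0])  # wild estimates
print("augmented :", augmented_lstsq(X, y, k=0.1))          # stabilised
```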

  11. SIMS analyses of ultra-low-energy B ion implants in Si: Evaluation of profile shape and dose accuracy

    International Nuclear Information System (INIS)

    Magee, C.W.; Hockett, R.S.; Bueyueklimanli, T.H.; Abdelrehim, I.; Marino, J.W.

    2007-01-01

    Numerous experimental studies for near-surface analyses of B in Si have shown that the B distribution within the top few nanometers is distorted by secondary ion mass spectrometry (SIMS) depth profiling with O2-flooding or normal incidence O2 bombardment. Furthermore, the presence of surface oxide affects the Xj determination as well as the B profile shape when SIMS analyses are conducted while fully oxidizing the analytical area. Nuclear techniques such as elastic recoil detection (ERD), nuclear reaction analysis (NRA), and high-resolution Rutherford backscattering spectrometry (HR-RBS) are known to provide a profile shape near the surface that is free of artifacts. Comparisons with SIMS analyses have shown that SIMS analyses without fully oxidizing the analytical area agree well with these techniques at sufficiently high concentrations (where the nuclear techniques are applicable). The ability to measure both the B profile and an oxide marker with this non-oxidizing SIMS technique also allows accurate positioning of the B profile with respect to the SiO2/Si interface. This SIMS analysis protocol has been used to study the differences in near-surface dopant distribution for plasma-based implants. This study specifically focuses on measuring near-surface profile shapes as well as total implant doses for ultra-shallow B implants in Si, especially those made with high peak B concentrations.

  12. 78 FR 63479 - Meta-Analyses of Randomized Controlled Clinical Trials (RCTs) for the Evaluation of Risk To...

    Science.gov (United States)

    2013-10-24

    …pharmaceutical industry and health care organizations, and others from the general public, about the use of meta-analyses of randomized trials as a tool for safety assessment in the regulation of pharmaceutical products… The PDUFA Goals Letter, titled "Enhancing Regulatory Science and Expediting Drug Development," includes an…

  13. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross-country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on 'good' policy. … investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.

  14. Structural brain alterations of Down's syndrome in early childhood evaluation by DTI and volumetric analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gunbey, Hediye Pinar; Bilgici, Meltem Ceyhan; Aslan, Kerim; Incesu, Lutfi [Ondokuz Mayis University, Faculty of Medicine, Department of Radiology, Kurupelit, Samsun (Turkey); Has, Arzu Ceylan [Bilkent University, National Magnetic Resonance Research Center, Ankara (Turkey); Ogur, Methiye Gonul [Ondokuz Mayis University, Department of Genetics, Samsun (Turkey); Alhan, Aslihan [Ufuk University, Department of Statistics, Ankara (Turkey)

    2017-07-15

    To provide an initial assessment of white matter (WM) integrity with diffusion tensor imaging (DTI) and the accompanying volumetric changes in WM and grey matter (GM) through volumetric analyses of young children with Down's syndrome (DS). Ten children with DS and eight healthy control subjects were included in the study. Tract-based spatial statistics (TBSS) were used in the DTI study for whole-brain voxelwise analysis of fractional anisotropy (FA) and mean diffusivity (MD) of WM. Volumetric analyses were performed with an automated segmentation method to obtain regional measurements of cortical volumes. Children with DS showed significantly reduced FA in association tracts of the fronto-temporo-occipital regions as well as the corpus callosum (CC) and anterior limb of the internal capsule (p < 0.05). Volumetric reductions included total cortical GM, cerebellar GM and WM volume, basal ganglia, thalamus, brainstem and CC in DS compared with controls (p < 0.05). These preliminary results suggest that DTI and volumetric analyses may reflect the earliest complementary changes of the neurodevelopmental delay in children with DS and can serve as surrogate biomarkers of the specific elements of WM and GM integrity for cognitive development. (orig.)

  15. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, with multicollinearity among them. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
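
    The baseline quantity being modified can be illustrated directly: fit a Poisson GLM and correlate the observed counts Y with the fitted means E(Y|X). The sketch below shows only this textbook definition on synthetic data; the paper's modification is not reproduced.

```python
# Baseline illustration (not the paper's modification): the regression
# correlation coefficient as corr(Y, E(Y|X)) from a fitted Poisson GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 2))
mu = np.exp(0.3 + 0.5 * X[:, 0] - 0.4 * X[:, 1])    # true conditional mean
y = rng.poisson(mu)                                 # Poisson-distributed Y

fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
r = np.corrcoef(y, fit.fittedvalues)[0, 1]          # corr(Y, E(Y|X))
print("regression correlation coefficient:", round(r, 3))
```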

  16. A systematic meta-review of evaluations of youth violence prevention programs: Common and divergent findings from 25 years of meta-analyses and systematic reviews

    Science.gov (United States)

    Matjasko, Jennifer L.; Vivolo-Kantor, Alana M.; Massetti, Greta M.; Holland, Kristin M.; Holt, Melissa K.; Cruz, Jason Dela

    2018-01-01

    Violence among youth is a pervasive public health problem. In order to make progress in reducing the burden of injury and mortality that result from youth violence, it is imperative to identify evidence-based programs and strategies that have a significant impact on violence. There have been many rigorous evaluations of youth violence prevention programs. However, the literature is large, and it is difficult to draw conclusions about what works across evaluations from different disciplines, contexts, and types of programs. The current study reviews the meta-analyses and systematic reviews published prior to 2009 that synthesize evaluations of youth violence prevention programs. This meta-review reports the findings from 37 meta-analyses and 15 systematic reviews; the included reviews were coded on measures of the social ecology, prevention approach, program type, and study design. A majority of the meta-analyses and systematic reviews were found to demonstrate moderate program effects. Meta-analyses yielded marginally smaller effect sizes compared to systematic reviews, and those that included programs targeting family factors showed marginally larger effects than those that did not. In addition, there are a wide range of individual/family, program, and study moderators of program effect sizes. Implications of these findings and suggestions for future research are discussed. PMID:29503594

  17. A methodology for evaluating weighting functions using MCNP and its application to PWR ex-core analyses

    International Nuclear Information System (INIS)

    Pecchia, Marco; Vasiliev, Alexander; Ferroukhi, Hakim; Pautz, Andreas

    2017-01-01

    Highlights: evaluation of neutron source importance for a given tally; assessment of ex-core detector response plus its uncertainty; direct use of neutron tracks evaluated by a Monte Carlo neutron transport code. Abstract: The ex-core neutron detectors are commonly used to control reactor power in light water reactors. Therefore, it is relevant to understand the importance of a neutron source to the ex-core detector response. In mathematical terms, this information is conveniently represented by the so-called weighting functions. A new methodology based on the MCNP code for evaluating the weighting functions starting from the neutron history database is presented in this work. A simultaneous evaluation of the weighting functions in a user-given Cartesian coverage mesh is the main advantage of the method. The capability to generate weighting functions simultaneously over both spatial and energy ranges is the innovative part of this work. An interpolation tool then complements the methodology, allowing the generation of weighting functions up to the pin-by-pin fuel segment, where a direct evaluation is not possible due to low statistical precision. A comparison with reference results provides a verification of the methodology. Finally, an application to investigate the role of ex-core detector spatial location and core burnup for a Swiss nuclear power plant is provided.

  18. Evaluation of severe accident risks: Methodology for the containment, source term, consequence, and risk integration analyses. Volume 1, Revision 1

    International Nuclear Information System (INIS)

    Gorham, E.D.; Breeding, R.J.; Brown, T.D.; Harper, F.T.; Helton, J.C.; Murfin, W.B.; Hora, S.C.

    1993-12-01

    NUREG-1150 examines the risk to the public from five nuclear power plants. The NUREG-1150 plant studies are Level III probabilistic risk assessments (PRAs) and, as such, they consist of four analysis components: accident frequency analysis, accident progression analysis, source term analysis, and consequence analysis. This volume summarizes the methods utilized in performing the last three components and the assembly of these analyses into an overall risk assessment. The NUREG-1150 analysis approach is based on the following ideas: (1) general and relatively fast-running models for the individual analysis components, (2) well-defined interfaces between the individual analysis components, (3) use of Monte Carlo techniques together with an efficient sampling procedure to propagate uncertainties, (4) use of expert panels to develop distributions for important phenomenological issues, and (5) automation of the overall analysis. Many features of the new analysis procedures were adopted to facilitate a comprehensive treatment of uncertainty in the complete risk analysis. Uncertainties in the accident frequency, accident progression and source term analyses were included in the overall uncertainty assessment. The uncertainties in the consequence analysis were not included in this assessment. A large effort was devoted to the development of procedures for obtaining expert opinion and the execution of these procedures to quantify parameters and phenomena for which there is large uncertainty and divergent opinions in the reactor safety community.

  19. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations…

  20. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations…

  1. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    …parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models, but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we…

  2. The use of exploratory analyses within the National Institute for Health and Care Excellence single technology appraisal process: an evaluation and qualitative analysis.

    Science.gov (United States)

    Kaltenthaler, Eva; Carroll, Christopher; Hill-McManus, Daniel; Scope, Alison; Holmes, Michael; Rice, Stephen; Rose, Micah; Tappenden, Paul; Woolacott, Nerys

    2016-04-01

    …raised by an ERG in its critique of the submitted economic evidence. These analyses had more influence on recommendations earlier in the STA process than later on in the process. The descriptions of analyses undertaken were often highly specific to a particular STA and could be inconsistent across ERG reports, and thus difficult to interpret. Evidence Review Groups frequently conduct exploratory analyses to test or improve the economic evaluations submitted by companies as part of the STA process. ERG exploratory analyses often have an influence on the recommendations produced by the ACs. More in-depth analysis is needed to understand how ERGs make decisions regarding which exploratory analyses should be undertaken. More research is also needed to fully understand which types of exploratory analyses are most useful to ACs in their decision-making. Funded by the National Institute for Health Research Health Technology Assessment programme.

  3. ISP 22 OECD/NEA/CSNI International standard problem n. 22. Evaluation of post-test analyses

    International Nuclear Information System (INIS)

    1992-07-01

    The present report deals with the open re-evaluation of the originally double-blind CSNI International Standard Problem 22, based on the test SP-FW-02 performed in the SPES facility. The SPES apparatus is an experimental simulator of the Westinghouse PWR-PUN plant. The test SP-FW-02 (ISP22) simulates a complete loss of feedwater with delayed injection of auxiliary feedwater. The main parts of the report are: outline of the test facility and of the SP-FW-02 experiment; overview of pre-test activities; overview of input models used by post-test participants; evaluation of participant predictions; and evaluation of qualitative and quantitative code accuracy of pre-test and post-test calculations.

  4. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Evaluation of the organoleptic qualities of old apple varieties by sensory analysis

    Directory of Open Access Journals (Sweden)

    Lateur M.

    2001-01-01

    Organoleptic properties of old apple cultivars evaluated by sensory analysis. In the framework of a research programme focused on the evaluation and valorization of fruit tree genetic resources, the sensory analysis technique has been used to evaluate the organoleptic properties of six old apple cultivars on trial in Belgium and the north of France. The aims were to define their optimal picking time and storage aptitude, so that marketing advice could be given to growers. Methodological aspects of sensory analysis parameters adapted to fresh fruit were investigated, e.g. panel selection and training, and the choice of descriptors. Sensory analysis was applied just after picking and periodically during the storage period. For each fruit sample, different characteristics were measured (Brix, pH and flesh firmness) with the objective of calculating the correlation between sensory analysis data and analytical data. The results show that for the Jonagold cultivar there is a very high correlation between the sensory "crunchiness", "flesh firmness" and "juiciness" data. Other results, based on the experimentation with the six old cultivars, confirm the correlation between the sensory analysis of "flesh firmness" and "juiciness"; they show that "flesh firmness" can be assessed well by a sensory analysis panel, and that the global appreciation of fruit quality depends mostly on "juiciness", "flesh firmness" and the sweet sensation. Sensory analysis can give good indications for better commercial management of apple cultivars concerning picking date, storage capacity and the best storage conditions to be chosen.

  6. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in a seawater reverse osmosis desalination plant

    International Nuclear Information System (INIS)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo

    2015-01-01

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, feed turbidity, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.

  7. Evaluation of multivariate statistical analyses for monitoring and prediction of processes in a seawater reverse osmosis desalination plant

    Energy Technology Data Exchange (ETDEWEB)

    Kolluri, Srinivas Sahan; Esfahani, Iman Janghorban; Garikiparthy, Prithvi Sai Nadh; Yoo, Chang Kyoo [Kyung Hee University, Yongin (Korea, Republic of)

    2015-08-15

    Our aim was to analyze, monitor, and predict the outcomes of processes in a full-scale seawater reverse osmosis (SWRO) desalination plant using multivariate statistical techniques. Multivariate analysis of variance (MANOVA) was used to investigate the performance and efficiencies of two SWRO processes, namely, pore controllable fiber filter reverse osmosis (PCF-SWRO) and sand filtration-ultra filtration-reverse osmosis (SF-UF-SWRO). Principal component analysis (PCA) was applied to monitor the two SWRO processes. PCA monitoring revealed that the SF-UF-SWRO process could be analyzed reliably with a low number of outliers and disturbances. Partial least squares (PLS) analysis was then conducted to predict which of the seven input parameters of feed flow rate, PCF/SF-UF filtrate flow rate, temperature of feed water, feed turbidity, pH, reverse osmosis (RO) flow rate, and pressure had a significant effect on the outcome variables of permeate flow rate and concentration. Root mean squared errors (RMSEs) of the PLS models for permeate flow rates were 31.5 and 28.6 for the PCF-SWRO process and SF-UF-SWRO process, respectively, while RMSEs of permeate concentrations were 350.44 and 289.4, respectively. These results indicate that the SF-UF-SWRO process can be modeled more accurately than the PCF-SWRO process, because the RMSE values of permeate flow rate and concentration obtained using a PLS regression model of the SF-UF-SWRO process were lower than those obtained for the PCF-SWRO process.
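
    A minimal sketch of the PLS prediction step described in the two records above, with synthetic stand-ins for the seven input parameters and the two permeate outcomes; the data, train/test split and component count are illustrative assumptions, not the plant study's setup.

```python
# Minimal PLS sketch with synthetic stand-ins for the seven plant inputs
# and the two permeate outcomes (data and component count are assumptions).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 7))                 # feed flow, turbidity, pH, ...
B = rng.normal(size=(7, 2))
Y = X @ B + 0.5 * rng.normal(size=(400, 2))   # permeate flow rate, conc.

pls = PLSRegression(n_components=3).fit(X[:300], Y[:300])
Y_hat = pls.predict(X[300:])
rmse = np.sqrt(mean_squared_error(Y[300:], Y_hat, multioutput="raw_values"))
print("RMSE per output:", rmse)
```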

  8. Survey and evaluation of inherent safety characteristics and passive safety systems for use in probabilistic safety analyses

    International Nuclear Information System (INIS)

    Wetzel, N.; Scharfe, A.

    1998-01-01

    The present report examines the possibilities and limits of a probabilistic safety analysis for evaluating passive safety systems and inherent safety characteristics. The inherent safety characteristics are based on physical principles that, together with the safety systems, prevent damage. A probabilistic evaluation of the inherent safety characteristics is not made. An inventory of passive safety systems of established nuclear power plant types in the Federal Republic of Germany was drawn up. The treatment of passive safety systems in the analyses of established nuclear power plant types was examined. The analysis showed that the passive mode of operation was always assumed to be successful; a probabilistic evaluation was not performed. The unavailability of a passive safety system was determined by the failure of the active components that are necessary to activate it. To evaluate the passive safety features in new nuclear power plant concepts, the AP600 from Westinghouse, the SBWR from General Electric and the SWR 600 from Siemens were selected. Among these three reactor concepts, the SWR 600 is especially attractive because its safety features need no energy sources or instrumentation. First approaches for assessing the reliability of passively operating systems are summarized. Generally, it can be established that the core melt frequency for the passive concepts AP600 and SBWR compares favourably with the probabilistic objectives for the European Pressurized Water Reactor (EPR). Among the passive concepts, the SWR 600 is particularly interesting: its passive systems need no energy sources or instrumentation, while active operational systems and active safety equipment are retained. Siemens argues that with this concept the frequency of core melt will be two orders of magnitude lower than for conventional reactors. (orig.)

  9. Evaluation of chemical changes during Myrciaria cauliflora (jabuticaba fruit) fermentation by ¹H NMR spectroscopy and chemometric analyses

    Energy Technology Data Exchange (ETDEWEB)

    Fortes, Gilmara A.C.; Naves, Sara S.; Ferri, Pedro H.; Santos, Suzana C., E-mail: suzana.quimica.ufg@hotmail.com [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Inst. de Quimica. Lab. de Bioatividade Molecular

    2012-10-15

    Organic acids, sugars, alcohols, phenolic compounds, color properties, pH and titratable acidity were monitored during the commercial fermentation of jabuticaba (Myrciaria cauliflora) by ¹H nuclear magnetic resonance (NMR) spectroscopy, spectrophotometric assays and standard methods of analysis. The data collected were analyzed by principal component (PCA), hierarchical cluster (HCA) and canonical correlation (CCA) analyses. Two sample groups were distinguished, and the variables responsible for the separation were sugars, anthocyanins, alcohols, hue and acetic and succinic acids. The canonical correlation analysis confirmed the influence of alcohols (ethanol, methanol and glycerol), organic acids (citric, succinic and acetic acids), pH and titratable acidity on the extraction and stability of anthocyanins and copigments. As a result, color properties were also affected by phenolic variation throughout the fermentative process. (author)

  10. Multi-person and multi-attribute design evaluations using evidential reasoning based on subjective safety and cost analyses

    International Nuclear Information System (INIS)

    Wang, J.; Yang, J.B.; Sen, P.

    1996-01-01

    This paper presents an approach for ranking proposed design options based on subjective safety and cost analyses. Hierarchical system safety analysis is carried out using fuzzy sets and evidential reasoning. This involves safety modelling by fuzzy sets at the bottom level of a hierarchy and safety synthesis by evidential reasoning at higher levels. Fuzzy sets are also used to model the cost incurred for each design option. An evidential reasoning approach is then employed to synthesise the estimates of safety and cost, which are made by multiple designers. The developed approach is capable of dealing with problems of multiple designers, multiple attributes and multiple design options to select the best design. Finally, a practical engineering example is presented to demonstrate the proposed multi-person and multi-attribute design selection approach

  11. Lipid and DNA biomarker analyses of Narragansett Bay Sediments: Evaluating the UK'37 proxy in an Estuarine Environment

    Science.gov (United States)

    George, S. E.; Herbert, T.; Amaral-Zettler, L. A.; Richter, N.

    2017-12-01

    Long chain polyunsaturated alkenone (LCA) lipid biomarkers produced by haptophyte phytoplankton species within the Order Isochrysidales (Phylum Haptophyta) have proven exceptionally useful in paleotemperature studies by means of the Uk'37 and Uk37 indices. Two closely related Group III haptophytes, Emiliania huxleyi and Gephyrocapsa oceanica, are the primary alkenone synthesizers in the modern ocean, while freshwater systems host the distinct Group I phylotype, sometimes called the Greenland phylotype in reference to the location of its original discovery. Group I haptophytes produce large quantities of the distinct C37:4 ketone, which acts as a chemical "fingerprint" in sediments. The utility of alkenones as a paleotemperature proxy in estuarine environments has remained largely untested, representing an under-utilized opportunity to construct high-resolution paleotemperature records from environments at the intersection of fluvial and marine systems. This uncertainty is due, in part, to the presence of multiple haptophyte groups in estuaries, resulting in a mixed alkenone signature. To determine the community composition of alkenone-producing haptophytes within Narragansett Bay, four geographically separated cores from within the Bay were analyzed for alkenones as well as for the presence of haptophyte rRNA biomarker genes. Haptophyte rRNA genes (small and large subunit) were recovered from surface and near-subsurface samples, and in conjunction with alkenone profiles reveal recent haptophyte community structure and alkenone production regimes throughout the Bay. A surprising result is the recovery of rRNA biomarker genes with a 100% match to the open-ocean alkenone producer E. huxleyi in locations away from large fresh water inputs to the Bay. Results of these analyses elucidate the effect of salinity and nutrient dynamics on alkenone-producing haptophyte communities and enhance the applicability of long chain polyunsaturated alkenones as lipid biomarkers in estuarine…

  12. Human factors evaluation of remote afterloading brachytherapy. Supporting analyses of human-system interfaces, procedures and practices, training and organizational practices and policies. Volume 3

    International Nuclear Information System (INIS)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-07-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the second, third, fourth, and fifth phases of the project, which involved detailed analyses of four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training practices and policies; and organizational practices and policies, respectively. Findings based on these analyses provided factual and conceptual support for the final phase of this project, which identified factors leading to human error in RAB. The impact of those factors on RAB performance was then evaluated and prioritized in terms of safety significance, and alternative approaches for resolving safety significant problems were identified and evaluated

  13. Human factors evaluation of remote afterloading brachytherapy. Supporting analyses of human-system interfaces, procedures and practices, training and organizational practices and policies. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science & Engineering Group, San Diego, CA (United States)] [and others]

    1995-07-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the second, third, fourth, and fifth phases of the project, which involved detailed analyses of four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training practices and policies; and organizational practices and policies, respectively. Findings based on these analyses provided factual and conceptual support for the final phase of this project, which identified factors leading to human error in RAB. The impact of those factors on RAB performance was then evaluated and prioritized in terms of safety significance, and alternative approaches for resolving safety significant problems were identified and evaluated.

  14. Evaluation of maintenance strategies for steam generator tubes in pressurized water reactors. 2. Cost and profitability analyses

    International Nuclear Information System (INIS)

    Isobe, Y.; Sagisaka, M.; Yoshimura, S.; Yagawa, G.

    2000-01-01

    As an application of probabilistic fracture mechanics (PFM), a risk-benefit analysis was carried out to evaluate maintenance activities for steam generator (SG) tubes used in pressurized water reactors (PWRs). The analysis was conducted for SG tubes made of Inconel 600, and also for Inconel 690, assuming crack initiation and crack propagation laws based on Inconel 600 data. The following results were drawn from the analysis. Improvement of inspection accuracy reduces the maintenance costs significantly and is preferable from the viewpoint of profitability, owing to the reduction of SG tube leakage and rupture. There is a certain region of SCC properties of SG tubes in which sampling inspection is effective. (author)

  15. A comparison of Cox and logistic regression for use in genome-wide association studies of cohort and case-cohort design.

    Science.gov (United States)

    Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M

    2017-06-01

    Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work does not completely cover the GWAS setting nor extend to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies. First to analyse all SNPs with logistic regression to identify associated variants below a pre-defined P-value threshold, and second to fit Cox regression (appropriately weighted in case-cohort studies) to those identified SNPs to ensure accurate estimation of association with disease.
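
    The proposed two-step approach can be sketched as follows on simulated cohort data: a fast logistic screen over all SNPs, then Cox regression on the survivors. The lifelines package supplies the Cox step here, the screening threshold is an arbitrary placeholder, and the case-cohort weighting discussed above is omitted.

```python
# Sketch of the two-step idea on simulated cohort data: a fast logistic
# screen over all SNPs, then Cox regression (via lifelines) on the hits.
# The 1e-3 screening threshold is an arbitrary placeholder.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n, n_snps = 1000, 200
G = rng.binomial(2, 0.3, size=(n, n_snps)).astype(float)  # genotype dosages
hazard = np.exp(0.4 * G[:, 0])                            # SNP 0 is causal
time = rng.exponential(1.0 / hazard)
cutoff = np.quantile(time, 0.7)
event = (time < cutoff).astype(int)                       # ~30% censoring
time = np.minimum(time, cutoff)

# Step 1: computationally cheap logistic screen on the event indicator
hits = [j for j in range(n_snps)
        if sm.Logit(event, sm.add_constant(G[:, j])).fit(disp=0).pvalues[1] < 1e-3]

# Step 2: Cox regression only for the SNPs that passed the screen
for j in hits:
    df = pd.DataFrame({"time": time, "event": event, "snp": G[:, j]})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(f"SNP {j}: hazard ratio = {np.exp(cph.params_['snp']):.2f}")
```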

  16. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and estimates its parameters by ordinary least squares. Then a significance test method for the polynomial regression function is derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
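
    A worked sketch of the procedure outlined: fit the polynomial by ordinary least squares and test the regression's overall significance with the usual F statistic, F = (SSR/p) / (SSE/(n-p-1)). The decay-heat numbers below are synthetic placeholders.

```python
# Worked sketch of the overall significance test for a fitted polynomial:
# F = (SSR/p) / (SSE/(n-p-1)); the decay-heat numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 40)                       # e.g. time (arbitrary units)
y = 5.0 - 0.8 * t + 0.04 * t**2 + 0.1 * rng.normal(size=t.size)

p = 2                                            # polynomial degree
y_hat = np.polyval(np.polyfit(t, y, p), t)

sse = np.sum((y - y_hat) ** 2)                   # residual sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)            # regression sum of squares
n = t.size
F = (ssr / p) / (sse / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)
print(f"F = {F:.1f}, p = {p_value:.3g}")         # regression is significant
```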

  17. The Impact of Regression to the Mean on Economic Evaluation in Quasi-Experimental Pre-Post Studies: The Example of Total Knee Replacement Using Data from the Osteoarthritis Initiative.

    Science.gov (United States)

    Schilling, Chris; Petrie, Dennis; Dowsey, Michelle M; Choong, Peter F; Clarke, Philip

    2017-12-01

    Many treatments are evaluated using quasi-experimental pre-post studies susceptible to regression to the mean (RTM). Ignoring RTM could bias the economic evaluation. We investigated this issue using the contemporary example of total knee replacement (TKR), a common treatment for end-stage osteoarthritis of the knee. Data (n = 4796) were obtained from the Osteoarthritis Initiative database, a longitudinal observational study of osteoarthritis. TKR patients (n = 184) were matched to non-TKR patients, using propensity score matching on the predicted hazard of TKR and exact matching on osteoarthritis severity and health-related quality of life (HrQoL). The economic evaluation using the matched control group was compared to the standard method of using the pre-surgery score as the control. Matched controls were identified for 56% of the primary TKRs. The matched control HrQoL trajectory showed evidence of RTM accounting for a third of the estimated QALY gains from surgery using the pre-surgery HrQoL as the control. Incorporating RTM into the economic evaluation significantly reduced the estimated cost effectiveness of TKR and increased the uncertainty. A generalized incremental cost-effectiveness ratio (ICER) bias correction factor was derived to account for RTM in cost-effectiveness analysis. RTM should be considered in economic evaluations based on quasi-experimental pre-post studies. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    The order of the model is determined easily. The linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits the data satisfactorily.
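
    The report's recursive coefficient-update equations are not given in this abstract, so the sketch below illustrates only the order-selection idea it describes, by refitting models of increasing order and stopping once the residual sum of squares no longer improves appreciably; the tolerance is an assumption.

```python
# Order selection for a polynomial linear-regression model: increase
# the order until the residual sum of squares stops improving.
import numpy as np

def select_order(t, y, max_order=8, tol=0.01):
    prev_rss = np.inf
    for order in range(1, max_order + 1):
        X = np.vander(t, order + 1, increasing=True)   # 1, t, ..., t^order
        coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - X @ coef) ** 2)
        if prev_rss - rss < tol * prev_rss:            # negligible gain: stop
            return order - 1
        prev_rss = rss
    return max_order

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * t - 3.0 * t**2 + rng.normal(0, 0.05, t.size)
print("selected order:", select_order(t, y))           # expect 2
```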

  19. Experimental model to evaluate in vivo and in vitro cartilage MR imaging by means of histological analyses

    International Nuclear Information System (INIS)

    Bittersohl, B.; Mamisch, T.C.; Welsch, G.H.; Stratmann, J.; Forst, R.; Swoboda, B.; Bautz, W.; Rechenberg, B. von; Cavallaro, A.

    2009-01-01

    Objectives: Implementation of an experimental model to compare cartilage MR imaging by means of histological analyses. Material and methods: MRI was obtained from 4 patients expecting total knee replacement at 1.5 and/or 3 T prior to surgery. The timeframe between pre-op MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5 T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30 deg., BW = 110 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice-thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40 deg., BW = 130 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice-thickness). At 3 T, scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10 deg., BW = 130 kHz, 170 x 170 mm FOV, 320 x 320 matrix, 0.5 mm slice-thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25 deg., BW = 200 kHz, 135 x 150 mm FOV, 288 x 320 matrix, 0.5 mm slice-thickness). Imaging of the specimens was done the same day at 1.5 T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability was determined using Kappa analysis. Results: Low correlation (sensitivity, specificity) was found for both sequences in normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field-strengths was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. Conclusions: Correlating MR images with continuous histological slices was feasible by using three-dimensional imaging, multi-planar reformat and marker pins. The capability of diagnosing early cartilage changes with high accuracy

  20. Evaluating the intensity of fire at the Acheulian site of Gesher Benot Ya'aqov-Spatial and thermoluminescence analyses.

    Directory of Open Access Journals (Sweden)

    Nira Alperson-Afil

    This manuscript presents an attempt to evaluate the intensity of fire through spatial patterning and thermoluminescence methodology. Previous studies of Layer II-6 Level 2 at the Acheulian site of Gesher Benot Ya'aqov suggested that hominins differentiated their activities across space, including multiple activities around a hearth reconstructed on the basis of the distribution of burned flint artifacts. A transect of ~4 m was extended from the center of the reconstructed hearth of Level 2 to its periphery in order to examine the intensity of fire. Burned and unburned flint microartifacts were sampled along this transect. The results of earlier and current thermoluminescence (TL) analyses demonstrate a general agreement with the macroscopic determination of burning, indicating that the possibility of misinterpretation based on macroscopic observations is negligible. The TL signal from flint microartifacts close to the hearth's center shows unambiguous signs of strong heating, whereas with increasing distance from the hearth the TL signal can be interpreted as a result of decreasing temperatures and/or shorter durations of exposure to fire, in addition to a decreasing number of flints showing fire damage. Our study shows that TL analysis can identify some variation in fire intensity, which allows a more precise classification of burned flint microartifacts with respect to their heating history.

  1. Cation-π interactions: computational analyses of the aromatic box motif and the fluorination strategy for experimental evaluation.

    Science.gov (United States)

    Davis, Matthew R; Dougherty, Dennis A

    2015-11-21

    Cation-π interactions are common in biological systems, and many structural studies have revealed the aromatic box as a common motif. With the aim of understanding the nature of the aromatic box, several computational methods were evaluated for their ability to reproduce experimental cation-π binding energies. We find the DFT method M06 with the 6-31G(d,p) basis set performs best of several methods tested. The binding of benzene to a number of different cations (sodium, potassium, ammonium, tetramethylammonium, and guanidinium) was studied. In addition, the binding of the organic cations NH4(+) and NMe4(+) to ab initio generated aromatic boxes as well as examples of aromatic boxes from protein crystal structures were investigated. These data, along with a study of the distance dependence of the cation-π interaction, indicate that multiple aromatic residues can meaningfully contribute to cation binding, even with displacements of more than an angstrom from the optimal cation-π interaction. Progressive fluorination of benzene and indole was studied as well, and binding energies obtained were used to reaffirm the validity of the "fluorination strategy" to study cation-π interactions in vivo.

  2. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.

  3. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  4. An Evaluation Quality Framework for Analysing School-Based Learning (SBL) to Work-Based Learning (WBL) Transition Module

    International Nuclear Information System (INIS)

    Alseddiqi, M; Mishra, R; Pislaru, C

    2012-01-01

    The paper presents the results from a quality framework to measure the effectiveness of a new engineering course entitled 'school-based learning (SBL) to work-based learning (WBL) transition module' in the Technical and Vocational Education (TVE) system in Bahrain. The framework is an extended version of existing information quality frameworks with respect to pedagogical and technological contexts. It incorporates specific pedagogical and technological dimensions as per the requirements of modern industry in Bahrain. A questionnaire on the effectiveness of the new transition module was distributed to various stakeholders, including TVE teachers and students. The aim was to gather critical information for diagnosing, monitoring and evaluating different views and perceptions about the effectiveness of the new module. The analysis categorised the quality dimensions by their relative importance, using the principal component analysis available in SPSS. The analysis clearly identified the most important quality dimensions integrated in the new module for SBL-to-WBL transition. It was also apparent that the new module contains workplace proficiencies, prepares TVE students for work placement, provides effective teaching and learning methodologies, integrates innovative technology in the process of learning, meets modern industrial needs, and presents a cooperative learning environment for TVE students. From the principal component analysis findings, the relative importance of each factor and its quality dimensions was calculated as a percentage; comparing these percentages identified the most important factor as well as the most important quality dimensions. The re-arranged quality dimensions, together with the extended number of factors, refined the extended information quality framework into a revised quality framework.

  5. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

    Science.gov (United States)

    Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

    2018-04-01

    In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.

  6. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
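
    For reference, here is a small sketch of three of the five fits discussed: OLS(Y|X), OLS(X|Y), and their bisector. The bisector slope formula follows the form given by Isobe et al. (reproduced here from memory, so it should be verified against the paper), and the uncertainty formulas are omitted.

```python
# Three symmetric-and-asymmetric line fits for bivariate data with
# unknown measurement errors: OLS(Y|X), OLS(X|Y), and the OLS bisector.
import numpy as np

def ols_lines(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    b1 = np.sum(xc * yc) / np.sum(xc * xc)   # OLS(Y|X) slope
    b2 = np.sum(yc * yc) / np.sum(xc * yc)   # OLS(X|Y) slope
    # bisector of the two OLS lines, for symmetric treatment of x and y
    b3 = (b1 * b2 - 1.0 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)
    intercepts = [y.mean() - b * x.mean() for b in (b1, b2, b3)]
    return (b1, b2, b3), intercepts
```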

  7. Industrial Fuel Gas Demonstration Plant Program. Conceptual design and evaluation of commercial plant. Volume III. Economic analyses (Deliverable Nos. 15 and 16)

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-01-01

    This report presents the results of Task I of Phase I in the form of a Conceptual Design and Evaluation of Commercial Plant report. The report is presented in four volumes as follows: I - Executive Summary, II - Commercial Plant Design, III - Economic Analyses, IV - Demonstration Plant Recommendations. Volume III presents the economic analyses for the commercial plant and the supporting data. General cost and financing factors used in the analyses are tabulated. Three financing modes are considered. The product gas cost calculation procedure is identified and appendices present computer inputs and sample computer outputs for the MLGW, Utility, and Industry Base Cases. The results of the base case cost analyses for plant fenceline gas costs are as follows: Municipal Utility, (e.g. MLGW), $3.76/MM Btu; Investor Owned Utility, (25% equity), $4.48/MM Btu; and Investor Case, (100% equity), $5.21/MM Btu. The results of 47 IFG product cost sensitivity cases involving a dozen sensitivity variables are presented. Plant half size, coal cost, plant investment, and return on equity (industrial) are the most important sensitivity variables. Volume III also presents a summary discussion of the socioeconomic impact of the plant and a discussion of possible commercial incentives for development of IFG plants.

  8. Evaluation of the Pseudostatic Analyses of Earth Dams Using FE Simulation and Observed Earthquake-Induced Deformations: Case Studies of Upper San Fernando and Kitayama Dams

    Directory of Open Access Journals (Sweden)

    Tohid Akhlaghi

    2014-01-01

    Evaluation of the accuracy of the pseudostatic approach is governed by the accuracy with which the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which had been designed using the pseudostatic approach and were damaged during the 1971 San Fernando and 1995 Kobe earthquakes, were investigated and analyzed. The finite element models of the dams were prepared based on the detailed available data and the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the Plaxis code. The pseudostatic seismic coefficients used in the design and analyses of the dams were then compared with the seismic coefficients obtained from dynamic analyses of the simulated models as well as other available pseudostatic correlations. Based on the comparisons made, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed.

  9. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

  10. Analysis and evaluation of GAVE chains. Part 2 of 3. Final report (sheets presentation); Analyse en evaluatie van GAVE-ketens. Deel 2 van 3. Final report (sheetpresentation)

    Energy Technology Data Exchange (ETDEWEB)

    Bosma, W.J.P. [Arthur D. Little International, Rotterdam (Netherlands)

    1999-12-01

    This report contains the detailed findings of the analysis, evaluation, and integration of Novem GAVE options. This main report is meant for the reader who is interested in the detailed findings, as well as an overview of the results. For readers who are mainly interested in the high-level results, and are comfortable with Dutch, there is a short text summary of our results, entitled 'Analyse en evaluatie van GAVE-ketens, management summary' (part 1). Readers who are interested in the underlying data and detailed assumptions are encouraged to consult the appendix to this report, entitled 'Analyse en evaluatie van GAVE-ketens, appendices' (part 3).

  11. Avaliação de medidas da persistência da lactação de cabras da raça Saanen sob modelo de regressão aleatória; Evaluation of lactation persistency measures of Saanen goats under a random regression model

    Directory of Open Access Journals (Sweden)

    Gilberto Romeiro de Oliveira Menezes

    2010-08-01

    Weekly test-day milk yield records (10,238) from 388 first lactations of Saanen goats were used to evaluate six measures of lactation persistency, in order to determine which is the most suitable for use in genetic evaluations of this trait. The six measures are adaptations of measures used in dairy cattle, obtained by replacing the bovine reference values in the formulas with those for goats. The values used in the calculations were obtained from random regression models. Heritability estimates for the persistency measures ranged from 0.03 to 0.09. Genetic correlations between persistency measures and milk yield up to 268 days ranged from -0.64 to 0.67. Because it showed the lowest genetic correlation with yield at 268 days (0.14), the persistency measure PS4, obtained as the sum of the values from the 41st to the 240th day of lactation expressed as deviations from the yield at 40 days of lactation, is the most suitable for genetic evaluations of lactation persistency in Saanen goats. Thus, selecting goats with better lactation persistency does not alter yield at 268 days. Given the low heritability of this measure (0.03), small responses to selection are expected in this herd.

  12. Time series regression model for infectious disease and weather.

    Science.gov (United States)

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
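
    A minimal sketch of the proposed modifications, with illustrative data and variable names: the log of lagged case counts stands in for contagion-driven autocorrelation, harmonic terms adjust for seasonality, and the dispersion is estimated (a quasi-Poisson fit). The statsmodels package is assumed.

```python
# Time series regression for infectious disease counts: lagged log
# cases, seasonal harmonics, and an overdispersed (quasi-Poisson) GLM.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "cases": rng.poisson(20, 208),              # weekly disease counts
    "rain":  rng.gamma(2.0, 5.0, 208),          # weather exposure
})
df["log_lag_cases"] = np.log(df["cases"].shift(1) + 1)   # contagion proxy
week = np.arange(len(df)) % 52
df["season_sin"] = np.sin(2 * np.pi * week / 52)         # simple seasonality
df["season_cos"] = np.cos(2 * np.pi * week / 52)
df = df.dropna()

X = sm.add_constant(df[["rain", "log_lag_cases", "season_sin", "season_cos"]])
# scale="X2" estimates the dispersion parameter, i.e. a quasi-Poisson fit
fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())
```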

  13. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  14. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
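
    The simplex-based updating algorithm itself is not reproduced here; as a rough stand-in that conveys the time-adaptive idea, the sketch below re-estimates a quantile regression on sliding windows of a simulated wind-power-like series, using statsmodels QuantReg.

```python
# Windowed quantile regression as a crude form of time adaptivity.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 1000)                         # e.g. forecast power
y = x + (0.2 + 0.3 * x) * rng.normal(size=x.size)   # observed power

window, q = 250, 0.75
for start in range(0, x.size - window + 1, window):
    sl = slice(start, start + window)
    res = QuantReg(y[sl], sm.add_constant(x[sl])).fit(q=q)
    print(f"window starting at {start}: 75% quantile line = {res.params}")
```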

  15. Analysis and evaluation of GAVE chains. Part 3 of 3. Appendices; Analyse en evaluatie van GAVE-ketens. Deel 3 van 3. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Bosma, W.J.P. [Arthur D. Little International, Rotterdam (Netherlands)

    1999-12-01

    The final report (part 2) contains the detailed findings of the analysis, evaluation, and integration of Novem GAVE options and aims at the reader who is interested in the detailed findings, as well as an overview of the results. For readers who are mainly interested in the high-level results, and are comfortable with Dutch, there is a short text summary of our results, entitled 'Analyse en evaluatie van GAVE-ketens, management summary' (part 1). These appendices are for readers who are interested in the underlying data and detailed assumptions. 70 refs.

  16. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  17. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues concerning the validity of the model and diagnostic tools. Each methodological aspect is explored and

  18. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  19. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  20. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  1. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.
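
    A sketch of the replication idea on simulated data: the rare event logistic model is refitted on many bootstrap replications, and only factors that are significant in most replications would be retained. The sample, effect sizes and 5% level are assumptions, not the authors' settings.

```python
# Rare event logistic regression with bootstrap replications:
# record how often each candidate factor is selected.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, p = 5000, 6
X = rng.normal(size=(n, p))                      # candidate controlling factors
logit = -4.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1]     # rare event, a few % incidence
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

n_rep = 200
selected = np.zeros(p)
for _ in range(n_rep):
    idx = rng.integers(0, n, n)                  # one bootstrap replication
    fit = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
    selected += fit.pvalues[1:] < 0.05

print("selection frequency per factor:", selected / n_rep)
```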

  2. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Science.gov (United States)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that the ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations in rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strength of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. This technique was here developed for the analyses of landslide controlling factors, but the concept is widely applicable for statistical analyses of natural hazards.

  3. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
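
    A minimal sketch of the simplest scheme compared, OLS post-processing, on synthetic forecasts; the other schemes (TDTR, GM, EVMOS) differ only in how the regression coefficients are estimated and are not reproduced here.

```python
# OLS post-processing: regress observations on forecasts over a
# training period, then correct new forecasts with the fitted line.
import numpy as np

rng = np.random.default_rng(6)
truth = rng.normal(size=500)
forecast = 0.8 * truth + 0.3 + 0.4 * rng.normal(size=500)   # biased, noisy model

train, test = slice(0, 400), slice(400, 500)
A = np.column_stack([np.ones(400), forecast[train]])
coef, *_ = np.linalg.lstsq(A, truth[train], rcond=None)     # OLS on training period

corrected = coef[0] + coef[1] * forecast[test]
rmse = lambda e: np.sqrt(np.mean(e**2))
print("raw RMSE:      ", rmse(forecast[test] - truth[test]))
print("corrected RMSE:", rmse(corrected - truth[test]))
```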

  4. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  5. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

  6. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13] (H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100).

  7. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13] (H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100).

  8. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  9. Noise Exposure of Teachers in Nursery Schools-Evaluation of Measures for Noise Reduction When Dropping DUPLO Toy Bricks into Storage Cases by Sound Analyses.

    Science.gov (United States)

    Gebauer, Konstanze; Scharf, Thomas; Baumann, Uwe; Groneberg, David A; Bundschuh, Matthias

    2016-07-04

    Although noise is one of the leading work-related health risk factors for teachers, many nursery schools lack sufficient noise reduction measures. This intervention study evaluated the noise exposure of nursery school teachers when dropping DUPLO toy bricks into storage cases. Sound analyses of the impact included assessment of the maximum sound pressure level (LAFmax) as well as frequency analyses with a 1/3 octave band filter. For the purpose of standardization, a customized gadget was developed. Recordings were performed in 11 cases of different materials and designs to assess the impact on sound level reduction. Thereby, the acoustic effects of three damping materials (foam rubber, carpet, and PU-foam) were investigated. The lowest LAFmax was measured in cases consisting of "metal grid" (90.71 dB) or of a woven willow "basket" (91.61 dB), whereas a case of "aluminium" (103.34 dB) generated the highest impact LAFmax. The frequency analyses determined especially low LAFmax in the frequency bands between 80 and 2500 Hz in the case designs "metal grid" and "basket". The insertion of PU-foam achieved the most significant attenuation of LAFmax (-13.88 dB) and, in the frequency analyses, the best sound damping. The dropping of DUPLO bricks in cases contributes to the high noise level in nursery schools, but measured LAFmax show no evidence for the danger of acute hearing loss. However, continuous exposure may lead to functional impairment of the hair cells and trigger stress reactions. We recommend noise reduction by utilizing cases of woven "basket" with an insert of PU-foam.

  10. Noise Exposure of Teachers in Nursery Schools—Evaluation of Measures for Noise Reduction When Dropping DUPLO Toy Bricks into Storage Cases by Sound Analyses

    Directory of Open Access Journals (Sweden)

    Konstanze Gebauer

    2016-07-01

    Background: Although noise is one of the leading work-related health risk factors for teachers, many nursery schools lack sufficient noise reduction measures. Methods: This intervention study evaluated the noise exposure of nursery school teachers when dropping DUPLO toy bricks into storage cases. Sound analyses of the impact included assessment of the maximum sound pressure level (LAFmax) as well as frequency analyses with a 1/3 octave band filter. For the purpose of standardization, a customized gadget was developed. Recordings were performed in 11 cases of different materials and designs to assess the impact on sound level reduction. Thereby, the acoustic effects of three damping materials (foam rubber, carpet, and PU-foam) were investigated. Results: The lowest LAFmax was measured in cases consisting of “metal grid” (90.71 dB) or of a woven willow “basket” (91.61 dB), whereas a case of “aluminium” (103.34 dB) generated the highest impact LAFmax. The frequency analyses determined especially low LAFmax in the frequency bands between 80 and 2500 Hz in the case designs “metal grid” and “basket”. The insertion of PU-foam achieved the most significant attenuation of LAFmax (−13.88 dB) and, in the frequency analyses, the best sound damping. Conclusion: The dropping of DUPLO bricks in cases contributes to the high noise level in nursery schools, but measured LAFmax show no evidence for the danger of acute hearing loss. However, continuous exposure may lead to functional impairment of the hair cells and trigger stress reactions. We recommend noise reduction by utilizing cases of woven “basket” with an insert of PU-foam.

  11. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method, which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, can be problematic, because atmospheric data often do not lend themselves to calling one variable independent and the other dependent, and errors often exist in both measurements. In this work, we examine two regression approaches available to accommodate this situation: orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age.
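
    The three slope estimators compared can be written compactly. The formulas below are the standard ones for OLS(Y|X), orthogonal (total least squares) regression and geometric mean regression, stated here without the paper's uncertainty analysis.

```python
# Three slope estimators for data with errors in both variables.
import numpy as np

def slopes(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    sxx, syy, sxy = np.sum(xc * xc), np.sum(yc * yc), np.sum(xc * yc)
    b_ols = sxy / sxx                                        # errors in y only
    # orthogonal regression: principal axis of the scatter (errors in both)
    b_orth = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy**2)) / (2 * sxy)
    b_gm = np.sign(sxy) * np.sqrt(syy / sxx)                 # geometric mean
    return b_ols, b_orth, b_gm
```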

  12. Evaluation of the diagnostic accuracy of four-view radiography and conventional computed tomography analysing sacral and pelvic fractures in dogs.

    Science.gov (United States)

    Stieger-Vanegas, S M; Senthirajah, S K J; Nemanic, S; Baltzer, W; Warnock, J; Bobe, G

    2015-01-01

    The purpose of our study was (1) to determine whether four-view radiography of the pelvis is as reliable and accurate as computed tomography (CT) in diagnosing sacral and pelvic fractures, in addition to coxofemoral and sacroiliac joint subluxation or luxation, and (2) to evaluate the effect of the amount of training in reading diagnostic imaging studies on the accuracy of diagnosing sacral and pelvic fractures in dogs. Sacral and pelvic fractures were created in 11 canine cadavers using a lateral impactor. In all cadavers, frog-legged ventro-dorsal, lateral, right and left ventro-45°-medial to dorsolateral oblique frog leg ("rollover 45-degree view") radiographs and a CT of the pelvis were obtained. Two radiologists, two surgeons and two veterinary students classified fractures using a confidence scale and noted the duration of evaluation for each imaging modality and case. The imaging results were compared to gross dissection. All evaluators required significantly more time to analyse CT images compared to radiographic images. Sacral and pelvic fractures, specifically those of the sacral body, ischiatic table, and the pubic bone, were more accurately diagnosed using CT compared to radiography. Fractures of the acetabulum and iliac body were diagnosed with similar accuracy (at least 86%) using either modality. Computed tomography is a better method for detecting canine sacral and some pelvic fractures compared to radiography. Computed tomography provided an accuracy of close to 100% in persons trained in evaluating CT images.

  13. Evaluation of the Tier 1 Program of Project P.A.T.H.S.: Secondary Data Analyses of Conclusions Drawn by the Program Implementers

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2008-01-01

    The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) is a curricula-based positive youth development program. In the experimental implementation phase, 52 schools participated in the program. Based on subjective outcome evaluation data collected from the program participants (Form A) and program implementers (Form B) in each school, the program implementers were invited to write down five conclusions based on an integration of the evaluation findings (N = 52). The conclusions stated in the 52 evaluation reports were further analyzed via secondary data analyses in this paper. Results showed that most of the conclusions concerning perceptions of the Tier 1 Program, instructors, and effectiveness of the programs were positive in nature. There were also conclusions reflecting the respondents’ appreciation of the program. Finally, responses on the difficulties encountered and suggestions for improvements were observed. In conjunction with the previous evaluation findings, the present study suggests that the Tier 1 Program was well received by the stakeholders and the program was beneficial to the development of the program participants.

  14. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  15. Predictors of course in obsessive-compulsive disorder: logistic regression versus Cox regression for recurrent events.

    Science.gov (United States)

    Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M

    2007-09-01

    Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.

  16. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby, judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
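
    A hedged sketch of the comparison using scikit-learn on simulated pixels: land use is allocated once from a single multinomial model and once from independent per-class binary logistic models, and the proportions of correctly allocated pixels are compared. The features and class structure are stand-ins for the driving factors, not the study's data.

```python
# Multinomial vs. per-class binary logistic allocation of land use.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 5))                 # driving factors per pixel
logits = X @ rng.normal(size=(5, 3))           # 3 latent land-use classes
y = logits.argmax(axis=1)

multi = LogisticRegression(max_iter=1000).fit(X, y)    # multinomial model

# one independent binary logistic model per land-use type
probs = np.column_stack([
    LogisticRegression(max_iter=1000)
    .fit(X, (y == k).astype(int))
    .predict_proba(X)[:, 1]
    for k in range(3)
])
binary_alloc = probs.argmax(axis=1)            # allocate to most probable class

print("multinomial correct allocation:", (multi.predict(X) == y).mean())
print("binary-per-class correct allocation:", (binary_alloc == y).mean())
```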

  17. Augmenting Data with Published Results in Bayesian Linear Regression

    Science.gov (United States)

    de Leeuw, Christiaan; Klugkist, Irene

    2012-01-01

    In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this…

  18. Predicting Word Reading Ability: A Quantile Regression Study

    Science.gov (United States)

    McIlraith, Autumn L.

    2018-01-01

    Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…

  19. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
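
    One of the concepts discussed, multicollinearity, is easy to demonstrate with variance inflation factors. The sketch below uses statsmodels on simulated clinical-style data; the variable names and effect sizes are purely illustrative.

```python
# Multicollinearity diagnostics via variance inflation factors (VIF).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(8)
age = rng.uniform(20, 80, 300)
weight = 50 + 0.4 * age + rng.normal(0, 5, 300)      # correlated with age
bp = 90 + 0.5 * age + 0.2 * weight + rng.normal(0, 8, 300)

X = sm.add_constant(pd.DataFrame({"age": age, "weight": weight}))
fit = sm.OLS(bp, X).fit()
print(fit.params)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, "VIF =", round(variance_inflation_factor(X.values, i), 2))
```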

  20. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
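
    Logic regression proper searches logic trees (e.g. by simulated annealing), which is beyond a short sketch; the toy below only conveys the idea, by exhaustively scoring single-operator Boolean combinations (AND/OR) of binary predictors inside a logistic model. Data and names are assumptions.

```python
# Toy search over Boolean (logic) terms of binary predictor pairs.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
snps = rng.binomial(1, 0.4, size=(1000, 6))          # binary predictors
y = rng.binomial(1, np.where(snps[:, 0] & snps[:, 2], 0.7, 0.3))

best = None
pairs = itertools.combinations(range(snps.shape[1]), 2)
for (i, j), op in itertools.product(pairs, [np.logical_and, np.logical_or]):
    term = op(snps[:, i], snps[:, j]).astype(float)
    fit = sm.Logit(y, sm.add_constant(term)).fit(disp=0)
    if best is None or fit.llf > best[0]:            # keep highest likelihood
        best = (fit.llf, i, j, op.__name__)

print("best single logic term:", best)               # expect (0, 2, logical_and)
```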

  1. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser.

    Science.gov (United States)

    Hamilton, Matthew T; Finger, John W; Winzeler, Megan E; Tuberville, Tracey D

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to delay sample analysis and the ability to use banked samples. In this study, we examined fresh whole blood, fresh plasma and frozen plasma (sample type) pH, partial pressure of carbon dioxide (PCO2), bicarbonate (HCO3-), total carbon dioxide (TCO2), base excess (BE), partial pressure of oxygen (PO2), oxygen saturation (sO2) and lactate concentrations in 23 juvenile American alligators (Alligator mississippiensis) using an i-STAT CG4+ cartridge. Our results indicate that sample type had no effect on lactate concentration values (F(2,65) = 0.37, P = 0.963), suggesting that the i-STAT analyser can be used reliably to quantify lactate concentrations in fresh and frozen plasma samples. In contrast, the other seven blood parameters measured by the CG4+ cartridge were significantly affected by sample type. Lastly, we were able to collect blood samples from all alligators within 2 min of capture to establish preliminary reference ranges for juvenile alligators based on values obtained using fresh whole blood.

  2. Evaluation properties of the French version of the OUT-PATSAT35 satisfaction with care questionnaire according to classical and item response theory analyses.

    Science.gov (United States)

    Panouillères, M; Anota, A; Nguyen, T V; Brédart, A; Bosset, J F; Monnier, A; Mercier, M; Hardouin, J B

    2014-09-01

    The present study investigates the properties of the French version of the OUT-PATSAT35 questionnaire, which evaluates outpatients' satisfaction with care in oncology, using classical test theory (CTT) and item response theory (IRT). This cross-sectional multicenter study includes 692 patients who completed the questionnaire at the end of their ambulatory treatment. CTT analyses tested the main psychometric properties (convergent and divergent validity, and internal consistency). IRT analyses were conducted separately for each OUT-PATSAT35 domain (the doctors, the nurses or radiation therapists, and the services/organization) using models from the Rasch family. We examined the fit of the data to the model expectations and tested whether the model assumptions of unidimensionality, monotonicity and local independence were respected. A total of 605 (87.4%) respondents were analyzed, with a mean age of 64 years (range 29-88). Internal consistency for all scales separately and for the three main domains was good (Cronbach's α 0.74-0.98). IRT analyses were performed with the partial credit model. No disordered thresholds of polytomous items were found. Each domain showed high reliability but fitted the Rasch models poorly. Three items in particular, the item about "promptness" in the doctors' domain and the items about "accessibility" and "environment" in the services/organization domain, showed the poorest fit. A correct fit to the Rasch model can be obtained by dropping these items. Most of the local dependence concerned items about "information provided" in each domain. A major deviation from unidimensionality was found in the nurses' domain. CTT showed good psychometric properties of the OUT-PATSAT35. However, the Rasch analysis revealed some misfitting and redundant items. Taking the above problems into consideration, it could be worthwhile to refine the questionnaire in a future study.
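
    A minimal sketch of the Cronbach's α computation behind the CTT internal-consistency figures quoted above; the item-score matrix here is simulated, not the OUT-PATSAT35 data.

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# reported by the CTT analysis above (simulated items, not the study's data).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 1))               # shared satisfaction trait
items = latent + rng.normal(scale=0.8, size=(300, 7))
print(round(cronbach_alpha(items), 2))           # high alpha: items correlate
```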

  3. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution is used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.

  4. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    . The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

  5. Evaluation of geological documents available for provisional safety analyses of potential sites for nuclear waste repositories - Are additional geological investigations needed?

    International Nuclear Information System (INIS)

    2010-10-01

    The procedure for selecting repository sites for all categories of radioactive waste in Switzerland is defined in the conceptual part of the Sectoral Plan for Deep Geological Repositories, which foresees a selection of sites in three stages. In Stage 1, Nagra proposed geological siting regions based on criteria relating to safety and engineering feasibility. The Swiss Government (the Federal Council) is expected to decide on the siting proposals in 2011. The objective of Stage 2 is to prepare proposals for the location of the surface facilities within the planning perimeters defined by the Federal Council in its decision on Stage 1 and to identify potential sites. Nagra also has to carry out a provisional safety analysis for each site and a safety-based comparison of the sites. Based on this, and taking into account the results of the socio-economic-ecological impact studies, Nagra then has to propose at least two sites for each repository type to be carried through to Stage 3. The proposed sites will then be investigated in more detail in Stage 3 to ensure that the selection of the sites for the General Licence Applications is well founded. In order to realise the objectives of the upcoming Stage 2, the state of knowledge of the geological conditions at the sites has to be sufficient to perform the provisional safety analyses. Therefore, in preparation for Stage 2, the conceptual part of the Sectoral Plan requires Nagra to clarify the need for additional investigations aimed at providing input for the provisional safety analyses. The purpose of the present report is to document Nagra's technical-scientific assessment of this need. The focus is on evaluating the geological information based on processes and parameters that are relevant for safety and engineering feasibility. In evaluating the state of knowledge the key question is whether additional information could lead to a different decision regarding the selection of the sites to be carried through to Stage 3

  6. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling latent variables. Having measured a latent variable in a population a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties.... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score....

  7. Testing Heteroscedasticity in Robust Regression

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf
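
    The paper builds its test on regression quantiles; as a standard point of reference, the sketch below runs the classical Breusch-Pagan heteroscedasticity test on simulated data whose error variance grows with the predictor (an assumption of this illustration, not the paper's method).

```python
# Classical Breusch-Pagan test on OLS residuals (illustrative data only).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 300)
y = 2 + 0.5 * x + rng.normal(scale=0.3 * x + 0.1)   # heteroscedastic errors

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pval:.4f}")   # small p flags heteroscedasticity
```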

  8. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  9. Forecasting with Dynamic Regression Models

    CERN Document Server

    Pankratz, Alan

    2012-01-01

    Single equation regression models, one of the most widely used tools in statistical forecasting, are examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series and the autocorrelation patterns of regression disturbances. It also includes six case studies.

  10. Applications of quaternary stratigraphic, soil-geomorphic, and quantitative geomorphic analyses to the evaluation of tectonic activity and landscape evolution in the Upper Coastal Plain, South Carolina

    International Nuclear Information System (INIS)

    Hanson, K.L.; Bullard, T.F.; Wit, M.W. de; Stieve, A.L.

    1993-01-01

    Geomorphic analyses combined with mapping of fluvial terraces and upland geomorphic surfaces provide new approaches and data for evaluating the Quaternary activity of post-Cretaceous faults that are recognized in subsurface data at the Savannah River Site in the Upper Coastal Plain of southwestern South Carolina. Analyses of longitudinal stream and terrace profiles, regional slope maps, and drainage basin morphometry indicate long-term uplift and southeast tilt of the site region. Preliminary results of drainage basin characterization suggest an apparent rejuvenation of drainages along the trace of the Pen Branch fault (a Tertiary reactivated reverse fault that initiated as a basin-margin normal fault along the northern boundary of the Triassic Dunbarton Basin). This apparent rejuvenation of drainages may be the result of nontectonic geomorphic processes or local tectonic uplift and tilting within a framework of regional uplift. Longitudinal profiles of fluvial terrace surfaces that are laterally continuous across the projected surface trace of the Pen Branch fault show no obvious evidence of warping or faulting within a resolution of ∼3 m. This combined with the estimated age of the terrace surfaces (350 ka to 1 Ma) indicates that if the Pen Branch fault is active, the Pleistocene rate of slip is very low (0.002 to 0.009 mm/yr)

  11. Analysis and evaluation of GAVE chains. Part 1 of 3. Management summary; Analyse en evaluatie van GAVE-ketens. Deel 1 van 3. Management summary

    Energy Technology Data Exchange (ETDEWEB)

    Van den Heuvel, E.J.M.T.

    1999-12-01

    This main report contains a summary of the detailed findings of the analysis, evaluation, and integration of Novem GAVE options. The details are presented in the final report (part 2) in the form of copies of overhead sheets. That report aims at the reader who is interested in the detailed findings, as well as an overview of the results. For readers who are mainly interested in the high-level results, and are comfortable with Dutch, there is this short text summary of our results. Readers who are interested in the underlying data and detailed assumptions are encouraged to consult the appendix to this report, entitled 'Analyse en evaluatie van GAVE-ketens, appendices' (part 3)

  12. The CM SAF SSM/I-based total column water vapour climate data record: methods and evaluation against re-analyses and satellite

    Directory of Open Access Journals (Sweden)

    M. Schröder

    2013-03-01

    Full Text Available The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Climate Monitoring (CM SAF) aims at the provision and sound validation of well documented Climate Data Records (CDRs) in sustained and operational environments. In this study, a total column water vapour path (WVPA) climatology from CM SAF is presented and inter-compared to water vapour data records from various data sources. Based on homogenised brightness temperatures from the Special Sensor Microwave Imager (SSM/I), a climatology of WVPA has been generated within the Hamburg Ocean–Atmosphere Fluxes and Parameters from Satellite (HOAPS) framework. Within a research and operation transition activity the HOAPS data and operation capabilities have been successfully transferred to the CM SAF, where the complete HOAPS data and processing schemes are hosted in an operational environment. An objective analysis for interpolation, namely kriging, has been applied to the swath-based WVPA retrievals from the HOAPS data set. The resulting climatology consists of daily and monthly mean fields of WVPA over the global ice-free ocean. The temporal coverage ranges from July 1987 to August 2006. After a comparison to the precursor product, the CM SAF SSM/I-based climatology has been comprehensively compared to different types of meteorological analyses from the European Centre for Medium-Range Weather Forecasts (ECMWF: ERA40, ERA INTERIM and operational analyses) and from the Japan Meteorological Agency (JMA: JRA). This inter-comparison shows an overall good agreement between the climatology and the analyses, with daily absolute biases generally smaller than 2 kg m−2. The absolute value of the bias to JRA and ERA INTERIM is typically smaller than 0.5 kg m−2. For the period 1991–2006, the root mean square error (RMSE) for both reanalyses is approximately 2 kg m−2. As SSM/I WVPA and radiances are assimilated into JMA and all ECMWF analyses and

  13. Diet and ADHD, Reviewing the Evidence: A Systematic Review of Meta-Analyses of Double-Blind Placebo-Controlled Trials Evaluating the Efficacy of Diet Interventions on the Behavior of Children with ADHD.

    Directory of Open Access Journals (Sweden)

    Lidy M Pelsser

    Full Text Available Attention-deficit/hyperactivity disorder (ADHD) is a debilitating mental health problem hampering the child's development. The underlying causes include both genetic and environmental factors and may differ between individuals. The efficacy of diet treatments in ADHD was recently evaluated in three reviews, reporting divergent and confusing conclusions based on heterogeneous studies and subjects. To address this inconsistency we conducted a systematic review of meta-analyses of double-blind placebo-controlled trials evaluating the effect of diet interventions (elimination and supplementation) on ADHD. Our literature search resulted in 14 meta-analyses, six of which were confined to double-blind placebo-controlled trials applying homogeneous diet interventions, i.e. artificial food color (AFC) elimination, a few-foods diet (FFD) and poly-unsaturated fatty acid (PUFA) supplementation. Effect sizes (ES) and confidence intervals (CI) of study outcomes were depicted in a forest plot. I2 was calculated to assess heterogeneity and, if substantial heterogeneity was present, additional random effects subgroup meta-regression was conducted. The AFC ESs were 0.44 (95% CI: 0.16-0.72, I2 = 11%) and 0.21 (95% CI: -0.02-0.43, I2 = 68%) [parent ratings], 0.08 (95% CI: -0.07-0.24, I2 = 0%) [teacher ratings] and 0.11 (95% CI: -0.13-0.34, I2 = 12%) [observer ratings]. The FFD ESs were 0.80 (95% CI: 0.41-1.19, I2 = 61%) [parent ratings] and 0.51 (95% CI: -0.02-1.04, I2 = 72%) [other ratings], while the PUFA ESs were 0.17 (95% CI: -0.03-0.38, I2 = 38%) [parent ratings], -0.05 (95% CI: -0.27-0.18, I2 = 0%) [teacher ratings] and 0.16 (95% CI: 0.01-0.31, I2 = 0%) [parent and teacher ratings]. Three meta-analyses (two FFD and one AFC) resulted in high I2 without presenting subgroup results. The FFD meta-analyses provided sufficient data to perform subgroup analyses on intervention type, resulting in a decrease of heterogeneity to 0% (diet design) and 37.8% (challenge design)
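
    A hedged sketch of the pooling machinery referenced above: I2 for heterogeneity and a DerSimonian-Laird random-effects pooled effect size. The inputs fed in are illustrative stand-ins, not the review's data.

```python
# DerSimonian-Laird random-effects pooling with I^2 (illustrative inputs).
import numpy as np

def random_effects_pool(es, se):
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed) ** 2)                # Cochran's Q
    df = len(es) - 1
    i2 = max(0.0, (q - df) / q) * 100                # I^2 heterogeneity (%)
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    pooled = np.sum(w_re * es) / np.sum(w_re)
    half = 1.96 * np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - half, pooled + half), i2

es = np.array([0.44, 0.21, 0.08, 0.11])              # illustrative effect sizes
se = np.array([0.14, 0.11, 0.08, 0.12])              # assumed standard errors
print(random_effects_pool(es, se))
```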

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    International Nuclear Information System (INIS)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)]

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  16. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)]

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    International Nuclear Information System (INIS)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR

  18. Producing The New Regressive Left

    DEFF Research Database (Denmark)

    Crone, Christine

    members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media-and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political,strategic resistance...

  19. Evaluation of the Risk of Grade 3 Oral and Pharyngeal Dysphagia Using Atlas-Based Method and Multivariate Analyses of Individual Patient Dose Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Otter, Sophie [Department of Clinical Oncology, Royal Marsden Hospital, Sutton and London (United Kingdom)]; Schick, Ulrike; Gulliford, Sarah [Department of Clinical Oncology, Royal Marsden Hospital, Sutton and London (United Kingdom); The Institute of Cancer Research, London (United Kingdom)]; Lal, Punita [Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow (India)]; Franceschini, Davide [Department of Radiotherapy and Radiosurgery, Humanitas Research Hospital, Milan (Italy)]; Newbold, Katie; Nutting, Christopher; Harrington, Kevin [Department of Clinical Oncology, Royal Marsden Hospital, Sutton and London (United Kingdom); The Institute of Cancer Research, London (United Kingdom)]; Bhide, Shreerang, E-mail: shreerang.bhide@icr.ac.uk [Department of Clinical Oncology, Royal Marsden Hospital, Sutton and London (United Kingdom); The Institute of Cancer Research, London (United Kingdom); Department of Radiotherapy and Radiosurgery, Humanitas Research Hospital, Milan (Italy)]

    2015-11-01

    Purpose: The study aimed to apply the atlas of complication incidence (ACI) method to patients receiving radical treatment for head and neck squamous cell carcinomas (HNSCC), to generate constraints based on dose-volume histograms (DVHs), and to identify clinical and dosimetric parameters that predict the risk of grade 3 oral mucositis (g3OM) and pharyngeal dysphagia (g3PD). Methods and Materials: Oral and pharyngeal mucosal DVHs were generated for 253 patients who received radiation (RT) or chemoradiation (CRT). They were used to produce ACI for g3OM and g3PD. Multivariate analysis (MVA) of the effect of dosimetry, clinical, and patient-related variables was performed using logistic regression and bootstrapping. Receiver operating curve (ROC) analysis was also performed, and the Youden index was used to find volume constraints that discriminated between volumes that predicted for toxicity. Results: We derived statistically significant dose-volume constraints for g3OM over the range v28 to v70. Only 3 statistically significant constraints were derived for g3PD v67, v68, and v69. On MVA, mean dose to the oral mucosa predicted for g3OM and concomitant chemotherapy and mean dose to the inferior constrictor (IC) predicted for g3PD. Conclusions: We have used the ACI method to evaluate incidences of g3OM and g3PD and ROC analysis to generate constraints to predict g3OM and g3PD derived from entire individual patient DVHs. On MVA, the strongest predictors were radiation dose (for g3OM) and concomitant chemotherapy (for g3PD).
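
    As a sketch of the ROC/Youden step described above (not the study's code), the snippet below picks the dose-volume cut-point that maximizes Youden's J on simulated toxicity data; the variable v50 is a hypothetical dosimetric predictor.

```python
# ROC analysis with a Youden-index threshold on simulated toxicity outcomes.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
v50 = np.r_[rng.normal(55, 10, 150), rng.normal(68, 10, 100)]  # % volume at dose
tox = np.r_[np.zeros(150), np.ones(100)]                       # 1 = grade-3 event

fpr, tpr, thresholds = roc_curve(tox, v50)
j = tpr - fpr                                   # Youden's J at each threshold
cut = thresholds[np.argmax(j)]
print(f"AUC = {roc_auc_score(tox, v50):.2f}, Youden-optimal constraint = {cut:.1f}")
```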

  20. Utilização de regressão multivariada para avaliação espectrofotométrica da demanda química de oxigênio em amostras de relevância ambiental Use of multivariate regression in spectrophotometric evaluation of chemical oxygen demand in samples of environmental relevance

    Directory of Open Access Journals (Sweden)

    Patricio Peralta-Zamora

    2005-10-01

    Full Text Available In this work, a partial least squares regression routine was used to develop a multivariate calibration model to predict the chemical oxygen demand (COD) in substrates of environmental relevance (paper effluents and landfill leachates) from UV-Vis spectral data. The calibration models permit fast determination of the COD, with typical relative errors lower than 10% with respect to the conventional methodology.
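
    A minimal sketch of the approach, assuming a matrix of UV-Vis absorbances with matching reference COD values; everything below is simulated for illustration.

```python
# PLS calibration of COD against simulated UV-Vis spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
spectra = rng.random((80, 150))                 # 80 samples x 150 wavelengths
cod = 50 + 120 * spectra[:, 40] + 60 * spectra[:, 90] + rng.normal(0, 2, 80)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, cod, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rel_err = 100 * np.abs(pred - y_te) / y_te
print(f"median relative error: {np.median(rel_err):.1f}%")
```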

  1. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available Stepwise linear regression is a multi-variable regression method for identifying statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
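
    The published program is in Matlab; the sketch below is an analogous forward-selection routine in Python, with an illustrative 0.05 entry threshold and simulated data.

```python
# Forward stepwise selection by p-value (illustrative, not the Matlab program).
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, alpha=0.05):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            pvals[j] = fit.pvalues[-1]          # p-value of the candidate term
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:                # stop when nothing significant
            break
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(size=200)
print(forward_stepwise(X, y))                   # expect columns [1, 4]
```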

  2. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
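
    The three tools discussed map directly onto scipy.stats; this sketch uses simulated data rather than the article's CT-guided study.

```python
# Pearson r, Spearman rho, and a simple linear regression fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 60)
y = 1.5 * x + rng.normal(0, 2, 60)              # roughly linear relationship

r, p_r = stats.pearsonr(x, y)                   # linear association
rho, p_rho = stats.spearmanr(x, y)              # monotonic association
fit = stats.linregress(x, y)                    # slope, intercept, r, p, stderr
print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f}, slope={fit.slope:.2f}")
```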

  3. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

    The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma-ray or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure as a computer programme which was used to unfold test spectra obtained by mixing some spectra, from a library of arbitrarily chosen spectra, and adding a noise component. (U.K.)
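
    A sketch of the unfolding idea on toy spectra: because the component weights are physically non-negative, this illustration uses non-negative least squares rather than the paper's stepwise routine.

```python
# Unfold a measured spectrum as a weighted sum of library spectra.
import numpy as np
from scipy.optimize import nnls

channels = np.arange(256)
def component(center, width):                   # toy single-constituent spectrum
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

library = np.column_stack([component(60, 8), component(120, 10), component(200, 6)])
true_w = np.array([3.0, 0.0, 1.5])
rng = np.random.default_rng(8)
measured = library @ true_w + rng.normal(0, 0.02, 256)   # mixture plus noise

weights, _ = nnls(library, measured)
print(np.round(weights, 2))                     # recovers ~[3.0, 0.0, 1.5]
```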

  4. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.
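
    The paper's mixture is nonparametric; as a simplified stand-in that shows the same E-step/M-step structure its modified EM algorithm builds on, the sketch below runs EM for a two-component mixture of linear regressions on simulated data.

```python
# EM for a two-component mixture of linear regressions (simulated data).
import numpy as np

rng = np.random.default_rng(9)
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5                          # latent component labels
y = np.where(z, 1 + 2 * x, -1 - 2 * x) + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.5, 1.0], [-0.5, -1.0]])      # initial (intercept, slope)
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibility of each component for each observation
    dens = np.stack([pi[k] / sigma[k] *
                     np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                     for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least squares per component
    for k in range(2):
        w = resp[k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi[k] = w.mean()

print(np.round(beta, 2))                         # close to (1, 2) and (-1, -2)
```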

  5. Patterns of use and impact of standardised MedDRA query analyses on the safety evaluation and review of new drug and biologics license applications.

    Science.gov (United States)

    Chang, Lin-Chau; Mahmood, Riaz; Qureshi, Samina; Breder, Christopher D

    2017-01-01

    Standardised MedDRA Queries (SMQs) have been developed since the early 2000s and used by academia, industry, public health, and government sectors for detecting safety signals in adverse event safety databases. The purpose of the present study is to characterize how SMQs are used and their impact on safety analyses for New Drug Application (NDA) and Biologics License Application (BLA) submissions to the United States Food and Drug Administration (USFDA). We used the PharmaPendium database to capture SMQ use in Summary Basis of Approvals (SBoAs) of drugs and biologics approved by the USFDA. Characteristics of the drugs and the SMQ use were employed to evaluate the role of SMQ safety analyses in regulatory decisions and the veracity of signals they revealed. A comprehensive search of the SBoAs yielded 184 regulatory submissions approved from 2006 to 2015. Search strategies more frequently utilized restrictive searches with "narrow terms" to enhance specificity over strategies using "broad terms" to increase sensitivity, while some involved modification of search terms. A majority (59%) of 1290 searches used descriptive statistics; however, inferential statistics were utilized in 35% of them. Commentary from reviewers and supervisory staff suggested that a small, yet notable percentage (18%) of the 1290 searches supported regulatory decisions. The searches with regulatory impact were found in 73 submissions (40% of the submissions investigated). Most searches (75% of 227 searches) with regulatory implications described how the searches were confirmed, indicating prudence in the decision-making process. SMQs have an increasing role in the presentation and review of safety analysis for NDAs/BLAs and their regulatory reviews. This study suggests that SMQs are best used as a screening process, with descriptive statistics, description of SMQ modifications, and systematic verification of cases, which is crucial for drawing regulatory conclusions.

  6. Patterns of use and impact of standardised MedDRA query analyses on the safety evaluation and review of new drug and biologics license applications.

    Directory of Open Access Journals (Sweden)

    Lin-Chau Chang

    Full Text Available Standardised MedDRA Queries (SMQs) have been developed since the early 2000s and used by academia, industry, public health, and government sectors for detecting safety signals in adverse event safety databases. The purpose of the present study is to characterize how SMQs are used and their impact on safety analyses for New Drug Application (NDA) and Biologics License Application (BLA) submissions to the United States Food and Drug Administration (USFDA). We used the PharmaPendium database to capture SMQ use in Summary Basis of Approvals (SBoAs) of drugs and biologics approved by the USFDA. Characteristics of the drugs and the SMQ use were employed to evaluate the role of SMQ safety analyses in regulatory decisions and the veracity of signals they revealed. A comprehensive search of the SBoAs yielded 184 regulatory submissions approved from 2006 to 2015. Search strategies more frequently utilized restrictive searches with "narrow terms" to enhance specificity over strategies using "broad terms" to increase sensitivity, while some involved modification of search terms. A majority (59%) of 1290 searches used descriptive statistics; however, inferential statistics were utilized in 35% of them. Commentary from reviewers and supervisory staff suggested that a small, yet notable percentage (18%) of the 1290 searches supported regulatory decisions. The searches with regulatory impact were found in 73 submissions (40% of the submissions investigated). Most searches (75% of 227 searches) with regulatory implications described how the searches were confirmed, indicating prudence in the decision-making process. SMQs have an increasing role in the presentation and review of safety analysis for NDAs/BLAs and their regulatory reviews. This study suggests that SMQs are best used as a screening process, with descriptive statistics, description of SMQ modifications, and systematic verification of cases, which is crucial for drawing regulatory conclusions.

  7. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per

  8. Source apportionment of speciated PM2.5 and non-parametric regressions of PM2.5 and PM(coarse) mass concentrations from Denver and Greeley, Colorado, and construction and evaluation of dichotomous filter samplers

    Science.gov (United States)

    Piedrahita, Ricardo A.

    The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 μm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model with meteorological data that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 μm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS), at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse and fine mode particulate matter filter sampling network will allow us to characterize the chemical composition of the particulate matter collected and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose. The use of dichotomous splitters with an approximate 2.5 μm cut point, coupled with a 10 μm cut

  9. Assessing risk factors for periodontitis using regression

    Science.gov (United States)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable for assessing the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking Status and Plaque Index. A multiple linear regression model was built to evaluate the influence of the IVs on mean Attachment Loss (AL); the regression coefficients are obtained along with the p-values of the respective significance tests. The classification of a case (individual) adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an Attachment Loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the Odds Ratios together with the corresponding 95% confidence intervals.
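
    A hedged sketch of the logistic step: fit case status on illustrative risk factors (simulated, not the study's data) and report odds ratios with 95% confidence intervals.

```python
# Logistic regression with odds ratios and 95% CIs (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 400
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "smoker": rng.integers(0, 2, n),
    "plaque_index": rng.uniform(0, 3, n),
})
lin = -6 + 0.05 * df["age"] + 0.9 * df["smoker"] + 0.8 * df["plaque_index"]
df["case"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

X = sm.add_constant(df[["age", "smoker", "plaque_index"]])
fit = sm.Logit(df["case"], X).fit(disp=0)
table = pd.concat([np.exp(fit.params).rename("OR"),
                   np.exp(fit.conf_int())], axis=1)   # ORs and 95% CIs
print(table)
```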

  10. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach.... Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  11. Cactus: An Introduction to Regression

    Science.gov (United States)

    Hyde, Hartley

    2008-01-01

    When the author first used "VisiCalc," he thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, he relates, he learned to use multiple linear regression software and suddenly it all clicked into…

  12. Regression Models for Repairable Systems

    Czech Academy of Sciences Publication Activity Database

    Novák, Petr

    2015-01-01

    Roč. 17, č. 4 (2015), s. 963-972 ISSN 1387-5841 Institutional support: RVO:67985556 Keywords : Reliability analysis * Repair models * Regression Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.782, year: 2015 http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

  13. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the
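
    A minimal Cox proportional hazards sketch using the lifelines package, one common Python choice; the covariates and the censoring rule below are illustrative assumptions.

```python
# Cox proportional hazards on simulated survival data with lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 300
age = rng.normal(60, 8, n)
treated = rng.integers(0, 2, n)
hazard = np.exp(0.03 * (age - 60) - 0.5 * treated)   # treatment lowers risk
time = rng.exponential(1 / hazard)
df = pd.DataFrame({"age": age, "treated": treated,
                   "time": np.minimum(time, 5.0),     # administrative censoring
                   "event": (time < 5.0).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                                   # hazard ratio = exp(coef)
```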

  14. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  15. Multivariate Analyses and Evaluation of Heavy Metals by Chemometric BCR Sequential Extraction Method in Surface Sediments from Lingdingyang Bay, South China

    Directory of Open Access Journals (Sweden)

    Linglong Cao

    2015-04-01

    Full Text Available Sediments in estuary areas are recognized as the ultimate reservoirs for numerous contaminants, e.g., toxic metals. Multivariate analyses by chemometric evaluation were performed to classify metal ions (Cu, Zn, As, Cr, Pb, Ni and Cd) in superficial sediments from Lingdingyang Bay and to determine whether or not there were potential contamination risks based on the BCR sequential extraction scheme. The results revealed that Cd was mainly in acid-soluble form, with an average of 75.99% of its total contents, and thus of high potential availability, indicating significant anthropogenic sources, while Cr, As and Ni were enriched in the residual fraction, which can be considered the safest with respect to the environment. According to the proportion of secondary to primary phases (KRSP), Cd had the highest bioavailable fraction and represented high or very high risk, followed by Pb and Cu with medium risks in most of the samples. The combined evaluation of the Pollution Load Index (PLI) and the mean Effect Range Median Quotient (mERM-Q) highlighted that the greatest potential environmental risk area was in the northwest of Lingdingyang Bay. Almost all of the sediments had a 21% probability of toxicity. Additionally, Principal Component Analysis (PCA) revealed that the survey region was significantly affected by two main sources of anthropogenic contributions: PC1 showed increased loadings of variables in the acid-soluble and reducible fractions, consistent with input from industrial wastes (such as manufacturing, metallurgy and the chemical industry) and domestic sewage; PC2 was characterized by increased loadings of variables in the residual fraction, which could be attributed to leaching and weathering of parent rocks. The results obtained demonstrate the need for appropriate remediation measures to alleviate the soil pollution problem due to the accumulation of potentially risky metals. Therefore, it is of crucial significance to implement the targeted

  16. Evaluation and standardization of different purification procedures for fish bile and liver metallothionein quantification by spectrophotometry and SDS-PAGE analyses.

    Science.gov (United States)

    Tenório-Daussat, Carolina Lyrio; Resende, Marcia Carolina Martinho; Ziolli, Roberta L; Hauser-Davis, Rachel Ann; Schaumloffel, Dirk; Saint'Pierre, Tatiana D

    2014-03-01

    Fish bile metallothioneins (MT) have recently been reported as biomarkers for environmental metal contamination; however, no studies regarding standardization of their purification are available. Therefore, different procedures (varying centrifugation times and heat-treatment temperatures) and reducing agents (DTT, β-mercaptoethanol and TCEP) were applied to purify MT isolated from fish (Oreochromis niloticus) bile and liver. Liver was also analyzed, since these two organs are intrinsically connected and show the same trend regarding MT expression. Spectrophotometric analyses were used to quantify the resulting MT samples, and SDS-PAGE gels were used to qualitatively assess the results of the different procedures. Each procedure was then statistically evaluated and a multivariate statistical analysis was applied. A response surface methodology was also applied to the bile samples, in order to further evaluate the responses for this matrix. Heat treatment effectively removes most undesired proteins from the samples; however, the results indicate that temperatures above 70 °C are not efficient, since they also remove MTs from both bile and liver samples. Our results also indicate that the centrifugation times described in the literature can be decreased in order to analyze more samples in the same timeframe, which is important in environmental monitoring contexts where samples are usually numerous. In an environmental context, biliary MT was lower than liver MT, as expected, since the liver accumulates MT with slower detoxification rates than bile, which is released from the gallbladder during feeding and then diluted by water. Therefore, bile MT seems more adequate in environmental monitoring contexts regarding recent exposure to xenobiotics that may affect the proteomic and metalloproteomic expression of this biological matrix. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Intercomparison and analyses of the climatology of the West African monsoon in the West African monsoon modeling and evaluation project (WAMME) first model intercomparison experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yongkang; Sales, Fernando De [University of California, Los Angeles, CA (United States); Lau, W.K.M.; Schubert, Siegfried D.; Wu, Man-Li C. [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); Boone, Aaron [Centre National de Recherches Meteorologiques, Meteo-France Toulouse, Toulouse (France); Feng, Jinming [University of California, Los Angeles, CA (United States); Chinese Academy of Sciences, Institute of Atmospheric Physics, Beijing (China); Dirmeyer, Paul; Guo, Zhichang [Center for Ocean-Land-Atmosphere Interactions, Calverton, MD (United States); Kim, Kyu-Myong [University of Maryland Baltimore County, Baltimore, MD (United States); Kitoh, Akio [Meteorological Research Institute, Tsukuba (Japan); Kumar, Vadlamani [National Center for Environmental Prediction, Camp Springs, MD (United States); Wyle Information Systems, Gaithersburg, MD (United States); Poccard-Leclercq, Isabelle [Universite de Bourgogne, Centre de Recherches de Climatologie UMR5210 CNRS, Dijon (France); Mahowald, Natalie [Cornell University, Ithaca, NY (United States); Moufouma-Okia, Wilfran; Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom); Pegion, Phillip [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); National Center for Environmental Prediction, Camp Springs, MD (United States); Schemm, Jae; Thiaw, Wassila M. [National Center for Environmental Prediction, Camp Springs, MD (United States); Sealy, Andrea [The Caribbean Institute for Meteorology and Hydrology, St. James (Barbados); Vintzileos, Augustin [National Center for Environmental Prediction, Camp Springs, MD (United States); Science Applications International Corporation, Camp Springs, MD (United States); Williams, Steven F. [National Center for Atmospheric Research, Boulder, CO (United States)

    2010-07-15

    This paper briefly presents the West African monsoon (WAM) modeling and evaluation project (WAMME) and evaluates WAMME general circulation models' (GCM) performances in simulating variability of WAM precipitation, surface temperature, and major circulation features at seasonal and intraseasonal scales in the first WAMME experiment. The analyses indicate that models with specified sea surface temperature generally have reasonable simulations of the pattern of spatial distribution of WAM seasonal mean precipitation and surface temperature as well as the averaged zonal wind in latitude-height cross-section and low level circulation. But there are large differences among models in simulating spatial correlation, intensity, and variance of precipitation compared with observations. Furthermore, the majority of models fail to produce proper intensities of the African Easterly Jet (AEJ) and the tropical easterly jet. AMMA Land Surface Model Intercomparison Project (ALMIP) data are used to analyze the association between simulated surface processes and the WAM and to investigate the WAM mechanism. It has been identified that the spatial distributions of surface sensible heat flux, surface temperature, and moisture convergence are closely associated with the simulated spatial distribution of precipitation; while surface latent heat flux is closely associated with the AEJ and contributes to divergence in AEJ simulation. Common empirical orthogonal functions (CEOF) analysis is applied to characterize the WAM precipitation evolution and has identified a major WAM precipitation mode and two temperature modes (Sahara mode and Sahel mode). Results indicate that the WAMME models produce reasonable temporal evolutions of major CEOF modes but have deficiencies/uncertainties in producing variances explained by major modes. Furthermore, the CEOF analysis shows that WAM precipitation evolution is closely related to the enhanced Sahara mode and the weakened Sahel mode, supporting

  18. Evaluation of a compact spectrograph for in-situ and stand-off Laser-Induced Breakdown Spectroscopy analyses of geological samples on Mars missions

    International Nuclear Information System (INIS)

    Salle, Beatrice; Cremers, David A.; Maurice, Sylvestre; Wiens, Roger C.; Fichet, Pascal

    2005-01-01

    Laser-induced Breakdown Spectroscopy (LIBS) is actively under development for future use on surface probes to Mars. The analytical method can be deployed for in-situ and/or stand-off analysis with the latter embodiment providing the greatest advantages compared to previous and current elemental analysis methods used for planetary surface analysis. For this application, LIBS must be thoroughly investigated in terms of analytical capabilities and flight-rated instruments must be developed. Because of the low pressure of the predominantly CO2 atmosphere on Mars, studies are needed to understand analytical requirements and to determine performance under these conditions. Stand-off analysis demands the most stringent requirements on instrumentation. Therefore, it must be determined if the high performance components that are normally used in a typical LIBS laboratory setup, which are generally not optimized for small size and weight, are essential to obtain the maximum scientific return from a mission. A key component of a LIBS apparatus is the detection system consisting of a spectrograph and a detector. Here we present an evaluation of one design of a compact spectrograph (Ocean Optics HR2000) for in-situ and stand-off LIBS analyses of geological samples under Mars atmospheric conditions

  19. Are increases in cigarette taxation regressive?

    Science.gov (United States)

    Borren, P; Sutton, M

    1992-12-01

    Using the latest published data from Tobacco Advisory Council surveys, this paper re-evaluates the question of whether or not increases in cigarette taxation are regressive in the United Kingdom. The extended data set shows no evidence of increasing price-elasticity by social class as found in a major previous study. To the contrary, there appears to be no clear pattern in the price responsiveness of smoking behaviour across different social classes. Increases in cigarette taxation, while reducing smoking levels in all groups, fall most heavily on men and women in the lowest social class. Men and women in social class five can expect to pay eight and eleven times more of a tax increase, respectively, than their social class one counterparts. Taken as a proportion of relative incomes, the regressive nature of increases in cigarette taxation is even more pronounced.

  20. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
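
    For reference, standard quantile regression with an error-free covariate, the baseline the proposed correction improves on; the data and quantile levels below are illustrative.

```python
# Standard quantile regression at several quantile levels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
x = rng.uniform(0, 10, 500)
y = 1 + 0.8 * x + rng.normal(0, 0.5 + 0.2 * x, 500)   # spread grows with x

X = sm.add_constant(x)
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(y, X).fit(q=tau)
    print(f"tau={tau}: intercept={fit.params[0]:.2f}, slope={fit.params[1]:.2f}")
```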

  1. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
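
    A sketch of the Nadaraya-Watson estimator mentioned above: a kernel-weighted local average whose smoothness is governed by the bandwidth h (values here are illustrative).

```python
# Nadaraya-Watson kernel regression with a Gaussian kernel.
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    # Gaussian kernel weights between evaluation and training points
    k = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (k * y_train).sum(axis=1) / k.sum(axis=1)

rng = np.random.default_rng(13)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0, 0.3, 200)
grid = np.linspace(0, 2 * np.pi, 50)
print(np.round(nadaraya_watson(x, y, grid, h=0.3)[:5], 2))   # tracks sin(x)
```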

  2. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex-system model of emotion (Scherer, 2000, 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor measurements.

  3. Directional quantile regression in R

    Czech Academy of Sciences Publication Activity Database

    Boček, Pavel; Šiman, Miroslav

    2017-01-01

    Vol. 53, No. 3 (2017), pp. 480-492. ISSN 0023-5954. R&D Projects: GA ČR GA14-07234S. Institutional support: RVO:67985556. Keywords: multivariate quantile * regression quantile * halfspace depth * depth contour. Subject RIV: BD - Theory of Information. OECD field: Applied mathematics. Impact factor: 0.379, year: 2016. http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

  4. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the most popular approaches is based on the neighbourhood structure of the regions, but it has limitations, such as difficulty in prediction. We therefore propose Gaussian process regression (GPR) to address this issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function and investigate the performance of the model. We discuss inference, covering estimation of the parameters and hyper-parameters as well as prediction. Simulation studies are presented in the last section.
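
A rough analogue of the proposed model can be sketched with scikit-learn's Gaussian process classifier, which places a GP prior on a latent surface and squashes it through a logistic link (fitted via a Laplace approximation). This is not the authors' estimation procedure; the kernel, coordinates, and data below are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
# Hypothetical spatial coordinates (lon, lat) and a binary outcome whose
# success probability varies smoothly over space.
coords = rng.uniform(0, 10, size=(200, 2))
p_true = 1 / (1 + np.exp(-(np.sin(coords[:, 0]) + 0.3 * coords[:, 1] - 1.5)))
y = rng.binomial(1, p_true)

# GP prior over a latent spatial surface, linked to binary data.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0))
gpc.fit(coords, y)

# Predict the event probability at new locations -- the step that is hard
# for purely neighbourhood-based spatial models.
new_sites = np.array([[2.5, 7.0], [8.0, 1.0]])
print(gpc.predict_proba(new_sites)[:, 1])
```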

  5. The Effect of Sitagliptin on the Regression of Carotid Intima-Media Thickening in Patients with Type 2 Diabetes Mellitus: A Post Hoc Analysis of the Sitagliptin Preventive Study of Intima-Media Thickness Evaluation

    Directory of Open Access Journals (Sweden)

    Tomoya Mita

    2017-01-01

    Background. The effect of dipeptidyl peptidase-4 (DPP-4) inhibitors on the regression of carotid intima-media thickness (IMT) remains largely unknown. The present study aimed to clarify whether sitagliptin, a DPP-4 inhibitor, could regress carotid IMT in insulin-treated patients with type 2 diabetes mellitus (T2DM). Methods. This is an exploratory analysis of a randomized trial in which we investigated the effect of sitagliptin on the progression of carotid IMT in insulin-treated patients with T2DM. Here, we compared the efficacy of sitagliptin treatment on the number of patients who showed regression of carotid IMT of ≥0.10 mm in a post hoc analysis. Results. The percentages of patients who showed regression of mean-IMT-CCA (28.9% in the sitagliptin group versus 16.4% in the conventional group, P = 0.022) and left max-IMT-CCA (43.0% in the sitagliptin group versus 26.2% in the conventional group, P = 0.007), but not right max-IMT-CCA, were higher in the sitagliptin treatment group than in the non-DPP-4 inhibitor treatment group. In multiple logistic regression analysis, sitagliptin treatment achieved significantly higher attainment of ≥0.10 mm regression in mean-IMT-CCA and in right and left max-IMT-CCA compared with conventional treatment. Conclusions. Our data suggest that DPP-4 inhibitors are associated with the regression of carotid atherosclerosis in insulin-treated T2DM patients. This study has been registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN000007396).

  6. A Methodology for Generating Placement Rules that Utilizes Logistic Regression

    Science.gov (United States)

    Wurtz, Keith

    2008-01-01

    The purpose of this article is to provide the necessary tools for institutional researchers to conduct a logistic regression analysis and interpret the results. Aspects of the logistic regression procedure that are necessary to evaluate models are presented and discussed with an emphasis on cutoff values and choosing the appropriate number of…
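
To make the mechanics concrete, the hedged sketch below fits a logistic model and turns predicted probabilities into a placement decision via a cutoff value; the predictor, data, and the 0.5 cutoff are illustrative assumptions rather than values from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical data: a placement-test score predicting success (1) in a course.
score = rng.uniform(0, 100, 500)
p_success = 1 / (1 + np.exp(-(0.08 * (score - 55))))
success = rng.binomial(1, p_success)

model = LogisticRegression().fit(score.reshape(-1, 1), success)

# A placement rule: admit to the course when the predicted probability of
# success exceeds a chosen cutoff. Evaluating models largely comes down to
# how this cutoff trades false placements against missed placements.
cutoff = 0.5
probs = model.predict_proba(score.reshape(-1, 1))[:, 1]
placed = probs >= cutoff
print(f"{placed.mean():.0%} of students placed at cutoff {cutoff}")
```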

  7. Spatial regression methods to evaluate beekeeping production in the state of Rio de Janeiro Métodos de regressão espacial para avaliação da produção apícola do estado do Rio de Janeiro

    Directory of Open Access Journals (Sweden)

    W.S. Tassinari

    2013-04-01

    Brazilian beekeeping developed from the africanization of honeybees, and its strong performance has made Brazil one of the world's largest honey producers. The Southeastern region holds an expressive position in this market (45%), but the state of Rio de Janeiro is the smallest producer, despite having large areas of wild vegetation for honey production. To analyze honey productivity in the state of Rio de Janeiro, this research used classic and spatial regression approaches. The data comprised responses regarding beekeeping from 1418 beekeepers distributed throughout 72 counties of the state. The best statistical fit was a semiparametric spatial model. The proposed model can be used to estimate the annual honey yield per hive by region and to detect the production factors most related to beekeeping. Honey productivity was associated with the number of hives, wild swarm collection, and losses in the apiaries. This paper highlights that the beekeeping sector needs support to elucidate the problems plaguing beekeepers, and that the inclusion of spatial effects in regression models is a useful tool for geographical data.

  8. Analyses of fecal and hair glucocorticoids to evaluate short- and long-term stress and recovery of Asiatic black bears (Ursus thibetanus) removed from bile farms in China.

    Science.gov (United States)

    Malcolm, K D; McShea, W J; Van Deelen, T R; Bacon, H J; Liu, F; Putman, S; Zhu, X; Brown, J L

    2013-05-01

    Demand for traditional Chinese medicines has given rise to the practice of maintaining Asiatic black bears (Ursus thibetanus) in captivity to harvest bile. We evaluated hypothalamic-pituitary-adrenal (HPA) activity in Asiatic black bears on a bile farm in China by measuring cortisol in hair. We also monitored hair and fecal glucocorticoid metabolites as bears acclimated to improved husbandry at the Animals Asia Foundation China Bear Rescue Center (CBRC) after removal from other bile farms. Fecal samples were collected twice weekly for ~1 year, and hair was obtained from bears upon arrival at the CBRC and again ≥163 days later. Paired hair samples showed declines in cortisol concentrations of 12-88% in 38 of 45 (84%) bears after arrival and acclimation at the rehabilitation facility. Concentrations of cortisol in hair from bears on the bile farm were similar to initial concentrations upon arrival at the CBRC but were higher than those collected after bears had been at the CBRC for ≥163 days. Fecal glucocorticoid concentrations varied across months and were highest in April and declined through December, possibly reflecting seasonal patterns, responses to the arrival and socialization of new bears at the CBRC, and/or annual metabolic change. Data from segmental analysis of hair support the first of these explanations. Our findings indicate that bears produced elevated concentrations of glucocorticoids on bile farms, and that activity of the HPA axis declined following relocation. Thus, hair cortisol analyses are particularly well suited to long-term, retrospective assessments of glucocorticoids in ursids. By contrast, fecal measures were not clearly associated with rehabilitation, but rather reflected more subtle endocrine changes, possibly related to seasonality. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Downscaling global land cover projections from an integrated assessment model for use in regional analyses: results and evaluation for the US from 2005 to 2095

    International Nuclear Information System (INIS)

    West, Tristram O; Le Page, Yannick; Wolf, Julie; Thomson, Allison M; Huang, Maoyi

    2014-01-01

    Projections of land cover change generated from integrated assessment models (IAM) and other economic-based models can be applied for analyses of environmental impacts at sub-regional and landscape scales. For those IAM and economic models that project land cover change at the continental or regional scale, these projections must be downscaled and spatially distributed prior to use in climate or ecosystem models. Downscaling efforts to date have been conducted at the national extent with relatively high spatial resolution (30 m) and at the global extent with relatively coarse spatial resolution (0.5°). We revised existing methods to downscale global land cover change projections for the US to 0.05° resolution using MODIS land cover data as the initial proxy for land class distribution. Land cover change realizations generated here represent a reference scenario and two emissions mitigation pathways (MPs) generated by the global change assessment model (GCAM). Future gridded land cover realizations are constructed for each MODIS plant functional type (PFT) from 2005 to 2095, commensurate with the community land model PFT land classes, and archived for public use. The GCAM land cover realizations provide spatially explicit estimates of potential shifts in croplands, grasslands, shrublands, and forest lands. Downscaling of the MPs indicate a net replacement of grassland by cropland in the western US and by forest in the eastern US. An evaluation of the downscaling method indicates that it is able to reproduce recent changes in cropland and grassland distributions in respective areas in the US, suggesting it could provide relevant insights into the potential impacts of socio-economic and environmental drivers on future changes in land cover. (letters)

  10. Seeking excellence: An evaluation of 235 international laboratories conducting water isotope analyses by isotope-ratio and laser-absorption spectrometry.

    Science.gov (United States)

    Wassenaar, L I; Terzer-Wassmuth, S; Douence, C; Araguas-Araguas, L; Aggarwal, P K; Coplen, T B

    2018-03-15

    Water stable isotope ratios (δ2H and δ18O values) are widely used tracers in environmental studies; hence, accurate and precise assays are required for providing sound scientific information. We tested the analytical performance of 235 international laboratories conducting water isotope analyses using dual-inlet and continuous-flow isotope ratio mass spectrometers and laser spectrometers through a water isotope inter-comparison test. Eight test water samples were distributed by the IAEA to international stable isotope laboratories. These consisted of a core set of five samples spanning the common δ-range of natural waters, and three optional samples (highly depleted, enriched, and saline). The fifth core sample contained unrevealed trace methanol to assess analyst vigilance to the impact of organic contamination on water isotopic measurements made by all instrument technologies. For the core and optional samples ~73% of laboratories gave acceptable results within 0.2 ‰ and 1.5 ‰ of the reference values for δ18O and δ2H, respectively; ~27% produced unacceptable results. Top performance for δ18O values was dominated by dual-inlet IRMS laboratories; top performance for δ2H values was led by laser spectrometer laboratories. Continuous-flow instruments yielded comparatively intermediate results. Trace methanol contamination of water resulted in extreme outlier δ-values for laser instruments, but also affected reactor-based continuous-flow IRMS systems; however, dual-inlet IRMS δ-values were unaffected. Analysis of the laboratory results and their metadata suggested inaccurate or imprecise performance stemmed mainly from skill- and knowledge-based errors including: calculation mistakes, inappropriate or compromised laboratory calibration standards, poorly performing instrumentation, lack of vigilance to contamination, or inattention to unreasonable isotopic outcomes. To counteract common errors, we recommend that laboratories include 1-2 'known

  11. Seeking excellence: An evaluation of 235 international laboratories conducting water isotope analyses by isotope-ratio and laser-absorption spectrometry

    Science.gov (United States)

    Wassenaar, L. I.; Terzer-Wassmuth, S.; Douence, C.; Araguas-Araguas, L.; Aggarwal, P. K.; Coplen, Tyler B.

    2018-01-01

    Rationale: Water stable isotope ratios (δ2H and δ18O values) are widely used tracers in environmental studies; hence, accurate and precise assays are required for providing sound scientific information. We tested the analytical performance of 235 international laboratories conducting water isotope analyses using dual-inlet and continuous-flow isotope ratio mass spectrometers and laser spectrometers through a water isotope inter-comparison test. Methods: Eight test water samples were distributed by the IAEA to international stable isotope laboratories. These consisted of a core set of five samples spanning the common δ-range of natural waters, and three optional samples (highly depleted, enriched, and saline). The fifth core sample contained unrevealed trace methanol to assess analyst vigilance to the impact of organic contamination on water isotopic measurements made by all instrument technologies. Results: For the core and optional samples ~73% of laboratories gave acceptable results within 0.2 ‰ and 1.5 ‰ of the reference values for δ18O and δ2H, respectively; ~27% produced unacceptable results. Top performance for δ18O values was dominated by dual-inlet IRMS laboratories; top performance for δ2H values was led by laser spectrometer laboratories. Continuous-flow instruments yielded comparatively intermediate results. Trace methanol contamination of water resulted in extreme outlier δ-values for laser instruments, but also affected reactor-based continuous-flow IRMS systems; however, dual-inlet IRMS δ-values were unaffected. Conclusions: Analysis of the laboratory results and their metadata suggested inaccurate or imprecise performance stemmed mainly from skill- and knowledge-based errors including: calculation mistakes, inappropriate or compromised laboratory calibration standards, poorly performing instrumentation, lack of vigilance to contamination, or inattention to unreasonable isotopic outcomes. To counteract common errors, we recommend that

  12. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Education researchers often study count variables, such as the number of times a student reached a goal, discipline referrals, and absences. Most researchers who study these variables use typical regression methods (i.e., ordinary least squares), either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available, and the R syntax used to run the example analyses is included in the Appendix.
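
The tutorial's examples are in R; an equivalent Poisson versus negative binomial comparison can be sketched in Python with statsmodels, as below. The simulated skipped-class counts and the formula are stand-ins for the tutorial's data, not a reproduction of it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({"gpa": rng.uniform(1.0, 4.0, n)})
# Overdispersed counts: students with lower GPA skip class more often.
mu = np.exp(2.0 - 0.8 * df["gpa"])
df["skips"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

poisson_fit = smf.poisson("skips ~ gpa", data=df).fit(disp=0)
negbin_fit = smf.negativebinomial("skips ~ gpa", data=df).fit(disp=0)

# The negative binomial should fit overdispersed counts better (lower AIC);
# exponentiated coefficients are interpreted as rate ratios.
print("Poisson AIC:", poisson_fit.aic, " NB AIC:", negbin_fit.aic)
print("Rate ratio per GPA point:", np.exp(negbin_fit.params["gpa"]))
```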

  13. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals of whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given how passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, we believe this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted to collect passengers’ opinions about the existing transit service and to measure the aspects that can influence users’ intentions to continue using the transit service in the future. (Author)
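
An ordered probit of the kind proposed here can be fitted with statsmodels' OrderedModel. The sketch below invents a single service-quality covariate and a three-level intention response purely for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 600
satisfaction = rng.normal(0, 1, n)            # hypothetical service-quality score
latent = 0.9 * satisfaction + rng.normal(0, 1, n)
# Ordered response: intention to keep using transit (low < medium < high).
intention = pd.Series(pd.cut(latent, bins=[-np.inf, -0.5, 0.8, np.inf],
                             labels=["low", "medium", "high"], ordered=True))

model = OrderedModel(intention, satisfaction.reshape(-1, 1), distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.params)  # slope on satisfaction plus the two estimated thresholds
```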

  14. Face Alignment via Regressing Local Binary Features.

    Science.gov (United States)

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop, or 300 FPS on a mobile phone, for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation of how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  15. On logistic regression analysis of dichotomized responses.

    Science.gov (United States)

    Lu, Kaifeng

    2017-01-01

    We study the properties of treatment effect estimate in terms of odds ratio at the study end point from logistic regression model adjusting for the baseline value when the underlying continuous repeated measurements follow a multivariate normal distribution. Compared with the analysis that does not adjust for the baseline value, the adjusted analysis produces a larger treatment effect as well as a larger standard error. However, the increase in standard error is more than offset by the increase in treatment effect so that the adjusted analysis is more powerful than the unadjusted analysis for detecting the treatment effect. On the other hand, the true adjusted odds ratio implied by the normal distribution of the underlying continuous variable is a function of the baseline value and hence is unlikely to be able to be adequately represented by a single value of adjusted odds ratio from the logistic regression model. In contrast, the risk difference function derived from the logistic regression model provides a reasonable approximation to the true risk difference function implied by the normal distribution of the underlying continuous variable over the range of the baseline distribution. We show that different metrics of treatment effect have similar statistical power when evaluated at the baseline mean. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Multivariate Regression of Liver on Intestine of Mice: A ...

    African Journals Online (AJOL)

    Multivariate Regression of Liver on Intestine of Mice: A Chemotherapeutic Evaluation of Plant ... Using an analysis of covariance model, the effects ... The findings revealed, with the aid of likelihood-ratio statistic, a marked improvement in

  17. Spontaneous regression of pulmonary bullae

    International Nuclear Information System (INIS)

    Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

    2002-01-01

    The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult by gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bullae in both lungs and/or a history of pulmonary emphysema; others have a giant bulla without emphysematous change in the lungs. Our patient had previously been treated for lung cancer, with no evidence of local recurrence. Lung function tests showed no emphysematous change and he had no complaints, although high-resolution CT showed minimal underlying emphysematous changes. Ortin and Gurney presented three cases of spontaneous reduction in the size of bullae. Interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the bulla wall, which was also observed in our patient. The case we describe is of interest not only because of the rarity with which regression of pulmonary bullae has been reported in the literature, but also because of the spontaneous improvement in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

  18. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  19. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  20. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. It covers both MR and SEM while explaining their relevance to one another, and also includes path analysis, confirmatory factor analysis, and latent growth modeling. Figures and tables throughout provide examples and illustrate key concepts and techniques. For additional resources, please visit: http://tzkeith.com/.

  1. Effects of pharmacists' interventions on appropriateness of prescribing and evaluation of the instruments' (MAI, STOPP and START) ability to predict hospitalization--analyses from a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Ulrika Gillespie

    Appropriateness of prescribing can be assessed by various measures and screening instruments. The aims of this study were to investigate the effects of pharmacists' interventions on appropriateness of prescribing in elderly patients, and to explore the relationship between these results and hospital care utilization during a 12-month follow-up period. The study population from a previous randomized controlled study, in which the effects of a comprehensive pharmacist intervention on re-hospitalization were investigated, was used. The criteria from the instruments MAI, STOPP and START were applied retrospectively to the 368 study patients (intervention group (I), n = 182; control group (C), n = 186). The assessments were done on admission and at discharge to detect differences over time and between the groups. Hospital care consumption was recorded, and the association between scores for appropriateness and hospitalization was analysed. The number of Potentially Inappropriate Medicines (PIMs) per patient as identified by STOPP was reduced for I but not for C (1.42 to 0.93 vs. 1.46 to 1.66, respectively; p<0.01). The number of Potential Prescription Omissions (PPOs) per patient as identified by START was reduced for I but not for C (0.36 to 0.09 vs. 0.42 to 0.45, respectively; p<0.001). The summated score for MAI was reduced for I but not for C (8.5 to 5.0 vs. 8.7 to 10.0, respectively; p<0.001). There was a positive association between scores for MAI and STOPP and drug-related readmissions (RR 8-9% and 30-34%, respectively). No association was detected between the scores of the tools and total re-visits to hospital. The interventions significantly improved the appropriateness of prescribing for patients in the intervention group, as evaluated by the instruments MAI, STOPP and START. High scores in MAI and STOPP were associated with a higher number of drug-related readmissions.

  2. Gaucher disease: MR evaluation of bone marrow features during treatment with enzyme replacement; Morbus Gaucher: Analyse der Knochenmarkveraenderungen in der MRT waehrend Enzymersatztherapie

    Energy Technology Data Exchange (ETDEWEB)

    Poll, L.W.; Koch, J.A.; Boerner, D.; Cohnen, M.; Jung, G.; Scherer, A.; Moedder, U. [Duesseldorf Univ. (DF). Inst. fuer Diagnostische Radiologie; Dahl, S. vom; Haeussinger, D. [Duesseldorf Univ. (Germany). Klinik fuer Gastroenterologie, Hepatologie und Infektiologie; Willers, R. [Rechenzentrum, Heinrich-Heine-Univ. Duesseldorf (Germany); Niederau, C. [Innere Abt., St. Josef-Hospital Oberhausen, Akademisches Lehrkrankenhaus der Univ. Essen (Germany)

    2001-10-01

    Purpose: Enzyme replacement therapy (ERT) arrests and reverses the hematological and visceral symptoms of adult Gaucher disease, the most frequent lysosomal storage disorder. Only a few studies are available evaluating bone disease during ERT. The aim of this study was to investigate the features of bone marrow (bm) by magnetic resonance imaging (MRI) in these patients during ERT. Materials and Methods: MRI was performed prospectively in thirty adult type I Gaucher patients before and during ERT, with a mean follow-up of 3 years. Spin-echo sequences (T1/T2) of the lower extremities were obtained, and the reconversion (response) or lack of reconversion (non-response) to fatty marrow during treatment was analyzed. The morphological features of bm involvement, a homogeneous or non-homogeneous distribution of bm changes, and focal bone lesions surrounded by a rim of reduced signal intensity (SI) were analyzed. Results: Infiltration of bm by Gaucher cells is characterized by a reduction of SI on both T1- and T2-weighted sequences. Bone marrow responses were seen in 19 patients (63%) during treatment. Focal bone lesions surrounded by a rim of reduced SI did not respond to ERT and correlated with a non-homogeneous distribution of bone involvement and splenectomy. (orig.)

  3. Accuracy Evaluation of The Depth of Six Kinds of Sperm Counting Chambers for both Manual and Computer-Aided Semen Analyses

    Directory of Open Access Journals (Sweden)

    Jin-Chun Lu

    2016-12-01

    Background: Although the depth of the counting chamber is an important factor influencing sperm counting, no research has yet been reported on the measurement and comparison of chamber depths. We measured the exact depths of six kinds of sperm counting chambers and evaluated their accuracy. Materials and Methods: In this prospective study, the depths of six kinds of sperm counting chambers for both manual and computer-aided semen analyses, including Makler (n=24), Macro (n=32), Geoffrey (n=34), GoldCyto (n=20), Leja (n=20) and Cell-VU (n=20), were measured with the Filmetrics F20 Spectral Reflectance Thin-Film Measurement System. The mean depth, range and coefficient of variation (CV) of each chamber, and the mean depth, relative deviation and acceptability of each kind of chamber, were then calculated by closeness to the nominal value. Among the 24 Makler chambers, 5 were new and 19 were used; the other five kinds were all new chambers. Results: The depths (mean ± SD, μm) of the Makler (new), Macro and Geoffrey chambers were 11.07 ± 0.41, 10.19 ± 0.48 and 10.00 ± 0.28, respectively, while those of the GoldCyto, Leja and Cell-VU chambers were 23.76 ± 2.15, 20.49 ± 0.22 and 24.22 ± 2.58, respectively. The acceptability of the Geoffrey chambers was the highest (94.12%), followed by Macro (65.63%), Leja (35%) and Makler (20%), while that of the other two kinds and the used Makler chambers was zero. Conclusion: There were differences between the actual depths of sperm counting chambers and the corresponding nominal values, and the overall acceptability was very low. Moreover, abrasion caused by long use, as with the Makler chamber, may render a chamber unacceptable. To ensure the accuracy and repeatability of sperm concentration results, the depth of the sperm counting chamber must be checked regularly.

  4. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF-weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing the influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF-weights) and call the combination of weights the doubly weighted SVR. We illustrate the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate
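
Scikit-learn's SVR accepts per-observation weights on the slack penalties, which corresponds to the OF-weights described above (the CF-weights of the doubly weighted SVR have, to our knowledge, no off-the-shelf implementation). The decay scheme and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
n = 200
t = np.arange(n)                          # e.g. time index of house sales
X = rng.normal(0, 1, (n, 1))
y = 3.0 * X[:, 0] + 0.01 * t + rng.normal(0, 0.5, n)

# OF-weights: emphasize recent observations by down-weighting old ones,
# so their slack variables are penalized less in the objective.
weights = 0.95 ** (n - 1 - t)

svr = SVR(kernel="rbf", C=10.0)
svr.fit(X, y, sample_weight=weights)      # weighted slack penalties
print(svr.predict(X[-5:]))                # predictions for the newest points
```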

  5. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Background: Technological developments have increased the feasibility of large-scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high-dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results: We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions: The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
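
The permutation benchmark against which the proposed test is compared is itself easy to sketch: fit ridge regression, then refit on permuted responses to build a null distribution per coefficient. This shows the expensive baseline, not the authors' fast test; the sample sizes and shrinkage value are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
n, p = 150, 40
X = rng.normal(0, 1, (n, p))
X[:, 1] = X[:, 0] + rng.normal(0, 0.1, n)   # correlated "SNPs" (linkage-like)
y = 1.5 * X[:, 0] + rng.normal(0, 1, n)

alpha = 10.0                                 # shrinkage parameter
coef_obs = Ridge(alpha=alpha).fit(X, y).coef_

# Permutation null: shuffle y, refit, and record the coefficients.
n_perm = 500
null = np.empty((n_perm, p))
for b in range(n_perm):
    null[b] = Ridge(alpha=alpha).fit(X, rng.permutation(y)).coef_

# Two-sided permutation p-value per coefficient.
pvals = (np.abs(null) >= np.abs(coef_obs)).mean(axis=0)
print("p-values for first three predictors:", pvals[:3])
```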

  6. Credit Scoring Problem Based on Regression Analysis

    OpenAIRE

    Khassawneh, Bashar Suhil Jad Allah

    2014-01-01

    ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models, and to analyze the fitted models using statistical tools. Keywords: data mining, linear regression, logistic regression.

  7. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
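
The boosting idea used here for model choice can be illustrated with a bare-bones componentwise L2 boosting loop: at each step only the single base learner that best fits the current residuals is updated, so covariates that are never selected drop out of the model. The sketch uses plain linear base learners instead of the paper's penalized splines and spatial terms.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 300, 10
X = rng.normal(0, 1, (n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 1, n)  # only 2 relevant terms

coef = np.zeros(p)
resid = y - y.mean()
nu, n_steps = 0.1, 200                      # step length and number of iterations

for _ in range(n_steps):
    # Fit each candidate component (univariate OLS slope) to the residuals.
    slopes = X.T @ resid / (X**2).sum(axis=0)
    sse = ((resid[None, :] - slopes[:, None] * X.T) ** 2).sum(axis=1)
    j = np.argmin(sse)                      # best-fitting component this step
    coef[j] += nu * slopes[j]               # update only that component
    resid -= nu * slopes[j] * X[:, j]

selected = np.flatnonzero(coef != 0.0)
print("selected covariates:", selected)     # variable selection as a by-product
print("coefficients:", np.round(coef[selected], 2))
```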

  8. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project `Feasibility of electricity production from biomass by pressurized gasification systems` within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  9. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

    DEFF Research Database (Denmark)

    Merlo, Juan; Wagner, Philippe; Ghith, Nermin

    2016-01-01

    BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that disting...

  10. Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Science.gov (United States)

    Nathans, Laura L.; Oswald, Frederick L.; Nimon, Kim

    2012-01-01

    Multiple regression (MR) analyses are commonly employed in social science fields. It is also common for interpretation of results to typically reflect overreliance on beta weights, often resulting in very limited interpretations of variable importance. It appears that few researchers employ other methods to obtain a fuller understanding of what…

  11. Evaluating the performance of commonly used gas analysers for methane eddy covariance flux measurements: the InGOS inter-comparison field experiment

    NARCIS (Netherlands)

    Peltola, O.; Hensen, A.; Helfter, C.; Belelli Marchesini, L.; Bosveld, F.C.; Bulk, van de W.C.M.; Elbers, J.A.; Haapanala, S.; Holst, J.; Laurila, T.; Lindroth, A.; Nemitz, E.; Röckmann, T.; Vermeulen, A.T.; Mammarella, I.

    2014-01-01

    The performance of eight fast-response methane (CH4) gas analysers suitable for eddy covariance flux measurements was tested at a grassland site near the Cabauw tall tower (Netherlands) during June 2012. The instruments were positioned close to each other in order to minimise the effect of varying

  12. Evaluating the performance of commonly used gas analysers for methane eddy covariance flux measurements : the InGOS inter-comparison field experiment

    NARCIS (Netherlands)

    Peltola, O.; Hensen, A.; Helfter, C.; Belelli Marchesini, L.; Bosveld, F. C.; Van Den Bulk, W. C M; Elbers, J. A.; Haapanala, S.; Holst, J.; Laurila, T.; Lindroth, A.; Nemitz, E.; Röckmann, T.; Vermeulen, A. T.; Mammarella, I.

    2014-01-01

    The performance of eight fast-response methane (CH4) gas analysers suitable for eddy covariance flux measurements was tested at a grassland site near the Cabauw tall tower (Netherlands) during June 2012. The instruments were positioned close to each other in order to minimise the effect of varying

  13. Evaluating the performance of commonly used gas analysers for methane eddy covariance flux measurements: the InGOS inter-comparison field experiment

    NARCIS (Netherlands)

    Peltola, O.; Hensen, A.; Helfter, C.; Belelli Marchesini, L.; Bosveld, F. C.; Van Den Bulk, W. C. M.; Elbers, J. A.; Haapanala, S.; Holst, J.; Laurila, T.; Lindroth, A.; Nemitz, E.; Röckmann, T.; Vermeulen, A. T.; Mammarella, I.

    2014-01-01

    The performance of eight fast-response methane (CH4) gas analysers suitable for eddy covariance flux measurements was tested at a grassland site near the Cabauw tall tower (Netherlands) during June 2012. The instruments were positioned close to each other in order to minimize the effect of varying

  14. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as the regularization term to avoid the problem of overfitting. The class compactness graph is used to ensure that samples sharing the same labels can be kept close after they are transformed. Two different algorithms, based on two different norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.

  15. Application of nonlinear regression analysis for ammonium exchange by natural (Bigadic) clinoptilolite

    International Nuclear Information System (INIS)

    Gunay, Ahmet

    2007-01-01

    The experimental data for ammonium exchange by natural Bigadic clinoptilolite were evaluated using nonlinear regression analysis. Three two-parameter isotherm models (Langmuir, Freundlich and Temkin) and three three-parameter isotherm models (Redlich-Peterson, Sips and Khan) were used to analyse the equilibrium data. The fit of the isotherm models was assessed using values from the standard normalization error procedure (SNE) and the coefficient of determination (R²). The HYBRID error function provided the lowest sum of normalized errors, and the Khan model performed best in modeling the equilibrium data. Thermodynamic investigation indicated that ammonium removal by clinoptilolite was favorable at lower temperatures and exothermic in nature.
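
Nonlinear least-squares fitting of such isotherms is routine with scipy. The sketch below fits a two-parameter Langmuir model to made-up equilibrium data; the parameter values and measurements are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, b):
    """Langmuir isotherm: uptake q as a function of equilibrium concentration."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

# Hypothetical equilibrium data (mg/L vs. mg/g).
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_obs = np.array([8.1, 13.6, 21.9, 27.4, 31.2, 33.5])

params, _ = curve_fit(langmuir, c_eq, q_obs, p0=[30.0, 0.05])
q_max, b = params

residuals = q_obs - langmuir(c_eq, q_max, b)
r_squared = 1.0 - residuals.var() / q_obs.var()
print(f"q_max={q_max:.1f} mg/g, b={b:.3f} L/mg, R^2={r_squared:.3f}")
```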

  16. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
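
The core estimator described, the conditional probability that rain rate exceeds a fixed threshold given covariates, amounts to fitting a logistic model to a thresholded response. The toy sketch below uses an invented covariate and does not reproduce the paper's partial-likelihood treatment of dependence.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 1000
area_avg = rng.gamma(2.0, 2.0, n)             # hypothetical area-averaged rain rate
pixel_rain = area_avg * rng.lognormal(0, 0.5, n)

threshold = 5.0                                # fixed rain-rate threshold (mm/h)
exceeds = (pixel_rain > threshold).astype(int)

# Logistic regression of the exceedance indicator on the covariate.
X = sm.add_constant(area_avg)
fit = sm.Logit(exceeds, X).fit(disp=False)

# Estimated exceedance probability for a new area-averaged rate of 4 mm/h.
print(fit.predict(np.array([[1.0, 4.0]])))
```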

  17. Independent contrasts and PGLS regression estimators are equivalent.

    Science.gov (United States)

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression, and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
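
The GLS estimator at the heart of this equivalence is short enough to state in code. A minimal numpy sketch, assuming a known Brownian-motion covariance matrix V for an invented four-species phylogeny:

```python
import numpy as np

# Hypothetical Brownian-motion covariance: V[i, j] is the shared branch
# length between species i and j on the phylogeny.
V = np.array([[1.0, 0.6, 0.2, 0.2],
              [0.6, 1.0, 0.2, 0.2],
              [0.2, 0.2, 1.0, 0.7],
              [0.2, 0.2, 0.7, 1.0]])

x = np.array([1.2, 0.9, 2.5, 2.8])        # explanatory trait (treated as fixed)
y = np.array([3.1, 2.6, 5.9, 6.4])        # response trait
X = np.column_stack([np.ones(4), x])      # design matrix with intercept

# GLS: beta = (X' V^-1 X)^-1 X' V^-1 y -- the BLUE under Brownian motion,
# whose slope matches the PIC regression through the origin.
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("intercept, slope:", beta)
```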

  18. Evaluating the performance of commonly used gas analysers for methane eddy covariance flux measurements: the InGOS inter-comparison field experiment

    Science.gov (United States)

    Peltola, O.; Hensen, A.; Helfter, C.; Belelli Marchesini, L.; Bosveld, F. C.; van den Bulk, W. C. M.; Elbers, J. A.; Haapanala, S.; Holst, J.; Laurila, T.; Lindroth, A.; Nemitz, E.; Röckmann, T.; Vermeulen, A. T.; Mammarella, I.

    2014-06-01

    The performance of eight fast-response methane (CH4) gas analysers suitable for eddy covariance flux measurements was tested at a grassland site near the Cabauw tall tower (Netherlands) during June 2012. The instruments were positioned close to each other in order to minimise the effect of varying turbulent conditions. The moderate CH4 fluxes observed at the location, of the order of 25 nmol m⁻² s⁻¹, provided a suitable signal for testing the instruments' performance. Generally, all analysers tested were able to quantify the concentration fluctuations at the frequency range relevant for turbulent exchange and were able to deliver high-quality data. The tested cavity ringdown spectrometer (CRDS) instruments from Picarro, models G2311-f and G1301-f, were superior to other CH4 analysers with respect to instrumental noise. As an open-path instrument susceptible to the effects of rain, the LI-COR LI-7700 achieved lower data coverage and also required larger density corrections; however, the system is especially useful for remote sites that are restricted in power availability. In this study the open-path LI-7700 results were compromised due to a data acquisition problem in our data-logging setup. Some of the older closed-path analysers tested do not measure H2O concentrations alongside CH4 (i.e. FMA1 and DLT-100 by Los Gatos Research) and this complicates data processing since the required corrections for dilution and spectroscopic interactions have to be based on external information. To overcome this issue, we used H2O mole fractions measured by other gas analysers, adjusted them with different methods and then applied them to correct the CH4 fluxes. Following this procedure we estimated a bias of the order of 0.1 g (CH4) m⁻² (8% of the measured mean flux) in the processed and corrected CH4 fluxes on a monthly scale due to missing H2O concentration measurements. Finally, cumulative CH4 fluxes over 14 days from three closed-path gas analysers, G2311-f (Picarro Inc

  19. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration

    DEFF Research Database (Denmark)

    Liberati, Alessandro; Altman, Douglas G; Tetzlaff, Jennifer

    2009-01-01

    Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement--a reporting guideline published in 1999--there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).

  20. Supporting Regularized Logistic Regression Privately and Efficiently

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  1. Supporting Regularized Logistic Regression Privately and Efficiently.

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  2. Supporting Regularized Logistic Regression Privately and Efficiently.

    Directory of Open Access Journals (Sweden)

    Wenfa Li

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks, as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.

  3. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices used to diagnose multicollinearity, the basic principle of principal component regression, and a method for determining the 'best' equation. A worked example describes how to carry out principal component regression analysis with SPSS 10.0, covering the full calculation process of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and performing it in SPSS makes the procedure simpler and faster while remaining statistically accurate.
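
    For readers without SPSS, the same principal component regression can be sketched in Python with scikit-learn; the data below are synthetic and deliberately collinear, and the component count is chosen arbitrarily for illustration.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # collinear design: a few informative directions, many correlated columns
        X, y = make_regression(n_samples=200, n_features=10, effective_rank=3,
                               noise=5.0, random_state=0)

        # standardize -> project onto leading components -> ordinary least squares
        pcr = make_pipeline(StandardScaler(), PCA(n_components=3),
                            LinearRegression())
        pcr.fit(X, y)
        print("R^2 on training data:", pcr.score(X, y))

    Because the regression is run on a handful of orthogonal components rather than the original correlated columns, the coefficient estimates are stable even when the raw design matrix is nearly rank-deficient.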

  4. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively; for the propensity score model, it was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of the logistic regression estimates fell below the nominal level at events per coefficient ≤5; for the disease risk score, inverse probability weighting, and propensity score methods, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
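
    As a hedged illustration of one of the compared adjustments — a propensity score model combined with stabilized inverse probability weighting — the following sketch simulates a confounded binary exposure and outcome. The coefficients and sample size are arbitrary choices for the example, not those of the simulation study above.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 2000
        x = rng.normal(size=(n, 3))                         # measured confounders
        t = rng.binomial(1, 1 / (1 + np.exp(-(x @ [0.5, -0.3, 0.2]))))   # exposure
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * t + x @ [0.4, 0.4, -0.2]))))

        # step 1: propensity score model for the exposure
        ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))

        # step 2: stabilized inverse probability weights
        w = np.where(t == 1, t.mean() / ps, (1 - t.mean()) / (1 - ps))

        # step 3: weighted marginal outcome model on the exposure alone
        ipw = sm.GLM(y, sm.add_constant(t), family=sm.families.Binomial(),
                     var_weights=w).fit()
        print("marginal causal log-odds ratio:", ipw.params[1])

    Note that the weighted estimate targets the marginal log-odds ratio, which, by non-collapsibility of the odds ratio, need not equal the conditional coefficient used to generate the data.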

  5. [THE VIRTUAL CYTOLOGIC SLIDES FOR EXTERNAL EVALUATION OF QUALITY OF IMPLEMENTATION OF CYTOLOGIC ANALYSES IN CLINICAL DIAGNOSTIC LABORATORIES: POSSIBILITIES AND PERSPECTIVES].

    Science.gov (United States)

    Djangirova, T V; Shabalova, I P; Pronichev, A N; Polyakov, E V

    2015-08-01

    The article considers the application of virtual cytological slide technology in the external quality control of clinical diagnostic laboratories. The advantages of virtual slides are demonstrated against the other technologies applied in external quality evaluation, i.e. glass slide sets and digital microphotography. The conditions for preparing virtual slides for the external quality evaluation of clinical diagnostic laboratories, and the technology of their application, are described. The successful practical application of this technology in the Federal system of external quality evaluation is emphasized.

  6. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

    Science.gov (United States)

    Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

    2017-01-01

    Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: in the first step a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and the regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in adjusted R² from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R² improvement detected is only marginal: according to the sample-level performance measures used, leaving out a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
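
    A minimal sketch of the two-step idea — grow a regression tree, harvest feature pairs that co-occur on a root-to-leaf path, then add those pairs as interaction terms to a linear formula — is given below on synthetic morbidity-flag data. The insurer data, the weighting, and the exact tree settings of the study are of course not reproduced; every value here is illustrative.

        import numpy as np
        from itertools import combinations
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        X = rng.binomial(1, 0.3, size=(5000, 6)).astype(float)  # morbidity flags
        y = (X @ [1.0, 2.0, 0.5, 0.0, 1.0, 0.0]
             + 3.0 * X[:, 0] * X[:, 1] + rng.normal(size=5000))

        # step 1: grow a tree; feature pairs sharing a root-to-leaf path are
        # candidate interactions
        tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200).fit(X, y)
        feat = tree.tree_.feature        # split feature per node, -2 at leaves
        pairs = set()

        def walk(node, seen):
            if feat[node] < 0:           # leaf: record feature pairs on the path
                pairs.update(combinations(sorted(seen), 2))
                return
            walk(tree.tree_.children_left[node], seen | {int(feat[node])})
            walk(tree.tree_.children_right[node], seen | {int(feat[node])})

        walk(0, frozenset())

        # step 2: refit the linear formula with the detected interaction terms
        X_aug = np.hstack([X] + [(X[:, i] * X[:, j])[:, None] for i, j in pairs])
        ols = LinearRegression().fit(X_aug, y)
        print("interactions:", sorted(pairs),
              "R^2:", round(ols.score(X_aug, y), 4))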

  7. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

  8. Antimicrobial activity of four different dental gel formulas on cariogenic bacteria evaluated using the linear regression method

    Directory of Open Access Journals (Sweden)

    Nádia Araci Bou-Chacra

    2005-09-01

    Full Text Available The antimicrobial activity of four different dental gel formulas was evaluated on three microorganisms associated with cariogenesis: Streptococcus mutans, Lactobacillus casei and Actinomyces viscosus. The preliminary antimicrobial activity evaluation was performed using an agar diffusion method. In addition, the formulas were challenged with each microorganism, with subsequent enumeration of survivors at time intervals of 1, 5, 10, 20 and 30 minutes. The decimal reduction time (D-value) calculated from the resulting curves (log CFU/mL vs. time) was employed to compare the antimicrobial activity of the formulas. The method selected for survivor enumeration was validated according to official compendia. Results revealed intense bactericidal activity, even at 1:2 dilution, on S. mutans and L. casei. The data concerning A. viscosus showed no microbial reduction in the challenge employing diluted formulas over the selected time interval. The obtained D-values were 0.21, 2.08, 1.93 and 5.79 minutes for formulas 1, 2, 3 and 4, respectively. On the basis of these results, formula 1 can be considered to have the highest bactericidal activity.
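
    The D-value computation itself is a one-line regression: fit log10 survivors against time and take the negative reciprocal of the slope. A minimal sketch with hypothetical survivor counts (not the study's data):

        import numpy as np

        # survivor counts for one formula (hypothetical values for illustration)
        t = np.array([1, 5, 10, 20, 30], dtype=float)        # minutes
        cfu = np.array([2.0e6, 4.1e5, 6.3e4, 1.5e3, 4.0e1])  # CFU/mL

        # linear regression of log10 survivors on time; D = -1/slope
        slope, intercept = np.polyfit(t, np.log10(cfu), 1)
        print(f"D-value: {-1.0 / slope:.2f} minutes")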

  9. Conjoined legs: Sirenomelia or caudal regression syndrome?

    Directory of Open Access Journals (Sweden)

    Sakti Prasad Das

    2013-01-01

    Full Text Available The presence of a single umbilical persistent vitelline artery distinguishes sirenomelia from caudal regression syndrome. We report the case of a 12-year-old boy with bilateral umbilical arteries who presented with fusion of both legs in the lower one-third of the leg. Both feet were rudimentary. The right foot had a valgus rocker-bottom deformity. All toes were present but rudimentary. The left foot showed absence of all toes. Physical examination showed left tibia vara. Chest evaluation in sitting revealed pigeon chest and an elevated right shoulder. Posterior examination of the trunk showed thoracic scoliosis with convexity to the right. The patient was operated on, and at 1-year follow-up the boy had two separate legs with good aesthetic and functional results.

  10. Conjoined legs: Sirenomelia or caudal regression syndrome?

    Science.gov (United States)

    Das, Sakti Prasad; Ojha, Niranjan; Ganesh, G Shankar; Mohanty, Ram Narayan

    2013-07-01

    The presence of a single umbilical persistent vitelline artery distinguishes sirenomelia from caudal regression syndrome. We report the case of a 12-year-old boy with bilateral umbilical arteries who presented with fusion of both legs in the lower one-third of the leg. Both feet were rudimentary. The right foot had a valgus rocker-bottom deformity. All toes were present but rudimentary. The left foot showed absence of all toes. Physical examination showed left tibia vara. Chest evaluation in sitting revealed pigeon chest and an elevated right shoulder. Posterior examination of the trunk showed thoracic scoliosis with convexity to the right. The patient was operated on, and at 1-year follow-up the boy had two separate legs with good aesthetic and functional results.

  11. Radiotoxicological analyses of 239+240Pu and 241Am in biological samples by anion-exchange and extraction chromatography: a preliminary study for internal contamination evaluations

    International Nuclear Information System (INIS)

    Ridone, S.; Arginelli, D.; Bortoluzzi, S.; Canuto, G.; Montalto, M.; Nocente, M.; Vegro, M.

    2006-01-01

    Many biological samples (urine and faeces) from workers of a reprocessing nuclear plant in the decommissioning phase have been analysed by means of extraction chromatography columns, utilising two different resins (AG 1-X2 resin chloride and T.R.U.), in order to detect possible internal contamination with 239+240Pu and 241Am. The results obtained show, on the one hand, the great suitability of the first resin for the determination of plutonium and, on the other, the great selectivity of the second for the determination of americium.

  12. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimates of the Sobol indices are obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global approximation strength of the polynomial kernel and the local approximation strength of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
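
    Scikit-learn's SVR accepts a callable kernel, so a mixed kernel can be illustrated directly. The sketch below uses a plain (not orthogonal) polynomial term plus a Gaussian RBF term, fits a surrogate to the standard Ishigami test function, and estimates first-order Sobol indices by pick-and-freeze Monte Carlo on the cheap surrogate. It is a simplified stand-in for the paper's coefficient post-processing, with all tuning values chosen arbitrarily.

        import numpy as np
        from sklearn.svm import SVR

        def mixed_kernel(X, Y, w=0.5, degree=2, gamma=0.5):
            # convex combination of a polynomial kernel and a Gaussian RBF kernel
            poly = (1.0 + X @ Y.T) ** degree
            sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return w * poly + (1.0 - w) * np.exp(-gamma * sq)

        def ishigami(x):                 # standard GSA test function
            return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        rng = np.random.default_rng(3)
        X_train = rng.uniform(-np.pi, np.pi, size=(300, 3))
        svr = SVR(kernel=mixed_kernel, C=100.0).fit(X_train, ishigami(X_train))

        # first-order Sobol indices via pick-and-freeze on the surrogate
        N = 10000
        A = rng.uniform(-np.pi, np.pi, size=(N, 3))
        B = rng.uniform(-np.pi, np.pi, size=(N, 3))
        fA, fB = svr.predict(A), svr.predict(B)
        var = np.var(np.concatenate([fA, fB]))
        for i in range(3):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # vary only x_i between the two samples
            print(f"S_{i+1} ~= {np.mean(fB * (svr.predict(ABi) - fA)) / var:.3f}")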

  13. Unbalanced Regressions and the Predictive Equation

    DEFF Research Database (Denmark)

    Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

    Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...

  14. Semiparametric regression during 2003–2007

    KAUST Repository

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2009-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.
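
    The low-rank penalized splines at the core of semiparametric regression can be written in a few lines: a truncated-line basis plus a ridge penalty on the spline coefficients only. A minimal numpy sketch on synthetic data follows; the knot count and smoothing parameter are chosen arbitrarily, and in the mixed-model formulation the review describes they would instead be estimated from the data.

        import numpy as np

        rng = np.random.default_rng(4)
        x = np.sort(rng.uniform(0, 1, 200))
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)

        # low-rank truncated-line basis: [1, x, (x - k)_+ for knots k]
        knots = np.linspace(0.05, 0.95, 20)
        Z = np.maximum(x[:, None] - knots[None, :], 0.0)
        C = np.column_stack([np.ones_like(x), x, Z])

        # ridge penalty on the spline coefficients only (smoothing parameter lam)
        lam = 1e-2
        D = np.diag([0.0, 0.0] + [1.0] * len(knots))
        beta = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
        fit = C @ beta
        print("residual SD:", np.std(y - fit))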

  15. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
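
    A basic Gaussian process regression — the scalar building block that the monograph extends to functional responses and mixed covariates — can be sketched with scikit-learn; the kernel choice and noise level below are illustrative only.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        X = rng.uniform(0, 10, size=(40, 1))
        y = np.sin(X).ravel() + rng.normal(0, 0.2, 40)

        # GP prior with an RBF covariance plus an observation-noise term
        kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        X_new = np.linspace(0, 10, 5)[:, None]
        mean, sd = gp.predict(X_new, return_std=True)
        print(np.c_[X_new.ravel(), mean, sd])   # posterior mean and pointwise SD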

  16. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  17. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
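
    One common construction — the partially standardized coefficient b·SD(x), alongside a Menard-style fully standardized variant that also scales by the SD of the predicted logits and a pseudo-R — can be sketched as follows. The fully standardized formula here is an approximation for illustration, not necessarily the single best approach the record refers to.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        X = rng.normal(size=(500, 3)) * [1.0, 10.0, 100.0]   # very different scales
        y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.0, 0.1, 0.01]))))

        fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        b = fit.params[1:]                      # raw logit coefficients

        # partially standardized: effect of a one-SD change in x_k, logit units
        b_partial = b * X.std(axis=0)

        # fully standardized (Menard-style): additionally scale by a pseudo-R
        # over the SD of the predicted logits
        logit_hat = sm.add_constant(X) @ fit.params
        b_full = b_partial * np.sqrt(fit.prsquared) / logit_hat.std()
        print(np.column_stack([b, b_partial, b_full]))

    On the synthetic data above, the raw coefficients differ by two orders of magnitude while the standardized versions are comparable, which is precisely the point of standardization.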

  18. A Seemingly Unrelated Poisson Regression Model

    OpenAIRE

    King, Gary

    1989-01-01

    This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

  19. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    Science.gov (United States)

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide, to a maximum coating thickness of 80 µm. Raman spectroscopy was used as the in-line PAT tool. Spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and to a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficients of determination (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) of less than 3.1% of the applied coating suspension. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
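
    Of the three compared methods, PLS regression is the most readily reproduced with open tools. A hedged sketch on synthetic "spectra" follows — a single coating-dependent band plus noise; real Raman data and the SBC method itself are not reproduced here, and the component count is arbitrary.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # synthetic spectra: coating amount scales a broad band's intensity
        rng = np.random.default_rng(7)
        wavenumbers = np.linspace(0, 1, 300)
        amount = rng.uniform(0, 100, 120)            # applied coating suspension
        band = np.exp(-((wavenumbers - 0.5) ** 2) / 0.01)
        spectra = amount[:, None] * band + rng.normal(0, 0.5, (120, 300))

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, amount, random_state=0)
        pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
        pred = pls.predict(X_te).ravel()
        rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
        print(f"R^2 = {pls.score(X_te, y_te):.4f}, RMSEP = {rmsep:.2f}")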

  20. Evaluation of jaw and neck muscle activities while chewing using EMG-EMG transfer function and EMG-EMG coherence function analyses in healthy subjects.

    Science.gov (United States)

    Ishii, Tomohiro; Narita, Noriyuki; Endo, Hiroshi

    2016-06-01

    This study aims to quantitatively clarify the physiological features of rhythmically coordinated jaw and neck muscle EMG activities during gum chewing, using EMG-EMG transfer function and EMG-EMG coherence function analyses in 20 healthy subjects. The chewing-side masseter muscle EMG signal was used as the reference signal, while the other jaw muscle (non-chewing side masseter, bilateral anterior temporal, and bilateral anterior digastric) and neck muscle (bilateral sternocleidomastoid) EMG signals were used as the examined signals. Chewing-related jaw and neck muscle activities were aggregated in the first peak of the power spectrum during rhythmic chewing. The gain at the peak frequency represented the power relationships between jaw and neck muscle activities, and the phase at the peak frequency represented their temporal relationships; the non-chewing side neck muscle presented a broad range of phase distributions across the jaw closing and opening phases. Coherence at the peak frequency represented the synergistic features of the bilateral jaw closing muscle and chewing-side neck muscle activities. The coherence and phase of non-chewing side neck muscle activities exhibited a significant negative correlation. From the above, bilateral coordination between the jaw and neck muscle activities is inferred when the non-chewing side neck muscle is synchronously activated with the jaw closing muscles, whereas unilateral coordination is inferred when the non-chewing side neck muscle is irregularly activated in the jaw opening phase. Thus, the occurrence of bilateral or unilateral coordination in the jaw and neck muscle activities may correspond to the phase characteristics of the non-chewing side neck muscle activities during rhythmic chewing. Considering these novel findings in healthy subjects, EMG