WorldWideScience

Sample records for methods confounding variables

  1. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
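
    A minimal numerical sketch of the phenomenon this record describes, under assumed settings (linear regression, one normally distributed confounder dichotomized at its median, a truly null exposure effect); this is not the authors' simulation code:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_sims = 1000, 2000
rejections = 0

for _ in range(n_sims):
    c = rng.normal(size=n)                     # continuous confounder
    x = 0.7 * c + rng.normal(size=n)           # exposure, driven by the confounder
    y = 0.7 * c + rng.normal(size=n)           # outcome: no true exposure effect
    c_cat = (c > np.median(c)).astype(float)   # dichotomize the confounder

    # OLS of y on [1, x, c_cat]; test H0: beta_x = 0 at the ~5% level
    X = np.column_stack([np.ones(n), x, c_cat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] / se) > 1.96:
        rejections += 1

# Residual confounding left by the median split inflates the rejection rate
print(f"Empirical Type-I error: {rejections / n_sims:.3f}")  # well above 0.05
```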

  2. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    Science.gov (United States)

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

    Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding, either at the study design phase or the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for their proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time-control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantive clinical knowledge for interpretation of the study results.

  3. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

    Confounding in a counterfactual model with three binary variables is discussed in this paper. Depending on the relationship between the control variable and the covariate, we investigate three counterfactual models: the control variable is independent of the covariate, the control variable affects the covariate, and the covariate affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions...

  4. Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders.

    Science.gov (United States)

    Vanderweele, Tyler J; Arah, Onyebuchi A

    2011-01-01

    Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for additive, risk-ratio and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with constant effect on the outcome across treatment levels.
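
    For the simple special case highlighted at the end of this abstract (a binary unmeasured confounder U with a constant effect gamma on the outcome across treatment levels A), the additive-scale bias within a stratum of measured covariates c takes roughly this form; the notation paraphrases the potential-outcomes literature rather than quoting the paper:

    $$ \text{bias}(c) \;=\; \gamma \,\bigl[\, P(U=1 \mid A=1, C=c) \;-\; P(U=1 \mid A=0, C=c) \,\bigr] $$

    The bias-corrected estimate is then obtained by subtracting this quantity from the confounder-adjusted estimate, with plausible values of gamma and the two prevalences supplied as sensitivity parameters.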

  5. Predictive modelling using neuroimaging data in the presence of confounds.

    Science.gov (United States)

    Rao, Anil; Monteiro, Joao M; Mourao-Miranda, Janaina

    2017-04-15

    When training predictive models from neuroimaging data, we typically have available non-imaging variables such as age and gender that affect the imaging data but which we may be uninterested in from a clinical perspective. Such variables are commonly referred to as 'confounds'. In this work, we firstly give a working definition for confound in the context of training predictive models from samples of neuroimaging data. We define a confound as a variable which affects the imaging data and has an association with the target variable in the sample that differs from that in the population-of-interest, i.e., the population over which we intend to apply the estimated predictive model. The focus of this paper is the scenario in which the confound and target variable are independent in the population-of-interest, but the training sample is biased due to a sample association between the target and confound. We then discuss standard approaches for dealing with confounds in predictive modelling such as image adjustment and including the confound as a predictor, before deriving and motivating an Instance Weighting scheme that attempts to account for confounds by focusing model training so that it is optimal for the population-of-interest. We evaluate the standard approaches and Instance Weighting in two regression problems with neuroimaging data in which we train models in the presence of confounding, and predict samples that are representative of the population-of-interest. For comparison, these models are also evaluated when there is no confounding present. In the first experiment we predict the MMSE score using structural MRI from the ADNI database with gender as the confound, while in the second we predict age using structural MRI from the IXI database with acquisition site as the confound. Considered over both datasets we find that none of the methods for dealing with confounding gives more accurate predictions than a baseline model which ignores confounding, although
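
    A simplified sketch of the weighting idea, assuming a binary confound and a coarsened target to estimate the sample association; the paper's actual Instance Weighting scheme may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
conf = rng.integers(0, 2, size=n).astype(float)   # binary confound, e.g. gender
t = rng.normal(size=n) + 1.0 * conf               # target, associated with confound in the sample
X = rng.normal(size=(n, 20)) + 0.3 * t[:, None]   # "imaging" features driven by the target

# Weight each instance by p(t_bin) * p(conf) / p(t_bin, conf): the reweighted
# sample mimics a population in which target and confound are independent.
t_bin = np.digitize(t, np.quantile(t, [0.25, 0.5, 0.75]))
w = np.ones(n)
for b in np.unique(t_bin):
    for c in (0.0, 1.0):
        idx = (t_bin == b) & (conf == c)
        if idx.any():
            w[idx] = (t_bin == b).mean() * (conf == c).mean() / idx.mean()

def wcorr(a, b, weights):
    """Weighted correlation between a and b."""
    am, bm = np.average(a, weights=weights), np.average(b, weights=weights)
    cov = np.average((a - am) * (b - bm), weights=weights)
    va = np.average((a - am) ** 2, weights=weights)
    vb = np.average((b - bm) ** 2, weights=weights)
    return cov / np.sqrt(va * vb)

print("target-confound correlation, unweighted:", round(wcorr(t, conf, np.ones(n)), 2))
print("target-confound correlation, weighted:  ", round(wcorr(t, conf, w), 2))

# The weights then enter model training, e.g. a weighted ridge regression:
lam = 1.0
beta = np.linalg.solve(X.T @ (X * w[:, None]) + lam * np.eye(X.shape[1]),
                       (X * w[:, None]).T @ t)
```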

  6. Which Propensity Score Method Best Reduces Confounder Imbalance? An Example From a Retrospective Evaluation of a Childhood Obesity Intervention.

    Science.gov (United States)

    Schroeder, Krista; Jia, Haomiao; Smaldone, Arlene

    Propensity score (PS) methods are increasingly being employed by researchers to reduce bias arising from confounder imbalance when using observational data to examine intervention effects. The purpose of this study was to examine PS theory and methodology and compare the application of three PS methods (matching, stratification, weighting) to determine which best improves confounder balance. Baseline characteristics of a sample of 20,518 school-aged children with severe obesity (of whom 1,054 received an obesity intervention) were assessed prior to PS application. Three PS methods were then applied to the data to determine which showed the greatest improvement in confounder balance between the intervention and control groups. The effect of each PS method on the outcome variable (body mass index percentile change at one year) was also examined. SAS 9.4 and Comprehensive Meta-analysis statistical software were used for analyses. Prior to PS adjustment, the intervention and control groups differed significantly on seven of 11 potential confounders. PS matching removed all differences. PS stratification and weighting each removed one difference but created two new differences. Sensitivity analyses did not change these results. Body mass index percentile at 1 year decreased in both groups. The size of the decrease was smaller in the intervention group, and the estimate of the decrease varied by PS method. Selection of a PS method should be guided by insight from statistical theory and simulation experiments, in addition to observed improvement in confounder balance. For this data set, PS matching worked best to correct confounder imbalance. Because each method varied in its ability to correct confounder imbalance, we recommend that multiple PS methods be compared for ability to improve confounder balance before implementation in evaluating treatment effects in observational data.
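
    The balance diagnostic at the core of this comparison can be sketched for the weighting approach as follows, using simulated data and inverse-probability-of-treatment weights (the study itself used SAS 9.4; this Python sketch only illustrates the standardized-mean-difference check):

```python
import numpy as np

def smd(x, z, w=None):
    """Standardized mean difference of covariate x between groups z=1 and z=0,
    optionally weighted; denominator is the pooled unweighted SD."""
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[z == 1], weights=w[z == 1])
    m0 = np.average(x[z == 0], weights=w[z == 0])
    s = np.sqrt((x[z == 1].var() + x[z == 0].var()) / 2)
    return (m1 - m0) / s

rng = np.random.default_rng(1)
n = 5000
age = rng.normal(10, 2, n)                       # a potential confounder
treated = rng.random(n) < 1 / (1 + np.exp(3 - 0.25 * age))

# Propensity score via logistic regression (a few Newton-Raphson steps)
X = np.column_stack([np.ones(n), age])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (treated - p))
ps = 1 / (1 + np.exp(-X @ beta))

w = np.where(treated, 1 / ps, 1 / (1 - ps))      # IPTW weights
print("SMD before weighting:", round(smd(age, treated), 3))
print("SMD after weighting: ", round(smd(age, treated, w), 3))  # near 0
```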

  7. Methods to control for unmeasured confounding in pharmacoepidemiology : an overview

    NARCIS (Netherlands)

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H.

    2016-01-01

    Background Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review To provide an overview of commonly

  8. Confounding adjustment through front-door blocking in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Arvid Sjölander

    2013-03-01

    A common aim of epidemiological research is to estimate the causal effect of a particular exposure on a particular outcome. Towards this end, observed associations are often 'adjusted' for potential confounding variables. When the potential confounders are unmeasured, explicit adjustment becomes unfeasible. It has been demonstrated that causal effects can be estimated even in the presence of unmeasured confounding, utilizing a method called 'front-door blocking'. In this paper we generalize this method to longitudinal studies. We demonstrate that the method of front-door blocking poses a number of challenging statistical problems, analogous to the famous problems associated with the method of 'back-door blocking'.
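
    For reference, the front-door adjustment in the point-treatment case can be written as follows (the standard formula from the causal graphical models literature; the paper's contribution is its extension to longitudinal data):

    $$ P(y \mid do(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid m, x')\, P(x') $$

    where M is a mediator that intercepts all directed paths from the exposure X to the outcome Y, is unconfounded with X, and is unconfounded with Y once X is conditioned on.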

  9. [COMPUTER TECHNOLOGY FOR ACCOUNTING FOR CONFOUNDERS IN RISK ASSESSMENT IN COMPARATIVE STUDIES BASED ON THE METHOD OF STANDARDIZATION].

    Science.gov (United States)

    Shalaumova, Yu V; Varaksin, A N; Panov, V G

    2016-01-01

    We analyzed how to account for concomitant variables (confounders) that introduce systematic error into the assessment of the impact of risk factors on the response variable. The analysis showed that standardization is an effective method for reducing the bias of risk estimates. An algorithm is proposed that implements standardization based on stratification, minimizing the difference between the distributions of confounders across the risk-factor groups. To automate the standardization procedure, software was developed and made available on the website of the Institute of Industrial Ecology, UB RAS. Using this software and numerical modelling, we determined the conditions of applicability of stratification-based standardization for the case of a normally distributed response and confounder with a linear relationship between them. Comparing the standardization results with statistical methods (logistic regression and analysis of covariance) on a problem in human ecology showed that the approaches agree closely only when the conditions of applicability of the statistical methods are met exactly; standardization is less sensitive to violations of those conditions.
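
    A sketch of stratification-based standardization under conditions matching those studied here (normally distributed response and confounder, linear relationship, binary risk factor); this is not the Institute's published software:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
conf = rng.normal(50, 10, n)                     # confounder, e.g. age
risk = rng.random(n) < 1 / (1 + np.exp(-(conf - 50) / 10))  # risk factor tied to confounder
y = 0.05 * conf + rng.normal(size=n)             # response driven by the confounder only

# Stratify on the confounder so its distribution is (nearly) matched
# between groups within each stratum, then pool with common weights.
strata = np.digitize(conf, np.quantile(conf, np.linspace(0.1, 0.9, 9)))
effects, weights = [], []
for s in np.unique(strata):
    in_s = strata == s
    if risk[in_s].all() or (~risk[in_s]).all():
        continue                                  # stratum lacks one of the groups
    effects.append(y[in_s & risk].mean() - y[in_s & ~risk].mean())
    weights.append(in_s.sum())

std_effect = np.average(effects, weights=weights)
crude = y[risk].mean() - y[~risk].mean()
# The crude contrast is confounded; the standardized one is much closer to 0.
print(f"crude difference: {crude:.3f}, standardized: {std_effect:.3f}")
```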

  10. Control principles of confounders in ecological comparative studies: standardization and regression models

    Directory of Open Access Journals (Sweden)

    Varaksin Anatoly

    2014-03-01

    Methods for the analysis of research data that include concomitant variables (confounders), associated with both the response and the factor under study, are considered. There are two usual points at which such variables can be taken into account: first, at the stage of planning the experiment, and second, when analyzing the collected data. Despite the equal effectiveness of these approaches, the authors argue that there is strong reason to restrict the use of regression methods such as ANCOVA for accounting for confounders. They consider standardization by stratification a more reliable method to account for the effect of confounding factors than the widely implemented logistic regression and covariance analysis. A program for the automation of the standardization procedure is proposed; it is available at the site of the Institute of Industrial Ecology.

  11. Mismeasurement and the resonance of strong confounders: uncorrelated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L

    1996-05-15

    Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate: that is, it could bias estimates of other study variables, and the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
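
    The continuous-variable scenario can be checked numerically. The sketch below assumes unit-variance variables and measurement error equal to 40% of the observed confounder variance, as in the abstract; the coefficient values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
conf = rng.normal(size=n)                        # true strong confounder
null_var = 0.6 * conf + rng.normal(size=n)       # null variable, correlated with confounder
y = 1.0 * conf + rng.normal(size=n)              # outcome depends on the confounder only

# Observe the confounder with error making up 40% of its observed variance:
# sigma_e^2 = 0.4 / 0.6 so that sigma_e^2 / (1 + sigma_e^2) = 0.4
conf_obs = conf + rng.normal(scale=np.sqrt(0.4 / 0.6), size=n)

# Adjusting for the error-ridden confounder leaves residual confounding,
# so the truly null variable picks up a spurious effect.
X = np.column_stack([np.ones(n), null_var, conf_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"apparent effect of the null variable after adjustment: {beta[1]:.3f}")
```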

  12. A Comprehensive Analysis of the SRS-Schwab Adult Spinal Deformity Classification and Confounding Variables

    DEFF Research Database (Denmark)

    Hallager, Dennis Winge; Hansen, Lars Valentin; Dragsted, Casper Rokkjær

    2016-01-01

    STUDY DESIGN: Cross-sectional analyses on a consecutive, prospective cohort. OBJECTIVE: To evaluate the ability of the Scoliosis Research Society (SRS)-Schwab Adult Spinal Deformity Classification to group patients by widely used health-related quality-of-life (HRQOL) scores and examine possible … to confounding. However, age group and aetiology had individual significant effects. CONCLUSION: The SRS-Schwab sagittal modifiers reliably grouped patients graded 0 versus +/++ according to the most widely used HRQOL scores and the effects of increasing grade level on odds for worse ODI scores remained … confounding variables. SUMMARY OF BACKGROUND DATA: The SRS-Schwab Adult Spinal Deformity Classification includes sagittal modifiers considered important for HRQOL and the clinical impact of the classification has been validated in patients from the International Spine Study Group database; however, equivocal…

  13. Mismeasurement and the resonance of strong confounders: correlated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.

  14. Directed acyclic graphs (DAGs): an aid to assess confounding in dental research.

    Science.gov (United States)

    Merchant, Anwar T; Pitiphat, Waranuch

    2002-12-01

    Confounding, a special type of bias, occurs when an extraneous factor is associated with the exposure and independently affects the outcome. In order to get an unbiased estimate of the exposure-outcome relationship, we need to identify potential confounders, collect information on them, design appropriate studies, and adjust for confounding in data analysis. However, it is not always clear which variables to collect information on and adjust for in the analyses. Inappropriate adjustment for confounding can even introduce bias where none existed. Directed acyclic graphs (DAGs) provide a method to select potential confounders and minimize bias in the design and analysis of epidemiological studies. DAGs have been used extensively in expert systems and robotics. Robins (1987) introduced the application of DAGs in epidemiology to overcome shortcomings of traditional methods to control for confounding, especially as they related to unmeasured confounding. DAGs provide a quick and visual way to assess confounding without making parametric assumptions. We introduce DAGs, starting with definitions and rules for basic manipulation, stressing more on applications than theory. We then demonstrate their application in the control of confounding through examples of observational and cross-sectional epidemiological studies.
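
    A toy simulation of this abstract's warning that inappropriate adjustment can introduce bias where none existed, using the classic collider structure E -> S <- D (hypothetical numbers, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
e = rng.normal(size=n)                  # exposure
d = rng.normal(size=n)                  # outcome, truly independent of e
s = e + d + rng.normal(size=n)          # collider: a common effect of both

for label, X in [("crude", np.column_stack([np.ones(n), e])),
                 ("adjusted for collider", np.column_stack([np.ones(n), e, s]))]:
    beta = np.linalg.lstsq(X, d, rcond=None)[0]
    print(f"{label:>22}: estimated effect of E on D = {beta[1]: .3f}")
# The crude estimate is ~0 (correct); conditioning on the collider S opens a
# spurious path and biases the estimate away from 0, exactly the situation a
# DAG would flag before any data are collected.
```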

  15. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  16. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    Science.gov (United States)

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal relationship between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data on a sample variable to make a valid estimate of its corresponding, but unknown, underlying population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a variable that is associated, positively or negatively, with both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the exposure-outcome association.

  17. Sensitivity analysis for direct and indirect effects in the presence of exposure-induced mediator-outcome confounders

    Science.gov (United States)

    Chiba, Yasutaka

    2014-01-01

    Questions of mediation are often of interest in reasoning about mechanisms, and methods have been developed to address these questions. However, these methods make strong assumptions about the absence of confounding. Even if exposure is randomized, there may be mediator-outcome confounding variables. Inference about direct and indirect effects is particularly challenging if these mediator-outcome confounders are affected by the exposure because in this case these effects are not identified irrespective of whether data is available on these exposure-induced mediator-outcome confounders. In this paper, we provide a sensitivity analysis technique for natural direct and indirect effects that is applicable even if there are mediator-outcome confounders affected by the exposure. We give techniques for both the difference and risk ratio scales and compare the technique to other possible approaches. PMID:25580387

  18. Instrumental variable methods in comparative safety and effectiveness research.

    Science.gov (United States)

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will be often underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
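
    A bare-bones two-stage least squares sketch with a single valid instrument (hypothetical data; the article itself is a non-technical introduction and contains no code):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
u = rng.normal(size=n)                    # unmeasured confounder
z = rng.normal(size=n)                    # instrument: affects treatment only
treat = 0.5 * z + u + rng.normal(size=n)
y = 1.0 * treat + u + rng.normal(size=n)  # true treatment effect = 1.0

# Stage 1: predict treatment from the instrument
Z = np.column_stack([np.ones(n), z])
treat_hat = Z @ np.linalg.lstsq(Z, treat, rcond=None)[0]

# Stage 2: regress the outcome on the predicted treatment
X = np.column_stack([np.ones(n), treat_hat])
beta_iv = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS for comparison: biased upward by the unmeasured confounder
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), treat]), y, rcond=None)[0][1]
print(f"naive OLS: {beta_ols:.2f} (confounded), 2SLS: {beta_iv:.2f} (close to 1.0)")
```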

  19. Instrumental variable methods in comparative safety and effectiveness research†

    Science.gov (United States)

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Summary: Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will be often underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968

  20. Effect decomposition in the presence of an exposure-induced mediator-outcome confounder

    Science.gov (United States)

    VanderWeele, Tyler J.; Vansteelandt, Stijn; Robins, James M.

    2014-01-01

    Methods from causal mediation analysis have generalized the traditional approach to direct and indirect effects in the epidemiologic and social science literature by allowing for interaction and non-linearities. However, the methods from the causal inference literature have themselves been subject to a major limitation in that the so-called natural direct and indirect effects that are employed are not identified from data whenever there is a variable that is affected by the exposure, which also confounds the relationship between the mediator and the outcome. In this paper we describe three alternative approaches to effect decomposition that give quantities that can be interpreted as direct and indirect effects, and that can be identified from data even in the presence of an exposure-induced mediator-outcome confounder. We describe a simple weighting-based estimation method for each of these three approaches, illustrated with data from perinatal epidemiology. The methods described here can shed insight into pathways and questions of mediation even when an exposure-induced mediator-outcome confounder is present. PMID:24487213

  1. Environmental confounding in gene-environment interaction studies.

    Science.gov (United States)

    Vanderweele, Tyler J; Ko, Yi-An; Mukherjee, Bhramar

    2013-07-01

    We show that, in the presence of uncontrolled environmental confounding, joint tests for the presence of a main genetic effect and gene-environment interaction will be biased if the genetic and environmental factors are correlated, even if there is no effect of either the genetic factor or the environmental factor on the disease. When environmental confounding is ignored, such tests will in fact reject the joint null of no genetic effect with a probability that tends to 1 as the sample size increases. This problem with the joint test vanishes under gene-environment independence, but it still persists if estimating the gene-environment interaction parameter itself is of interest. Uncontrolled environmental confounding will bias estimates of gene-environment interaction parameters even under gene-environment independence, but it will not do so if the unmeasured confounding variable itself does not interact with the genetic factor. Under gene-environment independence, if the interaction parameter without controlling for the environmental confounder is nonzero, then there is gene-environment interaction either between the genetic factor and the environmental factor of interest or between the genetic factor and the unmeasured environmental confounder. We evaluate several recently proposed joint tests in a simulation study and discuss the implications of these results for the conduct of gene-environment interaction studies.

  2. The impact of bilingualism on cognitive aging and dementia: Finding a path through a forest of confounding variables

    OpenAIRE

    Bak, Thomas

    2016-01-01

    Within the current debates on cognitive reserve, cognitive aging and dementia, showing increasingly a positive effect of mental, social and physical activities on health in older age, bilingualism remains one of the most controversial issues. Some reasons for it might be social or even ideological. However, one of the most important genuine problems facing bilingualism research is the high number of potential confounding variables. Bilingual communities often differ from monolingual ones in a...

  3. Assessment of Confounding in Studies of Delay and Survival

    DEFF Research Database (Denmark)

    Tørring, Marie Louise; Vedsted, Peter; Frydenberg, Morten

    BACKGROUND: Whether longer time to diagnosis (diagnostic delay) in patients with cancer symptoms is directly and independently associated with poor prognosis cannot be determined in randomised controlled trials. Analysis of observational data is therefore necessary. Many previous studies of the i… 1) Clarify which factors are considered confounders or intermediate variables in the literature. 2) Assess how and to what extent these factors bias survival estimates. CONSIDERATIONS: As illustrated in Figure 1, symptoms of cancer may alert patients, GPs, and hospital doctors differently and influence both delay and survival time in different ways. We therefore assume that the impact of confounding factors depends on the type of delay studied (e.g., patient delay, GP delay, referral delay, or treatment delay). MATERIALS & METHODS: The project includes systematic review and methodological developments…

  4. A Comprehensive Analysis of the SRS-Schwab Adult Spinal Deformity Classification and Confounding Variables: A Prospective, Non-US Cross-sectional Study in 292 Patients.

    Science.gov (United States)

    Hallager, Dennis Winge; Hansen, Lars Valentin; Dragsted, Casper Rokkjær; Peytz, Nina; Gehrchen, Martin; Dahl, Benny

    2016-05-01

    Cross-sectional analyses on a consecutive, prospective cohort. To evaluate the ability of the Scoliosis Research Society (SRS)-Schwab Adult Spinal Deformity Classification to group patients by widely used health-related quality-of-life (HRQOL) scores and examine possible confounding variables. The SRS-Schwab Adult Spinal Deformity Classification includes sagittal modifiers considered important for HRQOL and the clinical impact of the classification has been validated in patients from the International Spine Study Group database; however, equivocal results were reported for the Pelvic Tilt modifier and potential confounding variables were not evaluated. Between March 2013 and May 2014, all adult spinal deformity patients from our outpatient clinic with sufficient radiographs were prospectively enrolled. Analyses of HRQOL variance and post hoc analyses were performed for each SRS-Schwab modifier. Age, history of spine surgery, and aetiology of spinal deformity were considered potential confounders and their influence on the association between SRS-Schwab modifiers and aggregated Oswestry Disability Index (ODI) scores was evaluated with multivariate proportional odds regressions. P values were adjusted for multiple testing. Two hundred ninety-two of 460 eligible patients were included for analyses. The SRS-Schwab Classification significantly discriminated HRQOL scores between normal and abnormal sagittal modifier classifications. Individual grade comparisons showed equivocal results; however, Pelvic Tilt grade + versus ++ did not discriminate patients according to any HRQOL score. All modifiers showed significant proportional odds for worse aggregated ODI scores with increasing grade levels and the effects were robust to confounding. However, age group and aetiology had individual significant effects. The SRS-Schwab sagittal modifiers reliably grouped patients graded 0 versus +/++ according to the most widely used HRQOL scores and the effects of increasing grade level on odds for worse ODI scores remained robust to confounding variables.

  5. Correction of confounding bias in non-randomized studies by appropriate weighting.

    Science.gov (United States)

    Schmoor, Claudia; Gall, Christine; Stampf, Susanne; Graf, Erika

    2011-03-01

    In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Confounding Underlies the Apparent Month of Birth Effect in Multiple Sclerosis

    OpenAIRE

    Fiddes, Barnaby; Wason, James; Kemppinen, Anu; Ban, Maria; Compston, Alastair; Sawcer, Stephen

    2013-01-01

    Objective: Several groups have reported apparent association between month of birth and multiple sclerosis. We sought to test the extent to which such studies might be confounded by extraneous variables such as year and place of birth. Methods: Using national birth statistics from 2 continents, we assessed the evidence for seasonal variations in birth rate and tested the extent to which these are subject to regional and temporal variation. We then established the age and regional origin distrib...

  7. Using ecological propensity score to adjust for missing confounders in small area studies.

    Science.gov (United States)

    Wang, Yingbo; Pirani, Monica; Hansell, Anna L; Richardson, Sylvia; Blangiardo, Marta

    2017-11-09

    Small area ecological studies are commonly used in epidemiology to assess the impact of area level risk factors on health outcomes when data are only available in an aggregated form. However, the resulting estimates are often biased due to unmeasured confounders, which typically are not available from the standard administrative registries used for these studies. Extra information on confounders can be provided through external data sets such as surveys or cohorts, where the data are available at the individual level rather than at the area level; however, such data typically lack the geographical coverage of administrative registries. We develop a framework of analysis which combines ecological and individual level data from different sources to provide an adjusted estimate of area level risk factors which is less biased. Our method (i) summarizes all available individual level confounders into an area level scalar variable, which we call ecological propensity score (EPS), (ii) implements a hierarchical structured approach to impute the values of EPS whenever they are missing, and (iii) includes the estimated and imputed EPS into the ecological regression linking the risk factors to the health outcome. Through a simulation study, we show that integrating individual level data into small area analyses via EPS is a promising method to reduce the bias intrinsic in ecological studies due to unmeasured confounders; we also apply the method to a real case study to evaluate the effect of air pollution on coronary heart disease hospital admissions in Greater London. © The Author 2017. Published by Oxford University Press.

  8. Poppers, Kaposi's sarcoma, and HIV infection: empirical example of a strong confounding effect?

    Science.gov (United States)

    Morabia, A

    1995-01-01

    Are there empirical examples of strong confounding effects? Textbooks usually show examples of weak confounding or use hypothetical examples of strong confounding to illustrate the paradoxical consequences of not separating out the effect of the studied exposure from that of a second factor acting as a confounder. HIV infection is a candidate strong confounder of the spuriously high association reported between consumption of poppers, a sexual stimulant, and the risk of Kaposi's sarcoma in the early phase of the AIDS epidemic. To examine this hypothesis, assumptions must be made about the prevalence of HIV infection among cases of Kaposi's sarcoma and about the prevalence of heavy popper consumption according to HIV infection in cases and controls. Results show that HIV infection may have confounded the poppers-Kaposi's sarcoma association. However, it cannot be ruled out that HIV did not qualify as a confounder because it was either an intermediate variable or an effect modifier of the association between popper inhalation and Kaposi's sarcoma. This example provides a basis to discuss the mechanism by which confounding occurs as well as the practical importance of confounding in epidemiologic research.

  9. Sensitivity analysis for unobserved confounding of direct and indirect effects using uncertainty intervals.

    Science.gov (United States)

    Lindmark, Anita; de Luna, Xavier; Eriksson, Marie

    2018-05-10

    To estimate direct and indirect effects of an exposure on an outcome from observed data, strong assumptions about unconfoundedness are required. Since these assumptions cannot be tested using the observed data, a mediation analysis should always be accompanied by a sensitivity analysis of the resulting estimates. In this article, we propose a sensitivity analysis method for parametric estimation of direct and indirect effects when the exposure, mediator, and outcome are all binary. The sensitivity parameters consist of the correlations between the error terms of the exposure, mediator, and outcome models. These correlations are incorporated into the estimation of the model parameters and identification sets are then obtained for the direct and indirect effects for a range of plausible correlation values. We take the sampling variability into account through the construction of uncertainty intervals. The proposed method is able to assess sensitivity to both mediator-outcome confounding and confounding involving the exposure. To illustrate the method, we apply it to a mediation study based on the data from the Swedish Stroke Register (Riksstroke). An R package that implements the proposed method is available. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Pre-Analytical Parameters Affecting Vascular Endothelial Growth Factor Measurement in Plasma: Identifying Confounders.

    Science.gov (United States)

    Walz, Johanna M; Boehringer, Daniel; Deissler, Heidrun L; Faerber, Lothar; Goepfert, Jens C; Heiduschka, Peter; Kleeberger, Susannah M; Klettner, Alexa; Krohne, Tim U; Schneiderhan-Marra, Nicole; Ziemssen, Focke; Stahl, Andreas

    2016-01-01

    Vascular endothelial growth factor-A (VEGF-A) is intensively investigated in various medical fields. However, comparing VEGF-A measurements is difficult because sample acquisition and pre-analytic procedures differ between studies. We therefore investigated which variables act as confounders of VEGF-A measurements. Following a standardized protocol, blood was taken at three clinical sites from six healthy participants (one male and one female participant at each center) twice one week apart. The following pre-analytical parameters were varied in order to analyze their impact on VEGF-A measurements: analyzing center, anticoagulant (EDTA vs. PECT / CTAD), cannula (butterfly vs. neonatal), type of centrifuge (swing-out vs. fixed-angle), time before and after centrifugation, filling level (completely filled vs. half-filled tubes) and analyzing method (ELISA vs. multiplex bead array). Additionally, intrapersonal variations over time and sex differences were explored. Statistical analysis was performed using a linear regression model. The following parameters were identified as statistically significant independent confounders of VEGF-A measurements: analyzing center, anticoagulant, centrifuge, analyzing method and sex of the proband. The following parameters were no significant confounders in our data set: intrapersonal variation over one week, cannula, time before and after centrifugation and filling level of collection tubes. VEGF-A measurement results can be affected significantly by the identified pre-analytical parameters. We recommend the use of CTAD anticoagulant, a standardized type of centrifuge and one central laboratory using the same analyzing method for all samples.

  11. Pre-Analytical Parameters Affecting Vascular Endothelial Growth Factor Measurement in Plasma: Identifying Confounders.

    Directory of Open Access Journals (Sweden)

    Johanna M Walz

    Vascular endothelial growth factor-A (VEGF-A) is intensively investigated in various medical fields. However, comparing VEGF-A measurements is difficult because sample acquisition and pre-analytic procedures differ between studies. We therefore investigated which variables act as confounders of VEGF-A measurements. Following a standardized protocol, blood was taken at three clinical sites from six healthy participants (one male and one female participant at each center) twice one week apart. The following pre-analytical parameters were varied in order to analyze their impact on VEGF-A measurements: analyzing center, anticoagulant (EDTA vs. PECT / CTAD), cannula (butterfly vs. neonatal), type of centrifuge (swing-out vs. fixed-angle), time before and after centrifugation, filling level (completely filled vs. half-filled tubes) and analyzing method (ELISA vs. multiplex bead array). Additionally, intrapersonal variations over time and sex differences were explored. Statistical analysis was performed using a linear regression model. The following parameters were identified as statistically significant independent confounders of VEGF-A measurements: analyzing center, anticoagulant, centrifuge, analyzing method and sex of the proband. The following parameters were no significant confounders in our data set: intrapersonal variation over one week, cannula, time before and after centrifugation and filling level of collection tubes. VEGF-A measurement results can be affected significantly by the identified pre-analytical parameters. We recommend the use of CTAD anticoagulant, a standardized type of centrifuge and one central laboratory using the same analyzing method for all samples.

  12. Resting-state FMRI confounds and cleanup

    Science.gov (United States)

    Murphy, Kevin; Birn, Rasmus M.; Bandettini, Peter A.

    2013-01-01

    The goal of resting-state functional magnetic resonance imaging (FMRI) is to investigate the brain’s functional connections by using the temporal similarity between blood oxygenation level dependent (BOLD) signals in different regions of the brain “at rest” as an indicator of synchronous neural activity. Since this measure relies on the temporal correlation of FMRI signal changes between different parts of the brain, any non-neural activity-related process that affects the signals will influence the measure of functional connectivity, yielding spurious results. To understand the sources of these resting-state FMRI confounds, this article describes the origins of the BOLD signal in terms of MR physics and cerebral physiology. Potential confounds arising from motion, cardiac and respiratory cycles, arterial CO2 concentration, blood pressure/cerebral autoregulation, and vasomotion are discussed. Two classes of techniques to remove confounds from resting-state BOLD time series are reviewed: 1) those utilising external recordings of physiology and 2) data-based cleanup methods that only use the resting-state FMRI data itself. Further methods that remove noise from functional connectivity measures at a group level are also discussed. For successful interpretation of resting-state FMRI comparisons and results, noise cleanup is an often overlooked but essential step in the analysis pipeline. PMID:23571418
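
    The data-based cleanup class can be illustrated by simple nuisance regression; the array shapes and regressors below are toy assumptions, and real pipelines add temporal filtering, censoring, and physiological recordings:

```python
import numpy as np

rng = np.random.default_rng(6)
t, n_voxels = 300, 1000
bold = rng.normal(size=(t, n_voxels))             # voxel time series (toy data)
motion = rng.normal(size=(t, 6))                  # 6 rigid-body motion parameters
nuisance = np.column_stack([np.ones(t), motion])  # intercept + nuisance regressors

# Residualize each voxel's signal against the nuisance design:
# bold_clean = bold - N @ (N^+ @ bold)
beta = np.linalg.lstsq(nuisance, bold, rcond=None)[0]
bold_clean = bold - nuisance @ beta

# Functional connectivity between two voxels, computed after cleanup
fc = np.corrcoef(bold_clean[:, 0], bold_clean[:, 1])[0, 1]
print(f"connectivity estimate after nuisance regression: {fc:.3f}")
```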

  13. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
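
    A sketch of a sequential conditional mean model fitted with GEE via statsmodels, on simulated longitudinal data; the variable names and data-generating process are illustrative assumptions, not the paper's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for pid in range(500):
    a = y = 0.0
    for t in range(5):
        l = 0.5 * a + rng.normal()               # time-varying covariate, affected by past exposure
        a_prev, y_prev = a, y
        a = float(0.8 * l + rng.normal() > 0)    # current exposure depends on the covariate
        y = 0.3 * a + 0.5 * l + rng.normal()     # outcome; effect of current exposure is 0.3
        rows.append(dict(id=pid, t=t, a=a, a_prev=a_prev, y_prev=y_prev, l=l, y=y))
df = pd.DataFrame(rows)

# SCMM idea: regress the outcome on current exposure while conditioning on
# prior exposure, prior outcome, and the time-varying covariate.
model = smf.gee("y ~ a + a_prev + y_prev + l", groups="id", data=df,
                family=sm.families.Gaussian())
print(model.fit().params["a"])  # estimate of the effect of current exposure
```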

  14. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.

  15. Detection rates of geckos in visual surveys: Turning confounding variables into useful knowledge

    Science.gov (United States)

    Lardner, Bjorn; Rodda, Gordon H.; Yackel Adams, Amy A.; Savidge, Julie A.; Reed, Robert N.

    2016-01-01

    Transect surveys without some means of estimating detection probabilities generate population size indices prone to bias because survey conditions differ in time and space. Knowing what causes such bias can help guide the collection of relevant survey covariates, correct the survey data, anticipate situations where bias might be unacceptably large, and elucidate the ecology of target species. We used negative binomial regression to evaluate confounding variables for gecko (primarily Hemidactylus frenatus and Lepidodactylus lugubris) counts on 220-m-long transects surveyed at night, primarily for snakes, on 9,475 occasions. Searchers differed in gecko detection rates by up to a factor of six. The worst and best headlamps differed by a factor of at least two. Strong winds had a negative effect potentially as large as those of searchers or headlamps. More geckos were seen during wet weather conditions, but the effect size was small. Compared with a detection nadir during waxing gibbous (nearly full) moons above the horizon, we saw 28% more geckos during waning crescent moons below the horizon. A sine function suggested that we saw 24% more geckos at the end of the wet season than at the end of the dry season. Fluctuations on a longer timescale also were verified. Disturbingly, corrected data exhibited strong short-term fluctuations that covariates apparently failed to capture. Although some biases can be addressed with measured covariates, others will be difficult to eliminate as a significant source of error in long-term monitoring programs.
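
    The covariate-adjustment idea here can be sketched with a negative binomial GLM; the covariates, effect sizes, and dispersion below are fabricated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
searcher = rng.integers(0, 3, n)             # searcher identity (0, 1, 2)
wind = rng.normal(size=n)                    # wind speed, standardized
mu = np.exp(2.0 - 0.4 * searcher - 0.3 * wind)
# Overdispersed counts with mean mu and NB2 variance mu + mu^2 / 5
counts = rng.negative_binomial(n=5, p=5 / (5 + mu))

# Dummy-code searcher identity and fit the count model
d1 = (searcher == 1).astype(float)
d2 = (searcher == 2).astype(float)
X = sm.add_constant(np.column_stack([d1, d2, wind]))
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(fit.params)  # covariate effects usable to correct the raw index counts
```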

  16. Quantitative assessment of unobserved confounding is mandatory in nonrandomized intervention studies

    NARCIS (Netherlands)

    Groenwold, R H H; Hak, E; Hoes, A W

    OBJECTIVE: In nonrandomized intervention studies unequal distribution of patient characteristics in the groups under study may hinder comparability of prognosis and therefore lead to confounding bias. Our objective was to review methods to control for observed confounding, as well as unobserved

  17. Interpretational confounding is due to misspecification, not to type of indicator: comment on Howell, Breivik, and Wilcox (2007).

    Science.gov (United States)

    Bollen, Kenneth A

    2007-06-01

    R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal (formative) indicators rests on several claims: (a) A latent variable exists apart from the model when there are effect (reflective) indicators but not when there are causal (formative) indicators, (b) causal (formative) indicators need not have the same consequences, (c) causal (formative) indicators are inherently subject to interpretational confounding, and (d) a researcher cannot detect interpretational confounding when using causal (formative) indicators. This article shows that each claim is false. Rather, interpretational confounding is more a problem of structural misspecification of a model combined with an underidentified model that leaves these misspecifications undetected. Interpretational confounding does not occur if the model is correctly specified whether a researcher has causal (formative) or effect (reflective) indicators. It is the validity of a model not the type of indicator that determines the potential for interpretational confounding. Copyright 2007 APA, all rights reserved.

  18. 'Mechanical restraint-confounders, risk, alliance score'

    DEFF Research Database (Denmark)

    Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise

    2017-01-01

    AIM: To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical restraint. METHODS: The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two closed forensic mental health inpatient units. RESULTS: The three sub-scales (Confounders, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and useable risk assessment instrument. CONCLUSIONS: MR-CRAS contains 18 clinically valid items…

  19. Confounding in statistical mediation analysis: What it is and how to address it.

    Science.gov (United States)

    Valente, Matthew J; Pelham, William E; Smyth, Heather; MacKinnon, David P

    2017-11-01

    Psychology researchers are often interested in mechanisms underlying how randomized interventions affect outcomes such as substance use and mental health. Mediation analysis is a common statistical method for investigating psychological mechanisms that has benefited from exciting new methodological improvements over the last 2 decades. One of the most important new developments is methodology for estimating causal mediated effects using the potential outcomes framework for causal inference. Potential outcomes-based methods developed in epidemiology and statistics have important implications for understanding psychological mechanisms. We aim to provide a concise introduction to and illustration of these new methods and emphasize the importance of confounder adjustment. First, we review the traditional regression approach for estimating mediated effects. Second, we describe the potential outcomes framework. Third, we define what a confounder is and how the presence of a confounder can provide misleading evidence regarding mechanisms of interventions. Fourth, we describe experimental designs that can help rule out confounder bias. Fifth, we describe new statistical approaches to adjust for measured confounders of the mediator-outcome relation and sensitivity analyses to probe effects of unmeasured confounders on the mediated effect. All approaches are illustrated with application to a real counseling intervention dataset. Counseling psychologists interested in understanding the causal mechanisms of their interventions can benefit from incorporating the most up-to-date techniques into their mediation analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
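
    The traditional regression (product-of-coefficients) approach reviewed first in this article, sketched on simulated data with an omitted mediator-outcome confounder to show the resulting bias; all labels and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10_000
x = rng.integers(0, 2, n).astype(float)      # randomized intervention
u = rng.normal(size=n)                       # unmeasured mediator-outcome confounder
m = 0.5 * x + u + rng.normal(size=n)         # mediator
y = 0.4 * m + 0.2 * x + u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(np.column_stack([np.ones(n), x]), m)[1]      # x -> m path (unbiased: x randomized)
b = ols(np.column_stack([np.ones(n), m, x]), y)[1]   # m -> y path, adjusted for x only
print(f"estimated mediated effect a*b = {a * b:.3f} (true value 0.5 * 0.4 = 0.20)")
# Omitting u biases b upward, illustrating why randomizing x alone does not
# protect the mediated effect and why confounder adjustment matters.
```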

  20. Sensitivity analysis for the effects of multiple unmeasured confounders.

    Science.gov (United States)

    Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate

    2016-09-01

    Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
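
    The paper's central point is straightforward to reproduce by simulation. A minimal Python sketch under invented parameters (50 equicorrelated unmeasured confounders, each only weakly linked to exposure and outcome; not the study's actual simulation design):

      # Many weak, mutually correlated unmeasured confounders jointly bias
      # a truly null exposure-outcome association.
      import numpy as np

      rng = np.random.default_rng(1)
      n, k, rho = 5_000, 50, 0.5
      shared = rng.normal(size=(n, 1))               # source of correlation
      U = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(n, k))

      weak = 0.05                                    # weak individual effects
      x = weak * U.sum(axis=1) + rng.normal(size=n)
      y = weak * U.sum(axis=1) + rng.normal(size=n)  # true effect of x: zero

      slope = np.polyfit(x, y, 1)[0]
      print(f"crude slope of y on x: {slope:.3f} (truth: 0)")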

  1. CONFOUNDING STRUCTURE OF TWO-LEVEL NONREGULAR FACTORIAL DESIGNS

    Institute of Scientific and Technical Information of China (English)

    Ren Junbai

    2012-01-01

    In design theory, the alias structure of regular fractional factorial designs is elegantly described with group theory. However, this approach cannot be applied to nonregular designs directly. For an arbitrary nonregular design, a natural question is how to describe the confounding relations between its effects: is there any inner structure similar to that of regular designs? The aim of this article is to answer this basic question. Using coefficients of the indicator function, the confounding structure of nonregular fractional factorial designs is obtained as linear constraints on the values of effects. A method to estimate the sparse significant effects in an arbitrary nonregular design is given through an example.

  2. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.

    Science.gov (United States)

    Fritz, Matthew S; Kenny, David A; MacKinnon, David P

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.

  3. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    Science.gov (United States)

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
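
    The offsetting biases described above are easy to reproduce by simulation. A minimal Python sketch with invented coefficients and a randomized (hence perfectly reliable) independent variable:

      # Measurement error in the mediator attenuates b; an omitted
      # m -> y confounder inflates it; together the biases can offset.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 200_000
      x = rng.binomial(1, 0.5, n)
      u = rng.normal(size=n)                         # omitted confounder
      m = 0.5 * x + 0.6 * u + rng.normal(size=n)
      y = 0.4 * m + 0.6 * u + rng.normal(size=n)     # true b = 0.4
      m_obs = m + rng.normal(size=n)                 # unreliable mediator

      def b_hat(med, cols):
          """Coefficient on the mediator in an OLS outcome model."""
          X = np.column_stack([np.ones(n), med] + cols)
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      print(f"error only (u adjusted):   {b_hat(m_obs, [x, u]):.3f}")  # attenuated
      print(f"confounding only (true m): {b_hat(m, [x]):.3f}")         # inflated
      print(f"both violations:           {b_hat(m_obs, [x]):.3f}  (truth: 0.400)")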

  4. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...

  5. Examining the role of unmeasured confounding in mediation analysis with genetic and genomic applications.

    Science.gov (United States)

    Lutz, Sharon M; Thwing, Annie; Schmiege, Sarah; Kroehl, Miranda; Baker, Christopher D; Starling, Anne P; Hokanson, John E; Ghosh, Debashis

    2017-07-19

    In mediation analysis, if unmeasured confounding is present, the estimates for the direct and mediated effects may be over- or underestimated. Most methods for the sensitivity analysis of unmeasured confounding in mediation have focused on the mediator-outcome relationship. The Umediation R package enables the user to simulate unmeasured confounding of the exposure-mediator, exposure-outcome, and mediator-outcome relationships in order to see how the results of the mediation analysis would change in the presence of unmeasured confounding. We apply the Umediation package to the Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) study to examine the role of unmeasured confounding due to population stratification on the effect of a single nucleotide polymorphism (SNP) in the CHRNA5/3/B4 locus on pulmonary function decline as mediated by cigarette smoking. Umediation is a flexible R package that examines the role of unmeasured confounding in mediation analysis allowing for normally distributed or Bernoulli distributed exposures, outcomes, mediators, measured confounders, and unmeasured confounders. Umediation also accommodates multiple measured confounders, multiple unmeasured confounders, and allows for a mediator-exposure interaction on the outcome. Umediation is available as an R package at https://github.com/SharonLutz/Umediation. A tutorial on how to install and use the Umediation package is available in Additional file 1.

  6. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Background: The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods: In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results: We show the advantage of this approach when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion: If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
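
    The distinctive step of purposeful selection is the change-in-estimate check that retains a covariate when dropping it shifts another coefficient appreciably. A minimal Python sketch of that single step (not the authors' SAS macro), using statsmodels, simulated data, and an illustrative 20% cutoff:

      # Change-in-estimate confounder check from purposeful selection.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 2_000
      z = rng.normal(size=n)                        # candidate confounder
      x = 0.8 * z + rng.normal(size=n)              # risk factor of interest
      p = 1 / (1 + np.exp(-(0.5 * x + 0.5 * z)))
      y = rng.binomial(1, p)

      full = sm.Logit(y, sm.add_constant(np.column_stack([x, z]))).fit(disp=0)
      reduced = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

      b_full, b_red = full.params[1], reduced.params[1]
      delta = abs(b_red - b_full) / abs(b_full)
      print(f"beta(x) with z: {b_full:.3f}, without z: {b_red:.3f}")
      print("keep z as confounder" if delta > 0.20 else "drop z")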

  7. An introduction to sensitivity analysis for unobserved confounding in nonexperimental prevention research.

    Science.gov (United States)

    Liu, Weiwei; Kuramoto, S Janet; Stuart, Elizabeth A

    2013-12-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are often left to be answered through nonexperimental studies because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most nonexperimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for nonexperimental studies. The implementation of sensitivity analysis can help increase confidence in results from nonexperimental studies and better inform prevention researchers and policy makers regarding potential intervention targets.
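
    For intuition, one simple sensitivity summary that can be computed by hand is the E-value (VanderWeele and Ding, 2017). It is not necessarily among the seven techniques reviewed above, but it illustrates the genre: the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need with both exposure and outcome to explain away an observed association.

      # E-value for an observed risk ratio (VanderWeele & Ding, 2017).
      import math

      def e_value(rr: float) -> float:
          rr = max(rr, 1 / rr)        # use the side of the ratio away from 1
          return rr + math.sqrt(rr * (rr - 1))

      print(f"E-value for RR = 2.0: {e_value(2.0):.2f}")   # approx. 3.41
      print(f"E-value for RR = 0.5: {e_value(0.5):.2f}")   # same, by symmetry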

  8. Multisample adjusted U-statistics that account for confounding covariates.

    Science.gov (United States)

    Satten, Glen A; Kong, Maiying; Datta, Somnath

    2018-06-19

    Multisample U-statistics encompass a wide class of test statistics that allow the comparison of 2 or more distributions. U-statistics are especially powerful because they can be applied to both numeric and nonnumeric data, eg, ordinal and categorical data where a pairwise similarity or distance-like measure between categories is available. However, when comparing the distribution of a variable across 2 or more groups, observed differences may be due to confounding covariates. For example, in a case-control study, the distribution of exposure in cases may differ from that in controls entirely because of variables that are related to both exposure and case status and are distributed differently among case and control participants. We propose to use individually reweighted data (ie, using the stratification score for retrospective data or the propensity score for prospective data) to construct adjusted U-statistics that can test the equality of distributions across 2 (or more) groups in the presence of confounding covariates. Asymptotic normality of our adjusted U-statistics is established and a closed form expression of their asymptotic variance is presented. The utility of our approach is demonstrated through simulation studies, as well as in an analysis of data from a case-control study conducted among African-Americans, comparing whether the similarity in haplotypes (ie, sets of adjacent genetic loci inherited from the same parent) occurring in a case and a control participant differs from the similarity in haplotypes occurring in 2 control participants. Copyright © 2018 John Wiley & Sons, Ltd.
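
    A simplified sketch of the idea, not the authors' estimator or its variance formula: reweight a two-sample Mann-Whitney kernel by inverse propensity scores estimated from a single confounding covariate, so the groups are compared as if the covariate were balanced. Data and parameters are invented.

      # Propensity-weighted two-sample U-statistic (Mann-Whitney kernel).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 1_000
      z = rng.normal(size=n)                        # confounding covariate
      g = rng.binomial(1, 1 / (1 + np.exp(-z)))     # group depends on z
      y = z + rng.normal(size=n)                    # outcome depends on z only

      ps = sm.Logit(g, sm.add_constant(z)).fit(disp=0).predict()
      w = np.where(g == 1, 1 / ps, 1 / (1 - ps))    # inverse propensity weights

      y1, w1 = y[g == 1], w[g == 1]
      y0, w0 = y[g == 0], w[g == 0]
      kernel = (y1[:, None] > y0[None, :]).astype(float)  # h = 1{y1 > y0}
      pair_w = w1[:, None] * w0[None, :]
      print(f"unadjusted U: {kernel.mean():.3f}")
      print(f"weighted U:   {(pair_w * kernel).sum() / pair_w.sum():.3f} (null: 0.5)")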

  9. Carotta: Revealing Hidden Confounder Markers in Metabolic Breath Profiles

    Directory of Open Access Journals (Sweden)

    Anne-Christin Hauschild

    2015-06-01

    Computational breath analysis is a growing research area aiming at identifying volatile organic compounds (VOCs) in human breath to assist medical diagnostics of the next generation. While inexpensive and non-invasive bioanalytical technologies for metabolite detection in exhaled air and bacterial/fungal vapor exist and the first studies on the power of supervised machine learning methods for profiling of the resulting data were conducted, we lack methods to extract hidden data features emerging from confounding factors. Here, we present Carotta, a new cluster analysis framework dedicated to uncovering such hidden substructures by sophisticated unsupervised statistical learning methods. We study the power of transitivity clustering and hierarchical clustering to identify groups of VOCs with similar expression behavior over most patient breath samples and/or groups of patients with a similar VOC intensity pattern. This enables the discovery of dependencies between metabolites. On the one hand, this allows us to eliminate the effect of potential confounding factors hindering disease classification, such as smoking. On the other hand, we may also identify VOCs associated with disease subtypes or concomitant diseases. Carotta is an open source software with an intuitive graphical user interface promoting data handling, analysis and visualization. The back-end is designed to be modular, allowing for easy extensions with plugins in the future, such as new clustering methods and statistics. It does not require much prior knowledge or technical skills to operate. We demonstrate its power and applicability by means of one artificial dataset. We also apply Carotta exemplarily to a real-world example dataset on chronic obstructive pulmonary disease (COPD). While the artificial data are utilized as a proof of concept, we demonstrate how Carotta finds candidate markers in our real dataset associated with confounders rather than the primary disease (COPD)...

  10. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams

    Directory of Open Access Journals (Sweden)

    Yuanyuan Yu

    2017-12-01

    Background: Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for alternative methods. Methods: Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, then the bias and standard error were used to evaluate the performances of different strategies. Results: Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility and the optimal...

  11. Beyond total treatment effects in randomised controlled trials: Baseline measurement of intermediate outcomes needed to reduce confounding in mediation investigations.

    Science.gov (United States)

    Landau, Sabine; Emsley, Richard; Dunn, Graham

    2018-06-01

    Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Estimates of causal mediation effects derived by approach (A) will be biased if one of...

  12. Some confounding factors in the study of mortality and occupational exposures

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1982-01-01

    With the recent interest in the study of occupational exposures, the impact of certain selective biases in the groups studied is a matter of some concern. In this paper, data from the Hanford nuclear facility population (southeastern Washington State, 1947-1976), which includes many radiation workers, are used to illustrate a method for examining the effect on mortality of such potentially confounding variables as calendar year, length of time since entering the industry, employment status, length of employment, job category, and initial employment year. The analysis, which is based on the Mantel-Haenszel procedure as adapted for a prospective study, differs from most previous studies of occupational variables which have relied primarily on comparing standardized mortality ratios (utilizing an external control) for various subgroups of the population. Results of this analysis confirm other studies in that reduced death rates are observed for early years of follow-up and for those with higher socioeconomic status (as indicated by job category). In addition, workers employed less than two years and especially terminated workers are found to have elevated death rates as compared with the remainder of the study population. It is important that such correlations be taken into account in planning and interpreting analyses of the effects of occupational exposure.
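
    For readers unfamiliar with the Mantel-Haenszel approach referred to above, a minimal sketch with invented stratum counts shows how a summary risk ratio is pooled across strata of a potential confounder such as employment status:

      # Mantel-Haenszel summary risk ratio over confounder strata.
      # Each stratum: (exposed deaths, exposed at risk,
      #                unexposed deaths, unexposed at risk) -- invented counts.
      strata = [
          (10, 1000, 30, 2000),     # e.g., currently employed
          (40, 500, 90, 1000),      # e.g., terminated workers
      ]

      num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
      den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
      print(f"Mantel-Haenszel risk ratio: {num / den:.2f}")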

  13. An Introduction to Sensitivity Analysis for Unobserved Confounding in Non-Experimental Prevention Research

    Science.gov (United States)

    Kuramoto, S. Janet; Stuart, Elizabeth A.

    2013-01-01

    Despite the fact that randomization is the gold standard for estimating causal relationships, many questions in prevention science are left to be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example, we examine the sensitivity of the association between maternal suicide and offspring's risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall, the association between maternal suicide and offspring's hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282

  14. Cardiovascular health, traffic-related air pollution and noise: are associations mutually confounded? A systematic review.

    Science.gov (United States)

    Tétreault, Louis-François; Perron, Stéphane; Smargiassi, Audrey

    2013-10-01

    This review assessed the confounding effect of one traffic-related exposure (noise or air pollutants) on the association between the other exposure and cardiovascular outcomes. A systematic review was conducted with the databases Medline and Embase. The confounding effects in studies were assessed by using the change in the estimate with a 10 % cutoff point. The influence on the change in the estimate of the quality of the studies, the exposure assessment methods and the correlation between road noise and air pollution was also assessed. Nine publications were identified. For most studies, the specified confounders produced changes in estimates below the 10 % cutoff. For both noise and pollutants, the quality of the study and of the exposure assessment did not seem to influence the confounding effects. Results from this review suggest that confounding of cardiovascular effects by noise or air pollutants is low, though with further improvements in exposure assessment, the situation may change. More studies using pollution indicators specific to road traffic are needed to properly assess if noise and air pollution are subject to confounding.

  15. Controlling confounding by frailty when estimating influenza vaccine effectiveness using predictors of dependency in activities of daily living.

    Science.gov (United States)

    Zhang, Henry T; McGrath, Leah J; Wyss, Richard; Ellis, Alan R; Stürmer, Til

    2017-12-01

    To improve control of confounding by frailty when estimating the effect of influenza vaccination on all-cause mortality by controlling for a published set of claims-based predictors of dependency in activities of daily living (ADL). Using Medicare claims data, a cohort of beneficiaries >65 years of age was followed from September 1, 2007, to April 12, 2008, with covariates assessed in the 6 months before follow-up. We estimated Cox proportional hazards models of all-cause mortality, with influenza vaccination as a time-varying exposure. We controlled for common demographics, comorbidities, and health care utilization variables and then added 20 ADL dependency predictors. To gauge residual confounding, we estimated pre-influenza season hazard ratios (HRs) between September 1, 2007 and January 5, 2008, which should be 1.0 in the absence of bias. A cohort of 2 235 140 beneficiaries was created, with a median follow-up of 224 days. Overall, 52% were vaccinated and 4% died during follow-up. During the pre-influenza season period, controlling for demographics, comorbidities, and health care use resulted in a HR of 0.66 (0.64, 0.67). Adding the ADL dependency predictors moved the HR to 0.68 (0.67, 0.70). Controlling for demographics and ADL dependency predictors alone resulted in a HR of 0.68 (0.66, 0.70). Results were consistent with those in the literature, with significant uncontrolled confounding after adjustment for demographics, comorbidities, and health care use. Adding ADL dependency predictors moved HRs slightly closer to the null. Of the comorbidities, health care use variables, and ADL dependency predictors, the last set reduced confounding most. However, substantial uncontrolled confounding remained. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Role of environmental confounding in the association between FKBP5 and first-episode psychosis

    Directory of Open Access Journals (Sweden)

    Olesya eAjnakina

    2014-07-01

    Background: Failure to account for the etiological diversity that typically occurs in psychiatric cohorts may increase the potential for confounding, as a proportion of genetic variance will be specific to exposures that have a variable distribution in cases. This study investigated whether minimizing the potential for such confounding strengthened the evidence for a genetic candidate currently unsupported at the genome-wide level. Methods: 291 first-episode psychosis cases from South London, UK, and 218 unaffected controls were evaluated for a functional polymorphism at the rs1360780 locus in FKBP5. The relationship between FKBP5 and psychosis was modelled using logistic regression. Cannabis use (Cannabis Experiences Questionnaire) and parental separation (Childhood Experience of Care and Abuse Questionnaire) were modelled as confounders in the analysis. Results: Association at rs1360780 was not detected until the effects of the two environmental factors had been adjusted for in the model (OR=2.81, 95% CI 1.23-6.43, p=0.02). A statistical interaction between rs1360780 and parental separation was confirmed by stratified tests (OR=2.8, p=0.02 vs. OR=0.89, p=0.80). The genetic main effect was directionally consistent with findings in other (stress-related) clinical phenotypes. Moreover, the variation in effect magnitude was explained by the level of power associated with different cannabis constructs used in the model (r=0.95). Conclusions: Our results suggest that the extent to which genetic variants in FKBP5 can influence susceptibility to psychosis may depend on the other etiological factors involved. This finding requires further validation in other large independent cohorts. Potentially this work could have translational implications, as the ability to discriminate between genetic etiologies, based on a case-by-case understanding of exposure history, would confer an important clinical advantage that would benefit the delivery of personalizable treatment...

  17. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    Science.gov (United States)

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for alternative methods. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, then the bias and standard error were used to evaluate the performances of different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of outcome, which...
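
    A minimal Python sketch of the IPW-based-MSM idea discussed above: estimate stabilized weights from a propensity model, then fit a weighted outcome model of Y on X alone. One binary exposure and one confounder; all parameter values are invented. Note that the weighted model targets a marginal odds ratio, which by non-collapsibility need not equal the conditional coefficient used to simulate the data.

      # Stabilized inverse probability weights + marginal structural model.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 50_000
      c = rng.normal(size=n)                               # confounder
      x = rng.binomial(1, 1 / (1 + np.exp(-c)))            # exposure
      y = rng.binomial(1, 1 / (1 + np.exp(1.0 - 0.5 * x - 1.0 * c)))

      ps = sm.Logit(x, sm.add_constant(c)).fit(disp=0).predict()
      sw = np.where(x == 1, x.mean() / ps, (1 - x.mean()) / (1 - ps))

      msm = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial(),
                   freq_weights=sw).fit()
      print(f"MSM marginal log-odds ratio: {msm.params[1]:.3f}")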

  18. Hypnotics and mortality – confounding by disease and socioeconomic position

    DEFF Research Database (Denmark)

    Kriegbaum, Margit; Hendriksen, Carsten; Vass, Mikkel

    2015-01-01

    Purpose: The aim of this cohort study of 10 527 Danish men was to investigate the extent to which the association between hypnotics and mortality is confounded by several markers of disease and living conditions. Methods: Exposure was purchases of hypnotics 1995–1999 (“low users”: 150 or less defined ... % confidence intervals (CI). Results: When covariates were entered one at a time, the changes in HR estimates showed that psychiatric disease, socioeconomic position and substance abuse reduced the excess risk by 17–36% in the low user group and by 45–52% in the high user group. Somatic disease, intelligence ... point at psychiatric disease, substance abuse and socioeconomic position as potential confounding factors partly explaining the association between use of hypnotics and all-cause mortality.

  19. Confounding and Statistical Significance of Indirect Effects: Childhood Adversity, Education, Smoking, and Anxious and Depressive Symptomatology

    Directory of Open Access Journals (Sweden)

    Mashhood Ahmed Sheikh

    2017-08-01

    Full Text Available The life course perspective, the risky families model, and stress-and-coping models provide the rationale for assessing the role of smoking as a mediator in the association between childhood adversity and anxious and depressive symptomatology (ADS in adulthood. However, no previous study has assessed the independent mediating role of smoking in the association between childhood adversity and ADS in adulthood. Moreover, the importance of mediator-response confounding variables has rarely been demonstrated empirically in social and psychiatric epidemiology. The aim of this paper was to (i assess the mediating role of smoking in adulthood in the association between childhood adversity and ADS in adulthood, and (ii assess the change in estimates due to different mediator-response confounding factors (education, alcohol intake, and social support. The present analysis used data collected from 1994 to 2008 within the framework of the Tromsø Study (N = 4,530, a representative prospective cohort study of men and women. Seven childhood adversities (low mother's education, low father's education, low financial conditions, exposure to passive smoke, psychological abuse, physical abuse, and substance abuse distress were used to create a childhood adversity score. Smoking status was measured at a mean age of 54.7 years (Tromsø IV, and ADS in adulthood was measured at a mean age of 61.7 years (Tromsø V. Mediation analysis was used to assess the indirect effect and the proportion of mediated effect (% of childhood adversity on ADS in adulthood via smoking in adulthood. The test-retest reliability of smoking was good (Kappa: 0.67, 95% CI: 0.63; 0.71 in this sample. Childhood adversity was associated with a 10% increased risk of smoking in adulthood (Relative risk: 1.10, 95% CI: 1.03; 1.18, and both childhood adversity and smoking in adulthood were associated with greater levels of ADS in adulthood (p < 0.001. Smoking in adulthood did not significantly

  20. Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors

    International Nuclear Information System (INIS)

    Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk

    2010-01-01

    A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations

  1. Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors.

    Science.gov (United States)

    Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk

    2010-02-01

    A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
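
    A minimal sketch of the regression structure described above, with simulated signals and invented coefficients (the authors' data and calibration are not reproduced): adding HR and the TDB stiffness proxy to PAT improves the fit to simulated systolic BP.

      # Multiple regression of SBP on PAT plus two confounding factors.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 500
      pat = rng.normal(0.25, 0.03, n)          # pulse arrival time (s)
      hr = rng.normal(70, 10, n)               # heart rate (beats/min)
      tdb = rng.normal(0.30, 0.05, n)          # dicrotic-notch timing (s)
      sbp = 180 - 200 * pat + 0.3 * hr - 40 * tdb + rng.normal(0, 3, n)

      for name, cols in [("PAT only", [pat]), ("PAT + HR + TDB", [pat, hr, tdb])]:
          X = np.column_stack([np.ones(n)] + cols)
          beta = np.linalg.lstsq(X, sbp, rcond=None)[0]
          r = np.corrcoef(X @ beta, sbp)[0, 1]
          print(f"{name:>15}: r = {r:.3f}")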

  2. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies

    DEFF Research Database (Denmark)

    Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary K.

    2015-01-01

    Both polygenicity (many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from...
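
    A toy Python sketch of the regression at the heart of the method, with simulated summary statistics and invented parameters: the slope of the chi-square statistics on LD scores carries the polygenic signal, while an intercept above 1 flags confounding.

      # LD Score regression on simulated GWAS summary statistics.
      import numpy as np

      rng = np.random.default_rng(6)
      m, n_gwas, h2, inflation = 200_000, 10_000, 0.4, 1.1
      ld = rng.gamma(shape=2.0, scale=50.0, size=m)       # LD scores
      # E[chi2_j] = N * h2 * l_j / M + intercept; intercept > 1 => confounding.
      mean_chi2 = n_gwas * h2 * ld / m + inflation
      chi2 = mean_chi2 * rng.chisquare(1, size=m)         # noisy statistics

      slope, intercept = np.polyfit(ld, chi2, 1)
      print(f"intercept:   {intercept:.2f} (confounding if > 1)")
      print(f"h2 estimate: {slope * m / n_gwas:.2f} (truth: {h2})")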

  3. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

    It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
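
    In the linear case the index discussed above has a simple closed form: with first-stage moderation M = a0 + a1*X + a2*W + a3*X*W and outcome model Y = b0 + b1*M + c*X, the mediated effect at moderator value w is (a1 + a3*w)*b1, so the index of moderated mediation is a3*b1. A minimal simulated Python sketch (coefficients invented):

      # Index of moderated mediation in a first-stage moderated linear model.
      import numpy as np

      rng = np.random.default_rng(8)
      n = 100_000
      x = rng.binomial(1, 0.5, n)
      w = rng.normal(size=n)
      m = 0.3 * x + 0.2 * w + 0.5 * x * w + rng.normal(size=n)
      y = 0.4 * m + 0.1 * x + rng.normal(size=n)

      def fit(cols, target):
          X = np.column_stack([np.ones(n)] + cols)
          return np.linalg.lstsq(X, target, rcond=None)[0]

      a = fit([x, w, x * w], m)       # a[1] = a1, a[3] = a3
      b = fit([m, x], y)              # b[1] = b1
      print(f"index of moderated mediation: {a[3] * b[1]:.3f} (truth: 0.200)")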

  4. Confounding and exposure measurement error in air pollution epidemiology

    NARCIS (Netherlands)

    Sheppard, L.; Burnett, R.T.; Szpiro, A.A.; Kim, J.Y.; Jerrett, M.; Pope, C.; Brunekreef, B.|info:eu-repo/dai/nl/067548180

    2012-01-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution.

  5. Childhood trauma is not a confounder of the overlap between autistic and schizotypal traits: A study in a non-clinical adult sample.

    Science.gov (United States)

    Gong, Jing-Bo; Wang, Ya; Lui, Simon S Y; Cheung, Eric F C; Chan, Raymond C K

    2017-11-01

    Childhood trauma has been shown to be a robust risk factor for mental disorders, and may exacerbate schizotypal traits or contribute to autistic trait severity. However, little is known about whether childhood trauma confounds the overlap between schizotypal traits and autistic traits. This study examined whether childhood trauma acts as a confounding variable in the overlap between autistic and schizotypal traits in a large non-clinical adult sample. A total of 2469 participants completed the Autism Spectrum Quotient (AQ), the Schizotypal Personality Questionnaire (SPQ), and the Childhood Trauma Questionnaire-Short Form. Correlation analysis showed that the majority of associations between AQ variables and SPQ variables were significant. The overlap between autistic and schizotypal traits could not be explained by shared variance in terms of exposure to childhood trauma. The findings point to important overlaps in the conceptualization of ASD and SSD, independent of childhood trauma. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. PERMANOVA-S: association test for microbial community composition that accommodates confounders and multiple distances.

    Science.gov (United States)

    Tang, Zheng-Zheng; Chen, Guanhua; Alekseyenko, Alexander V

    2016-09-01

    Recent advances in sequencing technology have made it possible to obtain high-throughput data on the composition of microbial communities and to study the effects of dysbiosis on the human host. Analysis of pairwise intersample distances quantifies the association between the microbiome diversity and covariates of interest (e.g. environmental factors, clinical outcomes, treatment groups). In the design of these analyses, multiple choices for distance metrics are available. Most distance-based methods, however, use a single distance and are underpowered if the distance is poorly chosen. In addition, distance-based tests cannot flexibly handle confounding variables, which can result in excessive false-positive findings. We derive presence-weighted UniFrac to complement the existing UniFrac distances for more powerful detection of the variation in species richness. We develop PERMANOVA-S, a new distance-based method that tests the association of microbiome composition with any covariates of interest. PERMANOVA-S improves the commonly-used Permutation Multivariate Analysis of Variance (PERMANOVA) test by allowing flexible confounder adjustments and ensembling multiple distances. We conducted extensive simulation studies to evaluate the performance of different distances under various patterns of association. Our simulation studies demonstrate that the power of the test relies on how well the selected distance captures the nature of the association. The PERMANOVA-S unified test combines multiple distances and achieves good power regardless of the patterns of the underlying association. We demonstrate the usefulness of our approach by reanalyzing several real microbiome datasets. The miProfile software is freely available at https://medschool.vanderbilt.edu/tang-lab/software/miProfile. Contact: z.tang@vanderbilt.edu or g.chen@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  7. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  8. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Science.gov (United States)

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let...
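
    The AR(1) route to coloured noise, and the finite-sample distribution-shape effect the paper highlights, can be seen in a few lines of Python (spectral mimicry itself is not implemented; all parameters are illustrative):

      # Sample kurtosis of short coloured AR(1) environmental series.
      import numpy as np
      from scipy import stats

      def ar1_series(kappa, n, rng):
          """AR(1) noise with autocorrelation kappa, unit stationary variance."""
          e = np.empty(n)
          e[0] = rng.normal()
          for t in range(1, n):
              e[t] = kappa * e[t - 1] + rng.normal() * np.sqrt(1 - kappa**2)
          return e

      rng = np.random.default_rng(9)
      for kappa, colour in [(-0.7, "blue"), (0.0, "white"), (0.7, "red")]:
          kurt = [stats.kurtosis(ar1_series(kappa, 64, rng)) for _ in range(500)]
          print(f"{colour:>5} (kappa={kappa:+.1f}): "
                f"mean excess kurtosis {np.mean(kurt):+.3f} over 500 short series")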

  9. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their...
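
    The index itself is inexpensive to compute. A minimal sketch on random stand-in data (real fMRI preprocessing, the impact threshold, and the ANCI ranking are omitted):

      # IGAFC: corr(GAS, voxel) * corr(GAS, seed), per voxel.
      import numpy as np

      rng = np.random.default_rng(10)
      t, v = 200, 1_000                     # time points, voxels
      data = rng.normal(size=(t, v))        # stand-in fMRI time series
      gas = data.mean(axis=1)               # global average signal
      seed = data[:, 0]                     # hypothetical seed time course

      def corr_vec(sig, mat):
          """Pearson correlation of one signal with every column of mat."""
          s = (sig - sig.mean()) / sig.std()
          m = (mat - mat.mean(axis=0)) / mat.std(axis=0)
          return (s[:, None] * m).mean(axis=0)

      igafc = corr_vec(gas, data) * np.corrcoef(gas, seed)[0, 1]
      print(f"IGAFC range: [{igafc.min():.3f}, {igafc.max():.3f}]")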

  10. A review of instrumental variable estimators for Mendelian randomization.

    Science.gov (United States)

    Burgess, Stephen; Small, Dylan S; Thompson, Simon G

    2017-10-01

    Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
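
    A minimal simulated sketch of two of the estimators reviewed: the ratio method and two-stage least squares, which coincide when there is a single instrument. The genetic instrument, confounder and coefficients are invented.

      # Ratio and 2SLS instrumental variable estimators.
      import numpy as np

      rng = np.random.default_rng(11)
      n = 100_000
      g = rng.binomial(2, 0.3, n)                 # genetic variant (0/1/2)
      u = rng.normal(size=n)                      # unmeasured confounder
      x = 0.3 * g + u + rng.normal(size=n)        # exposure
      y = 0.5 * x + u + rng.normal(size=n)        # true causal effect: 0.5

      def slope(a, b):
          """OLS slope of b on a."""
          a_c = a - a.mean()
          return (a_c * b).sum() / (a_c ** 2).sum()

      xhat = x.mean() + slope(g, x) * (g - g.mean())   # first-stage fit
      print(f"confounded OLS: {slope(x, y):.3f}")
      print(f"ratio method:   {slope(g, y) / slope(g, x):.3f}")
      print(f"2SLS:           {slope(xhat, y):.3f} (truth: 0.5)")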

  11. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: bias in DNA extraction methods; variation in DNA extraction methods. Definition: The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition.

  12. Influence of euthanasia method on blood and gill variables in normoxic and hypoxic Gulf killifish Fundulus grandis.

    Science.gov (United States)

    Larter, K F; Rees, B B

    2017-06-01

    In many experiments, euthanasia, or humane killing, of animals is necessary. Some methods of euthanasia cause death through cessation of respiratory or cardiovascular systems, causing oxygen levels of blood and tissues to drop. For experiments where the goal is to measure the effects of environmental low oxygen (hypoxia), the choice of euthanasia technique, therefore, may confound the results. This study examined the effects of four euthanasia methods commonly used in fish biology (overdose of MS-222, overdose of clove oil, rapid cooling and blunt trauma to the head) on variables known to be altered during hypoxia (haematocrit, plasma cortisol, blood lactate and blood glucose) or reflecting gill damage (trypan blue exclusion) and energetic status (ATP, ADP and ATP:ADP) in Gulf killifish Fundulus grandis after 24 h exposure to well-aerated conditions (normoxia, 7·93 mg O2 l-1, c. 150 mm Hg or c. 20 kPa) or reduced oxygen levels (0·86 mg O2 l-1, c. 17 mm Hg or c. 2·2 kPa). Regardless of oxygen treatment, fish euthanized by an overdose of MS-222 had higher haematocrit and lower gill ATP:ADP than fish euthanized by other methods. The effects of 24 h hypoxic exposure on these and other variables, however, were equivalent among methods of euthanasia (i.e. there were no significant interactions between euthanasia method and oxygen treatment). The choice of an appropriate euthanasia method, therefore, will depend upon the magnitude of the treatment effects (e.g. hypoxia) relative to potential artefacts caused by euthanasia on the variables of interest. © 2017 The Fisheries Society of the British Isles.

  13. The ad-libitum alcohol 'taste test': secondary analyses of potential confounds and construct validity.

    Science.gov (United States)

    Jones, Andrew; Button, Emily; Rose, Abigail K; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt

    2016-03-01

    Motivation to drink alcohol can be measured in the laboratory using an ad-libitum 'taste test', in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. The objective of this study was to investigate variables that may compromise the validity of this paradigm, as well as its construct validity. We re-analysed data from 12 studies from our laboratory that incorporated an ad-libitum taste test. We considered time of day and participants' awareness of the purpose of the taste test as potential confounding variables. We examined whether gender, typical alcohol consumption, subjective craving, scores on the Alcohol Use Disorders Identification Test and perceived pleasantness of the drinks predicted ad-libitum consumption (construct validity). We included 762 participants (462 female). Participant awareness and time of day were not related to ad-libitum alcohol consumption. Males drank significantly more alcohol than females; typical alcohol consumption (p = 0.04), craving and perceived pleasantness of the drinks also predicted ad-libitum alcohol consumption. The construct validity of the taste test was supported by relationships between ad-libitum consumption and typical alcohol consumption, craving and pleasantness ratings of the drinks. The ad-libitum taste test is a valid method for the assessment of alcohol intake in the laboratory.

  14. Interpretational Confounding Is Due to Misspecification, Not to Type of Indicator: Comment on Howell, Breivik, and Wilcox (2007)

    Science.gov (United States)

    Bollen, Kenneth A.

    2007-01-01

    R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal…

  15. Adolescent sleep disturbance and school performance: the confounding variable of socioeconomics.

    Science.gov (United States)

    Pagel, James F; Forister, Natalie; Kwiatkowki, Carol

    2007-02-15

    To assess how selected socioeconomic variables known to affect school performance alter the association between reported sleep disturbance and poor school performance in a contiguous middle school/high school population. A school district/college IRB-approved questionnaire was distributed in science and health classes in middle school and high school. This questionnaire included a frequency-scaled pediatric sleep disturbance questionnaire for completion by students and a permission and demographic questionnaire for completion by parents (completed questionnaires n = 238, with 69.3% including GPA). Sleep complaints occurred at high frequency in this sample (sleep onset insomnia: 60% > 1 x/wk, 21.2% every night; sleepiness during the day: 45.7% > 1 x/wk, 15.2% every night; difficulty concentrating: 54.6% > 1 x/wk, 12.9% always). Students with lower grade point averages (GPAs) were more likely to have restless/aching legs when trying to fall asleep, difficulty concentrating during the day, snoring every night, difficulty waking in the morning, sleepiness during the day, and falling asleep in class. Lower reported GPAs were significantly associated with lower household incomes. After statistically controlling for income, restless legs, sleepiness during the day, and difficulty with concentration continued to significantly affect school performance. This study provides additional evidence indicating that sleep disturbances occur at high frequencies in adolescents and significantly affect daytime performance, as measured by GPA. The socioeconomic variable of household income also significantly affects GPA. After statistically controlling for age and household income, the number and type of sleep variables noted to significantly affect GPA are altered but persistent in demonstrating significant effects on school performance.

  16. Can statistic adjustment of OR minimize the potential confounding bias for meta-analysis of case-control study? A secondary data analysis.

    Science.gov (United States)

    Liu, Tianyi; Nie, Xiaolu; Wu, Zehao; Zhang, Ying; Feng, Guoshuang; Cai, Siyu; Lv, Yaqi; Peng, Xiaoxia

    2017-12-29

    Different confounder adjustment strategies were used to estimate odds ratios (ORs) in case-control studies, i.e. studies differ in how many confounders they adjusted for and which variables those were. This secondary data analysis aimed to detect whether differences in confounder adjustment strategies introduce bias in case-control studies, and whether such bias would impact the summary effect size of a meta-analysis. We included all meta-analyses that focused on the association between breast cancer and passive smoking among non-smoking women, as well as each original case-control study included in these meta-analyses. The relative deviations (RDs) of each original study were calculated to quantify how strongly adjustment would impact the estimation of ORs compared with crude ORs. At the same time, a scatter diagram was sketched to describe the distribution of adjusted ORs with different numbers of adjusted confounders. Substantial inconsistency existed in meta-analyses of case-control studies, which would influence the precision of the summary effect size. First, unadjusted and adjusted ORs were mixed when combining individual ORs in the majority of meta-analyses. Second, original studies with different confounder adjustment strategies were combined, i.e. with different numbers of adjusted confounders and different factors adjusted for in each original study. Third, adjustment did not make the effect sizes of the original studies converge, which suggests that model fitting might have failed to correct the systematic error caused by confounding. The heterogeneity of confounder adjustment strategies in case-control studies may lead to further bias in the summary effect size of meta-analyses, especially for weak or medium associations, so that the direction of causal inference could even be reversed. Therefore, further methodological research is needed, referring to the assessment of confounder adjustment strategies, as well as how to take this kind

  17. The Association between Headaches and Temporomandibular Disorders is Confounded by Bruxism and Somatic Complaints

    NARCIS (Netherlands)

    van der Meer, Hedwig A.; Speksnijder, Caroline M.; Engelbert, Raoul; Lobbezoo, Frank; Nijhuis – van der Sanden, Maria W G; Visscher, Corine M.

    OBJECTIVES: The objective of this observational study was to establish the possible presence of confounders on the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. METHODS: Several subtypes of headaches were

  18. The Association Between Headaches and Temporomandibular Disorders is Confounded by Bruxism and Somatic Symptoms

    NARCIS (Netherlands)

    Meer, H.A. van der; Speksnijder, C.M.; Engelbert, R.H.; Lobbezoo, F.; Nijhuis-Van der Sanden, M.W.G.; Visscher, C.M.

    2017-01-01

    OBJECTIVES: The objective of this observational study was to establish the possible presence of confounders on the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. MATERIALS AND METHODS: Several subtypes of headaches

  19. External adjustment of unmeasured confounders in a case-control study of benzodiazepine use and cancer risk

    DEFF Research Database (Denmark)

    Thygesen, Lau Caspar; Pottegård, Anton; Ersbøll, Annette Kjaer

    2017-01-01

    AIMS: Previous studies have reported diverging results on the association between benzodiazepine use and cancer risk. METHODS: We investigated this association in a matched case-control study including incident cancer cases during 2002-2009 in the Danish Cancer Registry (n = 94 923) and age-matched controls. Two PSs were used: the error-prone PS using register-based confounders and the calibrated PS based on both register- and survey-based confounders, retrieved from the Health Interview Survey. RESULTS: Register-based data showed that cancer cases had more diagnoses, higher comorbidity score and more co… After calibration, the estimates attenuated … (95% confidence interval 1.00-1.19), and for smoking-related cancers from 1.20 to 1.10 (95% confidence interval 1.00-1.21). CONCLUSION: We conclude that the increased risk observed in the solely register-based study could partly be attributed to unmeasured confounding.

  20. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Directory of Open Access Journals (Sweden)

    Mike S Fowler

    Full Text Available The colour of environmental variability influences the size of population fluctuations when filtered through density-dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical) feedback models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample skewness and kurtosis and decreasing mean kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
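    The finite-sample distribution-shape effect described here is easy to reproduce in a few lines. The sketch below is our own illustration (not the paper's code, and parameter values are invented): it generates AR(1)-coloured series for blue, white and red settings and compares the spread of sample skewness and the mean sample kurtosis across colours.

```python
# Hedged sketch: generate AR(1)-coloured environmental noise and examine
# how the sample distribution shape varies with colour.
# kappa < 0 gives blue, kappa = 0 white, kappa > 0 red noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ar1_series(kappa, n_steps, rng):
    """AR(1) noise x[t] = kappa*x[t-1] + e[t], standardized to unit variance."""
    e = rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    x[0] = e[0]
    for t in range(1, n_steps):
        x[t] = kappa * x[t - 1] + e[t]
    return (x - x.mean()) / x.std()

for kappa in (-0.5, 0.0, 0.5):  # blue, white, red environments
    reps = [ar1_series(kappa, 100, rng) for _ in range(500)]
    sd_skew = np.std([stats.skew(r) for r in reps])
    mean_kurt = np.mean([stats.kurtosis(r) for r in reps])
    print(f"kappa={kappa:+.1f}  SD(sample skew)={sd_skew:.3f}  "
          f"mean excess kurtosis={mean_kurt:+.3f}")
```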

  1. Confounding and exposure measurement error in air pollution epidemiology.

    Science.gov (United States)

    Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert

    2012-06-01

    Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.

  2. Time-Dependent Confounding in the Study of the Effects of Regular Physical Activity in Chronic Obstructive Pulmonary Disease: An Application of the Marginal Structural Model

    DEFF Research Database (Denmark)

    Garcia-Aymerich, J.; Lange, P.; Serra, I.

    2008-01-01

    this type of confounding. We sought to assess the presence of time-dependent confounding in the association between physical activity and COPD development and course by comparing risk estimates between standard statistical methods and MSMs. METHODS: By using the population-based cohort Copenhagen City Heart...

  3. Medical versus surgical abortion: comparing satisfaction and potential confounders in a partly randomized study

    DEFF Research Database (Denmark)

    Rørbye, Christina; Nørgaard, Mogens; Nilas, Lisbeth

    2005-01-01

    BACKGROUND: The aim of the study was to compare satisfaction with medical and surgical abortion and to identify potential confounders affecting satisfaction. METHODS: 1033 women with gestational age (GA) < or = 63 days had either a medical (600 mg mifepristone followed by 1 mg gemeprost) or a sur...

  4. The performance of random coefficient regression in accounting for residual confounding.

    Science.gov (United States)

    Gustafson, Paul; Greenland, Sander

    2006-09-01

    Greenland (2000, Biometrics 56, 915-921) describes the use of random coefficient regression to adjust for residual confounding in a particular setting. We examine this setting further, giving theoretical and empirical results concerning the frequentist and Bayesian performance of random coefficient regression. Particularly, we compare estimators based on this adjustment for residual confounding to estimators based on the assumption of no residual confounding. This devolves to comparing an estimator from a nonidentified but more realistic model to an estimator from a less realistic but identified model. The approach described by Gustafson (2005, Statistical Science 20, 111-140) is used to quantify the performance of a Bayesian estimator arising from a nonidentified model. From both theoretical calculations and simulations we find support for the idea that superior performance can be obtained by replacing unrealistic identifying constraints with priors that allow modest departures from those constraints. In terms of point-estimator bias this superiority arises when the extent of residual confounding is substantial, but the advantage is much broader in terms of interval estimation. The benefit from modeling residual confounding is maintained when the prior distributions employed only roughly correspond to reality, for the standard identifying constraints are equivalent to priors that typically correspond much worse.

  5. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
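    As a concrete example of the kind of summary measure this series discusses, the sketch below computes one of the most common gait-variability statistics, the coefficient of variation of stride times. The stride-time series is simulated here; real analyses would start from measured data.

```python
# Minimal example of a common gait-variability summary: the coefficient of
# variation (CV) of stride times. The series is simulated; in practice it
# would come from footswitches, force plates or accelerometry.
import numpy as np

rng = np.random.default_rng(0)
stride_times = 1.05 + 0.02 * rng.standard_normal(300)  # seconds, simulated

cv_percent = 100.0 * stride_times.std(ddof=1) / stride_times.mean()
print(f"stride-time CV = {cv_percent:.2f}%")  # larger CV -> more variable gait
```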

  6. Propensity score methodology for confounding control in health care utilization databases

    Directory of Open Access Journals (Sweden)

    Elisabetta Patorno

    2013-06-01

    Full Text Available Propensity score (PS) methodology is a common approach to control for confounding in nonexperimental studies of treatment effects using health care utilization databases. This methodology offers researchers many advantages compared with conventional multivariate models: it directly focuses on the determinants of treatment choice, facilitating the understanding of the clinical decision-making process by the researcher; it allows for graphical comparisons of the distribution of propensity scores and truncation of subjects without overlapping PS, indicating a lack of equipoise; it allows transparent assessment of the confounder balance achieved by the PS at baseline; and it offers a straightforward approach to reduce the dimensionality of sometimes large arrays of potential confounders in utilization databases, directly addressing the “curse of dimensionality” in the context of rare events. This article provides an overview of the use of propensity score methodology for pharmacoepidemiologic research with large health care utilization databases, covering recent discussions on covariate selection, the role of automated techniques for addressing unmeasurable confounding via proxies, strategies to maximize clinical equipoise at baseline, and the potential of machine-learning algorithms for optimized propensity score estimation. The appendix discusses the available software packages for PS methodology. Propensity scores are a frequently used and versatile tool for transparent and comprehensive adjustment of confounding in pharmacoepidemiology with large health care databases.
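    The workflow described above can be sketched in a few lines. The following is a hedged illustration with simulated data (all variable names are ours): PS estimation, trimming to the region of overlap, and a weighted covariate-balance check.

```python
# Hedged sketch of a basic propensity-score workflow: model treatment,
# trim subjects outside the region of common support, and check balance
# under inverse-probability weighting. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
X = rng.standard_normal((n, 3))                       # baseline confounders
treat = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))

ps_model = sm.Logit(treat, sm.add_constant(X)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(X))

# Trim to the region of common support (clinical equipoise)
lo, hi = ps[treat == 1].min(), ps[treat == 0].max()
keep = (ps >= lo) & (ps <= hi)

# Standardized mean differences before and after weighting
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
for j in range(X.shape[1]):
    smd_raw = (X[treat == 1, j].mean() - X[treat == 0, j].mean()) / X[:, j].std()
    xk, tk, wk = X[keep, j], treat[keep], w[keep]
    m1 = np.average(xk[tk == 1], weights=wk[tk == 1])
    m0 = np.average(xk[tk == 0], weights=wk[tk == 0])
    print(f"X{j}: SMD raw={smd_raw:+.3f}  weighted={(m1 - m0) / X[:, j].std():+.3f}")
```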

  7. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
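    The central property behind cumulant-based PPF is that cumulants of independent inputs add. The toy sketch below is our own example, not the paper's IEEE test systems: it combines a normal load with a binomial fuel-cell output analytically and checks the first three cumulants of the net injection against Monte Carlo.

```python
# Illustrative sketch of the cumulant idea: cumulants of a sum of
# independent variables add, so a net injection's distribution can be
# characterized without sampling. Toy numbers, not the paper's systems.
import numpy as np
from scipy import stats

# Analytic cumulants (k1, k2, k3): normal load N(50, 5^2)
k_load = np.array([50.0, 25.0, 0.0])
# Binomial fuel cell: 10 units of 2 MW, availability p = 0.7
n_u, p, cap = 10, 0.7, 2.0
k_fcg = np.array([n_u * p,
                  n_u * p * (1 - p),
                  n_u * p * (1 - p) * (1 - 2 * p)])
k_fcg *= np.array([cap, cap**2, cap**3])    # k_r scales by cap**r

# Net injection = FCG - load; odd cumulants of (-load) flip sign
k_net = k_fcg + k_load * np.array([-1.0, 1.0, -1.0])
print("CM:  mean=%.2f  var=%.2f  skew=%.3f"
      % (k_net[0], k_net[1], k_net[2] / k_net[1] ** 1.5))

# Monte Carlo check
rng = np.random.default_rng(7)
mc = cap * rng.binomial(n_u, p, 200_000) - rng.normal(50, 5, 200_000)
print("MC:  mean=%.2f  var=%.2f  skew=%.3f"
      % (mc.mean(), mc.var(), stats.skew(mc)))
```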

  8. Cross-sectional analysis of food choice frequency, sleep confounding beverages, and psychological distress predictors of sleep quality.

    Science.gov (United States)

    Knowlden, Adam P; Burns, Maranda; Harcrow, Andy; Shewmake, Meghan E

    2016-03-16

    Poor sleep quality is a significant public health problem. The role of nutrition in predicting sleep quality is a relatively unexplored area of inquiry. The purpose of this study was to evaluate the capacity of 10 food choice categories, sleep confounding beverages, and psychological distress to predict the sleep quality of college students. A logistic regression model comprising 10 food choice variables (healthy proteins, unhealthy proteins, healthy dairy, unhealthy dairy, healthy grains, unhealthy grains, healthy fruits and vegetables, unhealthy empty calories, healthy beverages, unhealthy beverages), sleep confounding beverages (caffeinated/alcoholic beverages), as well as psychological distress (low, moderate, serious distress) was computed to determine the capacity of the variables to predict sleep quality (good/poor). The odds of poor sleep quality were 32.4% lower for each unit of increased frequency of healthy proteins consumed (pempty calorie food choices consumed (p=0.003; OR=1.131), and 107.3% higher for those classified in the moderate psychological distress (p=0.016; OR=2.073). Collectively, healthy proteins, healthy dairy, unhealthy empty calories, and moderate psychological distress were moderately predictive of sleep quality in the sample (Nagelkerke R2=23.8%). Results of the study suggested higher frequency of consumption of healthy protein and healthy dairy food choices reduced the odds of poor sleep quality, while higher consumption of empty calories and moderate psychological distress increased the odds of poor sleep quality.

  9. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso performs identically to the lasso.
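    A common way to implement the adaptive lasso, consistent with the reweighting idea described above, is to absorb the adaptive weights into the design matrix, run an ordinary lasso, and rescale back. The sketch below assumes scikit-learn; the choice of gamma and of an OLS initial estimator follows common practice rather than this particular paper.

```python
# Adaptive lasso via the standard reweighting trick: scale each column by
# 1/w_j with w_j = 1/|beta_init_j|**gamma, fit a plain lasso, rescale back.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0, 2.0])
y = X @ beta_true + rng.standard_normal(n)

beta_init = LinearRegression().fit(X, y).coef_   # initial root-n estimator
gamma = 1.0
w = 1.0 / np.abs(beta_init) ** gamma             # adaptive weights
X_scaled = X / w                                 # absorb weights into X

lasso = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = lasso.coef_ / w                  # undo the rescaling
print(np.round(beta_adaptive, 2))                # noise variables shrink to 0
```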

  10. Overview of potential procedural and participant-related confounds for neuroimaging of the resting state

    Science.gov (United States)

    Duncan, Niall W.; Northoff, Georg

    2013-01-01

    Studies of intrinsic brain activity in the resting state have become increasingly common. A productive discussion of what analysis methods are appropriate, of the importance of physiologic correction and of the potential interpretations of results has been ongoing. However, less attention has been paid to factors other than physiologic noise that may confound resting-state experiments. These range from straightforward factors, such as ensuring that participants are all instructed in the same manner, to more obscure participant-related factors, such as body weight. We provide an overview of such potentially confounding factors, along with some suggested approaches for minimizing their impact. A particular theme that emerges from the overview is the range of systematic differences between types of study groups (e.g., between patients and controls) that may influence resting-state study results. PMID:22964258

  11. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    Full Text Available We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that only use a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance area under the curve (AUC) for a receiver-operator curve (ROC). Thus, given that 1 our VIM

  12. Assessing mediation using marginal structural models in the presence of confounding and moderation

    OpenAIRE

    Coffman, Donna L.; Zhong, Wei

    2012-01-01

    This paper presents marginal structural models (MSMs) with inverse propensity weighting (IPW) for assessing mediation. Generally, individuals are not randomly assigned to levels of the mediator. Therefore, confounders of the mediator and outcome may exist that limit causal inferences, a goal of mediation analysis. Either regression adjustment or IPW can be used to take confounding into account, but IPW has several advantages. Regression adjustment of even one confounder of the mediator and ou...
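    A stripped-down version of the IPW idea for mediation can be sketched as follows (simulated data, much simplified relative to the paper's MSM machinery): model the mediator given exposure and confounders, build stabilized weights, and fit a weighted outcome model.

```python
# Hedged sketch of IPW for a binary mediator: stabilized weights
# P(M=m | A) / P(M=m | A, C), then a weighted outcome regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 4000
C = rng.standard_normal(n)                    # mediator-outcome confounder
A = rng.binomial(1, 0.5, n)                   # randomized exposure
pM = 1 / (1 + np.exp(-(0.8 * A + 1.0 * C)))
M = rng.binomial(1, pM)                       # mediator, confounded by C
Y = 1.0 * A + 2.0 * M + 1.5 * C + rng.standard_normal(n)

num_model = sm.Logit(M, sm.add_constant(A)).fit(disp=0)
num = num_model.predict(sm.add_constant(A))
AC = sm.add_constant(np.column_stack([A, C]))
den_model = sm.Logit(M, AC).fit(disp=0)
den = den_model.predict(AC)
sw = np.where(M == 1, num / den, (1 - num) / (1 - den))   # stabilized weights

msm = sm.WLS(Y, sm.add_constant(np.column_stack([A, M])), weights=sw).fit()
print(msm.params)   # ~ (intercept, direct effect of A ~ 1.0, effect of M ~ 2.0)
```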

  13. Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.

    Science.gov (United States)

    Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert

    2016-08-01

    We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
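    The aggregation logic, as we read it, can be sketched as follows. The frequency-multiplier function below is a made-up placeholder (real analyses must use the published RNLE frequency-multiplier tables), and the FILI category values and frequencies are invented, so this illustrates only the CLI-style accumulation across the six categories.

```python
# Illustrative-only sketch of CLI-style aggregation over six FILI categories.
# fm() is a HYPOTHETICAL frequency multiplier; values are invented.
import numpy as np

def fm(freq_per_min):
    """Placeholder frequency multiplier: decreases as lifting frequency rises."""
    return max(0.2, 1.0 - 0.05 * freq_per_min)

fili = np.array([2.4, 2.0, 1.6, 1.2, 0.8, 0.4])  # six categories, descending
freq = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0])  # lifts/min in each category

cum = np.cumsum(freq)                  # frequencies stacked cumulatively
vli = fili[0] / fm(cum[0])             # highest-demand category at its frequency
for j in range(1, len(fili)):
    # increment from adding category j on top of the accumulated frequency
    vli += fili[j] * (1.0 / fm(cum[j]) - 1.0 / fm(cum[j - 1]))
print(f"VLI (toy numbers) = {vli:.2f}")
```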

  14. On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification

    Science.gov (United States)

    Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.

    2014-01-01

    Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362

  15. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    International Nuclear Information System (INIS)

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one and two dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also preserves its advantages for treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  16. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one and two dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also preserves its advantages for treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.

  17. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
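    Setting the survival-bias machinery aside, the basic two-stage estimator this article builds on can be sketched in plain numpy with simulated data (our own toy example; true effect 0.7, confounder U unmeasured):

```python
# Bare-bones two-stage least squares: regress exposure on the instrument,
# then regress outcome on the fitted exposure.
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
G = rng.binomial(2, 0.3, n).astype(float)     # instrument (e.g., genotype)
U = rng.standard_normal(n)                    # unmeasured confounder
X = 0.5 * G + U + rng.standard_normal(n)      # exposure
Y = 0.7 * X + U + rng.standard_normal(n)      # outcome; true effect 0.7

def ols(y, Z):
    """OLS with intercept; returns [intercept, slope(s)]."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    return np.linalg.lstsq(Z1, y, rcond=None)[0]

naive = ols(Y, X)[1]                                   # biased by U
x_hat = np.column_stack([np.ones(n), G]) @ ols(X, G)   # stage 1 fitted values
tsls = ols(Y, x_hat)[1]                                # stage 2
print(f"naive OLS: {naive:.3f}   2SLS: {tsls:.3f}   truth: 0.700")
```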

  18. Modelling Cardiac Signal as a Confound in EEG-fMRI and its Application in Focal Epilepsy

    DEFF Research Database (Denmark)

    Liston, Adam David; Salek-Haddadi, Afraim; Hamandi, Khalid

    2005-01-01

    Cardiac noise has been shown to reduce the sensitivity of functional Magnetic Resonance Imaging (fMRI) to an experimental effect due to its confounding presence in the blood oxygenation level-dependent (BOLD) signal. Its effect is most severe in particular regions of the brain and a method is yet...

  19. A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.

    Science.gov (United States)

    McCandless, Lawrence C; Gustafson, Paul

    2017-08-15

    Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In the recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameters inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
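    The MCSA arm of this comparison is easy to sketch: draw bias parameters from their priors and bias-correct the observed estimate, with no updating from the data. The priors, the observed risk ratio and the Bross-type bias factor below are illustrative assumptions of ours, not the paper's inputs.

```python
# Minimal Monte Carlo sensitivity analysis for an unmeasured binary
# confounder U: sample bias parameters from priors, apply a Bross-type
# bias factor, and summarize the adjusted estimates.
import numpy as np

rng = np.random.default_rng(13)
rr_obs = 1.50                                # observed (confounded) risk ratio
n_draws = 50_000

p1 = rng.uniform(0.3, 0.6, n_draws)          # P(U=1 | exposed), prior
p0 = rng.uniform(0.1, 0.4, n_draws)          # P(U=1 | unexposed), prior
rr_ud = rng.lognormal(np.log(2.0), 0.2, n_draws)  # U-outcome risk ratio, prior

bias = (1 + p1 * (rr_ud - 1)) / (1 + p0 * (rr_ud - 1))
rr_adj = rr_obs / bias
print("bias-adjusted RR: median %.2f, 95%% interval (%.2f, %.2f)"
      % tuple(np.percentile(rr_adj, [50, 2.5, 97.5])))
```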

  20. Confounder selection in environmental epidemiology: Assessment of health effects of prenatal mercury exposure

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2007-01-01

    PURPOSE: The purpose of the study is to compare different approaches to the identification of confounders needed for analyzing observational data. Whereas standard analysis usually is conducted as if the confounders were known a priori, selection uncertainty also must be taken into account. METHO...

  1. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
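    Once stabilized inverse-probability-of-treatment weights have been estimated, the weighted ("marginal structural") Cox fit itself is short in, for example, the lifelines package. The toy data and weights below are placeholders, and the weight-estimation step (typically pooled logistic models over time) is omitted.

```python
# Sketch: weighted Cox model given precomputed stabilized IPT weights.
# Assumes the lifelines package; data and weights are invented placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":    [5.0, 8.2, 3.1, 9.9, 6.4, 2.5],
    "event":   [1, 0, 1, 0, 1, 1],
    "treated": [1, 1, 0, 0, 1, 0],
    "sw":      [0.9, 1.2, 1.0, 0.8, 1.1, 1.0],  # stabilized IPT weights
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="sw",
        robust=True)                             # robust SEs under weighting
print(cph.summary[["coef", "exp(coef)"]])
```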

  2. An application of the variable-r method to subpopulation growth rates in a 19th century agricultural population

    Directory of Open Access Journals (Sweden)

    Corey Sparks

    2009-07-01

    Full Text Available This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in their reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.
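    A toy version of the bookkeeping involved, assuming only numpy: the growth rate of a subpopulation over an interval is split into a net-migration part and a residual natural-increase part. All numbers are invented for illustration.

```python
# Toy decomposition of a subpopulation growth rate into migration and
# natural-increase components; invented numbers, simplified relative to
# the full variable-r framework.
import numpy as np

N1, N2 = 1200, 1050           # subpopulation counts at t1 and t2
years = 10.0
net_migrants_per_year = -8.0  # average net out-migration

r_total = np.log(N2 / N1) / years                  # overall growth rate
r_mig = net_migrants_per_year / ((N1 + N2) / 2)    # approx. migration rate
r_natural = r_total - r_mig                        # residual: births - deaths
print(f"r_total={r_total:+.4f}  r_migration={r_mig:+.4f}  "
      f"r_natural={r_natural:+.4f}")
```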

  3. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Full Text Available Abstract Background The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Methods Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
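    The recommended procedure is directly available in SciPy: the Welch U test of this abstract is the t test with equal_var=False. A minimal example with two bounded discrete numerical samples (simulated counts of events per individual):

```python
# Welch's unequal-variance t test on two discrete numerical samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
group_a = rng.integers(0, 4, size=60)   # events per individual, outcomes 0-3
group_b = rng.integers(0, 3, size=60)   # outcomes 0-2

t, p = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch test
print(f"difference in means = {group_a.mean() - group_b.mean():+.3f}, "
      f"Welch t = {t:.2f}, p = {p:.3f}")
```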

  4. Comparison of methods for the analysis of relatively simple mediation models.

    Science.gov (United States)

    Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W

    2017-09-01

    Statistical mediation analysis is an often used method in trials, to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least square (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
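    For a continuous mediator and outcome, the OLS version reduces to the familiar product-of-coefficients computation. The sketch below (simulated data, statsmodels) also shows the total = direct + indirect identity that underlies the equivalence discussed above.

```python
# Crude mediation model via OLS: indirect effect = a*b, and total effect
# equals direct + indirect when mediator and outcome are continuous.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(19)
n = 1000
X = rng.binomial(1, 0.5, n).astype(float)        # exposure (e.g., trial arm)
M = 0.5 * X + rng.standard_normal(n)             # mediator
Y = 0.3 * X + 0.8 * M + rng.standard_normal(n)   # outcome

a = sm.OLS(M, sm.add_constant(X)).fit().params[1]            # X -> M
fit_y = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()
direct, b = fit_y.params[1], fit_y.params[2]                 # X -> Y, M -> Y
total = sm.OLS(Y, sm.add_constant(X)).fit().params[1]

print(f"indirect a*b = {a * b:.3f}, direct = {direct:.3f}, "
      f"total = {total:.3f} (direct + indirect = {direct + a * b:.3f})")
```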

  5. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    Science.gov (United States)

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
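    The fixed-effects part of this design is easy to illustrate: demeaning each person's exposure and outcome over their own waves removes any time-invariant confounder. A simulated-panel sketch follows (variable names are ours, not the study's):

```python
# Fixed-effects ("within") transformation: subtract person-specific means,
# then OLS with cluster-robust standard errors. Simulated panel data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(23)
n_people, n_waves = 500, 13
pid = np.repeat(np.arange(n_people), n_waves)
u = np.repeat(rng.standard_normal(n_people), n_waves)  # time-invariant trait
job_quality = 0.6 * u + rng.standard_normal(n_people * n_waves)
mhi5 = 1.3 * job_quality + 2.0 * u + rng.standard_normal(n_people * n_waves)

d = pd.DataFrame({"pid": pid, "x": job_quality, "y": mhi5})
within = d.groupby("pid")[["x", "y"]].transform(lambda s: s - s.mean())

fe = sm.OLS(within["y"], within["x"]).fit(
    cov_type="cluster", cov_kwds={"groups": d["pid"]})
print(fe.params, fe.bse)   # slope ~ 1.3 despite confounding by u
```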

  6. Prostate-Specific Antigen Velocity Before and After Elimination of Factors That Can Confound the Prostate-Specific Antigen Level

    International Nuclear Information System (INIS)

    Park, Jessica J.; Chen, Ming-Hui; Loffredo, Marian; D’Amico, Anthony V.

    2012-01-01

    Purpose: Prostate-specific antigen (PSA) velocity, like PSA level, can be confounded. In this study, we estimated the impact that confounding factors could have on correctly identifying a patient with a PSA velocity >2 ng/ml/y. Methods and Materials: Between 2006 and 2010, a total of 50 men with newly diagnosed PC comprised the study cohort. We calculated and compared the false-positive and false-negative PSA velocity >2 ng/ml/y rates for all men and those with low-risk disease using two approaches to calculate PSA velocity. First, we used PSA values obtained within 18 months of diagnosis; second, we used values within 18 months of diagnosis, substituting the prebiopsy PSA for a repeat, nonconfounded PSA that was obtained using the same assay and without confounders. Results: Using PSA levels pre-biopsy, 46% of all men had a PSA velocity >2 ng/ml/y; whereas this value declined to 32% when substituting the last prebiopsy PSA for a repeat, nonconfounded PSA using the same assay and without confounders. The false-positive rate for PSA velocity >2 ng/ml/y was 43% as compared with a false-negative rate of PSA velocity >2 ng/ml/y of 11% (p = 0.0008) in the overall cohort. These respective values in the low-risk subgroup were 60% and 16.7% (p = 0.09). Conclusion: This study provides evidence to explain the discordance in cancer-specific outcomes among groups investigating the prognostic significance of PSA velocity >2 ng/ml/y, and highlights the importance of patient education on potential confounders of the PSA test before obtaining PSA levels.
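    PSA velocity itself is just the least-squares slope of PSA over time; a small sketch with hypothetical measurements:

```python
# PSA velocity as the slope of a least-squares line through serial PSA
# values; measurements below are hypothetical.
import numpy as np

t_years = np.array([0.0, 0.5, 1.0, 1.5])   # time of each PSA draw, years
psa = np.array([3.1, 4.0, 5.2, 6.4])       # ng/ml, invented values

velocity = np.polyfit(t_years, psa, 1)[0]  # slope, ng/ml/year
print(f"PSA velocity = {velocity:.2f} ng/ml/y -> "
      f"{'exceeds' if velocity > 2 else 'below'} the 2 ng/ml/y threshold")
```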

  7. Time-Dependent Confounding in the Study of the Effects of Regular Physical Activity in Chronic Obstructive Pulmonary Disease: An Application of the Marginal Structural Model

    DEFF Research Database (Denmark)

    Garcia-Aymerich, Judith; Lange, Peter; Serra, Ignasi

    2008-01-01

    PURPOSE: Results from longitudinal studies about the association between physical activity and chronic obstructive pulmonary disease (COPD) may have been biased because they did not properly adjust for time-dependent confounders. Marginal structural models (MSMs) have been proposed to address...... this type of confounding. We sought to assess the presence of time-dependent confounding in the association between physical activity and COPD development and course by comparing risk estimates between standard statistical methods and MSMs. METHODS: By using the population-based cohort Copenhagen City Heart...... Study, 6,568 subjects selected from the general population in 1976 were followed up until 2004 with three repeated examinations. RESULTS: Moderate to high compared with low physical activity was associated with a reduced risk of developing COPD both in the standard analysis (odds ratio [OR] 0.76, p = 0...

  8. A method based on a separation of variables in magnetohydrodynamics (MHD); Une methode de separation des variables en magnetohydrodynamique

    Energy Technology Data Exchange (ETDEWEB)

    Cessenat, M.; Genta, P.

    1996-12-31

    We use a method based on a separation of variables for solving a system of first order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ₁, φ₂, φ₃ in addition to the time variable τ, and then searching for a solution which is separated with respect to φ₁ and τ only. This is allowed by a very simple relation, called a `metric separation equation`, which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a non-linear partial differential equation on Σ. Thus we generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors).

  9. Is the association between general cognitive ability and violent crime caused by family-level confounders?

    Directory of Open Access Journals (Sweden)

    Thomas Frisell

    Full Text Available BACKGROUND: Research has consistently found lower cognitive ability to be related to increased risk for violent and other antisocial behaviour. Since this association has remained when adjusting for childhood socioeconomic position, ethnicity, and parental characteristics, it is often assumed to be causal, potentially mediated through school adjustment problems and conduct disorder. Socioeconomic differences are notoriously difficult to quantify, however, and it is possible that the association between intelligence and delinquency suffers substantial residual confounding. METHODS: We linked longitudinal Swedish total population registers to study the association of general cognitive ability (intelligence) at age 18 (the Conscript Register, 1980-1993) with the incidence proportion of violent criminal convictions (the Crime Register, 1973-2009), among all men born in Sweden 1961-1975 (N = 700,514). Using probit regression, we controlled for measured childhood socioeconomic variables, and further employed sibling comparisons (family pedigree data from the Multi-Generation Register) to adjust for shared familial characteristics. RESULTS: Cognitive ability in early adulthood was inversely associated with having been convicted of a violent crime (β = -0.19, 95% CI: -0.19; -0.18); the association remained when adjusting for childhood socioeconomic factors (β = -0.18, 95% CI: -0.18; -0.17). The association was somewhat lower within half-brothers raised apart (β = -0.16, 95% CI: -0.18; -0.14), within half-brothers raised together (β = -0.13, 95% CI: -0.15; -0.11), and lower still in full-brother pairs (β = -0.10, 95% CI: -0.11; -0.09). The attenuation among half-brothers raised together and full brothers was too strong to be attributed solely to attenuation from measurement error. DISCUSSION: Our results suggest that the association between general cognitive ability and violent criminality is confounded partly by factors shared by

  10. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
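    The disease-risk-score alternative favoured here for low events-per-coefficient settings can be sketched as follows (simulated data, far simpler than the paper's simulation design): fit the outcome model among the unexposed, score everyone, then adjust the exposure effect for the one-dimensional score.

```python
# Disease risk score (DRS) adjustment sketch: outcome model fit in the
# unexposed, predicted baseline risk used as the sole adjustment variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(29)
n = 3000
C = rng.standard_normal((n, 4))                           # confounders
pE = 1 / (1 + np.exp(-(C @ [0.4, -0.3, 0.2, 0.1])))
E = rng.binomial(1, pE)                                   # exposure
pY = 1 / (1 + np.exp(-(-2.0 + 0.5 * E + C @ [0.5, 0.4, -0.3, 0.2])))
Y = rng.binomial(1, pY)                                   # binary outcome

# 1) DRS model fitted among the unexposed only, 2) predicted for everyone
drs_fit = sm.Logit(Y[E == 0], sm.add_constant(C[E == 0])).fit(disp=0)
drs = drs_fit.predict(sm.add_constant(C))

# 3) exposure effect adjusted for the one-dimensional score
out = sm.Logit(Y, sm.add_constant(np.column_stack([E, drs]))).fit(disp=0)
print(f"adjusted log-OR for exposure: {out.params[1]:.3f} (truth 0.5)")
```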

  11. Two methods for studying the X-ray variability

    NARCIS (Netherlands)

    Yan, Shu-Ping; Ji, Li; Méndez, Mariano; Wang, Na; Liu, Siming; Li, Xiang-Dong

    2016-01-01

    X-ray aperiodic variability and quasi-periodic oscillations (QPOs) are important tools for studying the structure of the accretion flow in X-ray binaries. However, the origin of the complex X-ray variability from X-ray binaries remains unsolved. We proposed two methods for studying the X-ray

  12. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...

  13. Interpersonal discrimination and depressive symptomatology: examination of several personality-related characteristics as potential confounders in a racial/ethnic heterogeneous adult sample

    Science.gov (United States)

    2013-01-01

    Background Research suggests that reports of interpersonal discrimination result in poor mental health. Because personality characteristics may either confound or mediate the link between these reports and mental health, there is a need to disentangle their role in order to better understand the nature of the discrimination-mental health association. We examined whether hostility, anger repression and expression, pessimism, optimism, and self-esteem served as confounders in the association between perceived interpersonal discrimination and CESD-based depressive symptoms in a race/ethnic heterogeneous probability-based sample of community-dwelling adults. Methods We employed a series of ordinary least squares regression analyses to examine the potential confounding effect of hostility, anger repression and expression, pessimism, optimism, and self-esteem on the association between interpersonal discrimination and depressive symptoms. Results Hostility, anger repression, pessimism and self-esteem were significant as possible confounders of the relationship between interpersonal discrimination and depressive symptoms, together accounting for approximately 38% of the total association (beta: 0.1892). After adjustment for these characteristics, interpersonal discrimination remained a positive predictor of depressive symptoms (beta: 0.1176). Consistent with research emphasizing the role of personality characteristics in the association between reports of interpersonal discrimination and mental health, our results suggest that personality-related characteristics may serve as potential confounders. Nevertheless, our results also suggest that, net of these characteristics, reports of interpersonal discrimination are associated with poor mental health. PMID:24256578

  14. Wide Variability in Emergency Physician Admission Rates: A Target to Reduce Costs Without Compromising Quality

    Directory of Open Access Journals (Sweden)

    Jeffrey J. Guterman

    2016-09-01

    Full Text Available Introduction: Attending physician judgment is the traditional standard of care for emergency department (ED) admission decisions. The extent to which variability in admission decisions affects cost and quality is not well understood. We sought to determine the impact of variability in admission decisions on cost and quality. Methods: We performed a retrospective observational study of patients presenting to a university-affiliated, urban ED from October 1, 2007, through September 30, 2008. The main outcome measures were admission rate, fiscal indicators (Medicaid-denied payment days), and quality indicators (15- and 30-day ED returns; delayed hospital admissions). We asked each attending to estimate their inpatient admission rate and correlated their personal assessment with actual admission rates. Results: Admission rates, even after adjusting for known confounders, were highly variable (15.2%-32.0%) and correlated with Medicaid denied-payment day rates (p=0.038). There was no correlation with quality outcome measures (30-day ED return or delayed hospital admission). There was no significant correlation between actual and self-described admission rates; the range of mis-estimation was 0% to 117%. Conclusion: Emergency medicine attending admission rates at this institution are highly variable, unexplained by known confounding variables, and unrelated to quality of care, as measured by 30-day ED return or delayed hospital admission. Admission optimization represents an important untapped potential for cost reduction through avoidable hospitalizations, with no apparent adverse effects on quality.

  15. Prenatal Paracetamol Exposure and Wheezing in Childhood: Causation or Confounding?

    Directory of Open Access Journals (Sweden)

    Enrica Migliore

    Several studies have reported an increased risk of wheezing in the children of mothers who used paracetamol during pregnancy. We evaluated to what extent this association is explained by confounding. We investigated the association between maternal paracetamol use in the first and third trimesters of pregnancy and ever wheezing or recurrent wheezing/asthma in infants in the NINFEA cohort study. Risk ratios (RR) and 95% confidence intervals (CI) were estimated after adjustment for confounders, including maternal infections and antibiotic use during pregnancy. The prevalence of maternal paracetamol use was 30.6% during the first and 36.7% during the third trimester of pregnancy. The prevalence of ever wheezing and recurrent wheezing/asthma was 16.9% and 5.6%, respectively. After full adjustment, the RR for ever wheezing decreased from 1.25 [1.07-1.47] to 1.10 [0.94-1.30] in the first, and from 1.26 [1.08-1.47] to 1.10 [0.93-1.29] in the third trimester. A similar pattern was observed for recurrent wheezing/asthma. Duration of maternal paracetamol use was not associated with either outcome. Further analyses of paracetamol use for three non-infectious disorders (sciatica, migraine, and headache) revealed no increased risk of wheezing in children. The association between maternal paracetamol use during pregnancy and infant wheezing is mainly, if not completely, explained by confounding.

  16. Investigating the Idoho oil spillage into Lagos: Some confounding ...

    African Journals Online (AJOL)

    ... caused by these spillages must consider the socio-economic characteristics of the population, as this may reveal a true picture of the event and facilitate proper interpretation of the result. Keywords: Toxicity, Idoho Oil Spillage, Confounders, Socio-economic factors.

  17. Biasogram: visualization of confounding technical bias in gene expression data

    DEFF Research Database (Denmark)

    Krzystanek, Marcin; Szallasi, Zoltan Imre; Eklund, Aron Charles

    2013-01-01

    Gene expression profiles of clinical cohorts can be used to identify genes that are correlated with a clinical variable of interest such as patient outcome or response to a particular drug. However, expression measurements are susceptible to technical bias caused by variation in extraneous factors such as RNA quality and array hybridization conditions. If such technical bias is correlated with the clinical variable of interest, the likelihood of identifying false positive genes is increased. Here we describe a method to visualize an expression matrix as a projection of all genes onto a plane defined by a clinical variable and a technical nuisance variable. The resulting plot indicates the extent to which each gene is correlated with the clinical variable or the technical variable. We demonstrate this method by applying it to three clinical trial microarray data sets, one of which identified genes that may ...
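
    As a rough illustration of the projection described above, the sketch below (simulated data; all names are ours, not the authors' code) computes each gene's Pearson correlation with a clinical variable and with a technical nuisance variable and scatter-plots the two, which is the essence of a biasogram-style display.

        # Minimal biasogram-style sketch on simulated data (assumed shapes/names).
        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        n_samples, n_genes = 60, 2000
        expr = rng.normal(size=(n_samples, n_genes))   # expression matrix (samples x genes)
        clinical = rng.normal(size=n_samples)          # e.g., outcome score
        technical = rng.normal(size=n_samples)         # e.g., RNA quality

        def corr_with(matrix, vector):
            # Pearson correlation of each column of `matrix` with `vector`
            xm = matrix - matrix.mean(axis=0)
            vm = vector - vector.mean()
            return (xm * vm[:, None]).sum(axis=0) / (
                np.sqrt((xm ** 2).sum(axis=0)) * np.sqrt((vm ** 2).sum()))

        r_clin = corr_with(expr, clinical)
        r_tech = corr_with(expr, technical)

        plt.scatter(r_tech, r_clin, s=2)
        plt.xlabel("correlation with technical variable")
        plt.ylabel("correlation with clinical variable")
        plt.title("Biasogram-style projection")
        plt.show()

    Genes far along the technical axis are candidates for bias-driven false positives.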

  18. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into offline phase and online phase. Sparse low rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  19. Variation in faecal water content may confound estimates of gastro-intestinal parasite intensity in wild African herbivores.

    Science.gov (United States)

    Turner, W C; Cizauskas, C A; Getz, W M

    2010-03-01

    Estimates of parasite intensity within host populations are essential for many studies of host-parasite relationships. Here we evaluated the seasonal, age- and sex-related variability in faecal water content for two wild ungulate species, springbok (Antidorcas marsupialis) and plains zebra (Equus quagga). We then assessed whether or not faecal water content biased conclusions regarding differences in strongyle infection rates by season, age or sex. There was evidence of significant variation in faecal water content by season and age for both species, and by sex in springbok. Analyses of faecal egg counts demonstrated that sex was a near-significant factor in explaining variation in strongyle parasite infection rates in zebra (P = 0.055) and springbok (P = 0.052) using wet-weight faecal samples. However, once these intensity estimates were re-scaled by the percent of dry matter in the faeces, sex was no longer a significant factor (zebra, P = 0.268; springbok, P = 0.234). These results demonstrate that variation in faecal water content may confound analyses and could produce spurious conclusions, as was the case with host sex as a factor in the analysis. We thus recommend that researchers assess whether water variation could be a confounding factor when designing and performing research using faecal indices of parasite intensity.
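
    A hypothetical worked example of the re-scaling step described above (the function and numbers are ours): converting wet-weight egg counts to a dry-matter basis removes the water-content artefact.

        # Hypothetical worked example of re-scaling faecal egg counts (FEC)
        # by the fraction of dry matter, the correction described above.
        def fec_dry(fec_wet, dry_matter_fraction):
            """Convert eggs per gram of wet faeces to eggs per gram of dry matter."""
            return fec_wet / dry_matter_fraction

        # Two samples with identical wet-weight counts but different water content:
        print(fec_dry(500, 0.50))  # 1000.0 eggs/g dry matter (drier sample)
        print(fec_dry(500, 0.25))  # 2000.0 eggs/g dry matter (wetter sample)

    Identical wet-weight counts can thus hide a two-fold difference in dry-matter intensity.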

  20. Indoor biofuel air pollution and respiratory health: the role of confounding factors among women in highland Guatemala.

    Science.gov (United States)

    Bruce, N; Neufeld, L; Boy, E; West, C

    1998-06-01

    A number of studies have reported associations between indoor biofuel air pollution in developing countries and chronic obstructive lung disease (COLD) in adults and acute lower respiratory infection (ALRI) in children. Most of these studies have used indirect measures of exposure and generally dealt inadequately with confounding. More reliable, quantified information about this presumed effect is an important pre-requisite for prevention, not least because of the technical, economic and cultural barriers to achieving substantial exposure reductions in the world's poorest households, where ambient pollution levels are typically between ten and a hundred times higher than recommended standards. This study was carried out as part of a programme of research designed to inform the development of intervention studies capable of providing quantified estimates of health benefits. The association between respiratory symptoms and the use of open fires and chimney woodstoves ('planchas'), and the distribution of confounding factors, were examined in a cross-sectional study of 340 women aged 15-45 years, living in a poor rural area in the western highlands of Guatemala. The prevalence of reported cough and phlegm was significantly higher for three of six symptom measures among women using open fires. Although this finding is consistent with a number of other studies, none has systematically examined the extent to which strong associations with confounding variables in these settings limit the ability of observational studies to define the effect of indoor air pollution adequately. Very strong associations (P air pollution and health, although there is a reasonable case for believing that the observed association is causal. Intervention studies are required for stronger evidence of this association, and more importantly, to determine the size of health benefit achievable through feasible exposure reductions.

  1. Variable identification in group method of data handling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)

    2011-07-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network - ANN methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to find the best set of variables to train an ANN, resulting in the best estimate of the monitored variable. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN Reactor Data Acquisition System is composed of 58 variables (process and nuclear variables). As the GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System were not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, identifying the variables that compose the best Monitoring and Diagnosis Model. (author)
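
    The sketch below is a much-simplified GMDH layer growth with input-variable tracing, to illustrate the idea of following the surviving nodes back to the raw inputs; the node polynomial, selection rule, and all names are our assumptions, not the IPEN system's implementation.

        # Simplified GMDH sketch with input-variable tracing (illustrative only).
        import itertools
        import numpy as np

        def design(u, v):
            # Quadratic Ivakhnenko polynomial terms in two inputs
            return np.column_stack([np.ones_like(u), u, v, u * v, u ** 2, v ** 2])

        def gmdh_trace(X_tr, y_tr, X_va, y_va, n_layers=2, keep=4):
            cols, vcols = list(X_tr.T), list(X_va.T)
            origins = [{j} for j in range(X_tr.shape[1])]  # raw inputs feeding each node
            for _ in range(n_layers):
                cand = []
                for i, j in itertools.combinations(range(len(cols)), 2):
                    coef, *_ = np.linalg.lstsq(design(cols[i], cols[j]), y_tr, rcond=None)
                    pred_va = design(vcols[i], vcols[j]) @ coef
                    err = np.mean((pred_va - y_va) ** 2)   # external selection criterion
                    pred_tr = design(cols[i], cols[j]) @ coef
                    cand.append((err, pred_tr, pred_va, origins[i] | origins[j]))
                cand.sort(key=lambda c: c[0])              # keep the best nodes per layer
                cols = [c[1] for c in cand[:keep]]
                vcols = [c[2] for c in cand[:keep]]
                origins = [c[3] for c in cand[:keep]]
            return sorted(origins[0])   # raw variables behind the best surviving node

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))
        y = X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=300)  # depends on inputs 0 and 2
        print(gmdh_trace(X[:200], y[:200], X[200:], y[200:]))  # expect [0, 2]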

  2. Variable identification in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2011-01-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network - ANN methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to find the best set of variables to train an ANN, resulting in the best estimate of the monitored variable. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN Reactor Data Acquisition System is composed of 58 variables (process and nuclear variables). As the GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System were not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, identifying the variables that compose the best Monitoring and Diagnosis Model. (author)

  3. The variability of piezoelectric measurements. Material and measurement method contributions

    International Nuclear Information System (INIS)

    Stewart, M.; Cain, M.

    2002-01-01

    The variability of piezoelectric materials measurements has been investigated in order to separate the contribution of intrinsic instrumental variability from that of variability in the materials. The work has pinpointed several areas where weaknesses in the measurement methods result in high variability, and also shows that good correlation between piezoelectric parameters allows simpler measurement methods to be used. The Berlincourt method has been shown to be unreliable when testing thin discs; however, when testing thicker samples there is good correlation between this and other methods. The high-field and low-field permittivity correlate well, so tolerances on low-field measurements would predict high-field performance. In trying to identify microstructural origins of samples that behave differently from others within a batch, no direct evidence was found to suggest that outliers originate from differences in either microstructure or crystallography. Some of the samples chosen as maximum outliers showed pin-holes, probably from electrical breakdown during poling, even though these defects would ordinarily be detrimental to piezoelectric output. (author)

  4. New complex variable meshless method for advection—diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, and has good convergence, accuracy, and computational efficiency.

  5. Non-Chemical Distant Cellular Interactions as a potential confounder of Cell Biology Experiments

    Directory of Open Access Journals (Sweden)

    Ashkan eFarhadi

    2014-10-01

    Distant cells can communicate with each other through a variety of methods, two of which involve electrical and/or chemical mechanisms. Non-chemical, distant cellular interactions may be another method of communication that cells can use to modify the behavior of other cells that are mechanically separated. Moreover, non-chemical, distant cellular interactions may explain some confounding effects in cell biology experiments. In this article, we review studies of non-chemical, distant cellular interactions to try to shed light on the mechanisms at work in this highly unconventional field of cell biology. Despite the existence of several theories that try to explain the mechanism of non-chemical, distant cellular interactions, this phenomenon is still speculative. Among candidate mechanisms, electromagnetic waves appear to have the most experimental support. In this brief article, we try to answer a few key questions that may further clarify this mechanism.

  6. Confounding by dietary patterns of the inverse association between alcohol consumption and type 2 diabetes risk

    Science.gov (United States)

    Epidemiology of dietary components and disease risk limits interpretability due to potential residual confounding by correlated dietary components. Dietary pattern analyses by factor analysis or partial least squares may overcome this limitation. To examine confounding by dietary pattern as well as ...

  7. Confounding by dietary pattern of the inverse association between alcohol consumption and type 2 diabetes risk

    Science.gov (United States)

    Epidemiology of dietary components and disease risk limits interpretability due to potential residual confounding by correlated dietary components. Dietary pattern analyses by factor analysis or partial least squares may overcome the limitation. To examine confounding by dietary pattern as well as ...

  8. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Background: Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods: Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results: Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions: The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  9. A Comparison of Methods to Test Mediation and Other Intervening Variable Effects

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil

    2010-01-01

    A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
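
    A minimal sketch of the joint-significance test favoured by the study, on simulated data (variable names and effect sizes are illustrative): the intervening-variable effect is declared significant only if both the X-to-M path and the M-to-Y path (adjusted for X) are.

        # Joint-significance test of an intervening-variable effect (simulated data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        x = rng.normal(size=n)
        m = 0.4 * x + rng.normal(size=n)          # mediator
        y = 0.3 * m + rng.normal(size=n)          # outcome

        a_model = sm.OLS(m, sm.add_constant(x)).fit()
        b_model = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

        p_a = a_model.pvalues[1]                  # a-path: X -> M
        p_b = b_model.pvalues[2]                  # b-path: M -> Y adjusting for X
        mediated = (p_a < 0.05) and (p_b < 0.05)
        print(f"a-path p={p_a:.4g}, b-path p={p_b:.4g}, joint test significant: {mediated}")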

  10. Partial differential equations with variable exponents variational methods and qualitative analysis

    CERN Document Server

    Radulescu, Vicentiu D

    2015-01-01

    Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive methods ...

  11. Examining confounding by diet in the association between perfluoroalkyl acids and serum cholesterol in pregnancy

    Energy Technology Data Exchange (ETDEWEB)

    Skuladottir, Margret; Ramel, Alfons [Faculty of Food Science and Nutrition, University of Iceland, Reykjavik (Iceland); Unit for Nutrition Research, Landspitali National University Hospital, Reykjavik (Iceland); Rytter, Dorte [Department of Public Health, Section for Epidemiology, Aarhus University, Aarhus (Denmark); Haug, Line Småstuen; Sabaredzovic, Azemira [Division of Environmental Medicine, Norwegian Institute of Public Health, Oslo (Norway); Bech, Bodil Hammer [Department of Public Health, Section for Epidemiology, Aarhus University, Aarhus (Denmark); Henriksen, Tine Brink [Pediatric Department, Aarhus University Hospital, Aarhus (Denmark); Olsen, Sjurdur F. [Center for Fetal Programming, Department of Epidemiology Research, Statens Serum Institut, Copenhagen (Denmark); Department of Nutrition, Harvard School of Public Health, Boston, MA (United States); Halldorsson, Thorhallur I., E-mail: tih@hi.is [Faculty of Food Science and Nutrition, University of Iceland, Reykjavik (Iceland); Unit for Nutrition Research, Landspitali National University Hospital, Reykjavik (Iceland); Center for Fetal Programming, Department of Epidemiology Research, Statens Serum Institut, Copenhagen (Denmark)

    2015-11-15

    Background: Perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) have consistently been associated with higher cholesterol levels in cross-sectional studies. Concerns have, however, been raised about potential confounding by diet and clinical relevance. Objective: To examine the association between concentrations of PFOS and PFOA and total cholesterol in serum during pregnancy, taking into consideration confounding by diet. Methods: Participants were 854 Danish women who gave birth in 1988–89, provided a blood sample, and reported their diet in week 30 of gestation. Results: Mean serum PFOS, PFOA and total cholesterol concentrations were 22.3 ng/mL, 4.1 ng/mL and 7.3 mmol/L, respectively. Maternal diet was a significant predictor of serum PFOS and PFOA concentrations. In particular, intake of meat and meat products was positively associated while intake of vegetables was inversely associated (P for trend <0.01), with relative differences between the highest and lowest quartiles in PFOS and PFOA concentrations ranging between 6% and 25% of mean values. After adjustment for dietary factors, both PFOA and PFOS were positively and similarly associated with serum cholesterol (P for trend ≤0.01). For example, the mean increase in serum cholesterol was 0.39 mmol/L (95%CI: 0.09, 0.68) when comparing women in the highest to lowest quintile of PFOA concentrations. In comparison, the mean increase in serum cholesterol was 0.61 mmol/L (95%CI: 0.17, 1.05) when comparing women in the highest to lowest quintile of saturated fat intake. Conclusion: In this study, associations of PFOS and PFOA with serum cholesterol appeared unrelated to dietary intake and were similar in magnitude to the association between saturated fat intake and serum cholesterol. - Highlights: • PFOS and PFOA have consistently been linked with raised serum cholesterol • Clinical relevance remains uncertain and confounding by diet has been suggested • The aim of this study was to address these issues in ...

  12. Examining confounding by diet in the association between perfluoroalkyl acids and serum cholesterol in pregnancy

    International Nuclear Information System (INIS)

    Skuladottir, Margret; Ramel, Alfons; Rytter, Dorte; Haug, Line Småstuen; Sabaredzovic, Azemira; Bech, Bodil Hammer; Henriksen, Tine Brink; Olsen, Sjurdur F.; Halldorsson, Thorhallur I.

    2015-01-01

    Background: Perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) have consistently been associated with higher cholesterol levels in cross-sectional studies. Concerns have, however, been raised about potential confounding by diet and clinical relevance. Objective: To examine the association between concentrations of PFOS and PFOA and total cholesterol in serum during pregnancy, taking into consideration confounding by diet. Methods: Participants were 854 Danish women who gave birth in 1988–89, provided a blood sample, and reported their diet in week 30 of gestation. Results: Mean serum PFOS, PFOA and total cholesterol concentrations were 22.3 ng/mL, 4.1 ng/mL and 7.3 mmol/L, respectively. Maternal diet was a significant predictor of serum PFOS and PFOA concentrations. In particular, intake of meat and meat products was positively associated while intake of vegetables was inversely associated (P for trend <0.01), with relative differences between the highest and lowest quartiles in PFOS and PFOA concentrations ranging between 6% and 25% of mean values. After adjustment for dietary factors, both PFOA and PFOS were positively and similarly associated with serum cholesterol (P for trend ≤0.01). For example, the mean increase in serum cholesterol was 0.39 mmol/L (95%CI: 0.09, 0.68) when comparing women in the highest to lowest quintile of PFOA concentrations. In comparison, the mean increase in serum cholesterol was 0.61 mmol/L (95%CI: 0.17, 1.05) when comparing women in the highest to lowest quintile of saturated fat intake. Conclusion: In this study, associations of PFOS and PFOA with serum cholesterol appeared unrelated to dietary intake and were similar in magnitude to the association between saturated fat intake and serum cholesterol. - Highlights: • PFOS and PFOA have consistently been linked with raised serum cholesterol • Clinical relevance remains uncertain and confounding by diet has been suggested • The aim of this study was to address these issues in ...
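
    The sketch below illustrates, on simulated data with assumed variable names, the general shape of the analysis described in these two records: an ordinary least squares regression of serum cholesterol on exposure quintiles with adjustment for dietary covariates.

        # Illustrative quintile regression with dietary adjustment (simulated data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 854
        df = pd.DataFrame({
            "pfoa": rng.lognormal(mean=1.4, sigma=0.4, size=n),  # exposure, ng/mL
            "meat": rng.normal(size=n),                          # dietary covariates
            "vegetables": rng.normal(size=n),
        })
        df["cholesterol"] = (7.3 + 0.05 * df["pfoa"] + 0.1 * df["meat"]
                             + rng.normal(scale=0.8, size=n))
        df["pfoa_q"] = pd.qcut(df["pfoa"], 5, labels=False)      # quintiles 0..4

        fit = smf.ols("cholesterol ~ C(pfoa_q) + meat + vegetables", data=df).fit()
        # Adjusted difference, top vs bottom exposure quintile:
        print(fit.params["C(pfoa_q)[T.4]"],
              fit.conf_int().loc["C(pfoa_q)[T.4]"].values)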

  13. College quality and hourly wages: evidence from the self-revelation model, sibling models and instrumental variables.

    Science.gov (United States)

    Borgen, Nicolai T

    2014-11-01

    This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find increasing returns to college quality over the course of students' work careers, with positive returns only later in students' work careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.
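
    A minimal sketch of the sibling (family) fixed-effects idea used above, on simulated data: demeaning within families removes confounders shared by brothers, moving the naive estimate toward the true effect. All names and numbers are illustrative.

        # Sibling fixed effects via within-family demeaning (simulated data).
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        n_fam, per_fam = 2000, 2
        fam = np.repeat(np.arange(n_fam), per_fam)
        fam_effect = np.repeat(rng.normal(size=n_fam), per_fam)  # shared family confounder
        quality = 0.5 * fam_effect + rng.normal(size=n_fam * per_fam)  # college quality
        wage = 0.2 * quality + fam_effect + rng.normal(size=n_fam * per_fam)

        df = pd.DataFrame({"fam": fam, "quality": quality, "wage": wage})
        # Demean within family, then regress (equivalent to family fixed effects):
        dq = df["quality"] - df.groupby("fam")["quality"].transform("mean")
        dw = df["wage"] - df.groupby("fam")["wage"].transform("mean")
        beta_fe = (dq * dw).sum() / (dq ** 2).sum()
        beta_ols = np.polyfit(df["quality"], df["wage"], 1)[0]
        print(f"naive OLS: {beta_ols:.3f}, sibling FE: {beta_fe:.3f} (true effect 0.2)")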

  14. Extensions of von Neumann's method for generating random variables

    International Nuclear Information System (INIS)

    Monahan, J.F.

    1979-01-01

    Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions.
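
    For reference, a sketch of von Neumann's original exponential generator, the method this record generalizes: a candidate uniform is accepted when the non-increasing run of uniforms it starts has odd length, and each rejection adds one to the integer part.

        # Von Neumann's classic method for Exp(1) random variables.
        import random

        def vn_exponential(rng=random):
            k = 0                         # integer part accumulated on rejection
            while True:
                u = rng.random()          # candidate fractional part
                w, n = u, 1
                while True:               # length of the non-increasing run started by u
                    w_next = rng.random()
                    if w_next > w:
                        break
                    w, n = w_next, n + 1
                if n % 2 == 1:            # odd run length: accept (prob e^-u given u)
                    return k + u
                k += 1                    # even run length: reject, shift by one

        sample = [vn_exponential() for _ in range(10000)]
        print(sum(sample) / len(sample))  # should be close to 1.0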

  15. Evaluation in medical education: A topical review of target parameters, data collection tools and confounding factors

    Science.gov (United States)

    Schiekirka, Sarah; Feufel, Markus A.; Herrmann-Lingen, Christoph; Raupach, Tobias

    2015-01-01

    Background and objective: Evaluation is an integral part of education in German medical schools. According to the quality standards set by the German Society for Evaluation, evaluation tools must provide an accurate and fair appraisal of teaching quality. Thus, data collection tools must be highly reliable and valid. This review summarises the current literature on evaluation of medical education with regard to the possible dimensions of teaching quality, the psychometric properties of survey instruments and potential confounding factors. Methods: We searched Pubmed, PsycINFO and PSYNDEX for literature on evaluation in medical education and included studies published up until June 30, 2011 as well as articles identified in the “grey literature”. Results are presented as a narrative review. Results: We identified four dimensions of teaching quality: structure, process, teacher characteristics, and outcome. Student ratings are predominantly used to address the first three dimensions, and a number of reliable tools are available for this purpose. However, potential confounders of student ratings pose a threat to the validity of these instruments. Outcome is usually operationalised in terms of student performance on examinations, but methodological problems may limit the usability of these data for evaluation purposes. In addition, not all examinations at German medical schools meet current quality standards. Conclusion: The choice of tools for evaluating medical education should be guided by the dimension that is targeted by the evaluation. Likewise, evaluation results can only be interpreted within the context of the construct addressed by the data collection tool that was used as well as its specific confounding factors. PMID:26421003

  16. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  17. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  18. The Association Between Headaches and Temporomandibular Disorders is Confounded by Bruxism and Somatic Symptoms.

    Science.gov (United States)

    van der Meer, Hedwig A; Speksnijder, Caroline M; Engelbert, Raoul H H; Lobbezoo, Frank; Nijhuis-van der Sanden, Maria W G; Visscher, Corine M

    2017-09-01

    The objective of this observational study was to establish the possible presence of confounders of the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. Several subtypes of headache were diagnosed: self-reported headache, (probable) migraine, (probable) tension-type headache, and secondary headache attributed to TMD. The presence of TMD was subdivided into 2 subtypes: painful TMD and function-related TMD. The associations between the subtypes of TMD and headaches were evaluated by single regression models. To study the influence of possible confounding factors on these associations, the regression models were extended with age, sex, bruxism, stress, depression, and somatic symptoms. Of the included patients (n=203), 67.5% experienced headaches. In the subsample of patients with painful TMD (n=58), the prevalence of self-reported headaches increased to 82.8%. The associations found between self-reported headache and (1) painful TMD and (2) function-related TMD were confounded by the presence of somatic symptoms. For probable migraine, both somatic symptoms and bruxism confounded the initial association found with painful TMD. The findings of this study imply that there is a central working mechanism overlapping TMD and headache. Health care providers should not regard these disorders separately, but rather look at the bigger picture to appreciate the complex nature of the diagnostic and therapeutic process.
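
    A generic sketch (simulated data, assumed names) of the kind of confounder check used here: compare the exposure coefficient before and after adding the candidate confounder to the model.

        # Change-in-estimate confounder check with logistic regression (simulated).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 203
        somatic = rng.normal(size=n)                     # candidate confounder
        tmd = (0.8 * somatic + rng.normal(size=n)) > 0   # painful TMD (binary)
        logit_p = -0.5 + 1.2 * somatic + 0.2 * tmd
        headache = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

        X0 = sm.add_constant(tmd.astype(float))
        X1 = sm.add_constant(np.column_stack([tmd.astype(float), somatic]))
        b_crude = sm.Logit(headache, X0).fit(disp=0).params[1]
        b_adj = sm.Logit(headache, X1).fit(disp=0).params[1]
        print(f"crude OR={np.exp(b_crude):.2f}, adjusted OR={np.exp(b_adj):.2f}")
        # A marked shift toward the null after adjustment signals confounding.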

  19. Development of a localized probabilistic sensitivity method to determine random variable regional importance

    International Nuclear Information System (INIS)

    Millwater, Harry; Singh, Gulshan; Cortina, Miguel

    2012-01-01

    There are many methods to identify the important variable out of a set of random variables, i.e., “inter-variable” importance; however, to date there are no comparable methods to identify the “region” of importance within a random variable, i.e., “intra-variable” importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. - Highlights: ► Probabilistic sensitivity methodology. ► Determines the “region” of importance within random variables such as left tail, near tail, center, right tail, etc. ► Uses the Score Function approach to reuse the samples, hence, negligible cost. ► No restrictions on the random variable types or limit states.

  20. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    Science.gov (United States)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data for seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values were used for statistical analysis. The meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values, with the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002, the fitted models indicated that ozone was weakly but predominantly influenced by one of the meteorological variables; for the year 2000, however, the model indicated no such predominant influence, pointing to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
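
    The sketch below illustrates the variable-selection idea on simulated data (it assumes a scikit-learn version that supports rotation='varimax'; all names, thresholds, and data are ours): keep the predictors with the highest absolute loadings on the rotated components, then fit the regression.

        # Varimax-loading-based predictor selection, then linear regression (simulated).
        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(5)
        n = 365
        base = rng.normal(size=n)                      # shared "temperature" signal
        met = np.column_stack([base + 0.5 * rng.normal(size=n) for _ in range(5)]
                              + [rng.normal(size=n), rng.normal(size=n)])  # 7 variables
        ozone = met[:, 0] - 0.5 * met[:, 5] + rng.normal(scale=0.5, size=n)

        fa = FactorAnalysis(n_components=3, rotation="varimax").fit(met)
        loadings = fa.components_                      # shape (n_components, n_features)
        # keep the two highest-|loading| variables from each rotated component
        selected = np.unique(np.argsort(-np.abs(loadings), axis=1)[:, :2].ravel())
        print("selected predictor columns:", selected)

        reg = LinearRegression().fit(met[:, selected], ozone)
        print("R^2 with selected predictors:", round(reg.score(met[:, selected], ozone), 3))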

  1. Variable discrete ordinates method for radiation transfer in plane-parallel semi-transparent media with variable refractive index

    Science.gov (United States)

    Sarvari, S. M. Hosseini

    2017-09-01

    The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, so that the direction cosines of radiative rays remain invariant within each control volume; the directions of the discrete ordinates are then changed locally on passing from one control volume to the next, according to Snell's law of refraction. The results are compared with those of previous studies in this field. Despite its simplicity, the results show that the variable discrete ordinates method has good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.
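
    A minimal sketch of the local direction update implied by Snell's law for a slab geometry; the function name and sign convention are our assumptions.

        # Snell's-law update of a direction cosine mu across an index step n1 -> n2.
        import math

        def refract_mu(mu, n1, n2):
            """Return the refracted direction cosine, or None for total internal reflection."""
            sin2 = (n1 / n2) ** 2 * (1.0 - mu * mu)  # n1*sin(theta1) = n2*sin(theta2)
            if sin2 > 1.0:
                return None                           # totally internally reflected
            return math.copysign(math.sqrt(1.0 - sin2), mu)

        print(refract_mu(0.5, 1.0, 1.5))  # entering a denser medium: bends toward the normal
        print(refract_mu(0.5, 1.5, 1.0))  # leaving: here totally internally reflected (None)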

  2. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    International Nuclear Information System (INIS)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren

    2010-01-01

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn compared with assessment by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.

  3. [The intelligence quotient and malnutrition. Iron deficiency and the lead concentration as confounding variables].

    Science.gov (United States)

    Vega-Franco, L; Mejía, A M; Robles, B; Moreno, L; Pérez, Y

    1991-11-01

    This study examined the roles that iron deficiency and blood lead concentration play, as confounding variables, in the relation between malnutrition and children's intelligence. A sample of 169 schoolchildren was classified according to nutritional state and to serum iron and blood lead concentrations. In addition, their intelligence was evaluated. The results confirmed that children with lower weights and heights scored lower on intelligence; in fact, iron deficiency cancelled out the difference in favor of those who were taller and heavier. Lead did not contribute as a confounding variable, but more than half of the children showed possibly toxic levels of this metal.

  4. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
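
    A toy worked example of the two averaging methods compared above; the diurnal curve is invented to be asymmetric, which is what drives the difference between the two estimates.

        # (Tmax+Tmin)/2 versus the mean of 24 hourly readings on an asymmetric day.
        import numpy as np

        hours = np.arange(24)
        # Hypothetical asymmetric diurnal cycle: peak at 15:00, slow exponential cooling
        temps = 15 + 10 * np.exp(-((hours - 15) % 24) / 8.0)
        twice_daily = (temps.max() + temps.min()) / 2
        hourly = temps.mean()
        print(f"(Tmax+Tmin)/2 = {twice_daily:.2f} C, hourly mean = {hourly:.2f} C, "
              f"difference = {twice_daily - hourly:.2f} C")

    The more skewed the daily temperature curve, the further apart the two estimates fall.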

  5. Resistance Torque Based Variable Duty-Cycle Control Method for a Stage II Compressor

    Science.gov (United States)

    Zhong, Meipeng; Zheng, Shuiying

    2017-07-01

    The resistance torque of a piston stage II compressor fluctuates strongly within a rotational period, which can degrade the working performance of the compressor. To restrain these fluctuations, a variable duty-cycle control method based on the resistance torque is proposed. A dynamic model of a stage II compressor is set up, and the resistance torque and other characteristic parameters are acquired as the control targets. A variable duty-cycle control method is then applied to track the resistance torque, thereby improving the working performance of the compressor. Simulated results show that the compressor, driven by the proposed method, requires lower current, while the rotating speed and the output torque remain comparable to those of traditional variable-frequency control methods. A variable duty-cycle control system was developed, and the experimental results prove that the proposed method can reduce the specific power, input power, and working noise of the compressor to 0.97 kW·m⁻³·min⁻¹, 0.09 kW and 3.10 dB, respectively, under the same conditions of a discharge pressure of 2.00 MPa and a discharge volume of 0.095 m³/min. The proposed variable duty-cycle control method tracks the resistance torque dynamically and improves the working performance of a stage II compressor; it can also be applied to other compressors and can provide theoretical guidance for compressor design.

  6. Improvement of the variable storage coefficient method with water surface gradient as a variable

    Science.gov (United States)

    The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...

  7. Reducing confounding and suppression effects in TCGA data: an integrated analysis of chemotherapy response in ovarian cancer

    Directory of Open Access Journals (Sweden)

    Hsu Fang-Han

    2012-10-01

    Background: Despite initial response to adjuvant chemotherapy, ovarian cancer patients treated with the combination of paclitaxel and carboplatin frequently suffer recurrence after a few cycles of treatment, and the underlying mechanisms causing the chemoresistance remain unclear. Recently, The Cancer Genome Atlas (TCGA) research network concluded an ovarian cancer study and released the dataset to the public. The TCGA dataset possesses a large sample size, comprehensive molecular profiles, and clinical outcome information; however, because of the unknown molecular subtypes in ovarian cancer and the great diversity of adjuvant treatments TCGA patients went through, studying chemotherapeutic response using the TCGA data is difficult. Additionally, factors such as sample batches, patient ages, and tumor stages further confound or suppress the identification of relevant genes, and thus the biological functions and disease mechanisms. Results: To address these issues, we propose an analysis procedure designed to reduce the suppression effect by focusing on a specific chemotherapeutic treatment, and to remove confounding effects such as batch effect, patient age, and tumor stage. The proposed procedure starts with a batch effect adjustment, followed by a rigorous sample selection process. Then, the gene expression, copy number, and methylation profiles from the TCGA ovarian cancer dataset are analyzed using a semi-supervised clustering method combined with a novel scoring function. As a result, two molecular classifications, one with poor copy number profiles and one with poor methylation profiles, enriched with unfavorable scores are identified. Compared with the samples enriched with favorable scores, these two classifications exhibit poor progression-free survival (PFS) and might be associated with poor chemotherapy response, specifically to the combination of paclitaxel and carboplatin. Significant genes and biological processes are ...

  8. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variabilities in background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need in a method which is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest to use the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of asynchronous motor imagery paradigm, we applied this method to the real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background states recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by posterior probabilities of background states at the prediction stage.

  9. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    Energy Technology Data Exchange (ETDEWEB)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld (Orthopedic Research Unit, Univ. Hospital of Aarhus, Aarhus (Denmark)), e-mail: a_troelsen@hotmail.com; Roemer, Lone (Dept. of Radiology, Univ. Hospital of Aarhus, Aarhus (Denmark)); Kring, Soeren (Dept. of Orthopedic Surgery, Aabenraa Hospital, Aabenraa (Denmark))

    2010-03-15

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn compared with assessment by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.

  10. Accounting for Time-Varying Confounding in the Relationship Between Obesity and Coronary Heart Disease: Analysis With G-Estimation: The ARIC Study.

    Science.gov (United States)

    Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S

    2018-06-01

    In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.
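
    A highly simplified g-estimation sketch for a single point exposure (not the ARIC analysis; the model and all names are ours): under a rank-preserving model, search for the psi at which the "blipped-down" outcome no longer predicts exposure given the confounder.

        # Toy g-estimation by grid search (point exposure, rank-preserving model).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 2000
        L = rng.normal(size=n)                                     # confounder
        A = (rng.random(n) < 1 / (1 + np.exp(-L))).astype(float)   # exposure depends on L
        T0 = rng.exponential(scale=np.exp(0.5 * L))                # counterfactual time
        psi_true = 0.4
        T = T0 * np.exp(-psi_true * A)                             # observed time

        def alpha(psi):
            U = T * np.exp(psi * A)          # candidate counterfactual outcome
            X = sm.add_constant(np.column_stack([L, U]))
            return sm.Logit(A, X).fit(disp=0).params[2]  # coefficient of U

        # At the true psi, U equals T0 and should not predict A given L:
        grid = np.linspace(-0.5, 1.0, 61)
        est = grid[np.argmin([abs(alpha(p)) for p in grid])]
        print(f"g-estimate of psi: {est:.3f} (true {psi_true})")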

  11. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
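
    One common falsification check, sketched on simulated data of our own construction (not the paper's STATA code): in a subsample that the treatment contrast cannot affect, the instrument should show no reduced-form association with the outcome.

        # Falsification check for a prescribing-pattern instrument (simulated data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 10000
        instrument = rng.normal(size=n)                 # e.g., regional prescribing rate
        treated_sample = rng.random(n) < 0.7            # patients facing the drug choice
        treatment = (instrument + rng.normal(size=n) > 0) & treated_sample
        outcome = 0.3 * treatment + rng.normal(size=n)  # instrument acts only via treatment

        # Among patients outside the treatment decision, the instrument should
        # show no reduced-form association with the outcome:
        mask = ~treated_sample
        fit = sm.OLS(outcome[mask], sm.add_constant(instrument[mask])).fit()
        print(f"falsification coefficient: {fit.params[1]:.4f} (p={fit.pvalues[1]:.3f})")
        # A near-zero, non-significant coefficient is consistent with validity.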

  12. Climatological variability in regional air pollution

    International Nuclear Information System (INIS)

    Shannon, J.D.; Trexler, E.C. Jr.

    1995-01-01

    Although some air pollution modeling studies examine events that have already occurred (e.g., the Chernobyl plume) with relevant meteorological conditions largely known, most pollution modeling studies address expected or potential scenarios for the future. Future meteorological conditions, the major pollutant forcing function other than emissions, are inherently uncertain although much relevant information is contained in past observational data. For convenience in our discussions of regional pollutant variability unrelated to emission changes, we define meteorological variability as short-term (within-season) pollutant variability and climatological variability as year-to-year changes in seasonal averages and accumulations of pollutant variables. In observations and in some of our simulations the effects are confounded because for seasons of two different years both the mean and the within-season character of a pollutant variable may change. Effects of climatological and meteorological variability on means and distributions of air pollution parameters, particularly those related to regional visibility, are illustrated. Over periods of up to a decade climatological variability may mask or overstate improvements resulting from emission controls. The importance of including climatological uncertainties in assessing potential policies, particularly when based partly on calculated source-receptor relationships, is highlighted

  13. Marine oils: Complex, confusing, confounded?

    Directory of Open Access Journals (Sweden)

    Benjamin B. Albert

    2016-09-01

    Marine oils gained prominence following the report that Greenland Inuits who consumed a high-fat diet rich in long-chain n-3 polyunsaturated fatty acids (PUFAs) also had low rates of cardiovascular disease. Marine n-3 PUFAs have since become a billion dollar industry, which will continue to grow based on current trends. However, recent systematic reviews question the health benefits of marine oil supplements, particularly in the prevention of cardiovascular disease. Marine oils constitute an extremely complex dietary intervention for a number of reasons: (i) the many chemical compounds they contain; (ii) the many biological processes affected by n-3 PUFAs; (iii) their tendency to deteriorate and form potentially toxic primary and secondary oxidation products; and (iv) inaccuracy in the labelling of consumer products. These complexities may confound the clinical literature, limiting the ability to make substantive conclusions for some key health outcomes. Thus, there is a pressing need for clinical trials using marine oils whose composition has been independently verified and demonstrated to be minimally oxidised. Without such data, it is premature to conclude that n-3 PUFA rich supplements are ineffective.

  14. Is the association between general cognitive ability and violent crime caused by family-level confounders?

    Science.gov (United States)

    Frisell, Thomas; Pawitan, Yudi; Långström, Niklas

    2012-01-01

    Research has consistently found lower cognitive ability to be related to increased risk for violent and other antisocial behaviour. Since this association has remained when adjusting for childhood socioeconomic position, ethnicity, and parental characteristics, it is often assumed to be causal, potentially mediated through school adjustment problems and conduct disorder. Socioeconomic differences are notoriously difficult to quantify, however, and it is possible that the association between intelligence and delinquency suffers substantial residual confounding. We linked longitudinal Swedish total population registers to study the association of general cognitive ability (intelligence) at age 18 (the Conscript Register, 1980-1993) with the incidence proportion of violent criminal convictions (the Crime Register, 1973-2009), among all men born in Sweden 1961-1975 (N = 700,514). Using probit regression, we controlled for measured childhood socioeconomic variables, and further employed sibling comparisons (family pedigree data from the Multi-Generation Register) to adjust for shared familial characteristics. Cognitive ability in early adulthood was inversely associated with having been convicted of a violent crime (β = -0.19, 95% CI: -0.19; -0.18), and the association remained when adjusting for childhood socioeconomic factors (β = -0.18, 95% CI: -0.18; -0.17). The association was somewhat lower within half-brothers raised apart (β = -0.16, 95% CI: -0.18; -0.14), within half-brothers raised together (β = -0.13, 95% CI: -0.15; -0.11), and lower still in full-brother pairs (β = -0.10, 95% CI: -0.11; -0.09). The attenuation among half-brothers raised together and full brothers was too strong to be attributed solely to attenuation from measurement error. Our results suggest that the association between general cognitive ability and violent criminality is confounded partly by factors shared by brothers. However, most of the association remains even

  15. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available The influence of initial state variables on flood forecasting accuracy by using conceptual hydrological models is analyzed in this paper and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. The historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning the ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve the flood forecasting accuracy in most cases.
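
    A minimal sketch of the ISVC loop follows: a particle swarm searches for the initial storage of a toy reservoir model that minimizes the initial-period flow residual. The model, bounds, and PSO constants are illustrative assumptions, not the authors' configuration.

```python
# Illustrative ISVC sketch: PSO corrects the initial storage S0 of a toy
# linear-reservoir model by minimizing the residual between observed and
# simulated flows over the initial period of a flood event. Everything
# here (model, data, constants) is invented for illustration.
import numpy as np

def simulate(s0, rain, k=0.3):
    """Toy linear reservoir: storage drains at rate k per time step."""
    s, flows = s0, []
    for r in rain:
        s += r
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 5.0, size=24)
q_obs = simulate(80.0, rain) + rng.normal(0, 1.0, size=24)  # truth: S0 = 80

def objective(s0):
    return np.sum((simulate(s0, rain) - q_obs) ** 2)

# Bare-bones PSO over the single state variable S0.
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(0.0, 200.0, n_particles)            # particle positions
v = np.zeros(n_particles)                           # particle velocities
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 200.0)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
print(f"corrected initial storage: {gbest:.1f}")    # should approach 80
```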

  16. Phenotypic variation as an indicator of pesticide stress in gudgeon: Accounting for confounding factors in the wild.

    Science.gov (United States)

    Shinn, Cândida; Blanchet, Simon; Loot, Géraldine; Lek, Sovan; Grenouillet, Gaël

    2015-12-15

    The response of organisms to environmental stress is currently used in the assessment of ecosystem health. Morphological changes integrate the multiple effects of one or several stress factors upon the development of the exposed organisms. In a natural environment, many factors determine the patterns of morphological differentiation between individuals. However, few studies have sought to distinguish and measure the independent effect of these factors (genetic diversity and structure, spatial structuring of populations, physical-chemical conditions, etc.). Here we investigated the relationship between pesticide levels measured at 11 sites sampled in rivers of the Garonne river basin (SW France) and morphological changes of a freshwater fish species, the gudgeon (Gobio gobio). Each individual sampled was genotyped using 8 microsatellite markers and their phenotype characterized via 17 morphological traits. Our analysis detected a link between population genetic structure (revealed by a Bayesian method) and morphometry (linear discriminant analysis) of the studied populations. We then developed an original method based on general linear models using distance matrices, an extension of the partial Mantel test beyond 3 matrices. This method was used to test the relationship between contamination (toxicity index) and morphometry (PST of morphometric traits), taking into account (1) genetic differentiation between populations (FST), (2) geographical distances between sites, (3) site catchment area, and (4) various physical-chemical parameters for each sampling site. Upon removal of confounding effects, 3 of the 17 morphological traits studied were significantly correlated with pesticide toxicity, suggesting a response of these traits to the anthropogenic stress. These results underline the importance of taking into account the different sources of phenotypic variability between organisms when identifying the stress factors involved. The separation and quantification of
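
    The record is truncated, but the matrix-regression technique it names (a general linear model on distance matrices extending the partial Mantel test) can be sketched. Everything below, including the permutation scheme and the synthetic data, is an illustration rather than the authors' implementation.

```python
# Sketch of regression on distance matrices (MRM-style): regress a
# trait-distance matrix on several explanatory distance matrices and
# assess significance by jointly permuting rows/columns of the response.
import numpy as np

rng = np.random.default_rng(7)
n = 30

def dist(v):                        # pairwise absolute differences
    return np.abs(v[:, None] - v[None, :])

toxicity, fst, geo = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
morpho = 0.8 * toxicity + rng.normal(0, 0.5, n)     # synthetic morphometry

iu = np.triu_indices(n, k=1)        # each pair of individuals once
Y = dist(morpho)[iu]
X = np.column_stack([np.ones_like(Y), dist(toxicity)[iu],
                     dist(fst)[iu], dist(geo)[iu]])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]

def coef(perm):                     # coefficient under a permuted response
    Yp = dist(morpho)[perm][:, perm][iu]
    return np.linalg.lstsq(X, Yp, rcond=None)[0][1]

obs = beta[1]
perms = [coef(rng.permutation(n)) for _ in range(999)]
p = (1 + sum(abs(b) >= abs(obs) for b in perms)) / 1000
print(f"toxicity-distance coefficient {obs:.3f}, permutation p = {p:.3f}")
```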

  17. Quantifying the potential role of unmeasured confounders : the example of influenza vaccination

    NARCIS (Netherlands)

    Groenwold, R H H; Hoes, A W; Nichol, K L; Hak, E

    2008-01-01

    BACKGROUND: The validity of non-randomized studies using healthcare databases is often challenged because they lack information on potentially important confounders, such as functional health status and socioeconomic status. In a study quantifying the effects of influenza vaccination among
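
    The record is cut off, but the general technique, quantifying how strong an unmeasured confounder would have to be to explain an observed effect, can be illustrated with the classical external-adjustment formula for a single binary confounder. The formula (a Bross-type bias factor) is standard background, not taken from this study, and all numbers are invented.

```python
# External adjustment for one binary unmeasured confounder:
#   bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
#   rr_adjusted = rr_observed / bias
def externally_adjusted_rr(rr_observed, p1, p0, rr_cd):
    """p1/p0: confounder prevalence among exposed/unexposed;
    rr_cd: confounder-outcome relative risk (from external data)."""
    bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
    return rr_observed / bias

# Invented example: apparent vaccine effectiveness RR = 0.70; suppose good
# functional health status is more common among the vaccinated (0.60 vs
# 0.40) and itself halves the risk of the outcome.
print(round(externally_adjusted_rr(0.70, 0.60, 0.40, 0.5), 3))  # -> 0.8
```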

  18. Uncovering noisy social signals : Using optimization methods from experimental physics to study social phenomena

    NARCIS (Netherlands)

    Kaptein, Maurits; Van Emden, Robin; Iannuzzi, Davide

    2017-01-01

    Due to the ubiquitous presence of treatment heterogeneity, measurement error, and contextual confounders, numerous social phenomena are hard to study. Precise control of treatment variables and possible confounders is often key to the success of studies in the social sciences, yet often proves out

  20. A stochastic Galerkin method for the Euler equations with Roe variable transformation

    KAUST Repository

    Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan

    2014-01-01

    The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.
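
    For background, the standard gas-dynamics definition of the Roe parameter vector (a textbook fact, not reproduced from the paper) shows where the quadratic dependence comes from:

```latex
% Roe's parameter vector for the 1-D Euler equations; both the
% conservative variables U and the flux F(U) are quadratic in the
% components of z, which is the structure the stochastic Galerkin
% formulation exploits.
\[
  z = \sqrt{\rho}\,\begin{pmatrix} 1 \\ u \\ H \end{pmatrix}
    = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix},
  \qquad
  U = \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}
    = \begin{pmatrix} z_1^2 \\ z_1 z_2 \\
        \tfrac{1}{\gamma}\, z_1 z_3
        + \tfrac{\gamma - 1}{2\gamma}\, z_2^2 \end{pmatrix},
\]
where $H = (E + p)/\rho$ is the total enthalpy and $\gamma$ the ratio
of specific heats.
```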

  1. The functional variable method for solving the fractional Korteweg ...

    Indian Academy of Sciences (India)

    The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, notably control the- .... In §3, the functional variable method is applied for finding exact.

  2. The relationship between urinary tract infection during pregnancy and preeclampsia: causal, confounded or spurious?

    Science.gov (United States)

    Karmon, Anatte; Sheiner, Eyal

    2008-06-01

    Preeclampsia is a major cause of maternal morbidity, although its precise etiology remains elusive. A number of studies suggest that urinary tract infection (UTI) during the course of gestation is associated with elevated risk for preeclampsia, while others have failed to prove such an association. In our medical center, pregnant women who were exposed to at least one UTI episode during pregnancy were 1.3 times more likely to have mild preeclampsia and 1.8 times more likely to have severe preeclampsia as compared to unexposed women. Our results are based on univariate analyses and are not adjusted for potential confounders. This editorial aims to discuss the relationship between urinary tract infection and preeclampsia, as well as examine the current problems regarding the interpretation of this association. Although the relationship between UTI and preeclampsia has been demonstrated in studies with various designs, carried out in a variety of settings, the nature of this association is unclear. By taking into account timeline, dose-response effects, treatment influences, and potential confounders, as well as by neutralizing potential biases, future studies may be able to clarify the relationship between UTI and preeclampsia by determining if it is causal, confounded, or spurious.

  3. A sizing method for stand-alone PV installations with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica Para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada, Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method, given that the characterization will solely depend on the distribution function of the daily global irradiation on the tilted surface, H_gβi. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter (σ_LLP) as well as two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of system failures (σ_f). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)
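
    For intuition about the LLP parameter the method is built around, here is a toy Monte Carlo version; the paper works analytically with daily-irradiation distribution functions, and every number below is invented.

```python
# Toy Monte Carlo illustration of loss of load probability (LLP) for a
# stand-alone PV system: draw daily irradiation, run a simple battery
# balance, count days on which demand is unmet. All parameters invented.
import numpy as np

rng = np.random.default_rng(2)
days = 365 * 10
irradiation = rng.gamma(shape=4.0, scale=1.25, size=days)  # kWh/m2/day (toy)

area_m2, efficiency = 10.0, 0.15          # PV array
battery_kwh, soc = 12.0, 6.0              # capacity and initial charge
demand_kwh = 5.0                          # constant daily load
failures = 0
for h in irradiation:
    soc = min(battery_kwh, soc + area_m2 * efficiency * h)  # charge
    if soc >= demand_kwh:
        soc -= demand_kwh                 # demand served
    else:
        soc, failures = 0.0, failures + 1 # loss-of-load day
llp = failures / days
print(f"LLP = {llp:.3f} ({failures} failure days in {days})")
```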

  4. A fast collocation method for a variable-coefficient nonlocal diffusion model

    Science.gov (United States)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³) required by a commonly used direct solver to O(N log N) per iteration, and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
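
    The O(N log N) cost per iteration comes from exploiting structure in the dense stiffness matrix. As a generic illustration of that idea (not the paper's scheme, which additionally handles the variable coefficients), a Toeplitz matrix-vector product via circulant embedding and the FFT:

```python
# Generic O(N log N) Toeplitz matrix-vector product via circulant
# embedding and the FFT. The kernel below is illustrative, not the
# paper's nonlocal diffusion kernel.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix T (given by its first column and
    first row) by x in O(N log N) via a 2N circulant embedding."""
    n = len(x)
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

n = 1024
k = 1.0 / (1.0 + np.arange(n)) ** 2        # toy kernel values
x = np.random.default_rng(3).normal(size=n)

fast = toeplitz_matvec(k, k, x)            # symmetric Toeplitz here
from scipy.linalg import toeplitz
dense = toeplitz(k, k) @ x                 # O(N^2) dense reference
print(np.allclose(fast, dense))            # True
```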

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study applies five imputation methods to fill in missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listing model with full variables. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here to improve forecasting capability.
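
    A compact sketch of the same pipeline shape follows: impute, select variables, fit a Random Forest. The data are synthetic, and where the study identified key variables with factor analysis, this sketch ranks them by forest importance instead.

```python
# Illustrative impute -> select -> forecast pipeline on synthetic data,
# in the spirit of the paper (not its exact method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 6))                 # e.g. rain, temp, inflow, ...
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.5, n)  # water level
X[rng.random((n, 6)) < 0.05] = np.nan       # inject missingness

X_imp = SimpleImputer(strategy="mean").fit_transform(X)     # imputation
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print("variables ranked by importance:", ranked)

selected = ranked[:2]                       # keep the key variables
rf2 = RandomForestRegressor(n_estimators=200, random_state=0)
rf2.fit(X_imp[:, selected], y)
print("R^2 with selected variables:",
      round(rf2.score(X_imp[:, selected], y), 3))
```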

  6. A generalized fractional sub-equation method for fractional differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Tang, Bo; He, Yinnian; Wei, Leilei; Zhang, Xindong

    2012-01-01

    In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, the method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained, including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. Highlights: the study of fractional differential equations with variable coefficients plays a role in the applied physical sciences; the proposed algorithm is shown to be effective for solving such equations; and the obtained solutions may give insight into many considerable physical processes.

  7. Insulin-Like Growth Factor 1 (IGF-1) in Parkinson's Disease: Potential as Trait-, Progression- and Prediction Marker and Confounding Factors

    Science.gov (United States)

    Binder, Gerhard; Weber, Karin; Apel, Anja; Roeben, Benjamin; Deuschle, Christian; Maechtel, Mirjam; Heger, Tanja; Nussbaum, Susanne; Gasser, Thomas; Maetzler, Walter; Berg, Daniela

    2016-01-01

    Introduction Biomarkers indicating trait, progression and prediction of pathology and symptoms in Parkinson's disease (PD) often lack specificity or reliability. Investigating biomarker variance between individuals and over time and the effect of confounding factors is essential for the evaluation of biomarkers in PD, such as insulin-like growth factor 1 (IGF-1). Materials and Methods IGF-1 serum levels were investigated in up to 8 biannual visits in 37 PD patients and 22 healthy controls (HC) in the longitudinal MODEP study. IGF-1 baseline levels and annual changes in IGF-1 were compared between PD patients and HC while accounting for baseline disease duration (19 early stage: ≤3.5 years; 18 moderate stage: >4 years), age, sex, body mass index (BMI) and common medical factors putatively modulating IGF-1. In addition, associations of baseline IGF-1 with annual changes of motor, cognitive and depressive symptoms and medication dose were investigated. Results PD patients in moderate (130±26 ng/mL; p = .004), but not early stages (115±19, p>.1), showed significantly increased baseline IGF-1 levels compared with HC (106±24 ng/mL; p = .017). Age had a significant negative correlation with IGF-1 levels in HC (r = -.47, p = .028) and no correlation in PD patients (r = -.06, p>.1). BMI was negatively correlated in the overall group (r = -.28, p = .034). The annual changes in IGF-1 did not differ significantly between groups and were not correlated with disease duration. Baseline IGF-1 levels were not associated with annual changes of clinical parameters. Discussion Elevated IGF-1 in serum might differentiate between patients in moderate PD stages and HC. However, the value of serum IGF-1 as a trait-, progression- and prediction marker in PD is limited as IGF-1 showed large inter- and intraindividual variability and may be modulated by several confounders. PMID:26967642

  8. A two-stage model in a Bayesian framework to estimate a survival endpoint in the presence of confounding by indication.

    Science.gov (United States)

    Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone

    2018-04-01

    Background Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and risk of failure. Objective Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov Chain Monte Carlo framework. For this purpose, we monitored the prostate-specific antigens in prostate cancer patients treated with external beam radiation therapy. In the presence of rising prostate-specific antigens after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the prostate-specific antigens concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and prostate-specific antigens trajectory on the risk of failure. Methods We used a two-stage model within a Bayesian framework to assess the role of the prostate-specific antigens profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled prostate-specific antigens using a hierarchical piecewise linear trajectory with a random changepoint. Residual prostate-specific antigens variability was expressed as a function of prostate-specific antigens concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the prostate-specific antigens nadir and timing and prostate-specific antigens slopes before and after the nadir as provided by the longitudinal process. Results We showed positive associations between an increased prostate-specific antigens nadir, an earlier changepoint and a steeper post-nadir slope with an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the prostate-specific antigens trajectory was

  9. The functional variable method for finding exact solutions of some ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation, and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...

  10. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    Controllers that use PID parameters require a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve the PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
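
    For orientation, a sketch of the inner loop such a tuner optimizes: a PID controller simulated on a toy first-order plant and scored by ITAE. The plant, gains, and cost are illustrative assumptions; the paper's contribution, tuning the PSO's own parameters by a Variable Weight Grey-Taguchi DOE, sits on top of a loop like this.

```python
# Cost function a PSO-PID tuner would minimize: simulate a PID loop on a
# toy first-order plant and score the unit-step response by ITAE
# (integral of time-weighted absolute error). All values are invented.
def itae(kp, ki, kd, dt=0.01, t_end=5.0, tau=0.5):
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        err = 1.0 - y                       # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau             # plant: dy/dt = (u - y)/tau
        prev_err = err
        cost += (k * dt) * abs(err) * dt    # ITAE accumulation
    return cost

# Candidate gains; a PSO would search over (kp, ki, kd) to minimize this.
print(itae(2.0, 1.0, 0.1))
```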

  11. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
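
    A minimal sketch of the test-mask idea described above; the bit assignments and function names are invented for illustration.

```python
# Illustrative test-mask error injection: a global mask variable is built
# into the application; the test harness sets bits to inject faults at
# chosen points, while a zero mask leaves normal operation untouched.
TEST_MASK = 0                    # 0 in production; tests set bits

FAULT_BAD_CHECKSUM = 1 << 0
FAULT_TIMEOUT = 1 << 1

def read_sensor():
    if TEST_MASK & FAULT_TIMEOUT:
        raise TimeoutError("injected timeout")    # exercise error response
    value, checksum = 42, 42
    if TEST_MASK & FAULT_BAD_CHECKSUM:
        checksum ^= 0xFF                          # corrupt on purpose
    if value != checksum:
        return None                               # error-handling path
    return value

TEST_MASK = FAULT_BAD_CHECKSUM   # set by the test harness only
print(read_sensor())             # None -> error handler was exercised
```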

  12. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no EQ (earthquake) cycle simulations, based on RSF (rate and state friction) laws, in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault, and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a greater effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, where we need the past slip rates, leading to huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in stress calculation and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), in EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method in EQ cycle simulations and show the effect of the linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means the smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half

  13. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study applies five imputation methods to fill in missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, which is compared with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model with variable selection has better forecasting performance than the listing model with full variables. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here to improve forecasting capability.

  14. Wind turbines and idiopathic symptoms: The confounding effect of concurrent environmental exposures.

    Science.gov (United States)

    Blanes-Vidal, Victoria; Schwartz, Joel

    2016-01-01

    Whether or not wind turbines pose a risk to human health is a matter of heated debate. Personal reactions to other environmental exposures occurring in the same settings as wind turbines may be responsible for the reported symptoms. However, these have not been accounted for in previous studies. We investigated whether there is an association between residential proximity to wind turbines and idiopathic symptoms, after controlling for personal reactions to other environmental co-exposures. We assessed wind turbine exposures in 454 residences as the distance to the closest wind turbine (Dw) and the number of wind turbines near each residence. After accounting for personal reactions to other co-exposures, such as agricultural odor, we did not observe a significant relationship between residential proximity to wind turbines and symptoms, and the parameter estimates were attenuated toward zero. Wind turbine-health associations can be confounded by personal reactions to other environmental co-exposures. Isolated associations reported in the literature may be due to confounding bias. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Estimating the monetary value of willingness to pay for E-book reader's attributes using partially confounded factorial conjoint choice experiment

    Science.gov (United States)

    Yong, Chin-Khian

    2013-09-01

    A partially confounded factorial conjoint choice experiment design was used to examine the monetary value of the willingness to pay for E-book reader attributes. Conjoint analysis is an efficient, cost-effective, and widely used quantitative method in marketing research for understanding consumer preferences and value trade-offs. Value can be interpreted by the customer or consumer as the multiple benefits received for the price paid. The monetary values of willingness to pay for battery life, internal memory, external memory, screen size, text-to-speech, touch screen, and converting handwriting to digital text of an E-book reader were estimated in this study. Due to the significant interaction effects of the attributes with price, the monetary values for the seven attributes were found to differ at different values of the odds of purchasing versus not purchasing. The significant interaction effects are one of the main contributions of the partially confounded factorial conjoint choice experiment.
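
    For intuition, a toy version of the willingness-to-pay computation: in a conjoint logit, the monetary value of an attribute is the price change that leaves utility unchanged, and a significant attribute-price interaction makes that value price-dependent, which is the paper's point. All coefficients below are invented.

```python
# Toy WTP calculation from conjoint-logit coefficients with an
# attribute-by-price interaction. Utility model assumed here:
#   U = b_price*price + b_attr*attr + b_int*price*attr
beta_price = -0.020          # utility per currency unit
beta_touchscreen = 0.55      # main effect of the attribute
beta_interaction = 0.004     # attribute x price interaction

def wtp(price):
    # Indifference condition U(attr=1, price + w) = U(attr=0, price)
    # solved for w; with beta_interaction != 0, WTP varies with price.
    return -(beta_touchscreen + beta_interaction * price) / \
           (beta_price + beta_interaction)

for p in (100, 200, 300):
    print(f"WTP for touch screen at price {p}: {wtp(p):.1f}")
```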

  16. Detecting correlation between allele frequencies and environmental variables as a signature of selection. A fast computational approach for genome-wide studies

    DEFF Research Database (Denmark)

    Guillot, Gilles; Vitalis, Renaud; Rouzic, Arnaud le

    2014-01-01

    to disentangle the potential effect of environmental variables from the confounding effect of population history. For the routine analysis of genome-wide datasets, one also needs fast inference and model selection algorithms. We propose a method based on an explicit spatial model which is an instance of spatial...... for the most common types of genetic markers, obtained either at the individual or at the population level. Analyzing the simulated data produced under a geostatistical model then under an explicit model of selection, we show that the method is efficient. We also re-analyze a dataset relative to nineteen pine...

  17. The relationship between venture capital investment and macro economic variables via statistical computation method

    Science.gov (United States)

    Aygunes, Gunes

    2017-07-01

    The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literature centres on venture capitalists' quality and countries' venture capital investments. This paper examines the relationship between venture capital investment and macroeconomic variables via a statistical computation method. We investigate the countries and macroeconomic variables and, using this method, derive correlations between venture capital investments and macroeconomic variables. According to the logistic regression model (logit regression or logit model), the macroeconomic variables are correlated with each other in three groups. Venture capitalists regard these correlations as an indicator. Finally, we give the correlation matrix of our results.

  18. Influence of potentially confounding factors on sea urchin porewater toxicity tests

    Science.gov (United States)

    Carr, R.S.; Biedenbach, J.M.; Nipper, M.

    2006-01-01

    The influence of potentially confounding factors has been identified as a concern for interpreting sea urchin porewater toxicity test data. The results from >40 sediment-quality assessment surveys using early-life stages of the sea urchin Arbacia punctulata were compiled and examined to determine acceptable ranges of natural variables such as pH, ammonia, and dissolved organic carbon on the fertilization and embryological development endpoints. In addition, laboratory experiments were also conducted with A. punctulata and compared with information from the literature. Pore water with pH as low as 6.9 is an unlikely contributor to toxicity for the fertilization and embryological development tests with A. punctulata. Other species of sea urchin have narrower pH tolerance ranges. Ammonia is rarely a contributing factor in pore water toxicity tests using the fertilization endpoint, but the embryological development endpoint may be influenced by ammonia concentrations commonly found in porewater samples. Therefore, ammonia needs to be considered when interpreting results for the embryological development test. Humic acid does not affect sea urchin fertilization at saturation concentrations, but it could have an effect on the embryological development endpoint at near-saturation concentrations. There was no correlation between sediment total organic carbon concentrations and porewater dissolved organic carbon concentrations. Because of the potential for many varying substances to activate parthenogenesis in sea urchin eggs, it is recommended that a no-sperm control be included with every fertilization test treatment. © 2006 Springer Science+Business Media, Inc.

  19. Wind resource in metropolitan France: assessment methods, variability and trends

    International Nuclear Information System (INIS)

    Jourdier, Benedicte

    2015-01-01

    France has one of the largest wind potentials in Europe, yet it is far from being fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming at predicting the future electricity production. Any over-estimation in the assessment process jeopardizes the project's profitability. This has been the case in recent years, when wind farm managers have noticed that they produced less than expected. The under-production problem leads to questioning both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. In the first part, the errors linked to the assessment methods are investigated, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and so that it is better taken into account in the future. (author) [fr]
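
    One of the assessment steps the thesis scrutinizes is the statistical modelling of wind speeds by a Weibull distribution; a routine fit looks like this (synthetic data, parameter values invented):

```python
# Standard Weibull fit to wind-speed data, one of the resource-assessment
# steps whose errors the thesis investigates. Data are synthetic.
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(5)
true_shape, true_scale = 2.0, 8.0                 # k, A (m/s)
speeds = stats.weibull_min.rvs(true_shape, scale=true_scale,
                               size=5000, random_state=rng)

k, loc, A = stats.weibull_min.fit(speeds, floc=0)  # fix location at 0
print(f"fitted shape k = {k:.2f}, scale A = {A:.2f} m/s")
# Mean wind speed implied by the fit: A * Gamma(1 + 1/k)
print(f"implied mean speed = {A * gamma(1 + 1/k):.2f} m/s")
```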

  20. Impact of Uniform Methods on Interlaboratory Antibody Titration Variability: Antibody Titration and Uniform Methods.

    Science.gov (United States)

    Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H

    2017-01-01

    Substantial variability between different antibody titration methods prompted the development and introduction of uniform methods in 2008. To determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing, proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by their standard method: gel or tube by uniform or other methods at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during a 5-year period. The three most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.

  1. Do patient and practice characteristics confound age-group differences in preferences for general practice care? A quantitative study

    Science.gov (United States)

    2013-01-01

    Background Previous research showed inconsistent results regarding the relationship between the age of patients and preference statements regarding GP care. This study investigates whether elderly patients have different preference scores and ranking orders concerning 58 preference statements for GP care than younger patients. Moreover, this study examines whether patient characteristics and practice location may confound the relationship between age and the categorisation of a preference score as very important. Methods Data of the Consumer Quality Index GP Care were used, which were collected in 32 general practices in the Netherlands. The rank order and preference score were calculated for 58 preference statements for four age groups (0–30, 31–50, 51–74, 75 years and older). Using chi-square tests and logistic regression analyses, it was investigated whether a significant relationship between age and preference score was confounded by patient characteristics and practice location. Results Elderly patients did not have a significantly different ranking order for the preference statements than the other three age groups (r = 0.0193; p = 0.41). However, in 53% of the statements significant differences were found in preference score between the four age groups. Elderly patients categorized significantly fewer preference statements as ‘very important’. In most cases, the significant relationships were not confounded by gender, education, perceived health, the number of GP contacts and location of the GP practice. Conclusion The preferences of elderly patients for GP care concern the same items as those of younger patients. However, their preferences are less strong, which cannot be ascribed to gender, education, perceived health, the number of GP contacts and practice location. PMID:23800156

  2. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    International Nuclear Information System (INIS)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A.; Rijn, Rick R. van; Henneman, Onno D.F.; Heijmans, Jarom; Reitsma, Johannes B.

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported of a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered as suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  3. Predicting Teacher Value-Added Results in Non-Tested Subjects Based on Confounding Variables: A Multinomial Logistic Regression

    Science.gov (United States)

    Street, Nathan Lee

    2017-01-01

    Teacher value-added measures (VAM) are designed to provide information regarding teachers' causal impact on the academic growth of students while controlling for exogenous variables. While some researchers contend VAMs successfully and authentically measure teacher causality on learning, others suggest VAMs cannot adequately control for exogenous…

  4. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Full Text Available Within the framework of cadastral beach evaluation, a volumetric method for an index of natural variability is proposed. It is based on spatial calculations with the Cut-Fill method and volume accounting of both the common beach contour and specific areas for each survey time.

  5. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
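
    As an illustration of PLSR-driven variable selection: the study used the PLS procedure of SAS/STAT, whereas the sketch below uses scikit-learn and synthetic data, with the standard VIP (variable importance in projection) formula coded by hand since scikit-learn does not provide it directly.

```python
# PLSR variable selection via VIP scores on synthetic stand-ins for the
# soil variables (X) and wheat yield (y).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n, p = 60, 10
X = rng.normal(size=(n, p))                    # soil variables (scaled)
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.5, n)   # yield

pls = PLSRegression(n_components=3, scale=True).fit(X, y)
T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_

# VIP_j = sqrt(p * sum_a ssy_a * (w_ja/||w_a||)^2 / sum_a ssy_a)
ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2  # y-variance per component
vip = np.sqrt(p * ((W / np.linalg.norm(W, axis=0)) ** 2 @ ssy) / ssy.sum())
print("VIP > 1 (commonly retained):", np.where(vip > 1.0)[0])
```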

  6. KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method

    International Nuclear Information System (INIS)

    Westley, G.W.

    1975-01-01

    1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user.

  7. An education gradient in health, a health gradient in education, or a confounded gradient in both?

    Science.gov (United States)

    Lynch, Jamie L; von Hippel, Paul T

    2016-04-01

    There is a positive gradient associating educational attainment with health, yet the explanation for this gradient is not clear. Does higher education improve health (causation)? Do the healthy become highly educated (selection)? Or do good health and high educational attainment both result from advantages established early in the life course (confounding)? This study evaluates these competing explanations by tracking changes in educational attainment and Self-rated Health (SRH) from age 15 to age 31 in the National Longitudinal Study of Youth, 1997 cohort. Ordinal logistic regression confirms that high-SRH adolescents are more likely to become highly educated. This is partly because adolescent SRH is associated with early advantages including adolescents' academic performance, college plans, and family background (confounding); however, net of these confounders adolescent SRH still predicts adult educational attainment (selection). Fixed-effects longitudinal regression shows that educational attainment has little causal effect on SRH at age 31. Completion of a high school diploma or associate's degree has no effect on SRH, while completion of a bachelor's or graduate degree has effects that, though significant, are quite small (less than 0.1 points on a 5-point scale). While it is possible that educational attainment would have a greater effect on health at older ages, at age 31 what we see is a health gradient in education, shaped primarily by selection and confounding rather than by a causal effect of education on health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Propulsion and launching analysis of variable-mass rockets by analytical methods

    OpenAIRE

    D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh

    2013-01-01

    In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical one is observed in the results, which reveals that the analytical methods are effective and convenient. Also a paramet...
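
    For orientation, here is a numerical reference solution of a variable-mass rocket equation of the kind such analytical methods are benchmarked against; the specific equation form and all constants are illustrative assumptions, not taken from the paper.

```python
# Toy numerical solution of a variable-mass rocket equation
#   m(t) v'(t) = u_e * mdot - m(t) * g,   m(t) = m0 - mdot * t,
# i.e. constant exhaust speed u_e, constant burn rate mdot, gravity g.
import numpy as np
from scipy.integrate import solve_ivp

m0, mdot, u_e, g = 500.0, 5.0, 2000.0, 9.81   # kg, kg/s, m/s, m/s^2

def rhs(t, v):
    m = m0 - mdot * t                          # remaining mass
    return [(u_e * mdot - m * g) / m]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0])       # 60 s burn, v(0) = 0
print(f"speed after 60 s: {sol.y[0, -1]:.1f} m/s")
```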

  9. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  10. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well known ill-posedness, both on recovering the blurring operator and the true image, makes the problem really difficult to handle. We show that, by imposing appropriate constraints on the variables and with well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  11. Recursive form of general limited memory variable metric methods

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2013-01-01

    Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365
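
    The record carries no abstract, but its keywords point at recursive limited-memory variable metric updates. The best-known instance is the L-BFGS two-loop recursion, sketched here for orientation; the paper itself treats a more general recursive matrix formulation.

```python
# Standard L-BFGS two-loop recursion: apply the implicit inverse-Hessian
# approximation, stored as the m most recent (s, y) pairs, to a gradient.
import numpy as np

def two_loop(grad, s_list, y_list):
    """Return H*grad, where H is built from steps s_k = x_{k+1} - x_k
    and gradient changes y_k = g_{k+1} - g_k (newest pairs last)."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    if s_list:                                  # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)                   # oldest first
        q += (a - b) * s
    return q                                    # search direction is -q

# Tiny usage example with one stored correction pair.
g = np.array([1.0, -2.0])
print(two_loop(g, [np.array([0.1, 0.0])], [np.array([0.2, 0.0])]))
```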

  12. Systematically missing confounders in individual participant data meta-analysis of observational cohort studies

    DEFF Research Database (Denmark)

    Jackson, D.; White, I.; Kostis, J.B.

    2009-01-01

    One difficulty in performing meta-analyses of observational cohort studies is that the availability of confounders may vary between cohorts, so that some cohorts provide fully adjusted analyses while others only provide partially adjusted analyses. Commonly, analyses of the association between an...

  13. Systematically missing confounders in individual participant data meta-analysis of observational cohort studies

    NARCIS (Netherlands)

    Jackson, D.; White, I.; Kostis, J.B.; Wilson, A.C.; Folsom, A.R.; Feskens, E.J.M.

    2009-01-01

    One difficulty in performing meta-analyses of observational cohort studies is that the availability of confounders may vary between cohorts, so that some cohorts provide fully adjusted analyses while others only provide partially adjusted analyses. Commonly, analyses of the association between an

  14. Syphilis may be a confounding factor, not a causative agent, in syphilitic ALS.

    Science.gov (United States)

    Tuk, Bert

    2016-01-01

    Based upon a review of published clinical observations regarding syphilitic amyotrophic lateral sclerosis (ALS), I hypothesize that syphilis is actually a confounding factor, not a causative factor, in syphilitic ALS. Moreover, I propose that the successful treatment of ALS symptoms in patients with syphilitic ALS using penicillin G and hydrocortisone is an indirect consequence of the treatment regimen and is not due to the treatment of syphilis. Specifically, I propose that the observed effect is due to the various pharmacological activities of penicillin G (e.g., a GABA receptor antagonist) and/or the multifaceted pharmacological activity of hydrocortisone. The notion that syphilis may be a confounding factor in syphilitic ALS is highly relevant, as it suggests that treating ALS patients with penicillin G and hydrocortisone, regardless of whether they present with syphilitic ALS or non-syphilitic ALS, may be effective at treating this rapidly progressive, highly devastating disease.

  15. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)]

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported of a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered as suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  16. Heterogeneity in white blood cells has potential to confound DNA methylation measurements.

    Directory of Open Access Journals (Sweden)

    Bjorn T Adalsteinsson

    Full Text Available Epigenetic studies are commonly conducted on DNA from tissue samples. However, tissues are ensembles of cells that may each have their own epigenetic profile, and therefore inter-individual cellular heterogeneity may compromise these studies. Here, we explore the potential for such confounding on DNA methylation measurement outcomes when using DNA from whole blood. DNA methylation was measured using pyrosequencing-based methodology in whole blood (n = 50-179) and in two white blood cell fractions (n = 20), isolated using density gradient centrifugation, in four CGIs (CpG islands) located in the genes HHEX (10 CpG sites assayed), KCNJ11 (8 CpGs), KCNQ1 (4 CpGs) and PM20D1 (7 CpGs). Cellular heterogeneity (variation in proportional white blood cell counts of neutrophils, lymphocytes, monocytes, eosinophils and basophils, counted by an automated cell counter) explained up to 40% (p<0.0001) of the inter-individual variation in whole blood DNA methylation levels in the HHEX CGI, but not a significant proportion of the variation in the other three CGIs tested. DNA methylation levels in the two cell fractions, polymorphonuclear and mononuclear cells, differed significantly in the HHEX CGI; specifically, the average absolute difference ranged between 3.4-15.7 percentage points per CpG site. In the other three CGIs tested, methylation levels in the two fractions did not differ significantly, and/or the difference was more moderate. In the examined CGIs, methylation levels were highly correlated between cell fractions. In summary, our analysis detects region-specific differential DNA methylation between white blood cell subtypes, which can confound the outcome of whole blood DNA methylation measurements. Finally, by demonstrating the high correlation between methylation levels in cell fractions, our results suggest the possibility of using a proportional number of a single white blood cell type to correct for this confounding effect in analyses.

  17. The obesity paradox in stable chronic heart failure does not persist after matching for indicators of disease severity and confounders.

    Science.gov (United States)

    Frankenstein, Lutz; Zugck, Christian; Nelles, Manfred; Schellberg, Dieter; Katus, Hugo A; Remppis, B Andrew

    2009-12-01

    To verify whether controlling for indicators of disease severity and confounders represents a solution to the obesity paradox in chronic heart failure (CHF). From a cohort of 1790 patients, we formed 230 nested matched triplets by individually matching patients with body mass index (BMI) > 30 kg/m(2) (Group 3), BMI 20-24.9 kg/m(2) (Group 1) and BMI 25-29.9 kg/m(2) (Group 2), according to NT-proBNP, age, sex, and NYHA class (triplet = one matched patient from each group). Although BMI group was a significant univariable prognostic indicator in the pre-matching cohort, it did not retain significance [hazard ratio (HR): 0.91, 95% CI: 0.78-1.05, chi(2): 1.67] when controlled for group propensities as covariates. Furthermore, in the matched cohort, 1-year mortality and 3-year mortality did not differ significantly. Here, BMI again failed to reach statistical significance for prognosis, either as a continuous or a categorical variable, whether crude or adjusted. This result was confirmed in the patients not selected for matching. NT-proBNP, however, remained statistically significant (log(NT-proBNP): HR: 1.49, 95% CI: 1.13-1.97, chi(2): 7.82) after multivariable adjustment. The obesity paradox does not appear to persist in a matched setting with respect to indicators of disease severity and other confounders. NT-proBNP remains an independent prognostic indicator of adverse outcome irrespective of obesity status.

  18. A method based on a separation of variables in magnetohydrodynamics (MHD)

    International Nuclear Information System (INIS)

    Cessenat, M.; Genta, P.

    1996-01-01

    We use a method based on a separation of variables for solving a system of first-order partial differential equations, in a very simple model of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. Thus we generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)

  19. An effective method for finding special solutions of nonlinear differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Qin Maochang; Fan Guihong

    2008-01-01

    There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter, which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method.
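
    For reference, the classical reduction behind any Bernoulli-equation-based approach (standard textbook material, not specific to this Letter): a Bernoulli equation

        \[ y' + p(x)\,y = q(x)\,y^{n}, \qquad n \neq 0,\, 1, \]

    is linearized by the substitution \( v = y^{1-n} \), which gives

        \[ v' + (1-n)\,p(x)\,v = (1-n)\,q(x), \]

    solvable by an integrating factor; matching the studied equation to a suitable \(p\), \(q\) and \(n\) is what yields the special solutions.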

  20. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    Due to the high number of variables in metabolomics data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related. Variable selection (or removal of irrelevant variables) is therefore essential. Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior to an integrated approach. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when ...

  1. Associations between lifestyle and air pollution exposure: Potential for confounding in large administrative data cohorts.

    Science.gov (United States)

    Strak, Maciej; Janssen, Nicole; Beelen, Rob; Schmitz, Oliver; Karssenberg, Derek; Houthuijs, Danny; van den Brink, Carolien; Dijst, Martin; Brunekreef, Bert; Hoek, Gerard

    2017-07-01

    Cohorts based on administrative data have size advantages over individual cohorts in investigating air pollution risks, but often lack in-depth information on individual risk factors related to lifestyle. If there is a correlation between lifestyle and air pollution, omitted lifestyle variables may result in biased air pollution risk estimates. Correlations between lifestyle and air pollution can be induced by socio-economic status affecting both lifestyle and air pollution exposure. Our overall aim was to assess potential confounding by missing lifestyle factors on air pollution mortality risk estimates. The first aim was to assess associations between long-term exposure to several air pollutants and lifestyle factors. The second aim was to assess whether these associations were sensitive to adjustment for individual and area-level socioeconomic status (SES), and whether they differed between subgroups of the population. Using the obtained air pollution-lifestyle associations and indirect adjustment methods, our third aim was to investigate the potential bias due to missing lifestyle information on air pollution mortality risk estimates in administrative cohorts. We used a recent Dutch national health survey of 387,195 adults to investigate the associations of PM10, PM2.5, PM2.5-10, PM2.5 absorbance, OP-DTT, OP-ESR and NO2 annual average concentrations at the residential address from land use regression models with individual smoking habits, alcohol consumption, physical activity and body mass index. We assessed the associations with and without adjustment for neighborhood and individual SES characteristics typically available in administrative data cohorts. We illustrated the effect of including lifestyle information on the air pollution mortality risk estimates in administrative cohort studies using a published indirect adjustment method. Current smoking and alcohol consumption were generally positively associated with air pollution. Physical activity

  2. Everything that you have ever been told about assessment center ratings is confounded.

    Science.gov (United States)

    Jackson, Duncan J R; Michaelides, George; Dewberry, Chris; Kim, Young-Jae

    2016-07-01

    Despite a substantial research literature on the influence of dimensions and exercises in assessment centers (ACs), the relative impact of these 2 sources of variance continues to raise uncertainties because of confounding. With confounded effects, it is not possible to establish the degree to which any 1 effect, including those related to exercises and dimensions, influences AC ratings. In the current study (N = 698) we used Bayesian generalizability theory to unconfound all of the possible effects contributing to variance in AC ratings. Our results show that ≤1.11% of the variance in AC ratings was directly attributable to behavioral dimensions, suggesting that dimension-related effects have no practical impact on the reliability of ACs. Even when taking aggregation level into consideration, effects related to general performance and exercises accounted for almost all of the reliable variance in AC ratings. The implications of these findings for recent dimension- and exercise-based perspectives on ACs are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Relation between sick leave and selected exposure variables among women semiconductor workers in Malaysia

    Science.gov (United States)

    Chee, H; Rampal, K

    2003-01-01

    Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self-report. Three sick leave variables were used; the number of sick leave days taken in the past year was the variable of interest in logistic regression models, where the effects of age, marital status, work task, work schedule, work section, and duration of work in the factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor of taking sick leave, as wafer polishing workers faced higher odds of taking sick leave at each of the three cut-off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374

  4. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing predictive performance comparable to GA and SPA.

  5. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    International Nuclear Information System (INIS)

    Pyun, J.J.

    1981-01-01

    As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh size increment/decrement is limited to α(Δr_i)², where Δr_i is the non-dimensional mesh size of the i-th cell, and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy.

  6. Assessing Mediation Using Marginal Structural Models in the Presence of Confounding and Moderation

    Science.gov (United States)

    Coffman, Donna L.; Zhong, Wei

    2012-01-01

    This article presents marginal structural models with inverse propensity weighting (IPW) for assessing mediation. Generally, individuals are not randomly assigned to levels of the mediator. Therefore, confounders of the mediator and outcome may exist that limit causal inferences, a goal of mediation analysis. Either regression adjustment or IPW…
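
    As a concrete illustration of the weighting step described above, here is a minimal sketch of inverse propensity weighting for a binary mediator, valid only under the usual sequential ignorability assumptions; the variable names (Y outcome, A treatment, M mediator, C confounder) and the input file are hypothetical placeholders, not the authors' data.

        # Stabilized inverse propensity weights for the mediator, then a weighted
        # outcome model: the skeleton of a marginal structural model for mediation.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("mediation_data.csv")  # hypothetical input

        # Denominator: probability of the observed mediator value given A and C
        ps = smf.logit("M ~ A + C", data=df).fit().predict(df)
        p_den = np.where(df["M"] == 1, ps, 1 - ps)

        # Numerator (stabilization): mediator model with treatment only
        ps0 = smf.logit("M ~ A", data=df).fit().predict(df)
        p_num = np.where(df["M"] == 1, ps0, 1 - ps0)
        df["w"] = p_num / p_den

        # Weighted outcome regression; the coefficient of M estimates the mediator
        # effect with mediator-outcome confounding by C removed via the weights.
        msm = smf.wls("Y ~ A + M", data=df, weights=df["w"]).fit()
        print(msm.params)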

  7. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between the instrumental variable effects on the intermediate exposure and the instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
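
    The paper's generalized least squares concordance test is not reproduced here, but the quantity it probes is easy to state: each variant's instrument-outcome effect divided by its instrument-exposure effect should agree across variants under a common causal effect. A hedged sketch of the standard per-variant ratio (Wald) estimates and their inverse-variance-weighted combination, with made-up summary statistics:

        # Per-variant Wald ratios and their inverse-variance-weighted (IVW)
        # combination; all numbers below are illustrative, not from the paper.
        import numpy as np

        beta_x = np.array([0.12, 0.08, 0.15])     # variant -> exposure effects
        beta_y = np.array([0.030, 0.022, 0.041])  # variant -> outcome effects
        se_y   = np.array([0.010, 0.009, 0.012])  # SEs of the outcome effects

        ratio = beta_y / beta_x                 # per-variant causal estimates
        var_r = (se_y / beta_x) ** 2            # first-order delta-method variances
        w = 1.0 / var_r
        ivw = np.sum(w * ratio) / np.sum(w)     # combined estimate
        se_ivw = np.sqrt(1.0 / np.sum(w))
        # Large dispersion of the per-variant ratios around `ivw` signals
        # discordant instrument effects, the situation the paper's test targets.
        print(ivw, se_ivw)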

  8. Chaos synchronization using single variable feedback based on backstepping method

    International Nuclear Information System (INIS)

    Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang

    2004-01-01

    In recent years, the backstepping method has been developed in the field of nonlinear control, for applications such as controllers, observers and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. This method has several advantages for synchronizing chaotic systems: (a) the synchronization error is exponentially convergent; (b) only one variable's information from the master system is needed; (c) it presents a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Rössler system demonstrate that this method is very effective.

  9. Confounding factors in determining causal soil moisture-precipitation feedback

    Science.gov (United States)

    Tuttle, Samuel E.; Salvucci, Guido D.

    2017-07-01

    Identification of causal links in the land-atmosphere system is important for construction and testing of land surface and general circulation models. However, the land and atmosphere are highly coupled and linked by a vast number of complex, interdependent processes. Statistical methods, such as Granger causality, can help to identify feedbacks from observational data, independent of the different parameterizations of physical processes and spatiotemporal resolution effects that influence feedbacks in models. However, statistical causal identification methods can easily be misapplied, leading to erroneous conclusions about feedback strength and sign. Here, we discuss three factors that must be accounted for in determination of causal soil moisture-precipitation feedback in observations and model output: seasonal and interannual variability, precipitation persistence, and endogeneity. The effect of neglecting these factors is demonstrated in simulated and observational data. The results show that long-timescale variability and precipitation persistence can have a substantial effect on detected soil moisture-precipitation feedback strength, while endogeneity has a smaller effect that is often masked by measurement error and thus is more likely to be an issue when analyzing model data or highly accurate observational data.
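
    To make the seasonality and persistence caveats concrete, here is a hedged sketch of a soil moisture to precipitation Granger test using statsmodels, deseasonalizing both series first so that a shared annual cycle is not mistaken for feedback; the data file and column names are hypothetical.

        # Granger-causality check on deseasonalized anomalies. The test asks whether
        # soil moisture anomalies help predict precipitation anomalies beyond
        # precipitation's own lagged history (its persistence).
        import pandas as pd
        from statsmodels.tsa.stattools import grangercausalitytests

        df = pd.read_csv("daily_sm_precip.csv", parse_dates=["date"]).set_index("date")

        # Remove the seasonal cycle (day-of-year climatology) from both series
        for col in ["precip", "soil_moisture"]:
            clim = df[col].groupby(df.index.dayofyear).transform("mean")
            df[col + "_anom"] = df[col] - clim

        # First column = predicted variable, second column = candidate cause
        data = df[["precip_anom", "soil_moisture_anom"]].dropna()
        res = grangercausalitytests(data, maxlag=5)
        print(res[1][0]["ssr_ftest"])  # F statistic and p-value at lag 1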

  10. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption of hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on the auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
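
    A small Monte-Carlo sketch of the record's central quantity for an AR(1) model, i.e. the correlation between the series and its dependence component; the thresholds that map this correlation onto the five variability classes are the paper's and are not reproduced here.

        # Correlation between an AR(1) series x_t and its dependence component
        # phi * x_{t-1}, which theory ties to the lag-1 autocorrelation.
        import numpy as np

        rng = np.random.default_rng(0)

        def corr_series_dependence(phi, n=2000):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.standard_normal()
            dep = phi * x[:-1]                    # dependence component of x_t
            return np.corrcoef(x[1:], dep)[0, 1]

        for phi in (0.1, 0.3, 0.6, 0.9):          # increasing dependence strength
            print(phi, round(corr_series_dependence(phi), 3))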

  11. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple-linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.

  12. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Korea, Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
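
    The exact replacement rule of MIMRCV is not reproduced here; the following is only a schematic rendering of the general idea, assuming that descriptors are ranked by mutual information and that a candidate collinear with an already selected descriptor is passed over in favor of the next one.

        # Greedy MI-based descriptor selection that skips collinear candidates.
        # X: descriptor matrix (n_samples x n_descriptors), y: property values.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def select_descriptors(X, y, n_keep=10, r_max=0.9):
            mi = mutual_info_regression(X, y)
            order = np.argsort(mi)[::-1]      # descriptors by decreasing MI
            chosen = []
            for j in order:
                # Skip (replace) any candidate collinear with a chosen descriptor
                if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < r_max
                       for k in chosen):
                    chosen.append(j)
                if len(chosen) == n_keep:
                    break
            return chosen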

  13. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  14. Quantification and variability in colonic volume with a novel magnetic resonance imaging method

    DEFF Research Database (Denmark)

    Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke

    2015-01-01

    Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) the inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum ...) ... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous compared with conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable ...

  15. Comorbidities, confounders, and the white matter transcriptome in chronic alcoholism.

    Science.gov (United States)

    Sutherland, Greg T; Sheedy, Donna; Sheahan, Pam J; Kaplan, Warren; Kril, Jillian J

    2014-04-01

    Alcohol abuse is the world's third leading cause of disease and disability, and one potential sequela of chronic abuse is alcohol-related brain damage (ARBD). This manifests clinically as cognitive dysfunction and pathologically as atrophy, particularly of white matter (WM). The mechanism linking chronic alcohol intoxication with ARBD remains largely unknown, and it is complicated by common comorbidities such as liver damage and nutritional deficiencies. Liver cirrhosis, in particular, often leads to hepatic encephalopathy (HE), a primary glial disease. In a novel transcriptomic study, we targeted the WM only of chronic alcoholics in an attempt to tease apart the pathogenesis of ARBD. Specifically, in alcoholics with and without HE, we explored both the prefrontal and primary motor cortices, 2 regions that experience differential levels of neuronal loss. Our results suggest that HE, along with 2 confounders, gray matter contamination and low RNA quality, are major drivers of gene expression in ARBD. All 3 exceeded the effects of alcohol itself. In particular, low-quality RNA samples were characterized by an up-regulation of translation machinery, while HE was associated with a down-regulation of mitochondrial energy metabolism pathways. The findings in HE alcoholics are consistent with the metabolic acidosis seen in this condition. In contrast, non-HE alcoholics had widespread but only subtle changes in gene expression in their WM. Notwithstanding the latter result, this study demonstrates that significant confounders in transcriptomic studies of human postmortem brain tissue can be identified, quantified, and "removed" to reveal disease-specific signals. Copyright © 2014 by the Research Society on Alcoholism.

  16. Improved method for solving the neutron transport problem by discretization of space and energy variables

    International Nuclear Information System (INIS)

    Bosevski, T.

    1971-01-01

    Polynomial interpolation of the neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. The solutions of this system are the needed flux values at the chosen values of the space and energy variables. The proposed improved method for solving the neutron transport problem, including its mathematical formalism, is simple and efficient, since the amount of input data needed is decreased in the treatment of both the spatial and energy variables. The mathematical method based on this approach gives more stable solutions with a significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to the results of other very precise calculations. The proposed method was better with respect to convergence rate, computing time and required computer memory. The discretization of variables enabled direct comparison of theoretical and experimental results.

  17. Handling stress may confound murine gut microbiota studies

    Directory of Open Access Journals (Sweden)

    Cary R. Allen-Blevins

    2017-01-01

    Full Text Available Background Accumulating evidence indicates interactions between human milk composition, particularly sugars (human milk oligosaccharides or HMO), the gut microbiota of human infants, and behavioral effects. Some HMO secreted in human milk are unable to be endogenously digested by the human infant but are able to be metabolized by certain species of gut microbiota, including Bifidobacterium longum subsp. infantis (B. infantis), a species sensitive to host stress (Bailey & Coe, 2004). Exposure to gut bacteria like B. infantis during critical neurodevelopmental windows in early life appears to have behavioral consequences; however, environmental, physical, and social stress during this period can also have behavioral and microbial consequences. While rodent models are a useful method for determining causal relationships between HMO, gut microbiota, and behavior, murine studies of gut microbiota usually employ oral gavage, a technique stressful to the mouse. Our aim was to develop a less-invasive technique for HMO administration to remove the potential confound of gavage stress. Under the hypothesis that stress affects gut microbiota, particularly B. infantis, we predicted the pups receiving a prebiotic solution in a less-invasive manner would have the highest amount of Bifidobacteria in their gut. Methods This study was designed to test two methods, active and passive, of solution administration to mice and the effects on their gut microbiome. Neonatal C57BL/6J mice housed in a specific-pathogen-free facility received increasing doses of fructooligosaccharide (FOS) solution or deionized, distilled water. Gastrointestinal (GI) tracts were collected from five dams, six sires, and 41 pups over four time points. Seven fecal pellets from unhandled pups and two pellets from unhandled dams were also collected. Quantitative real-time polymerase chain reaction (qRT-PCR) was used to quantify and compare the amount of Bifidobacterium, Bacteroides, Bacteroidetes, and

  18. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...

  19. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting the overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A number of SNP-SNP pairs far larger than the sample size, the so-called 'large p, small n' problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the numerous SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method provides a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
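
    The screening step itself is simple to state: rank candidate predictors by marginal association and keep only the top d before any joint modelling. A minimal sketch follows; the dummy coding and GPU machinery of EPISIS are not reproduced, and the inputs are hypothetical.

        # Sure independence screening (SIS): keep the d features with the largest
        # marginal correlation with the phenotype, then fit a joint model on them.
        import numpy as np

        def sis_rank(X, y, d):
            """X: n x p matrix of (interaction) features, y: 0/1 phenotype."""
            Xs = (X - X.mean(0)) / X.std(0)       # standardize features
            ys = (y - y.mean()) / y.std()         # standardize phenotype
            score = np.abs(Xs.T @ ys) / len(y)    # marginal correlation magnitudes
            return np.argsort(score)[::-1][:d]    # indices of the d best features

        # The retained features would then go into a (penalized) logistic regression.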

  20. Use of a variable tracer infusion method to determine glucose turnover in humans

    International Nuclear Information System (INIS)

    Molina, J.M.; Baron, A.D.; Edelman, S.V.; Brechtel, G.; Wallace, P.; Olefsky, J.M.

    1990-01-01

    The single-compartment pool fraction model, when used with the hyperinsulinemic glucose clamp technique to measure rates of glucose turnover, sometimes underestimates true rates of glucose appearance (Ra), resulting in negative values for hepatic glucose output (HGO). We focused our attention on isotope discrimination and model error as possible explanations for this underestimation. We found no difference in [3-3H]glucose specific activity in samples obtained simultaneously from the femoral artery and vein (2,400 +/- 455 vs. 2,454 +/- 522 dpm/mg) in 6 men during a hyperinsulinemic euglycemic clamp study where insulin was infused at 40 mU.m-2.min-1 for 3 h; therefore, isotope discrimination did not occur. We compared the ability of a constant (0.6 microCi/min) vs. a variable tracer infusion method (tracer added to the glucose infusate) to measure non-steady-state Ra during hyperinsulinemic clamp studies. Plasma specific activity fell during the constant tracer infusion studies but did not change from baseline during the variable tracer infusion studies. By maintaining a constant plasma specific activity, the variable tracer infusion method eliminates uncertainty about changes in glucose pool size. This overcomes modeling error and more accurately measures non-steady-state Ra (P less than 0.001 by analysis of variance vs. the constant infusion method). In conclusion, the underestimation of Ra determined isotopically during hyperinsulinemic clamp studies is largely due to modeling error that can be overcome by use of the variable tracer infusion method. This method allows more accurate determination of Ra and HGO under non-steady-state conditions.

  1. A method to forecast quantitative variables relating to nuclear public acceptance

    International Nuclear Information System (INIS)

    Ohnishi, T.

    1992-01-01

    A methodology is proposed for forecasting the future trend of quantitative variables profoundly related to the public acceptance (PA) of nuclear energy. The social environment influencing PA is first modeled by breaking it down into a finite number of fundamental elements; the interactive formulae between the quantitative variables, which are attributed to and characterize each element, are then determined by using the actual past values of the variables. Inputting the estimated values of the exogenous variables into these formulae, the forecast values of the endogenous variables can finally be obtained. Using this method, the problem of nuclear PA in Japan is treated as an example, where the context is considered to comprise a public sector together with the general social environment and socio-psychology. The public sector is broken down into three elements: the general public, the inhabitants living around nuclear facilities, and the activists of anti-nuclear movements, whereas the social environment and socio-psychological factors are broken down into several elements, such as news media and psychological factors. Twenty-seven endogenous and seven exogenous variables are introduced to quantify these elements. After quantitatively formulating the interactive features between them and extrapolating the exogenous variables into the future, estimates are made of the growth or attenuation of the endogenous variables, such as the pro- and anti-nuclear fractions in public opinion polls and the frequency of occurrence of anti-nuclear movements. (author)

  2. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pumps configuration can often save 30–50% of the power consumption of circulating pumps with frequency inverters. However, hydraulic regulation of a distributed variable-speed-pumps configuration can be considerably more complicated, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it can be rather difficult to maintain hydraulic balance during the regulations. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pumps configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pumps configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the

  3. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships.

    Science.gov (United States)

    Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian

    2009-12-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.
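
    The proxy logic in the last sentence is exactly what two-stage least squares implements in the linear case. A hand-rolled sketch for illustration only (variable names are hypothetical, and the naive second-stage standard errors are wrong, so a dedicated IV routine should be used in practice):

        # Two-stage least squares: regress the confounded treatment t on the
        # instrument z, then regress the outcome y on the fitted (exogenous)
        # part of t. beta[1] is the causal effect estimate.
        import numpy as np

        def two_stage_least_squares(y, t, z):
            Z = np.column_stack([np.ones_like(z), z])          # instrument + intercept
            t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]   # stage 1 fitted values
            X = np.column_stack([np.ones_like(t_hat), t_hat])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]        # stage 2
            return beta[1]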

  4. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

    1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: Rⁿ → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted, subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n²/2) + O(n) storage locations, where n is the problem dimension.
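
    The OCOPTR/DRVOCR codes themselves are not reproduced here, but scipy's BFGS belongs to the same variable metric (quasi-Newton) family and mirrors the two usage modes: derivatives estimated by finite differences (as in OCOPTR) versus a user-supplied gradient (as in DRVOCR). A sketch on the Rosenbrock function:

        # Quasi-Newton minimization with and without an analytic gradient.
        import numpy as np
        from scipy.optimize import minimize

        def f(x):  # Rosenbrock function as a stand-in objective
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def grad_f(x):  # analytic gradient of the Rosenbrock function
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        x0 = np.array([-1.2, 1.0])
        res_fd   = minimize(f, x0, method="BFGS")              # derivatives estimated
        res_grad = minimize(f, x0, method="BFGS", jac=grad_f)  # derivatives supplied
        print(res_fd.x, res_grad.x)  # both should approach the minimum at (1, 1)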

  5. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance, which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense, defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive of method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of the IAT attribute categories and the facilitated processing of negative stimuli typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  6. Association between Anxiety Disorders and Heart Rate Variability in The Netherlands Study of Depression and Anxiety (NESDA)

    NARCIS (Netherlands)

    Licht, Carmilla M. M.; de Geus, Eco J. C.; van Dyck, Richard; Penninx, Brenda W. J. H.

    Objective: To determine whether patients with different types of anxiety disorder (panic disorder, social phobia, generalized anxiety disorder) have higher heart rate and lower heart rate variability compared with healthy controls in a sample that was sufficiently powered to examine the confounding

  7. Partial Granger causality--eliminating exogenous inputs and latent variables.

    Science.gov (United States)

    Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng

    2008-07-15

    Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.

  8. Accounting for genetic and environmental confounds in associations between parent and child characteristics: a systematic review of children-of-twins studies.

    Science.gov (United States)

    McAdams, Tom A; Neiderhiser, Jenae M; Rijsdijk, Fruhling V; Narusyte, Jurgita; Lichtenstein, Paul; Eley, Thalia C

    2014-07-01

    Parental psychopathology, parenting style, and the quality of intrafamilial relationships are all associated with child mental health outcomes. However, most research can say little about the causal pathways underlying these associations. This is because most studies are not genetically informative and are therefore not able to account for the possibility that associations are confounded by gene-environment correlation. That is, biological parents not only provide a rearing environment for their child, but also contribute 50% of their genes. Any associations between parental phenotype and child phenotype are therefore potentially confounded. One technique for disentangling genetic from environmental effects is the children-of-twins (COT) method. This involves using data sets comprising twin parents and their children to distinguish genetic from environmental associations between parent and child phenotypes. The COT technique has grown in popularity in the last decade, and we predict that this surge in popularity will continue. In the present article we explain the COT method for those unfamiliar with its use. We present the logic underlying this approach, discuss strengths and weaknesses, and highlight important methodological considerations for researchers interested in the COT method. We also cover variations on basic COT approaches, including the extended-COT method, capable of distinguishing forms of gene-environment correlation. We then present a systematic review of all the behavioral COT studies published to date. These studies cover such diverse phenotypes as psychosis, substance abuse, internalizing, externalizing, parenting, and marital difficulties. In reviewing this literature, we highlight past applications, identify emergent patterns, and suggest avenues for future research. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. 'Mechanical restraint-confounders, risk, alliance score': testing the clinical validity of a new risk assessment instrument.

    Science.gov (United States)

    Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise; Alkier Gildberg, Frederik

    2017-08-01

    Unstructured risk assessment, as well as confounders (underlying reasons for the patient's risk behaviour and alliance), risk behaviour, and parameters of alliance, have been identified as factors that prolong the duration of mechanical restraint among forensic mental health inpatients. To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical restraint. The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two closed forensic mental health inpatient units. The three sub-scales (Confounders, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and useable risk assessment instrument. MR-CRAS contains 18 clinically valid items, and the instrument can be used to support clinical decision-making regarding the possibility of releasing the patient from mechanical restraint. The present three studies have clinically validated a short MR-CRAS scale that is currently being psychometrically tested in a larger study.

  10. Methods to assess intended effects of drug treatment in observational studies are reviewed

    NARCIS (Netherlands)

    Klungel, Olaf H|info:eu-repo/dai/nl/181447649; Martens, Edwin P|info:eu-repo/dai/nl/088859010; Psaty, Bruce M; Grobbee, Diederik E; Sullivan, Sean D; Stricker, Bruno H Ch; Leufkens, Hubert G M|info:eu-repo/dai/nl/075255049; de Boer, A|info:eu-repo/dai/nl/075097346

    2004-01-01

    BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economical and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS:

  11. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available The interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method that is finding more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is defined, the attribute weights are calculated by the maximizing-deviation method, and the closeness coefficient to the ideal solution is computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.
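
    The interval neutrosophic machinery is not reproduced below; the sketch shows only the classical crisp TOPSIS skeleton that the paper extends, with a made-up decision matrix and weights, and all criteria treated as benefit-type.

        # Classical TOPSIS: normalize, weight, and rank alternatives by their
        # relative closeness to the ideal solution.
        import numpy as np

        def topsis(D, w):
            R = D / np.sqrt((D ** 2).sum(axis=0))    # vector-normalized matrix
            V = R * w                                 # weighted normalized matrix
            ideal, anti = V.max(axis=0), V.min(axis=0)
            d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
            d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
            return d_neg / (d_pos + d_neg)            # closeness coefficients

        D = np.array([[7.0, 9.0, 9.0], [8.0, 7.0, 8.0], [9.0, 6.0, 8.0]])
        w = np.array([0.4, 0.3, 0.3])
        print(topsis(D, w))  # higher closeness = better-ranked alternative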

  12. The control variable method: a fully implicit numerical method for solving conservation equations for unsteady multidimensional fluid flow

    International Nuclear Information System (INIS)

    Le Coq, G.; Boudsocq, G.; Raymond, P.

    1983-03-01

    The Control Variable Method is extended to multidimensional fluid flow transient computations. In this paper, the basic principles of the method are given. The method uses a fully implicit space discretization and is based on the decomposition of the momentum flux tensor into scalar, vectorial, and tensorial terms. Finally, some computations of viscosity-driven and buoyancy-driven flow in a cavity are presented.

  13. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and to avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.

  14. Improved flux calculations for viscous incompressible flow by the variable penalty method

    International Nuclear Information System (INIS)

    Kheshgi, H.; Luskin, M.

    1985-01-01

    The Navier-Stokes system for viscous, incompressible flow is considered, taking into account a replacement of the continuity equation by the perturbed continuity equation. The introduction of the approximation allows the pressure variable to be eliminated to obtain the system of equations for the approximate velocity. The penalty approximation is often applied to numerical discretizations since it provides a reduction in the size and band-width of the system of equations. Attention is given to error estimates, and to two numerical experiments which illustrate the error estimates considered. It is found that the variable penalty method provides an accurate solution for a much wider range of epsilon than the classical penalty method. 8 references
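
    For orientation, the standard penalty approximation referred to above replaces the incompressibility constraint by a perturbed one (standard textbook form, with ε the penalty parameter):

        \[ \nabla \cdot u_\varepsilon + \varepsilon\, p_\varepsilon = 0 \quad\Longrightarrow\quad p_\varepsilon = -\tfrac{1}{\varepsilon}\, \nabla \cdot u_\varepsilon , \]

    so the pressure can be eliminated from the momentum equation. In the variable penalty method, ε is allowed to vary (for instance with the mesh or the time step) instead of being held at a fixed small constant.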

  15. Modelling cardiac signal as a confound in EEG-fMRI and its application in focal epilepsy studies

    DEFF Research Database (Denmark)

    Liston, A. D.; Ellegaard Lund, Torben; Salek-Haddadi, A

    2006-01-01

    Cardiac noise has been shown to reduce the sensitivity of functional Magnetic Resonance Imaging (fMRI) to an experimental effect due to its confounding presence in the blood oxygenation level-dependent (BOLD) signal. Its effect is most severe in particular regions of the brain, and a method is yet to take it into account in routine fMRI analysis. This paper reports the development of a general and robust technique to improve the reliability of EEG-fMRI studies of BOLD signal correlated with interictal epileptiform discharges (IEDs). In these studies, ECG is routinely recorded, enabling cardiac effects to be modelled as effects of no interest. Our model is based on an over-complete basis set covering a linear relationship between cardiac-related MR signal and the phase of the cardiac cycle or time after pulse (TAP). This method showed that, on average, 24.6 +/- 10.9% of grey matter voxels ...

  16. Using traditional methods and indigenous technologies for coping with climate variability

    NARCIS (Netherlands)

    Stigter, C.J.; Zheng Dawei,; Onyewotu, L.O.Z.; Mei Xurong,

    2005-01-01

    In agrometeorology and management of meteorology related natural resources, many traditional methods and indigenous technologies are still in use or being revived for managing low external inputs sustainable agriculture (LEISA) under conditions of climate variability. This paper starts with the

  17. Variable scaling method and Stark effect in hydrogen atom

    International Nuclear Information System (INIS)

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  18. Collective variables method in relativistic theory

    International Nuclear Information System (INIS)

    Shurgaya, A.V.

    1983-01-01

    The classical theory of an N-component field is considered. The method of collective variables, accurately accounting for the conservation laws that follow from invariance under the homogeneous Lorentz group, is developed within the framework of generalized Hamiltonian dynamics. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group contain no interaction-dependent components, and their effect on the field function reduces to a geometrical one. The interaction is completely contained in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field function, and their canonically conjugate momenta. In this way the conservation laws following from the requirement of Lorentz invariance are accurately accounted for.

  19. Investigation of load reduction for a variable speed, variable pitch, and variable coning wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, K. [Univ. of Utah, Salt Lake City, UT (United States)

    1997-12-31

    A two-bladed, variable-speed and variable-pitch wind turbine was modeled using ADAMS® to evaluate the load reduction abilities of a variable coning configuration as compared to a teetered rotor, and also to evaluate control methods. The basic dynamic behavior of the variable coning turbine was investigated and compared to the teetered rotor under constant wind conditions as well as turbulent wind conditions. Results indicate the variable coning rotor has larger flap oscillation amplitudes and much lower root flap bending moments than the teetered rotor. Three methods of control were evaluated for turbulent wind simulations: a standard IPD control method, a generalized predictive control method, and a bias estimate control method. Each control method was evaluated for both the variable coning configuration and the teetered configuration. The ability of the different control methods to maintain the rotor speed near the desired set point is evaluated from the RMS error of rotor speed, and the activity of the control system is evaluated from the cycles per second of the blade pitch angle. All three of the methods were found to produce similar results for the variable coning rotor and the teetered rotor, as well as similar results to each other.

  20. A design method of compensators for multi-variable control system with PID controllers 'CHARLY'

    International Nuclear Information System (INIS)

    Fujiwara, Toshitaka; Yamada, Katsumi

    1985-01-01

    A systematic design method of compensators for a multi-variable control system having usual PID controllers in its loops is presented in this paper. The method is able: to determine the main manipulating variable corresponding to each controlled variable by means of a sensitivity analysis in the frequency domain; to tune PID controllers sufficiently to realize adequate control actions using a search technique for minimum values of cost functionals; to design compensators improving the control performance; and to simulate the total system for confirming the designed compensators. In the compensator design phase, the state-variable feedback gain is obtained by means of optimal regulator theory for the composite system of plant and PID controllers. Transfer-function-type compensators, the configurations of which were previously given, are then designed to approximate the frequency responses of the above-mentioned state feedback system. An example is illustrated for convenience. (author)

  1. Adjusting for the Confounding Effects of Treatment Switching—The BREAK-3 Trial: Dabrafenib Versus Dacarbazine

    Science.gov (United States)

    Abrams, Keith R.; Amonkar, Mayur M.; Stapelkamp, Ceilidh; Swann, R. Suzanne

    2015-01-01

    Background. Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48–1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Materials and Methods. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, “treatment group” (assumes treatment effect could continue until death) and “on-treatment observed” (assumes treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. Results. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE “treatment group” and “on-treatment observed” analyses performed similarly well. Conclusion. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching—a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Implications for Practice: Treatment switching is common in oncology trials, and the implications of this for the interpretation of the
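
    For readers unfamiliar with the RPSFTM adjustment, its core is the counterfactual (treatment-free) survival time together with a g-estimation search for the acceleration factor. The following is a minimal sketch under simplifying assumptions (no censoring or re-censoring; `test_statistic` is a user-supplied log-rank-type statistic; names are illustrative, not the trial's actual analysis code):

```python
import numpy as np

def counterfactual_time(t_off, t_on, psi):
    """RPSFTM counterfactual time: time spent off treatment plus time on
    treatment rescaled by the acceleration factor exp(psi)."""
    return t_off + np.exp(psi) * t_on

def g_estimate(t_off, t_on, arm, test_statistic, grid=np.linspace(-2.0, 2.0, 401)):
    """Grid-search g-estimation: choose the psi at which the counterfactual
    times look identical between randomized arms (statistic closest to 0)."""
    stats = [test_statistic(counterfactual_time(t_off, t_on, psi), arm)
             for psi in grid]
    return grid[int(np.argmin(np.abs(stats)))]
```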

  2. Clinical and evoked pain, personality traits, and emotional states: can familial confounding explain the associations?

    Science.gov (United States)

    Strachan, Eric; Poeschla, Brian; Dansie, Elizabeth; Succop, Annemarie; Chopko, Laura; Afari, Niloofar

    2015-01-01

    Pain is a complex phenomenon influenced by context and person-specific factors. Affective dimensions of pain involve both enduring personality traits and fleeting emotional states. We examined how personality traits and emotional states are linked with clinical and evoked pain in a twin sample. 99 female twin pairs were evaluated for clinical and evoked pain using the McGill Pain Questionnaire (MPQ) and dolorimetry, and completed the 120-item International Personality Item Pool (IPIP), the Positive and Negative Affect Scale (PANAS), and ratings of stress and mood. Using a co-twin control design we examined a) the relationship of personality traits and emotional states with clinical and evoked pain and b) whether genetics and common environment (i.e. familial factors) may account for the associations. Neuroticism was associated with the sensory component of the MPQ; this relationship was not confounded by familial factors. None of the emotional state measures was associated with the MPQ. PANAS negative affect was associated with lower evoked pressure pain threshold and tolerance; these associations were confounded by familial factors. There were no associations between IPIP traits and evoked pain. A relationship exists between neuroticism and clinical pain that is not confounded by familial factors. There is no similar relationship between negative emotional states and clinical pain. In contrast, the relationship between negative emotional states and evoked pain is strong while the relationship with enduring personality traits is weak. The relationship between negative emotional states and evoked pain appears to be non-causal and due to familial factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    Science.gov (United States)

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided: the two variables are often significantly correlated in older children and even in adults. PMID:27322371

  4. Risk assessment of groundwater level variability using variable Kriging methods

    Science.gov (United States)

    Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2015-04-01

    Assessment of the spatial variability of the water table level in aquifers provides useful information for optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems in groundwater resource availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal, and in combination with the Kriging methodologies provides the most accurate cross-validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the risk that certain locations exhibit with respect to the predefined minimum value set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
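
    To make the first of the three methodologies concrete, ordinary kriging reduces to solving a small linear system for the prediction weights. A minimal numpy sketch, assuming an exponential variogram as a stand-in for the Matérn model found optimal in the study (all names and parameters are illustrative):

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=500.0):
    """Exponential variogram model (stand-in for the Matern model)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, vgm=exp_variogram):
    """Predict the water table level at xy0 from observations (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = vgm(d); A[n, n] = 0.0
    b = np.ones(n + 1);          b[:n] = vgm(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)    # kriging weights plus Lagrange multiplier
    return w[:n] @ z             # best linear unbiased estimate at xy0
```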

  5. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    Science.gov (United States)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
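
    As a concrete illustration of category (3), look-up-table (LUT) inversion amounts to a nearest-neighbour search in spectral space. A minimal sketch with a toy forward model standing in for a real radiative transfer model such as PROSAIL (the function, grid and cost choice are all illustrative):

```python
import numpy as np

def toy_rtm(lai, wavelengths):
    """Toy forward model standing in for a real RTM: reflectance
    saturating with leaf area index (illustrative only)."""
    return 0.5 * (1.0 - np.exp(-0.5 * lai)) * np.ones_like(wavelengths)

wl = np.linspace(400.0, 2500.0, 211)              # wavelengths in nm
lut_lai = np.linspace(0.0, 8.0, 801)              # candidate LAI values
lut_spectra = np.array([toy_rtm(v, wl) for v in lut_lai])

def invert(observed):
    """Return the LUT entry whose simulated spectrum best matches the
    observed spectrum under a root-mean-square-error cost function."""
    cost = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    return lut_lai[int(np.argmin(cost))]
```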

  6. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...

  7. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  8. Study of input variables in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2013-01-01

    The Group Method of Data Handling - GMDH - is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A monitoring and diagnosis system was developed based on GMDH and ANN methodologies and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing the GMDH- and ANN-calculated values with measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically. On the other hand, the results of the ANN methodology are strongly dependent on which variables are used as neural network inputs. (author)
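
    A minimal sketch of the combinatorial idea behind one GMDH layer, assuming a simple half/half data split as the external selection criterion (the actual system's criterion and topology pruning are not reproduced here):

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X, y, keep=4):
    """One GMDH layer: fit a quadratic 'Ivakhnenko polynomial' to every
    pair of inputs, then keep the best nodes by external (holdout) error."""
    n = len(y); half = n // 2                    # first half fits, second selects
    candidates = []
    for i, j in combinations(range(X.shape[1]), 2):
        a, b = X[:, i], X[:, j]
        Z = np.column_stack([np.ones(n), a, b, a * b, a**2, b**2])
        coef, *_ = np.linalg.lstsq(Z[:half], y[:half], rcond=None)
        err = ((Z[half:] @ coef - y[half:]) ** 2).mean()
        candidates.append((err, Z @ coef))
    candidates.sort(key=lambda c: c[0])          # best external error first
    return np.column_stack([out for _, out in candidates[:keep]])
```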

  9. Application of Muskingum routing method with variable parameters in ungauged basin

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2011-03-01

    This paper describes a flood routing method applied in an ungauged basin, utilizing the Muskingum model with variable parameters, the wave travel time K and the discharge weighting coefficient x, based on the physical characteristics of the river reach and flood, including the reach slope, length, width, and flood discharge. Three formulas for estimating the parameters for wide rectangular, triangular, and parabolic cross sections are proposed, and the influence of the flood on the channel flow routing parameters is taken into account. The HEC-HMS hydrological model and the geospatial hydrologic analysis module HEC-GeoHMS were used to extract channel and watershed characteristics and to divide sub-basins. In addition, the initial and constant-rate method, the user synthetic unit hydrograph method, and the exponential recession method were used to estimate runoff volumes, the direct runoff hydrograph, and the baseflow hydrograph, respectively. The Muskingum model with variable parameters was then applied in the Louzigou Basin in Henan Province, China. Across 24 flood events, the percentages of events with a relative error of peak discharge less than 20% and of runoff volume less than 10% are both 100%, and the percentages of events with coefficients of determination greater than 0.8 are 83.33%, 91.67%, and 87.5% for rectangular, triangular, and parabolic cross sections, respectively. Therefore, this method is applicable to ungauged basins.
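
    The routing recursion itself is the classical Muskingum scheme; in a variable-parameter version, K and x are recomputed at every step from the reach and flood characteristics (the paper's parameter formulas for the three cross-section shapes are not reproduced here):

```python
def muskingum_route(inflow, K, x, dt, O0=None):
    """Route an inflow hydrograph; K (wave travel time) and x (weighting
    coefficient) are sequences, so the parameters can vary step to step."""
    O = [inflow[0] if O0 is None else O0]
    for t in range(1, len(inflow)):
        denom = 2.0 * K[t] * (1.0 - x[t]) + dt
        c0 = (dt - 2.0 * K[t] * x[t]) / denom
        c1 = (dt + 2.0 * K[t] * x[t]) / denom
        c2 = (2.0 * K[t] * (1.0 - x[t]) - dt) / denom
        O.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * O[-1])
    return O
```

    With constant K and x sequences this reduces to the standard fixed-parameter Muskingum method.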

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
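
    A minimal scikit-learn-style sketch of the backward-elimination loop examined in the paper (the drop fraction, estimator settings and use of OOB accuracy inside the loop are illustrative; as the record notes, OOB estimates computed within the selection loop can be upwardly biased, so external cross-validation is advisable):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def backward_elimination(X, y, drop_frac=0.2, min_vars=5):
    """Iteratively drop the least important predictors, tracking the
    out-of-bag (OOB) accuracy at each step."""
    keep = list(range(X.shape[1])); path = []
    while len(keep) >= min_vars:
        rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                    random_state=0).fit(X[:, keep], y)
        path.append((list(keep), rf.oob_score_))
        order = np.argsort(rf.feature_importances_)   # least important first
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = [keep[i] for i in sorted(order[n_drop:])]
    return path  # inspect OOB accuracy versus number of predictors
```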

  11. LandScape: a simple method to aggregate p--Values and other stochastic variables without a priori grouping

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie

    2016-01-01

    ...and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method...

  12. A method to standardize gait and balance variables for gait velocity.

    NARCIS (Netherlands)

    Iersel, M.B. van; Olde Rikkert, M.G.M.; Borm, G.F.

    2007-01-01

    Many gait and balance variables depend on gait velocity, which seriously hinders the interpretation of gait and balance data derived from walks at different velocities. However, as far as we know there is no widely accepted method to correct for effects of gait velocity on other gait and balance

  13. phMRI: methodological considerations for mitigating potential confounding factors

    Directory of Open Access Journals (Sweden)

    Julius H Bourke

    2015-05-01

    Pharmacological Magnetic Resonance Imaging (phMRI) is a variant of conventional MRI that adds pharmacological manipulations in order to study the effects of drugs, or uses pharmacological probes to investigate basic or applied (e.g., clinical) neuroscience questions. Issues that may confound the interpretation of results from various types of phMRI studies are briefly discussed, and a set of methodological strategies that can mitigate these problems is described. These include strategies that can be employed at every stage of investigation, from study design to interpretation of resulting data; additional techniques suited for use with clinical populations are also featured. Pharmacological MRI is a challenging area of research that has both significant advantages and formidable difficulties; however, with due consideration and use of these strategies, many of the key obstacles can be overcome.

  14. Control Method for Variable Speed Wind Turbines to Support Temporary Primary Frequency Control

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2014-01-01

    This paper develops a control method for variable speed wind turbines (VSWTs) to support temporary primary frequency control of a power system. The control method contains two parts: (1) up-regulation support control when a frequency drop event occurs; (2) down-regulation support control when a frequency rise event occurs...

  15. Modeling the solute transport by particle-tracing method with variable weights

    Science.gov (United States)

    Jiang, J.

    2016-12-01

    The particle-tracing method is usually used to simulate solute transport in fracture media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or gives a zero value of concentration. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies, each simulated independently with weight W/Int(W/C). If the weight W is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
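
    The splitting/termination rule described above can be stated in a few lines; a minimal sketch, assuming a particle is represented as a dict with a 'weight' entry and C > 0 is the current relative-concentration estimate at its site:

```python
import random

def adjust_weight(particle, C):
    """Splitting / Russian-roulette step from the record: even out the
    number of particles per site while keeping expected weight unbiased."""
    W = particle["weight"]
    if W > C:                          # split into n copies of weight W/n
        n = int(W / C)
        return [dict(particle, weight=W / n) for _ in range(n)]
    if W < C:                          # survive with probability W/C at weight C
        return [dict(particle, weight=C)] if random.random() < W / C else []
    return [particle]
```

    Both branches preserve the expected weight: Int(W/C) copies of weight W/Int(W/C) sum to W, and survival with probability W/C at weight C also has expectation W.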

  16. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
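
    Of the five criteria, the F-test is the simplest to reproduce; a minimal sketch using scikit-learn's one-way ANOVA F-test (the event samples, labels and `top` cut-off are illustrative):

```python
import numpy as np
from sklearn.feature_selection import f_classif

def rank_by_f_test(X, y, top=10):
    """Rank candidate input variables by a one-way ANOVA F-test, one of
    the five selection criteria compared in the record."""
    F, pval = f_classif(X, y)
    order = np.argsort(F)[::-1]        # largest F-statistic first
    return order[:top], F[order[:top]]
```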

  17. Interpersonal discrimination and depressive symptomatology: examination of several personality-related characteristics as potential confounders in a racial/ethnic heterogeneous adult sample

    OpenAIRE

Hunte, Haslyn ER; King, Katherine; Hicken, Margaret; Lee, Hedwig; Lewis, Tené T

    2013-01-01

    Background Research suggests that reports of interpersonal discrimination result in poor mental health. Because personality characteristics may either confound or mediate the link between these reports and mental health, there is a need to disentangle their role in order to better understand the nature of the discrimination-mental health association. We examined whether hostility, anger repression and expression, pessimism, optimism, and self-esteem served as confounders in the association between ...

  18. Walking speed-related changes in stride time variability: effects of decreased speed

    Directory of Open Access Journals (Sweden)

    Dubost Veronique

    2009-08-01

    Background: Conflicting results have been reported regarding the relationship between stride time variability (STV) and walking speed. While some studies failed to establish any relationship, others reported either a linear or a non-linear relationship. We therefore sought to determine the extent to which a decrease in self-selected walking speed influences STV among healthy young adults. Methods: The mean value, the standard deviation (SD) and the coefficient of variation (CoV) of stride time, as well as the mean value of stride velocity, were recorded during steady-state walking using the GAITRite® system in 29 healthy young adults who walked consecutively at 88%, 79%, 71%, 64%, 58%, 53%, 46% and 39% of their preferred walking speed. Results: The decrease in stride velocity significantly increased the mean value, SD and CoV of stride time. Conclusion: The results support the assumption that gait variability increases while walking speed decreases and, thus, gait might be more unstable when healthy subjects walk slower compared with their preferred walking speed. Furthermore, these results highlight that a decrease in walking speed can be a potential confounder while evaluating STV.

  19. Neuropsychological functioning in older people with type 2 diabetes: the effect of controlling for confounding factors.

    Science.gov (United States)

    Asimakopoulou, K G; Hampson, S E; Morrish, N J

    2002-04-01

    Neuropsychological functioning was examined in a group of 33 older (mean age 62.40 +/- 9.62 years) people with Type 2 diabetes (Group 1) and 33 non-diabetic participants matched with Group 1 on age, sex, premorbid intelligence and presence of hypertension and cardio/cerebrovascular conditions (Group 2). Data from the diabetic group, statistically corrected for confounding factors, were compared with data from the matched control group. The results suggested small cognitive deficits in the diabetic participants' verbal memory and mental flexibility (Logical Memory A and SS7). No differences were seen between the two samples in simple and complex visuomotor attention, sustained complex visual attention, attention efficiency, mental double tracking, implicit memory, and self-reported memory problems. These findings indicate minimal cognitive impairment in relatively uncomplicated Type 2 diabetes and demonstrate the importance of controlling and matching for confounding factors.

  20. DATA COLLECTION METHOD FOR PEDESTRIAN MOVEMENT VARIABLES

    Directory of Open Access Journals (Sweden)

    Hajime Inamura

    2000-01-01

    The need for tools for the design and evaluation of pedestrian areas, subway stations, entrance halls, shopping malls, escape routes, stadiums, etc. leads to the necessity of a pedestrian model. One such approach is the microscopic pedestrian simulation model. To be able to develop and calibrate a microscopic pedestrian simulation model, a number of variables need to be considered. As the first step of model development, data were collected using video, and the coordinates of the head paths were obtained through image processing. A number of variables can be gathered to describe the behavior of pedestrians from different points of view. This paper describes how to obtain, from video recording and simple image processing, variables that can represent the movement of pedestrians.

  1. Chasing the effects of Pre-analytical Confounders - a Multicentre Study on CSF-AD biomarkers

    Directory of Open Access Journals (Sweden)

    Maria Joao Leitao

    2015-07-01

    Core cerebrospinal fluid (CSF) biomarkers, Aβ42, Tau and pTau, have recently been incorporated in the revised criteria for Alzheimer's disease (AD). However, their widespread clinical application lacks standardization. Pre-analytical sample handling and storage play an important role in the reliable measurement of these biomarkers across laboratories. In this study, we aim to surpass the efforts of previous studies by employing a multicentre approach to assess the impact of less-studied CSF pre-analytical confounders in AD-biomarker quantification. Four different centres participated in this study and followed the same established protocol. CSF samples were analysed for the three biomarkers (Aβ42, Tau and pTau) and tested for different spinning conditions (temperature: room temperature (RT) vs. 4 °C; speed: 500 g vs. 2000 g vs. 3000 g), storage volume variations (25%, 50% and 75% of tube total volume), as well as freeze-thaw cycles (up to 5 cycles). The influence of sample routine parameters, inter-centre variability and the relative value of each biomarker (reported as normal/abnormal) was analysed. Centrifugation conditions did not influence biomarker levels, except for samples with a high CSF total protein content, where either non-centrifugation or centrifugation at RT, compared to 4 °C, led to higher Aβ42 levels. Reducing CSF storage volume from 75% to 50% of total tube capacity decreased Aβ42 concentration (within the analytical CV of the assay), whereas no change in Tau or pTau was observed. Moreover, the concentration of Tau and pTau appears to be stable for up to 5 freeze-thaw cycles, whereas Aβ42 levels decrease if CSF is freeze-thawed more than 3 times. This systematic study reinforces the need for CSF centrifugation at 4 °C prior to storage and highlights the influence of storage conditions on Aβ42 levels. This study contributes to the establishment of harmonized standard operating procedures that will help reduce inter-lab variability of CSF

  2. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research.

  3. Comparison of different calibration methods suited for calibration problems with many variables

    DEFF Research Database (Denmark)

    Holst, Helle

    1992-01-01

    This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...

  4. Effect of water quality and confounding factors on digestive enzyme activities in Gammarus fossarum.

    Science.gov (United States)

    Charron, L; Geffard, O; Chaumot, A; Coulaud, R; Queau, H; Geffard, A; Dedourge-Geffard, O

    2013-12-01

    The feeding activity and subsequent assimilation of the products of food digestion allow organisms to obtain energy for growth, maintenance and reproduction. Among these biological parameters, we studied digestive enzymes (amylase, cellulase and trypsin) in Gammarus fossarum to assess the impact of contaminants on the organisms' access to energy resources. However, to enable objective assessment of a toxic effect of decreased water quality on an organism's digestive capacity, it is necessary to establish reference values based on its natural variability as a function of changing biotic and abiotic factors. To limit the confounding influence of biotic factors, a caging approach with calibrated male organisms from the same population was used. This study relied on an in situ deployment at 23 sites in rivers of the Rhone basin, complemented by a laboratory experiment assessing the influence of two abiotic factors (temperature and conductivity). The results showed a small effect of conductivity on cellulase activity and a significant effect of temperature on digestive enzyme activity, but only at the lowest temperature (7 °C). The experimental conditions allowed us to define an environmental reference value for digestive enzyme activities and to select sites where water quality impacted the digestive capacity of the organisms. In addition to the feeding rate, this study showed the relevance of digestive enzymes as biomarkers to be used as an early warning tool reflecting organisms' health and the chemical quality of aquatic ecosystems.

  5. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    International Nuclear Information System (INIS)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  6. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    Energy Technology Data Exchange (ETDEWEB)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt [SwedPower AB, Stockholm (Sweden)

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  7. Heart period variability and psychopathology in urban boys at risk for delinquency.

    Science.gov (United States)

    Pine, D S; Wasserman, G A; Miller, L; Coplan, J D; Bagiella, E; Kovelenku, P; Myers, M M; Sloan, R P

    1998-09-01

    To examine associations between heart period variability (HPV) and psychopathology in young urban boys at risk for delinquency, a series of 69 7- to 11-year-old younger brothers of adjudicated delinquents received a standardized psychiatric evaluation and an assessment of HPV. Psychiatric symptoms were rated in two domains: externalizing and internalizing psychopathology. Continuous measures of both externalizing and internalizing psychopathology were associated with reductions in HPV components related to parasympathetic activity. These associations could not be explained by a number of potentially confounding variables, such as age, ethnicity, social class, body size, or family history of hypertension. Although familial hypertension predicted reduced HPV and externalizing psychopathology, associations between externalizing psychopathology and HPV were independent of familial hypertension. Psychiatric symptoms are associated with reduced HPV in young urban boys at risk for delinquency.

  8. Approaches for developing a sizing method for stand-alone PV systems with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada. Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic system (SAPV). Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between the variables of interest to the sizing problem, are one of these approaches. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method designed for systems with variable monthly energy demands. Following previous approaches, the proposed method is based on the concept of loss of load probability (LLP), a parameter used to characterize system design. The method includes information on the standard deviation of the loss of load probability (σLLP) and on two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of failures (σf). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP < 10^-2. We demonstrate that reliability depends not only on the sizing variables and on the distribution function of solar radiation, but also on the minimum value of total solar radiation on the receiver surface attained at a given location for a given monthly average clearness index. (author)
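
    The central quantity, loss of load probability, can be estimated by a simple daily energy-balance simulation; a toy sketch (array names, efficiency and initial state of charge are illustrative; the analytical method in the record works instead with the distribution of radiation and the failure statistics f and σf):

```python
def simulate_llp(pv_energy, demand, batt_cap, eff=0.9, soc0=0.5):
    """Estimate loss-of-load probability (LLP) for a stand-alone PV system:
    fraction of demanded energy not served over the simulated period."""
    soc, unmet, total = soc0 * batt_cap, 0.0, 0.0
    for gen, load in zip(pv_energy, demand):
        soc = min(batt_cap, soc + eff * gen)   # charge, clipped at capacity
        if soc >= load:
            soc -= load
        else:
            unmet += load - soc; soc = 0.0     # loss-of-load event
        total += load
    return unmet / total
```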

  9. P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length

    DEFF Research Database (Denmark)

    Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny

    2014-01-01

    Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...

  10. Variable aperture-based ptychographical iterative engine method.

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  11. ASSOCIATION BETWEEN EMOTIONAL VARIABLES AND SCHOOL ACHIEVEMENT

    Directory of Open Access Journals (Sweden)

    Christoph Randler

    2009-07-01

    Recent psychological studies highlight emotional aspects and show that they play an important role in individual learning processes. Positive emotions are supposed to positively influence learning and achievement processes, and negative ones to do the contrary. In this study, an educational unit "ecosystem lake" was used, during which achievement (three tests) and emotional variables (interest, well-being, anxiety and boredom; measured at the end of three pre-selected lessons) were monitored. The research question was to explore correlations between the emotional variables and the learning outcome of the teaching unit. Prior knowledge was regressed against the subsequent tests to account for its confounding effect. Regressions showed a highly significant influence of prior knowledge on the subsequent measurements of achievement. However, after accounting for prior knowledge, a positive correlation between interest/well-being and achievement and a negative correlation between anxiety/boredom and achievement was found. Further research and interventions should try to enhance positive emotions in biology lessons to positively influence achievement.

  12. Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.

    Science.gov (United States)

    Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara

    2018-02-01

    Objective The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus test its efficacy in predicting the risk of low-back pain (LBP). Background A validation study of the VLI method was published in this journal reporting promising results. Although several studies highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method Both VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM). Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group formed by 1,028 unexposed subjects. Results Data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant ( p value = .6526). Conclusion When comparing VLI_CMM with raw VLI, the former failed to prove itself as a better predictor of LBP risk. Application To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, authors of future studies should investigate potential association between the VLI and other damage variables.

  13. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable-splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and without multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

  14. Separating decadal global water cycle variability from sea level rise.

    Science.gov (United States)

    Hamlington, B D; Reager, J T; Lo, M-H; Karnauskas, K B; Leben, R R

    2017-04-20

    Under a warming climate, amplification of the water cycle and changes in precipitation patterns over land are expected to occur, subsequently impacting the terrestrial water balance. On global scales, such changes in terrestrial water storage (TWS) will be reflected in the water contained in the ocean and can manifest as global sea level variations. Naturally occurring climate-driven TWS variability can temporarily obscure the long-term trend in sea level rise, in addition to modulating the impacts of sea level rise through natural periodic undulation in regional and global sea level. The internal variability of the global water cycle, therefore, confounds both the detection and attribution of sea level rise. Here, we use a suite of observations to quantify and map the contribution of TWS variability to sea level variability on decadal timescales. In particular, we find that decadal sea level variability centered in the Pacific Ocean is closely tied to low frequency variability of TWS in key areas across the globe. The unambiguous identification and clean separation of this component of variability is the missing step in uncovering the anthropogenic trend in sea level and understanding the potential for low-frequency modulation of future TWS impacts including flooding and drought.

  15. Variable separation solutions for the Nizhnik-Novikov-Veselov equation via the extended tanh-function method

    International Nuclear Information System (INIS)

    Zhang Jiefang; Dai Chaoqing; Zong Fengde

    2007-01-01

    In this paper, with the variable separation approach and based on the general reduction theory, we successfully generalize the extended tanh-function method to obtain new types of variable separation solutions for the Nizhnik-Novikov-Veselov (NNV) equation. Among the solutions, two are new types of variable separation solutions, while the last is similar to the solution given by the Darboux transformation in Hu et al 2003 Chin. Phys. Lett. 20 1413.

  16. Variable threshold method for ECG R-peak detection.

    Science.gov (United States)

    Kew, Hsein-Ping; Jeong, Do-Un

    2011-10-01

    In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimize the inconvenience of wearing. The ECG signal is detected using a potential-measurement instrument system and transmitted to a personal computer via an ultra-low-power wireless data communication unit based on a Zigbee-compatible wireless sensor node. ECG signals carry a lot of clinical information for a cardiologist, and R-peak detection is especially important. R-peak detection generally uses a fixed threshold value, which leads to errors in peak detection when the baseline changes due to motion artifacts or when the signal size changes. A preprocessing stage comprising differentiation and a Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than the fixed-threshold method. R-peak detection using the MIT-BIH databases and long-term real-time ECG was performed in this research in order to evaluate the performance.
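
    A minimal sketch of a variable-threshold detector in the spirit of the record, using differentiation plus a Hilbert envelope and a per-window threshold (the window length and threshold fraction are illustrative tuning choices, not the paper's values):

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, win_s=2.0, frac=0.6):
    """Differentiate, take the Hilbert envelope, then threshold each
    window at a fraction of its local maximum so the threshold tracks
    baseline and amplitude changes."""
    env = np.abs(hilbert(np.diff(ecg, prepend=ecg[0])))
    win, peaks = int(win_s * fs), []
    for start in range(0, len(env), win):
        seg = env[start:start + win]
        thr = frac * seg.max()                 # variable, window-local threshold
        above = np.where(seg > thr)[0]
        if above.size:
            peaks.append(start + above[np.argmax(seg[above])])
    return np.array(peaks)
```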

  17. Propulsion and launching analysis of variable-mass rockets by analytical methods

    Directory of Open Access Journals (Sweden)

    D.D. Ganji

    2013-09-01

    In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical one is observed, which reveals that the analytical methods are effective and convenient. A parametric study is also performed, which includes the effects of exhaust velocity (Ce), burn rate (BR) of fuel and diameter (d) of the cylindrical rocket on the motion of a sample rocket; contours showing the sensitivity of these parameters are plotted. The main results indicate that the rocket velocity and altitude increase with increasing Ce and BR, and decrease with increasing rocket diameter and drag coefficient.
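
    A toy numerical counterpart to the analytical solutions discussed in the record, integrating the one-dimensional variable-mass equation of motion with explicit Euler (all parameter values and the constant burn-rate assumption are illustrative):

```python
import numpy as np

def launch(m0, BR, Ce, d, Cd=0.5, rho=1.2, g=9.81, dt=0.01, t_burn=10.0):
    """Integrate m(t) dv/dt = Ce*BR - m(t)*g - 0.5*rho*Cd*A*v|v|,
    with m(t) = m0 - BR*t, up to burnout."""
    A = np.pi * d**2 / 4.0
    v = h = t = 0.0
    while t < t_burn:
        m = m0 - BR * t
        if m <= 0.0:                       # guard against exhausting the mass
            break
        thrust = Ce * BR                   # momentum flux of the exhaust
        drag = 0.5 * rho * Cd * A * v * abs(v)
        v += dt * (thrust - m * g - drag) / m
        h += dt * v
        t += dt
    return v, h                            # velocity and altitude at burnout
```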

  18. The application of variable sampling method in the audit testing of insurance companies' premium income

    Directory of Open Access Journals (Sweden)

    Jovković Biljana

    2012-12-01

    The aim of this paper is to present the procedure of audit sampling using variable sampling methods for conducting tests of income from insurance premiums in the insurance company 'Takovo'. Since income from vehicle insurance (VI) and third-party vehicle insurance (TPVI) premiums has the dominant share of the insurance company's income, the application of this method is shown in the audit examination of these incomes. To investigate the applicability of these methods in testing the income of other insurance companies, we also implement the method of variable sampling in the audit testing of the premium income of the three leading insurance companies in Serbia: 'Dunav', 'DDOR' and 'Delta Generali' Insurance.

  19. Do time-invariant confounders explain away the association between job stress and workers' mental health? Evidence from Japanese occupational panel data.

    Science.gov (United States)

    Oshio, Takashi; Tsutsumi, Akizumi; Inoue, Akiomi

    2015-02-01

    It is well known that job stress is negatively related to workers' mental health, but most recent studies have not controlled for unobserved time-invariant confounders. In the current study, we attempted to validate previous observations on the association between job stress and workers' mental health by removing the effects of unobserved time-invariant confounders. We used data from three to four waves of a Japanese occupational cohort survey, focusing on 31,382 observations of 9741 individuals who participated in at least two consecutive waves. We estimated mean-centered fixed effects models to explain psychological distress, measured by Kessler 6 (K6) scores (range: 0-24), in terms of eight job stress indicators related to the job demands-control, effort-reward imbalance, and organizational injustice models. Mean-centered fixed effects models reduced the magnitude of the association between job stress and K6 scores to 44.8-54.2% of that observed from pooled ordinary least squares. However, the association remained highly significant even after controlling for unobserved time-invariant confounders for all job stress indicators. In addition, alternatively specified models showed the robustness of the results. In all, we concluded that the validity of major job stress models, which link job stress and workers' mental health, was robust, although unobserved time-invariant confounders led to an overestimation of the association. Copyright © 2014 Elsevier Ltd. All rights reserved.
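
    The mean-centered fixed-effects (within) estimator is easy to state in code; a minimal pandas/statsmodels sketch, assuming a long-format panel with hypothetical column names (`k6`, `job_demands`, `pid`):

```python
import pandas as pd
import statsmodels.api as sm

def within_estimator(df, outcome="k6", stressor="job_demands", person="pid"):
    """Demeaning each variable by person removes all time-invariant
    confounders; the slope on the demeaned stressor is the within effect."""
    cols = [outcome, stressor]
    demeaned = df[cols] - df.groupby(person)[cols].transform("mean")
    X = sm.add_constant(demeaned[stressor])
    return sm.OLS(demeaned[outcome], X).fit(
        cov_type="cluster", cov_kwds={"groups": df[person]})  # cluster-robust SEs
```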

  20. A sibling method for identifying vQTLs.

    Science.gov (United States)

    Conley, Dalton; Johnson, Rebecca; Domingue, Ben; Dawes, Christopher; Boardman, Jason; Siegal, Mark

    2018-01-01

    The propensity of a trait to vary within a population may have evolutionary, ecological, or clinical significance. In the present study we deploy sibling models to offer a novel and unbiased way to ascertain loci associated with the extent to which phenotypes vary (variance-controlling quantitative trait loci, or vQTLs). Previous methods for vQTL-mapping either exclude genetically related individuals or treat genetic relatedness among individuals as a complicating factor addressed by adjusting estimates for non-independence in phenotypes. The present method uses genetic relatedness as a tool to obtain unbiased estimates of variance effects rather than as a nuisance. The family-based approach, which utilizes random variation between siblings in minor allele counts at a locus, also allows controls for parental genotype, mean effects, and non-linear (dominance) effects that may spuriously appear to generate variation. Simulations show that the approach performs equally well as two existing methods (squared Z-score and DGLM) in controlling type I error rates when there is no unobserved confounding, and performs significantly better than these methods in the presence of small degrees of confounding. Using height and BMI as empirical applications, we investigate SNPs that alter within-family variation in height and BMI, as well as pathways that appear to be enriched. One significant SNP for BMI variability, in the MAST4 gene, replicated. Pathway analysis revealed one gene set, encoding members of several signaling pathways related to gap junction function, which appears significantly enriched for associations with within-family height variation in both datasets (while not enriched in analysis of mean levels). We recommend approximating laboratory random assignment of genotype using family data and more careful attention to the possible conflation of mean and variance effects.

  2. Quantitative Assessment of Blood Pressure Measurement Accuracy and Variability from Visual Auscultation Method by Observers without Receiving Medical Training

    Science.gov (United States)

    Feng, Yong; Chen, Aiqing

    2017-01-01

    This study aimed to quantify blood pressure (BP) measurement accuracy and variability with different techniques. Thirty video clips of BP recordings from the BHS training database were converted to Korotkoff sound waveforms. Ten observers without medical training were asked to determine BPs using (a) the traditional manual auscultatory method and (b) a visual auscultation method based on visualizing the Korotkoff sound waveform; each measurement was repeated three times on different days. The measurement error was calculated against the reference answers, and the measurement variability was calculated from the SD of the three repeats. Statistical analysis showed that, in comparison with the auscultatory method, the visual method significantly reduced overall variability from 2.2 to 1.1 mmHg for SBP and from 1.9 to 0.9 mmHg for DBP (both differences statistically significant). In conclusion, the visual auscultation method achieved an acceptable degree of BP measurement accuracy, with smaller variability than the traditional auscultatory method. PMID:29423405

  3. Educational gains in cause-specific mortality: Accounting for cognitive ability and family-level confounders using propensity score weighting.

    Science.gov (United States)

    Bijwaard, Govert E; Myrskylä, Mikko; Tynelius, Per; Rasmussen, Finn

    2017-07-01

    A negative educational gradient has been found for many causes of death. This association may be partly explained by confounding factors that affect both educational attainment and mortality. We correct the cause-specific educational gradient for observed individual background and unobserved family factors using an innovative method based on months lost due to a specific cause of death re-weighted by the probability of attaining a higher educational level. We use data on men with brothers from the Swedish Military Conscription Registry (1951-1983), linked to administrative registers. This dataset of some 700,000 men allows us to distinguish between five education levels and many causes of death. The empirical results reveal that raising the educational level from primary to tertiary would result in an additional 20 months of survival between ages 18 and 63. This improvement in mortality is mainly attributable to fewer deaths from external causes. The highly educated gain more than nine months due to the reduction in deaths from external causes, but gain only two months due to the reduction in cancer mortality and four months due to the reduction in cardiovascular mortality. Ignoring confounding would lead to an underestimation of the gains by educational attainment, especially for the less educated. Our results imply that if the education distribution of 50,000 Swedish men from the 1951 cohort were replaced with that of the corresponding 1983 cohort, 22% of the person-years that were lost to death between ages 18 and 63 would have been saved for this cohort. Copyright © 2017 Elsevier Ltd. All rights reserved.
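
    The re-weighting idea, months lost weighted by the estimated probability of attaining higher education, can be illustrated with a standard inverse-probability-weighting sketch. Everything below (variable names, data) is synthetic and shows only the mechanics, not the authors' exact estimator.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 10000
        cognitive = rng.normal(size=n)                     # e.g., conscription test score
        higher_ed = rng.binomial(1, 1 / (1 + np.exp(-cognitive)))
        months_lost = rng.exponential(20 - 6 * higher_ed)  # toy mortality outcome

        # Weight each man by the inverse probability of his attained education level.
        ps = LogisticRegression().fit(cognitive[:, None], higher_ed)
        p1 = ps.predict_proba(cognitive[:, None])[:, 1]
        w = np.where(higher_ed == 1, 1 / p1, 1 / (1 - p1))
        gap = (np.average(months_lost[higher_ed == 1], weights=w[higher_ed == 1])
               - np.average(months_lost[higher_ed == 0], weights=w[higher_ed == 0]))
        print(f"confounder-adjusted difference in months lost: {gap:.1f}")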

  4. Adjusting for the Confounding Effects of Treatment Switching - The BREAK-3 Trial: Dabrafenib Versus Dacarbazine.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, Keith R; Amonkar, Mayur M; Stapelkamp, Ceilidh; Swann, R Suzanne

    2015-07-01

    Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48-1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, "treatment group" (assumes the treatment effect could continue until death) and "on-treatment observed" (assumes the treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE "treatment group" and "on-treatment observed" analyses performed similarly well. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching; a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Treatment switching is common in oncology trials, and the implications of this for the interpretation of the clinical effectiveness and cost-effectiveness of the novel treatment are important to consider.
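
    The core of an RPSFTM is g-estimation of the counterfactual untreated time U(psi) = T_off + exp(psi) * T_on, choosing psi so that U is balanced across randomized arms. The Python sketch below ignores censoring and recensoring for brevity, so it is a toy version of the approach, not a trial-grade implementation.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(3)
        n = 400
        arm = rng.integers(0, 2, n)              # 1 = experimental, 0 = control
        u_true = rng.exponential(12.0, n)        # untreated survival times (months)
        psi_true = -0.4                          # treatment stretches time by exp(0.4)
        # Experimental arm treated from t=0; 57% of controls switch at 0.5*u.
        t_start = np.where(arm == 1, 0.0,
                           np.where(rng.random(n) < 0.57, 0.5 * u_true, u_true))
        t_off = t_start
        t_on = (u_true - t_start) * np.exp(-psi_true)   # observed on-treatment time

        def imbalance(psi):
            u_hat = t_off + np.exp(psi) * t_on          # back-transformed times
            stat = mannwhitneyu(u_hat[arm == 1], u_hat[arm == 0]).statistic
            return abs(stat - (arm == 1).sum() * (arm == 0).sum() / 2)

        grid = np.linspace(-1.0, 0.5, 151)
        psi_hat = grid[np.argmin([imbalance(p) for p in grid])]
        print(f"estimated psi = {psi_hat:.2f} (truth {psi_true})")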

  5. Neighbourhood social and built environment factors and falls in community-dwelling canadian older adults: A validation study and exploration of structural confounding

    Directory of Open Access Journals (Sweden)

    Afshin Vafaei

    2016-12-01

    Older persons are vulnerable to the ill effects of their social and built environment due to age-related limitations in mobility and bio-psychological vulnerability. Falls are common in older adults and result from complex interactions between individual, social, and contextual determinants. We addressed two methodological issues of neighbourhood-health and social epidemiological studies in this analysis: (1) the validity of measures of neighbourhood contexts, and (2) structural confounding resulting from social sorting mechanisms. Baseline data from the International Mobility in Aging Study were used. Samples included community-dwelling Canadians older than 65 living in Kingston (Ontario) and St-Hyacinthe (Quebec). We performed factor analysis and ecometric analysis to assess the validity of measures of neighbourhood social capital, socioeconomic status, and the built environment, and stratified tabular analyses to explore structural confounding. The scales all demonstrated good psychometric and ecometric properties. There was evidence of structural confounding in this sample of Canadian older adults, as some combinations of strata for the three neighbourhood measures had no population. This limits causal inference in studying relationships between neighbourhood factors and falls and should be taken into account in aetiological aging research. Keywords: Ecometric analysis, Falls, Social and built environment, Neighbourhoods, Older adults, Social Capital, Structural confounding, Validity

  6. System and method of modulating electrical signals using photoconductive wide bandgap semiconductors as variable resistors

    Science.gov (United States)

    Harris, John Richardson; Caporaso, George J; Sampayan, Stephen E

    2013-10-22

    A system and method for producing modulated electrical signals. The system uses a variable resistor having a photoconductive wide bandgap semiconductor material construction whose conduction response to changes in amplitude of incident radiation is substantially linear throughout a non-saturation region to enable operation in non-avalanche mode. The system also includes a modulated radiation source, such as a modulated laser, for producing amplitude-modulated radiation with which to direct upon the variable resistor and modulate its conduction response. A voltage source and an output port are both operably connected to the variable resistor so that an electrical signal may be produced at the output port by way of the variable resistor, either generated by activation of the variable resistor or propagating through the variable resistor. In this manner, the electrical signal is modulated by the variable resistor so as to have a waveform substantially similar to the amplitude-modulated radiation.

  7. Instrumental variable analysis as a complementary analysis in studies of adverse effects : venous thromboembolism and second-generation versus third-generation oral contraceptives

    NARCIS (Netherlands)

    Boef, Anna G C; Souverein, Patrick C|info:eu-repo/dai/nl/243074948; Vandenbroucke, Jan P; van Hylckama Vlieg, Astrid; de Boer, Anthonius|info:eu-repo/dai/nl/075097346; le Cessie, Saskia; Dekkers, Olaf M

    2016-01-01

    PURPOSE: A potentially useful role for instrumental variable (IV) analysis may be as a complementary analysis to assess the presence of confounding when studying adverse drug effects. There has been discussion on whether the observed increased risk of venous thromboembolism (VTE) for third-generation compared with second-generation oral contraceptives reflects a true effect or results from confounding.
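
    The complementary-IV logic can be shown with the simplest possible estimator: with a binary instrument (e.g., prescriber preference), the Wald ratio contrasts outcome and exposure rates across instrument levels. The simulation below is a generic illustration with made-up numbers, not the paper's analysis.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 20000
        z = rng.binomial(1, 0.5, n)              # instrument: prescribing preference
        c = rng.normal(size=n)                   # unmeasured confounder
        x = (rng.random(n) < 0.2 + 0.5 * z + 0.1 * c).astype(float)   # exposure
        y = 0.8 * x + 1.0 * c + rng.normal(size=n)                    # outcome

        naive = y[x == 1].mean() - y[x == 0].mean()    # confounded comparison
        wald = (y[z == 1].mean() - y[z == 0].mean()) / (x[z == 1].mean() - x[z == 0].mean())
        print(f"naive: {naive:.2f}, IV (Wald): {wald:.2f}, truth: 0.80")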

  8. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    Science.gov (United States)

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
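
    A scalar toy version of the constrained filtering loop conveys the idea: predict, correct with the measurement, then project the estimate back into physical bounds. A real IGCC estimator is multivariate and nonlinear (an EKF with a plant model); the sketch below only mirrors the structure.

        import numpy as np

        rng = np.random.default_rng(5)
        a, q, r = 0.95, 0.05, 0.5       # dynamics, process noise, measurement noise
        lo, hi = 0.0, 1.0               # physical bounds, e.g. a mass fraction
        x_true, x_hat, p = 0.3, 0.5, 1.0

        for _ in range(50):
            x_true = np.clip(a * x_true + rng.normal(0, np.sqrt(q)), lo, hi)
            z = x_true + rng.normal(0, np.sqrt(r))            # sensed plant output
            x_hat, p = a * x_hat, a * a * p + q               # predict
            k = p / (p + r)                                   # Kalman gain
            x_hat, p = x_hat + k * (z - x_hat), (1 - k) * p   # measurement correction
            x_hat = np.clip(x_hat, lo, hi)                    # constrain the estimate
        print(f"final estimate {x_hat:.3f} vs truth {x_true:.3f}")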

  9. Nonlinear Predictive Models for Multiple Mediation Analysis: With an Application to Explore Ethnic Disparities in Anxiety and Depression Among Cancer Survivors.

    Science.gov (United States)

    Yu, Qingzhao; Medeiros, Kaelen L; Wu, Xiaocheng; Jensen, Roxanne E

    2018-04-02

    Mediation analysis allows the examination of effects of a third variable (mediator/confounder) in the causal pathway between an exposure and an outcome. The general multiple mediation analysis method (MMA), proposed by Yu et al., improves on traditional methods (e.g., estimation of natural and controlled direct effects) by enabling the consideration of multiple mediators/confounders simultaneously and the use of linear and nonlinear predictive models for estimating mediation/confounding effects. Previous studies find that, compared with non-Hispanic cancer survivors, Hispanic survivors are more likely to endure anxiety and depression after cancer diagnoses. In this paper, we applied MMA to the MY-Health study to identify mediators/confounders and quantify the indirect effect of each identified mediator/confounder in explaining ethnic disparities in anxiety and depression among cancer survivors enrolled in the study. We considered a number of socio-demographic variables, tumor characteristics, and treatment factors as potential mediators/confounders and found that most of the ethnic differences in anxiety or depression between Hispanic and non-Hispanic white cancer survivors were explained by younger age at diagnosis, lower education level, lower rates of employment, a lower likelihood of being born in the USA, less insurance coverage, and less social support among Hispanic patients.
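
    For a single mediator and linear models, the decomposition that MMA generalizes reduces to the familiar product-of-coefficients rule: indirect effect = (exposure -> mediator) x (mediator -> outcome). The sketch below is that special case on synthetic data, not the MMA algorithm itself.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 5000
        hispanic = rng.binomial(1, 0.4, n)
        education = -0.6 * hispanic + rng.normal(size=n)    # candidate mediator
        anxiety = 0.2 * hispanic - 0.5 * education + rng.normal(size=n)

        a = np.polyfit(hispanic, education, 1)[0]           # exposure -> mediator
        X = np.column_stack([np.ones(n), hispanic, education])
        b_direct, b_med = np.linalg.lstsq(X, anxiety, rcond=None)[0][1:]
        print(f"direct = {b_direct:.2f}, indirect via education = {a * b_med:.2f}")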

  10. Regularized variable metric method versus the conjugate gradient method in solution of radiative boundary design problem

    International Nuclear Information System (INIS)

    Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.

    2007-01-01

    In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and is able to reach a higher-quality final solution than the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas.
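
    The comparison in the abstract, a variable metric (quasi-Newton) minimizer versus conjugate gradients on a regularized sum-of-squares, can be mimicked with scipy on a toy linear inverse problem. The kernel below is a random stand-in, not a radiation model.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        A = rng.normal(size=(40, 20))             # stand-in for the radiative kernel
        q_target = A @ rng.uniform(0.5, 1.5, 20)  # desired design-surface heat flux
        lam = 1e-3                                # Tikhonov regularization strength

        def objective(s):  # regularized sum of squared flux errors
            return np.sum((A @ s - q_target) ** 2) + lam * np.sum(s ** 2)

        for method in ("BFGS", "CG"):             # variable metric vs conjugate gradient
            res = minimize(objective, np.ones(20), method=method)
            print(f"{method}: f = {res.fun:.3e}, iterations = {res.nit}")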

  11. Negative confounding by essential fatty acids in methylmercury neurotoxicity associations

    DEFF Research Database (Denmark)

    Choi, Anna L; Mogensen, Ulla Brasch; Bjerve, Kristian S

    2014-01-01

    Concentrations of fatty acids were determined in cord serum phospholipids. Neuropsychological performance in verbal, motor, attention, spatial, and memory functions was assessed at 7 years of age. Multiple regression and structural equation models (SEMs) were carried out to determine the confounder-adjusted associations with methylmercury exposure. RESULTS: A short delay recall (in percent change) in the California Verbal Learning Test (CVLT) was associated with a doubling of cord blood methylmercury (-18.9, 95% confidence interval [CI]=-36.3, -1.51). The association became stronger after the inclusion of fatty acid concentrations in the analysis (-22.0, 95% CI=-39.4, -4.62). In structural equation models, poorer memory function (corresponding to a lower score in the learning trials and short delay recall in CVLT) was associated with a doubling of prenatal exposure to methylmercury after adjustment for fatty acid concentrations.

  12. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    Science.gov (United States)

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  13. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second feature is that the probability distribution of the functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables both the dependency between variables and their link to another variable, called a covariate, to be modeled; the covariate could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists of building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been proposed.

  14. The application of seasonal latent variable in forecasting electricity demand as an alternative method

    International Nuclear Information System (INIS)

    Sumer, Kutluk Kagan; Goktas, Ozlem; Hepsag, Aycan

    2009-01-01

    In this study, we used ARIMA, seasonal ARIMA (SARIMA) and, alternatively, a regression model with a seasonal latent variable to forecast electricity demand, using data from the 'Kayseri and Vicinity Electricity Joint-Stock Company' over the period 1997:1-2005:12. The study compares the forecasting performance of the ARIMA and SARIMA methods with that of the model with a seasonal latent variable. The results indicate that the ARIMA and SARIMA models are unsuccessful in forecasting electricity demand. The regression model with a seasonal latent variable gives more successful results than the ARIMA and SARIMA models because it can also accommodate seasonal fluctuations and structural breaks.
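
    The model contrast can be reproduced in miniature with statsmodels: a SARIMA fit versus an OLS regression on a trend plus monthly dummy variables (a simple stand-in for the seasonal latent variable). Data below are synthetic monthly demand.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(8)
        n = 108                                   # nine years of monthly data
        t = np.arange(n)
        demand = 100 + 0.3 * t + 8 * np.sin(2 * np.pi * (t % 12) / 12) + rng.normal(0, 2, n)

        sarima = SARIMAX(demand, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

        dummies = pd.get_dummies(t % 12).to_numpy()[:, 1:]   # 11 monthly dummies
        X = sm.add_constant(np.column_stack([t, dummies]).astype(float))
        ols = sm.OLS(demand, X).fit()
        print(f"SARIMA AIC: {sarima.aic:.1f}, seasonal-dummy OLS AIC: {ols.aic:.1f}")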

  15. A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System

    Directory of Open Access Journals (Sweden)

    Xu Hui-xiang

    2016-01-01

    To address the shortcomings of traditional variable step-size MPPT algorithms, a power-based variable step-size tracking method is proposed that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy mitigates the voltage fluctuation caused by the perturb-and-observe method while retaining the advantage of the constant-voltage method and simplifying the circuit topology. Following the theoretical derivation, the output power of the photovoltaic modules is used to adjust the duty cycle of the main switch, achieving stable output at the maximum power point, effectively reducing energy losses due to fluctuation, and improving inversion efficiency [3,4]. Experimental tests on a prototype confirm the theoretical derivation and yield the MPPT tracking curve.
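
    A minimal perturb-and-observe loop with a power-scaled (variable) step size shows the control idea: large power changes produce large duty-cycle steps, and the step shrinks near the maximum power point. The power curve is a toy stand-in for a PV module model.

        def pv_power(duty):                 # toy power-vs-duty curve, peak near 0.62
            return max(0.0, 1.0 - 8.0 * (duty - 0.62) ** 2)

        duty, last_p, direction, gain = 0.40, 0.0, 1, 0.05
        for _ in range(40):
            p = pv_power(duty)
            dp = p - last_p
            if dp < 0:
                direction = -direction      # power fell: reverse the perturbation
            step = gain * abs(dp) + 1e-3    # power-scaled step with a small floor
            duty += direction * step
            last_p = p
        print(f"duty = {duty:.3f}, power = {pv_power(duty):.3f}")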

  16. Excess Mortality in Hyperthyroidism: The Influence of Preexisting Comorbidity and Genetic Confounding: A Danish Nationwide Register-Based Cohort Study of Twins and Singletons

    Science.gov (United States)

    Brandt, Frans; Almind, Dorthe; Christensen, Kaare; Green, Anders; Brix, Thomas Heiberg

    2012-01-01

    Context: Hyperthyroidism is associated with severe comorbidity, such as stroke, and seems to confer increased mortality. However, it is unknown whether this increased mortality is explained by hyperthyroidism per se, comorbidity, and/or genetic confounding. Objective: The objective of the study was to investigate whether hyperthyroidism is associated with an increased mortality and, if so, whether the association is influenced by comorbidity and/or genetic confounding. Methods: This was an observational cohort study using record-linkage data from nationwide Danish health registers. We identified 4850 singletons and 926 twins from same-sex pairs diagnosed with hyperthyroidism. Each case was matched with four controls for age and gender. The Charlson score was calculated from discharge diagnoses on an individual level to measure comorbidity. Cases and controls were followed up for a mean of 10 yr (range 0–31 yr), and the hazard ratio (HR) for mortality was calculated using Cox regression analyses. Results: In singletons there was a significantly higher mortality in individuals diagnosed with hyperthyroidism than in controls [HR 1.37; 95% confidence interval (CI) 1.30–1.46]. This persisted after adjustment for preexisting comorbidity (HR 1,28; 95% CI 1.21–1.36). In twin pairs discordant for hyperthyroidism (625 pairs), the twin with hyperthyroidism had an increased mortality compared with the corresponding cotwin (HR 1.43; 95% CI 1.09–1.88). However, this was found only in dizygotic pairs (HR 1.80; 95% CI 1.27–2.55) but not in monozygotic pairs (HR 0.95; 95% CI 0.60–1.50). Conclusions: Hyperthyroidism is associated with an increased mortality independent of preexisting comorbidity. The study of twin pairs discordant for hyperthyroidism suggests that genetic confounding influences the association between hyperthyroidism and mortality. PMID:22930783

  17. Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites

    Science.gov (United States)

    Yong, A.; Boatwright, J.; Martin, A. J.

    2015-12-01

    The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
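
    VS30 is the travel-time average of VS over the top 30 meters, VS30 = 30 / sum(h_i / v_i), with the deepest layer clipped at 30 m. A small helper makes the definition concrete (the layer values below are invented):

        def vs30(thicknesses_m, velocities_ms):
            """Time-averaged shear-wave velocity over the top 30 m."""
            depth = travel_time = 0.0
            for h, v in zip(thicknesses_m, velocities_ms):
                h = min(h, 30.0 - depth)     # clip the profile at 30 m depth
                travel_time += h / v
                depth += h
                if depth >= 30.0:
                    break
            return 30.0 / travel_time

        print(vs30([5, 10, 25], [150, 300, 600]))   # ~327 m/s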

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  20. The spatial distribution of known predictors of autism spectrum disorders impacts geographic variability in prevalence in central North Carolina

    Directory of Open Access Journals (Sweden)

    Hoffman Kate

    2012-10-01

    Background: The causes of autism spectrum disorders (ASD) remain largely unknown and widely debated; however, evidence increasingly points to the importance of environmental exposures. A growing number of studies use geographic variability in ASD prevalence or exposure patterns to investigate the association between environmental factors and ASD. However, differences in the geographic distribution of established risk and predictive factors for ASD, such as maternal education or age, can interfere with investigations of ASD etiology. We evaluated geographic variability in the prevalence of ASD in central North Carolina and the impact of spatial confounding by known risk and predictive factors. Methods: Children meeting a standardized case definition for ASD at 8 years of age were identified through records-based surveillance for 8 counties biennially from 2002 to 2008 (n=532). Vital records were used to identify the underlying cohort (a 15% random sample of children born in the same years as children with an ASD, n=11,034) and to obtain birth addresses. We used generalized additive models (GAMs) to estimate the prevalence of ASD across the region by smoothing latitude and longitude. GAMs, unlike methods used in previous spatial analyses of ASD, allow for extensive adjustment of individual-level risk factors (e.g., maternal age and education) when evaluating spatial variability of disease prevalence. Results: Unadjusted maps revealed geographic variation in surveillance-recognized ASD. Children born in certain regions of the study area were up to 1.27 times as likely to be recognized as having ASD compared to children born in the study area as a whole (prevalence ratio (PR) range across the study area 0.57-1.27; global P=0.003). However, geographic gradients of ASD prevalence were attenuated after adjusting for spatial confounders (adjusted PR range 0.72-1.12 across the study area; global P=0.052). Conclusions: In these data, spatial variation in recognized ASD prevalence was largely accounted for by the spatial distribution of known risk and predictive factors.
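
    The GAM approach, smooth spatial terms plus adjustment for individual-level covariates in a logistic model, can be sketched with statsmodels. Two hedges: statsmodels' GLMGam fits additive splines in latitude and longitude rather than the bivariate smooth used in the paper, and all data below are synthetic.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.gam.api import GLMGam, BSplines

        rng = np.random.default_rng(9)
        n = 2000
        lat = rng.uniform(35.0, 36.5, n)
        lon = rng.uniform(-80.0, -78.0, n)
        maternal_age = rng.normal(28, 6, n)
        logit = -2 + 0.5 * np.sin(3 * lat) + 0.04 * (maternal_age - 28)
        asd = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        smoother = BSplines(np.column_stack([lat, lon]), df=[8, 8], degree=[3, 3])
        gam = GLMGam(asd, exog=sm.add_constant(maternal_age), smoother=smoother,
                     family=sm.families.Binomial()).fit()
        print(gam.params[:2])    # intercept and covariate-adjusted age effect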

  1. Daily commuting to work is not associated with variables of health.

    Science.gov (United States)

    Mauss, Daniel; Jarczok, Marc N; Fischer, Joachim E

    2016-01-01

    Commuting to work is thought to have a negative impact on employee health. We tested the association of work commute and different variables of health in German industrial employees. Self-rated variables of an industrial cohort (n = 3805; 78.9 % male) including absenteeism, presenteeism and indices reflecting stress and well-being were assessed by a questionnaire. Fasting blood samples, heart-rate variability and anthropometric data were collected. Commuting was grouped into one of four categories: 0-19.9, 20-44.9, 45-59.9, ≥60 min travelling one way to work. Bivariate associations between commuting and all variables under study were calculated. Linear regression models tested this association further, controlling for potential confounders. Commuting was positively correlated with waist circumference and inversely with triglycerides. These associations did not remain statistically significant in linear regression models controlling for age, gender, marital status, and shiftwork. No other association with variables of physical, psychological, or mental health and well-being could be found. The results indicate that commuting to work has no significant impact on well-being and health of German industrial employees.

  2. Efficient Method for Calculating the Composite Stiffness of Parabolic Leaf Springs with Variable Stiffness for Vehicle Rear Suspension

    Directory of Open Access Journals (Sweden)

    Wen-ku Shi

    2016-01-01

    The composite stiffness of parabolic leaf springs with variable stiffness is difficult to calculate using traditional integral equations. Numerical integration or FEA may be used but will require computer-aided software and long calculation times. An efficient method for calculating the composite stiffness of parabolic leaf springs with variable stiffness is developed and evaluated to reduce the complexity of calculation and shorten the calculation time. A simplified model for double-leaf springs with variable stiffness is built, and a composite stiffness calculation method for the model is derived using displacement superposition and material deformation continuity. The proposed method can be applied on triple-leaf and multileaf springs. The accuracy of the calculation method is verified by the rig test and FEA analysis. Finally, several parameters that should be considered during the design process of springs are discussed. The rig test and FEA analytical results indicate that the calculated results are acceptable. The proposed method can provide guidance for the design and production of parabolic leaf springs with variable stiffness. The composite stiffness of the leaf spring can be calculated quickly and accurately when the basic parameters of the leaf spring are known.

  3. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

  4. Discrete curved ray-tracing method for radiative transfer in an absorbing-emitting semitransparent slab with variable spatial refractive index

    International Nuclear Information System (INIS)

    Liu, L.H.

    2004-01-01

    A discrete curved ray-tracing method is developed to analyze the radiative transfer in one-dimensional absorbing-emitting semitransparent slab with variable spatial refractive index. The curved ray trajectory is locally treated as straight line and the complicated and time-consuming computation of ray trajectory is cut down. A problem of radiative equilibrium with linear variable spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions are determined by the proposed method and compared with the data in references, which are obtained by other different methods. The results show that the discrete curved ray-tracing method has a good accuracy in solving the radiative transfer in one-dimensional semitransparent slab with variable spatial refractive index
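
    The "locally straight" treatment amounts to slicing the slab into thin layers of constant refractive index and applying Snell's law, n_i sin(theta_i) = n_(i+1) sin(theta_(i+1)), at each interface. A minimal sketch with an invented linear index profile:

        import numpy as np

        n_layers = 100
        z = np.linspace(0.0, 1.0, n_layers + 1)     # slab sliced into thin layers
        n_of_z = 1.2 + 0.6 * z[:-1]                 # linear refractive-index profile
        theta = np.radians(40.0)                    # entry angle from the normal
        x = 0.0
        for i in range(n_layers):
            x += (z[i + 1] - z[i]) * np.tan(theta)  # straight segment in layer i
            if i + 1 < n_layers:
                s = n_of_z[i] * np.sin(theta) / n_of_z[i + 1]   # Snell's law
                if s >= 1.0:                        # total internal reflection
                    break
                theta = np.arcsin(s)
        print(f"lateral displacement across the slab: {x:.4f}")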

  5. Correlation between radon level and confounders of cancer. A note on epidemiological inference at low doses

    International Nuclear Information System (INIS)

    Hajnal, M.A.; Toth, E.; Hamori, K.; Minda, M.; Koteles, Gy.J.

    2007-01-01

    Objective. The aim of this study was to examine and further clarify the extent of radon- and progeny-induced carcinogenesis, both separately from and combined with other confounders and health risk factors. This work was financed by the National Development Agency, Hungary, under GVOP-3.1.1.-2004-05-0384/3.0. Methods. A case-control study was conducted in a Hungarian countryside region where, according to our preceding regional surveys, more than 20% of houses were estimated to have a yearly average radon level above 200 Bq m⁻³. Radon levels were measured with CR39 closed etched-track detectors for three seasons separately, yielding a yearly average by estimating the low summer level. The detectors were placed in the bedrooms, where people were expected to spend one third of the day. 520 patients with diagnosed cancers were included in these measurements, among whom 77 had developed lung or respiratory cancers. The control group consisted of 6333 individuals above 30 years of age. Lifestyle risk factors for cancer were collected by surveys covering social status, pollution from indoor heating, smoking and alcohol history, nutrition, exercise, and a mental health index. Except for smoking and alcohol habits, these cofactors were available only for the control group. To compare disease occurrences, multivariate generalised linear models were used. The case and control proportions along a given factor are binomially distributed, so the logit link function was used; for radon, both log and linear terms were tested. Results. Many known health confounders of cancers correlated with radon levels, with increased risks corresponding to an estimated total net increase of 50-150 Bq m⁻³. For lung cancers, the model with the terms radon, age, gender and smoking had the lowest Akaike Information Criterion (AIC). Age, gender and smoking contributed heavily to the observed lung cancer incidence. However log linear relationship

  6. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  7. Phosphate binder use and mortality among hemodialysis patients in the Dialysis Outcomes and Practice Patterns Study (DOPPS): evaluation of possible confounding by nutritional status.

    Science.gov (United States)

    Lopes, Antonio Alberto; Tong, Lin; Thumma, Jyothi; Li, Yun; Fuller, Douglas S; Morgenstern, Hal; Bommer, Jürgen; Kerr, Peter G; Tentori, Francesca; Akiba, Takashi; Gillespie, Brenda W; Robinson, Bruce M; Port, Friedrich K; Pisoni, Ronald L

    2012-07-01

    Poor nutritional status and both hyper- and hypophosphatemia are associated with increased mortality in maintenance hemodialysis (HD) patients. We assessed associations of phosphate binder prescription with survival and indicators of nutritional status in maintenance HD patients. Prospective cohort study (DOPPS [Dialysis Outcomes and Practice Patterns Study]), 1996-2008. 23,898 maintenance HD patients at 923 facilities in 12 countries. Patient-level phosphate binder prescription and case-mix-adjusted facility percentage of phosphate binder prescription using an instrumental-variable analysis. All-cause mortality. Overall, 88% of patients were prescribed phosphate binders. Distributions of age, comorbid conditions, and other characteristics showed small differences between facilities with higher and lower percentages of phosphate binder prescription. Patient-level phosphate binder prescription was associated strongly at baseline with indicators of better nutrition, i.e., higher values for serum creatinine, albumin, normalized protein catabolic rate, and body mass index, and absence of cachectic appearance. Overall, patients prescribed phosphate binders had 25% lower mortality (HR, 0.75; 95% CI, 0.68-0.83) when adjusted for serum phosphorus level and other covariates; further adjustment for nutritional indicators attenuated this association (HR, 0.88; 95% CI, 0.80-0.97). However, this inverse association was observed only for patients with serum phosphorus levels ≥3.5 mg/dL. In the instrumental-variable analysis, case-mix-adjusted facility percentage of phosphate binder prescription (range, 23%-100%) was associated positively with better nutritional status and inversely with mortality (HR for 10% more phosphate binders, 0.93; 95% CI, 0.89-0.96). Further adjustment for nutritional indicators reduced this association to an HR of 0.95 (95% CI, 0.92-0.99). Results were based on phosphate binder prescription; phosphate binder and nutritional data were cross-sectional.

  8. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristics of the crossbar array because the read margin depends on the number of word lines and bit lines. However, excessively high CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.

  9. Application of a primitive variable Newton's method for the calculation of an axisymmetric laminar diffusion flame

    International Nuclear Information System (INIS)

    Xu, Yuenong; Smooke, M.D.

    1993-01-01

    In this paper we present a primitive variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame sheet approximation. The numerical solution agrees well with experimental data in the major chemical species. The comparison of three sets of numerical results indicates that the stream function-vorticity solution using the approximate boundary conditions reported in the previous calculations predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive variable and stream function-vorticity solutions. The primitive variable Newton's method converges much faster than the other two methods. Because of much less computer memory required for the block-line tridiagonal solver compared to a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation

  10. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
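
    The r2VIM recipe, importances scaled by each run's minimal observed importance and thresholded across several runs, can be imitated with scikit-learn. One caveat: sklearn's impurity-based importance stands in for the permutation importance r2VIM uses, and the threshold below is arbitrary.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(10)
        n, p = 600, 50
        X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP-like 0/1/2 codes
        y = (X[:, 0] + X[:, 1] + rng.normal(0, 1.5, n) > 3).astype(int)

        rel_imps = []
        for seed in range(5):                               # several RF runs
            rf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X, y)
            imp = rf.feature_importances_
            floor = imp[imp > 0].min()                      # minimal observed importance
            rel_imps.append(imp / floor)
        selected = np.where(np.min(rel_imps, axis=0) >= 3.0)[0]  # large in every run
        print("selected variables:", selected)              # ideally columns 0 and 1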

  11. Determining Confounding Sensitivities In Eddy Current Thin Film Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Gros, Ethan; Udpa, Lalita; Smith, James A.; Wachs, Katelyn

    2016-07-01

    Eddy current (EC) techniques are widely used in industry to measure the thickness of non-conductive films on a metal substrate. This is done using a system whereby a coil carrying a high-frequency alternating current is used to create an alternating magnetic field at the surface of the instrument's probe. When the probe is brought near a conductive surface, the alternating magnetic field will induce ECs in the conductor. The substrate characteristics and the distance of the probe from the substrate (the coating thickness) affect the magnitude of the ECs. The induced currents load the probe coil, affecting the terminal impedance of the coil. The measured probe impedance is related to the liftoff between coil and conductor as well as the conductivity of the test sample. For a sample of known conductivity, the probe impedance can be converted into an equivalent film thickness value. The EC measurement can be confounded by a number of measurement parameters. It is the goal of this research to determine which physical properties of the measurement set-up and sample can adversely affect the thickness measurement. The eddy current testing is performed using a commercially available, hand-held eddy current probe (ETA3.3H spring-loaded eddy probe running at 8 MHz) that comes with a stand to hold the probe. The stand holds the probe and adjusts it on the z-axis to help position the probe in the correct area as well as make precise measurements. The signal from the probe is sent to a hand-held readout, where the results are recorded directly in terms of liftoff or film thickness. Understanding the effect of certain factors on the measurements of film thickness will help to evaluate how accurate the ETA3.3H spring

  12. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    Science.gov (United States)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method enables more accurate identification of solid state fermentation degree.
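
    PLS-DA itself is just PLS regression on one-hot class labels, run here on a subset of wavelength columns standing in for a CARS/SCARS selection. The spectra below are synthetic; the sketch shows the mechanics, not the paper's calibration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(11)
        n, p = 120, 1557                                  # spectra x wavelengths
        X = rng.normal(size=(n, p))
        y = rng.integers(0, 3, n)                         # three fermentation degrees
        X[np.arange(n), y * 10] += 2.0                    # class signal at 3 wavelengths

        selected = np.array([0, 10, 20])                  # stand-in for CARS/SCARS picks
        Y = np.eye(3)[y]                                  # one-hot encoded classes
        pls = PLSRegression(n_components=2).fit(X[:, selected], Y)
        pred = pls.predict(X[:, selected]).argmax(axis=1)
        print(f"training identification rate: {(pred == y).mean():.2%}")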

  13. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, which is introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves generating progressively the decoded image by means of an interpolation iterative procedure with a constant parameter. It is well-known that the majority of image details are added at the first steps of iterations in the conventional fractal decoding; hence the constant parameter for the interpolation decoding method must be set as a smaller value in order to achieve a better progressive decoding. However, it needs to take an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve the goal, this paper proposed an interpolation decoding scheme with variable (iteration-dependent) parameters and proved the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme has really achieved the above-mentioned goal
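
    The iteration described above has the generic form x_{n+1} = (1 - lambda_n) x_n + lambda_n T(x_n); making lambda_n iteration-dependent is what lets decoding start slowly and accelerate later. The sketch below uses a toy contraction T in place of the fractal decoding operator, so it illustrates the schedule only.

        import numpy as np

        def T(x):                              # toy contraction; fixed point at x = 2
            return 0.5 * x + 1.0

        x = np.zeros(4)
        for n in range(60):
            lam = min(0.1 + 0.03 * n, 1.0)     # variable interpolation parameter
            x = (1 - lam) * x + lam * T(x)     # interpolation decoding step
        print(x)                               # converges to the fixed point 2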

  14. The relationship between glass ceiling and power distance as a cultural variable by a new method

    Directory of Open Access Journals (Sweden)

    Naide Jahangirov

    2015-12-01

    Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method, developed as a complement to the correlation analysis of the survey, to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operated as part of a non-profit private university in Ankara, Turkey. The analysis revealed that the female staff perceived the glass ceiling and the power distance more intensely than the male staff. In addition, a medium-level relationship was found between power distance and glass ceiling perception among female staff.

  15. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report--Part II.

    Science.gov (United States)

    Cox, Emily; Martin, Bradley C; Van Staa, Tjeerd; Garbe, Edeltraut; Siebert, Uwe; Johnson, Michael L

    2009-01-01

    The goal of comparative effectiveness analysis is to examine the relationship between two variables: treatment or exposure, and effectiveness or outcome. Unlike data obtained through randomized controlled trials, researchers face greater challenges with causal inference in observational studies. Recognizing these challenges, a task force was formed to develop a guidance document on methodological approaches to address these biases. The task force was commissioned and a Chair was selected by the International Society for Pharmacoeconomics and Outcomes Research Board of Directors in October 2007. This report, the second of three reported in this issue of the Journal, discusses the inherent biases when using secondary data sources for comparative effectiveness analysis and provides methodological recommendations to help mitigate these biases. The task force report provides recommendations and tools for researchers to mitigate threats to validity from bias and confounding in measurement of exposure and outcome. Recommendations on study design included: the need for a data analysis plan with causal diagrams; detailed attention to classification bias in the definition of exposure and clinical outcome; careful and appropriate use of restriction; and extreme care to identify and control for confounding factors, including time-dependent confounding. The design of nonrandomized studies of comparative effectiveness faces several daunting issues, including measurement of exposure and outcome challenged by misclassification and confounding. Use of causal diagrams and restriction are two techniques that can improve the theoretical basis for analyzing treatment effects in study populations of more homogeneity, with reduced loss of generalizability.

  16. VITAMIN A DEFICIENCY IN BRAZILIAN CHILDREN AND ASSOCIATED VARIABLES.

    Science.gov (United States)

    Lima, Daniela Braga; Damiani, Lucas Petri; Fujimori, Elizabeth

    2018-03-29

    To analyze the variables associated with vitamin A deficiency (VAD) in Brazilian children aged 6 to 59 months, considering a hierarchical model of determination. This is part of the National Survey on Demography and Health of Women and Children, held in 2006. Data analysis included 3,417 children aged from six to 59 months with retinol data. Vitamin A deficiency was defined as serum retinol below 0.70 μmol/L. Poisson regression analyses were performed, with the significance level set at 5%, using a hierarchical model of determination that considered three conglomerates of variables: those linked to the structural processes of the community (socioeconomic-demographic variables); those linked to the immediate environment of the child (maternal variables, safety, and food consumption); and individual features (biological characteristics of the child). Data were expressed as prevalence ratios (PR). After adjustment for confounding variables, the following remained associated with VAD: living in the Southeast [PR=1.59; 95%CI 1.19-2.17] or Northeast [PR=1.56; 95%CI 1.16-2.15]; living in an urban area [PR=1.31; 95%CI 1.02-1.72]; and mother aged ≥36 years [PR=2.28; 95%CI 1.37-3.98]. The consumption of meat at least once in the last seven days was a protective factor [PR=0.24; 95%CI 0.13-0.42]. The main variables associated with VAD in the country are related to structural processes of society and to the immediate, but not individual, environment of the child.
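
    Prevalence ratios from Poisson regression are typically obtained with robust (sandwich) standard errors; a minimal statsmodels sketch on synthetic data (hypothetical covariates, invented effect sizes):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(12)
        n = 3417
        urban = rng.binomial(1, 0.7, n)
        ate_meat = rng.binomial(1, 0.8, n)
        p_vad = 0.12 * (1.3 ** urban) * (0.4 ** ate_meat)   # multiplicative PRs
        vad = rng.binomial(1, p_vad)

        X = sm.add_constant(np.column_stack([urban, ate_meat]).astype(float))
        fit = sm.GLM(vad, X, family=sm.families.Poisson()).fit(cov_type="HC0")
        print(np.exp(fit.params[1:]))   # adjusted PRs, approx. 1.3 and 0.4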

  17. Bayesian inference in a discrete shock model using confounded common cause data

    International Nuclear Information System (INIS)

    Kvam, Paul H.; Martz, Harry F.

    1995-01-01

    We consider redundant systems of identical components for which reliability is assessed statistically using only demand-based failures and successes. Direct assessment of system reliability can lead to gross errors in estimation if there exist external events in the working environment that cause two or more components in the system to fail in the same demand period which have not been included in the reliability model. We develop a simple Bayesian model for estimating component reliability and the corresponding probability of common cause failure in operating systems for which the data is confounded; that is, the common cause failures cannot be distinguished from multiple independent component failures in the narrative event descriptions

  18. Supermathematics and its applications in statistical physics Grassmann variables and the method of supersymmetry

    CERN Document Server

    Wegner, Franz

    2016-01-01

    This text presents the mathematical concepts of Grassmann variables and the method of supersymmetry to a broad audience of physicists interested in applying these tools to disordered and critical systems, as well as related topics in statistical physics. Based on many courses and seminars held by the author, one of the pioneers in this field, the reader is given a systematic and tutorial introduction to the subject matter. The algebra and analysis of Grassmann variables is presented in part I. The mathematics of these variables is applied to a random matrix model, path integrals for fermions, dimer models and the Ising model in two dimensions. Supermathematics - the use of commuting and anticommuting variables on an equal footing - is the subject of part II. The properties of supervectors and supermatrices, which contain both commuting and Grassmann components, are treated in great detail, including the derivation of integral theorems. In part III, supersymmetric physical models are considered. While supersym...
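
    For reference, the defining algebra and Berezin integration rules for Grassmann variables that part I builds on (standard results, stated here independently of the book):

        \[
        \theta_i \theta_j = -\theta_j \theta_i, \qquad \theta_i^2 = 0, \qquad
        \int d\theta \, 1 = 0, \qquad \int d\theta \, \theta = 1, \qquad
        \int d\bar\theta \, d\theta \; e^{-\bar\theta a \theta} = a .
        \]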

  19. No evidence for thermal transgenerational plasticity in metabolism when minimizing the potential for confounding effects.

    Science.gov (United States)

    Kielland, Ø N; Bech, C; Einum, S

    2017-01-11

    Environmental change may cause phenotypic changes that are inherited across generations through transgenerational plasticity (TGP). If TGP is adaptive, offspring fitness increases with an increasing match between parent and offspring environment. Here we test for adaptive TGP in somatic growth and metabolic rate in response to temperature in the clonal zooplankton Daphnia pulex. Animals of the first focal generation experienced thermal transgenerational 'mismatch' (parental and offspring temperatures differed), whereas conditions of the next two generations matched the (grand)maternal thermal conditions. Adjustments of metabolic rate occurred during the lifetime of the first generation (i.e. within-generation plasticity). However, no further change was observed during the subsequent two generations, as would be expected under TGP. Furthermore, we observed no tendency for increased juvenile somatic growth (a trait highly correlated with fitness in Daphnia) over the three generations when reared at new temperatures. These results are inconsistent with existing studies of thermal TGP, and we describe how previous experimental designs may have confounded TGP with within-generation plasticity and selective mortality. We suggest that the current evidence for thermal TGP is weak. To increase our understanding of the ecological and evolutionary role of TGP, future studies should more carefully identify possible confounding factors. © 2017 The Author(s).

  20. Method of collective variables with reference system for the grand canonical ensemble

    International Nuclear Information System (INIS)

    Yukhnovskii, I.R.

    1989-01-01

    A method of collective variables with special reference system for the grand canonical ensemble is presented. An explicit form is obtained for the basis sixth-degree measure density needed to describe the liquid-gas phase transition. Here the author presents the fundamentals of the method, which are as follows: (1) the functional form for the partition function in the grand canonical ensemble; (2) derivation of thermodynamic relations for the coefficients of the Jacobian; (3) transition to the problem on an adequate lattice; and (4) obtaining of the explicit form for the functional of the partition function

  1. High SNR Acquisitions Improve the Repeatability of Liver Fat Quantification Using Confounder-corrected Chemical Shift-encoded MR Imaging

    Science.gov (United States)

    Motosugi, Utaroh; Hernando, Diego; Wiens, Curtis; Bannas, Peter; Reeder, Scott B.

    2017-01-01

    Purpose: To determine whether high signal-to-noise ratio (SNR) acquisitions improve the repeatability of liver proton density fat fraction (PDFF) measurements using confounder-corrected chemical shift-encoded magnetic resonance (MR) imaging (CSE-MRI). Materials and Methods: Eleven fat-water phantoms were scanned with 8 different protocols with varying SNR. After repositioning the phantoms, the same scans were repeated to evaluate the test-retest repeatability. Next, an in vivo study was performed with 20 volunteers and 28 patients scheduled for liver magnetic resonance imaging (MRI). Two CSE-MRI protocols with standard and high SNR were repeated to assess test-retest repeatability. MR spectroscopy (MRS)-based PDFF was acquired as a standard of reference. The standard deviation (SD) of the difference (Δ) of PDFF measured in the two repeated scans was used to quantify repeatability. The correlation between PDFF of CSE-MRI and MRS was calculated to assess accuracy. The SD of Δ and the correlation coefficients of the two protocols (standard- and high-SNR) were compared using the F-test and t-test, respectively. Two reconstruction algorithms (complex-based and magnitude-based) were used for both the phantom and in vivo experiments. Results: The phantom study demonstrated that higher SNR improved the repeatability for both complex- and magnitude-based reconstruction. Similarly, the in vivo study demonstrated that the repeatability of the high-SNR protocol (SD of Δ = 0.53 for complex-based and 0.85 for magnitude-based fit) was significantly better than that of the standard-SNR protocol (SD of Δ = 0.77 for the complex-based fit; for the magnitude-based comparison, P = 0.003). No significant difference was observed in the accuracy between standard- and high-SNR protocols. Conclusion: Higher SNR improves the repeatability of fat quantification using confounder-corrected CSE-MRI. PMID:28190853

  2. Offspring ADHD as a risk factor for parental marital problems: controls for genetic and environmental confounds.

    Science.gov (United States)

    Schermerhorn, Alice C; D'Onofrio, Brian M; Slutske, Wendy S; Emery, Robert E; Turkheimer, Eric; Harden, K Paige; Heath, Andrew C; Martin, Nicholas G

    2012-12-01

    Previous studies have found that child attention-deficit/hyperactivity disorder (ADHD) is associated with more parental marital problems. However, the reasons for this association are unclear. The association might be due to genetic or environmental confounds that contribute to both marital problems and ADHD. Data were drawn from the Australian Twin Registry, including 1,296 individual twins, their spouses, and offspring. We studied adult twins who were discordant for offspring ADHD. Using a discordant twin pairs design, we examined the extent to which genetic and environmental confounds, as well as measured parental and offspring characteristics, explain the ADHD-marital problems association. Offspring ADHD predicted parental divorce and marital conflict. The associations were also robust when comparing differentially exposed identical twins to control for unmeasured genetic and environmental factors, when controlling for measured maternal and paternal psychopathology, when restricting the sample based on timing of parental divorce and ADHD onset, and when controlling for other forms of offspring psychopathology. Each of these controls rules out alternative explanations for the association. The results of the current study converge with those of prior research in suggesting that factors directly associated with offspring ADHD increase parental marital problems.

  3. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    Science.gov (United States)

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process the important lesson from other endpoints, such as skin or eye irritation - to account for the variability of the reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates for EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and provide estimates thereof.

  4. On Estimation of the Survivor Average Causal Effect in Observational Studies when Important Confounders are Missing Due to Death

    Science.gov (United States)

    Egleston, Brian L.; Scharfstein, Daniel O.; MacKenzie, Ellen

    2008-01-01

    We focus on estimation of the causal effect of treatment on the functional status of individuals at a fixed point in time t* after they have experienced a catastrophic event, from observational data with the following features: (1) treatment is imposed shortly after the event and is non-randomized, (2) individuals who survive to t* are scheduled to be interviewed, (3) there is interview non-response, (4) individuals who die prior to t* are missing information on pre-event confounders, (5) medical records are abstracted on all individuals to obtain information on post-event, pre-treatment confounding factors. To address the issue of survivor bias, we seek to estimate the survivor average causal effect (SACE), the effect of treatment on functional status among the cohort of individuals who would survive to t* regardless of whether or not assigned to treatment. To estimate this effect from observational data, we need to impose untestable assumptions, which depend on the collection of all confounding factors. Since pre-event information is missing on those who die prior to t*, it is unlikely that these data are missing at random (MAR). We introduce a sensitivity analysis methodology to evaluate the robustness of SACE inferences to deviations from the MAR assumption. We apply our methodology to the evaluation of the effect of trauma center care on vitality outcomes using data from the National Study on Costs and Outcomes of Trauma Care. PMID:18759833

  5. Cascaded discrimination of normal, abnormal, and confounder classes in histopathology: Gleason grading of prostate cancer

    Directory of Open Access Journals (Sweden)

    Doyle Scott

    2012-10-01

    Background: Automated classification of histopathology involves identification of multiple classes, including benign, cancerous, and confounder categories. The confounder tissue classes can often mimic and share attributes with both the diseased and normal tissue classes, and can be particularly difficult to identify, both manually and by automated classifiers. In the case of prostate cancer, there may be several confounding tissue types present in a biopsy sample, posing as major sources of diagnostic error for pathologists. Two common multi-class approaches are one-shot classification (OSC), where all classes are identified simultaneously, and one-versus-all (OVA), where a “target” class is distinguished from all “non-target” classes. OSC is typically unable to handle discrimination of classes of varying similarity (e.g., with images of prostate atrophy and high grade cancer), while OVA forces several heterogeneous classes into a single “non-target” class. In this work, we present a cascaded (CAS) approach to classifying prostate biopsy tissue samples, where images from different classes are grouped to maximize intra-group homogeneity while maximizing inter-group heterogeneity. Results: We apply the CAS approach to categorize 2000 tissue samples taken from 214 patient studies into seven classes: epithelium, stroma, atrophy, prostatic intraepithelial neoplasia (PIN), and prostate cancer Gleason grades 3, 4, and 5. A series of increasingly granular binary classifiers are used to split the different tissue classes until the images have been categorized into a single unique class. Our automatically-extracted image feature set includes architectural features based on the location of nuclei within the tissue sample as well as texture features extracted on a per-pixel level. The CAS strategy yields a positive predictive value (PPV) of 0.86 in classifying the 2000 tissue images into one of 7 classes, compared with the OVA (0.77 PPV) and OSC...

  6. Familial confounding of the association between maternal smoking during pregnancy and offspring substance use and problems.

    Science.gov (United States)

    D'Onofrio, Brian M; Rickert, Martin E; Langström, Niklas; Donahue, Kelly L; Coyne, Claire A; Larsson, Henrik; Ellingson, Jarrod M; Van Hulle, Carol A; Iliadou, Anastasia N; Rathouz, Paul J; Lahey, Benjamin B; Lichtenstein, Paul

    2012-11-01

    Previous epidemiological, animal, and human cognitive neuroscience research suggests that maternal smoking during pregnancy (SDP) causes increased risk of substance use/problems in offspring. To determine the extent to which the association between SDP and offspring substance use/problems depends on confounded familial background factors by using a quasi-experimental design. We used 2 separate samples from the United States and Sweden. The analyses prospectively predicted multiple indices of substance use and problems while controlling for statistical covariates and comparing differentially exposed siblings to minimize confounding. Offspring of a representative sample of women in the United States (sample 1) and the total Swedish population born during the period from January 1, 1983, to December 31, 1995 (sample 2). Adolescent offspring of the women in the National Longitudinal Survey of Youth 1979 (n = 6904) and all offspring born in Sweden during the 13-year period (n = 1,187,360). Self-reported adolescent alcohol, cigarette, and marijuana use and early onset (before 14 years of age) of each substance (sample 1) and substance-related convictions and hospitalizations for an alcohol- or other drug-related problem (sample 2). The same pattern emerged for each index of substance use/problems across the 2 samples. At the population level, maternal SDP predicted every measure of offspring substance use/problems in both samples, ranging from adolescent alcohol use (hazard ratio [HR](moderate), 1.32 [95% CI, 1.22-1.43]; HR(high), 1.33 [1.17-1.53]) to a narcotics-related conviction (HR(moderate), 2.23 [2.14-2.31]; HR(high), 2.97 [2.86-3.09]). When comparing differentially exposed siblings to minimize genetic and environmental confounds, however, the association between SDP and each measure of substance use/problems was minimal and not statistically significant. The association between maternal SDP and offspring substance use/problems is likely due to familial background

  7. Recommendations to standardize preanalytical confounding factors in Alzheimer's and Parkinson's disease cerebrospinal fluid biomarkers

    DEFF Research Database (Denmark)

    del Campo, Marta; Mollenhauer, Brit; Bertolotto, Antonio

    2012-01-01

    Early diagnosis of neurodegenerative disorders such as Alzheimer's (AD) or Parkinson's disease (PD) is needed to slow down or halt the disease at the earliest stage. Cerebrospinal fluid (CSF) biomarkers can be a good tool for early diagnosis. However, their use in clinical practice is challenging ... the need to establish standardized operating procedures. Here, we merge two previous consensus guidelines for preanalytical confounding factors in order to achieve one exhaustive guideline updated with new evidence for Aβ42, total tau and phosphorylated tau, and α-synuclein. The proposed standardized...

  8. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  9. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    Science.gov (United States)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena such as transformation of pollutants, energy, particles and many others can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and the finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely based on a set of scattered nodes. The proposed method is validated in solving three different examples including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, the so-called positive scheme, is observed in solving concentration transport phenomena.
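
    The following is a minimal 1-D illustration of the θ-weighted time discretization only; it substitutes standard finite differences for the paper's MLS spatial approximation and uses an integer-order time derivative, so it is a simplified analogue under those assumptions rather than the V-OTFA-DE scheme itself.

```python
# Minimal sketch of a theta-weighted time scheme for 1-D advection-diffusion,
# u_t + a u_x = D u_xx, on a uniform grid. Illustration only: the paper's MLS
# spatial approximation and variable-order fractional derivative are replaced
# by standard finite differences and an integer-order derivative.
import numpy as np

nx, nt = 101, 200
L, T = 1.0, 0.5
a, D, theta = 1.0, 0.01, 0.5            # theta = 0.5 -> Crank-Nicolson
dx, dt = L / (nx - 1), T / nt

x = np.linspace(0, L, nx)
u = np.exp(-200 * (x - 0.3) ** 2)       # initial pulse

# Spatial operator A (central differences for advection and diffusion).
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = a / (2 * dx) + D / dx**2
    A[i, i] = -2 * D / dx**2
    A[i, i + 1] = -a / (2 * dx) + D / dx**2

I = np.eye(nx)
lhs = I - theta * dt * A                 # implicit part
rhs = I + (1 - theta) * dt * A           # explicit part
for _ in range(nt):
    u = np.linalg.solve(lhs, rhs @ u)    # boundary rows keep u = 0 at the ends
print("approximate mass:", u.sum() * dx)
```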

  10. Method for curing polymers using variable-frequency microwave heating

    Science.gov (United States)

    Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.

    1998-01-01

    A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.

  11. Original method to compute epipoles using variable homography: application to measure emergent fibers on textile fabrics

    Science.gov (United States)

    Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise

    2012-04-01

    Fabric smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stage of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute epipoles by using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure, and then we show how variable homography combined with epipolar geometry can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitored by variable homography is an accurate and robust method of quality control for important industrial fabrics.
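
    The paper's variable-homography estimator is not reproduced here, but as background, the classical way to obtain an epipole is as the null vector of the fundamental matrix F. The numpy sketch below shows that baseline with a made-up rank-2 F; it is not the authors' method.

```python
# Baseline illustration (not the paper's variable-homography method): the
# epipole e in the first image satisfies F @ e = 0 and is recovered via SVD.
import numpy as np

def epipole_from_F(F):
    """Return the epipole as a normalized homogeneous 3-vector with F e = 0."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]          # right singular vector of the smallest singular value
    return e / e[2]     # normalize to (x, y, 1)

# Hypothetical rank-2 fundamental matrix (skew-symmetric example).
F = np.array([[ 0.0, -0.1,  2.0],
              [ 0.1,  0.0, -3.0],
              [-2.0,  3.0,  0.0]])
print(epipole_from_F(F))   # -> epipole at (30, 20, 1)
```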

  12. Translational Rodent Models for Research on Parasitic Protozoa-A Review of Confounders and Possibilities.

    Science.gov (United States)

    Ehret, Totta; Torelli, Francesca; Klotz, Christian; Pedersen, Amy B; Seeber, Frank

    2017-01-01

    Rodents, in particular Mus musculus, have a long and invaluable history as models for human diseases in biomedical research, although their translational value has been challenged in a number of cases. We provide some examples in which rodents have been suboptimal as models for human biology and discuss confounders which influence experiments and may explain some of the misleading results. Infections of rodents with protozoan parasites are no exception in requiring close consideration upon model choice. We focus on the significant differences between inbred, outbred and wild animals, and the importance of factors such as microbiota, which are gaining attention as crucial variables in infection experiments. Frequently, mouse or rat models are chosen for convenience, e.g., availability in the institution rather than on an unbiased evaluation of whether they provide the answer to a given question. Apart from a general discussion on translational success or failure, we provide examples where infections with single-celled parasites in a chosen lab rodent gave contradictory or misleading results, and when possible discuss the reason for this. We present emerging alternatives to traditional rodent models, such as humanized mice and organoid primary cell cultures. So-called recombinant inbred strains such as the Collaborative Cross collection are also a potential solution for certain challenges. In addition, we emphasize the advantages of using wild rodents for certain immunological, ecological, and/or behavioral questions. The experimental challenges (e.g., availability of species-specific reagents) that come with the use of such non-model systems are also discussed. Our intention is to foster critical judgment of both traditional and newly available translational rodent models for research on parasitic protozoa that can complement the existing mouse and rat models.

  13. Translational Rodent Models for Research on Parasitic Protozoa—A Review of Confounders and Possibilities

    Directory of Open Access Journals (Sweden)

    Totta Ehret

    2017-06-01

    Rodents, in particular Mus musculus, have a long and invaluable history as models for human diseases in biomedical research, although their translational value has been challenged in a number of cases. We provide some examples in which rodents have been suboptimal as models for human biology and discuss confounders which influence experiments and may explain some of the misleading results. Infections of rodents with protozoan parasites are no exception in requiring close consideration upon model choice. We focus on the significant differences between inbred, outbred and wild animals, and the importance of factors such as microbiota, which are gaining attention as crucial variables in infection experiments. Frequently, mouse or rat models are chosen for convenience, e.g., availability in the institution rather than on an unbiased evaluation of whether they provide the answer to a given question. Apart from a general discussion on translational success or failure, we provide examples where infections with single-celled parasites in a chosen lab rodent gave contradictory or misleading results, and when possible discuss the reason for this. We present emerging alternatives to traditional rodent models, such as humanized mice and organoid primary cell cultures. So-called recombinant inbred strains such as the Collaborative Cross collection are also a potential solution for certain challenges. In addition, we emphasize the advantages of using wild rodents for certain immunological, ecological, and/or behavioral questions. The experimental challenges (e.g., availability of species-specific reagents) that come with the use of such non-model systems are also discussed. Our intention is to foster critical judgment of both traditional and newly available translational rodent models for research on parasitic protozoa that can complement the existing mouse and rat models.

  14. Salivary alpha-amylase: More than an enzyme Investigating confounders of stress-induced and basal amylase activity

    OpenAIRE

    Strahler, Jana

    2010-01-01

    Summary: Salivary alpha-amylase: More than an enzyme - Investigating confounders of stress-induced and basal amylase activity (Dipl.-Psych. Jana Strahler) The hypothalamus-pituitary-adrenal (HPA) axis and the autonomic nervous system (ANS) are two of the major systems playing a role in the adaptation of organisms to developmental changes that threaten homeostasis. The HPA system involves the secretion of glucocorticoids, including cortisol, into the circulatory system. Numerous studies hav...

  15. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    Science.gov (United States)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
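
    As a concrete anchor for the review above, the sketch below computes spectral moments of a PSD and the narrow-band (Rayleigh) damage estimate that the reviewed corrections start from, assuming an S-N curve of the form N·S^b = C. The PSD shape and material constants are illustrative, not from the paper.

```python
# Sketch of the narrow-band (Rayleigh) fatigue damage estimate from a PSD,
# the baseline that spectral corrections (Wirsching-Light, Dirlik, etc.)
# improve upon. Assumes an S-N curve N * S**b = C; all numbers are made up.
import numpy as np
from math import gamma

f = np.linspace(0.1, 50, 1000)                  # frequency axis [Hz]
G = np.where((f > 5) & (f < 15), 1.0, 0.0)      # toy one-sided PSD [unit^2/Hz]

m0, m2, m4 = (np.trapz(f**n * G, f) for n in (0, 2, 4))  # spectral moments
nu0 = np.sqrt(m2 / m0)                          # zero-upcrossing rate [Hz]
alpha = m2 / np.sqrt(m0 * m4)                   # irregularity factor

b, C, T = 3.0, 1e12, 3600.0                     # S-N exponent, constant, duration [s]
D_nb = (nu0 * T / C) * (np.sqrt(2 * m0))**b * gamma(1 + b / 2)
print(f"nu0={nu0:.2f} Hz, alpha={alpha:.2f}, narrow-band damage={D_nb:.3e}")
```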

  16. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel, and the numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable.

  17. Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    Directory of Open Access Journals (Sweden)

    Zedong Bi

    2016-08-01

    Synapses may undergo variable changes during plasticity because of the variability of spike patterns, such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods on spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to evidently change a specific pattern statistic, and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).

  18. Influence of Post-Mortem Sperm Recovery Method and Extender on Unstored and Refrigerated Rooster Sperm Variables.

    Science.gov (United States)

    Villaverde-Morcillo, S; Esteso, M C; Castaño, C; Santiago-Moreno, J

    2016-02-01

    Many post-mortem sperm collection techniques have been described for mammalian species, but their use in birds is scarce. This paper compares the efficacy of two post-mortem sperm retrieval techniques - the flushing and float-out methods - in the collection of rooster sperm, in conjunction with the use of two extenders, i.e., L&R-84 medium and Lake 7.1 medium. To determine whether the protective effects of these extenders against refrigeration are different for post-mortem and ejaculated sperm, pooled ejaculated samples (procured via the massage technique) were also diluted in the above extenders. Post-mortem and ejaculated sperm variables were assessed immediately at room temperature (0 h), and after refrigeration at 5°C for 24 and 48 h. The flushing method retrieved more sperm than the float-out method (596.5 ± 75.4 million sperm vs 341.0 ± 87.6 million sperm; p < 0.05); indeed, the number retrieved by the former method was similar to that obtained by massage-induced ejaculation (630.3 ± 78.2 million sperm). For sperm collected by all methods, the L&R-84 medium provided an advantage in terms of sperm motility variables at 0 h. In the refrigerated sperm samples, however, the Lake 7.1 medium was associated with higher percentages of viable sperm, and had a greater protective effect (p < 0.05) with respect to most motility variables. In conclusion, the flushing method is recommended for collecting sperm from dead birds. If this sperm needs to be refrigerated at 5°C until analysis, Lake 7.1 medium is recommended as an extender. © 2015 Blackwell Verlag GmbH.

  19. High blood pressure and sedentary behavior in adolescents are associated even after controlling for confounding factors.

    Science.gov (United States)

    Christofaro, Diego Giulliano Destro; De Andrade, Selma Maffei; Cardoso, Jefferson Rosa; Mesas, Arthur Eumann; Codogno, Jamile Sanches; Fernandes, Rômulo Araújo

    2015-01-01

    The aim of this study was to determine whether high blood pressure (HBP) is associated with sedentary behavior in young people even after controlling for potential confounders (gender, age, socioeconomic level, tobacco, alcohol, obesity and physical activity). In this epidemiological study, 1231 adolescents were evaluated. Blood pressure was measured with an oscillometric device and waist circumference with an inextensible tape. Sedentary behavior (watching television, computer use and playing video games) and physical activity were assessed by a questionnaire. Means and standard deviations were used for descriptive statistics, and the association between HBP and sedentary behavior was assessed by the chi-squared test. Binary logistic regression was used to estimate the magnitude of associations, including cluster analyses (sedentary behavior and abdominal obesity; sedentary behavior and physical inactivity). HBP was associated with sedentary behaviors [odds ratio (OR) = 2.21, 95% confidence interval (CI) = 1.41-3.96], even after controlling for various confounders (OR = 1.68, CI = 1.03-2.75). In cluster analysis the combination of sedentary behavior and elevated abdominal obesity contributed significantly to an increased likelihood of having HBP (OR = 13.51, CI 7.21-23.97). Sedentary behavior was associated with HBP, and excess fat in the abdominal region contributed to the modulation of this association.

  20. Fresh fruit intake and asthma symptoms in young British adults: confounding or effect modification by smoking?

    Science.gov (United States)

    Butland, B K; Strachan, D P; Anderson, H R

    1999-04-01

    Antioxidant vitamins have been postulated as a protective factor in asthma. The associations between the frequency of fresh fruit consumption in summer and the prevalence of self-reported asthma symptoms were investigated. The analysis was based on 5,582 males and 5,770 females, born in England, Wales and Scotland between March 3-9, 1958 and aged 33 yrs at the time of survey. The 12-month period prevalence of wheeze and frequent wheeze were inversely associated with frequent intakes of fresh fruit and salad/raw vegetables and positively associated with smoking and lower social class. After adjustment for mutual confounding and sex, associations with smoking persisted, but those with social class and salad/raw vegetable consumption lost significance. The frequency of fresh fruit intake was no longer associated with wheeze after adjustment, but was inversely associated with frequent wheeze and speech-limiting attacks. The association with frequent wheeze differed significantly between smoking groups (never, former, current) and appeared to be confined to ex-smokers and current smokers. These findings support postulated associations between infrequent fresh fruit consumption and the prevalence of frequent or severe asthma symptoms in adults. Associations appeared to be restricted to smokers, with effect modification as a more likely explanation of this pattern than residual confounding by smoking.

  1. Modified quasi-boundary value method for Cauchy problems of elliptic equations with variable coefficients

    Directory of Open Access Journals (Sweden)

    Hongwu Zhang

    2011-08-01

    In this article, we study a Cauchy problem for an elliptic equation with variable coefficients. It is well known that such a problem is severely ill-posed; i.e., the solution does not depend continuously on the Cauchy data. We propose a modified quasi-boundary value regularization method to solve it. Convergence estimates are established under two a priori assumptions on the exact solution. A numerical example is given to illustrate our proposed method.

  2. Pathogen prevalence predicts human cross-cultural variability in individualism/collectivism.

    Science.gov (United States)

    Fincher, Corey L; Thornhill, Randy; Murray, Damian R; Schaller, Mark

    2008-06-07

    Pathogenic diseases impose selection pressures on the social behaviour of host populations. In humans (Homo sapiens), many psychological phenomena appear to serve an antipathogen defence function. One broad implication is the existence of cross-cultural differences in human cognition and behaviour contingent upon the relative presence of pathogens in the local ecology. We focus specifically on one fundamental cultural variable: differences in individualistic versus collectivist values. We suggest that specific behavioural manifestations of collectivism (e.g. ethnocentrism, conformity) can inhibit the transmission of pathogens; and so we hypothesize that collectivism (compared with individualism) will more often characterize cultures in regions that have historically had higher prevalence of pathogens. Drawing on epidemiological data and the findings of worldwide cross-national surveys of individualism/collectivism, our results support this hypothesis: the regional prevalence of pathogens has a strong positive correlation with cultural indicators of collectivism and a strong negative correlation with individualism. The correlations remain significant even when controlling for potential confounding variables. These results help to explain the origin of a paradigmatic cross-cultural difference, and reveal previously undocumented consequences of pathogenic diseases on the variable nature of human societies.

  3. Are Changes in Heart Rate Variability During Hypoglycemia Confounded by the Presence of Cardiovascular Autonomic Neuropathy in Patients with Diabetes?

    DEFF Research Database (Denmark)

    Cichosz, Simon Lebech; Frystyk, Jan; Tarnow, Lise

    2017-01-01

    BACKGROUND: We have recently shown how the combination of information from continuous glucose monitor (CGM) and heart rate variability (HRV) measurements can be used to construct an algorithm for prediction of hypoglycemia in both bedbound and active patients with type 1 diabetes (T1D). Questions...... with CGM and a Holter device while they performed normal daily activities. CAN was diagnosed using two cardiac reflex tests: (1) deep breathing and (2) orthostatic hypotension and end organ symptoms. Early CAN was defined as the presence of one abnormal reflex test and severe CAN was defined as two...

  4. Variability of bronchial measurements obtained by sequential CT using two computer-based methods

    International Nuclear Information System (INIS)

    Brillet, Pierre-Yves; Fetita, Catalin I.; Mitrea, Mihai; Preteux, Francoise; Capderou, Andre; Dreuil, Serge; Simon, Jean-Marc; Grenier, Philippe A.

    2009-01-01

    This study aimed to evaluate the variability of lumen area (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm², containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between -1.59 and 1.5 mm² for LA, and -3.31 and 2.96 mm² for WA. The values of the coefficient of measurement variation (CV10, i.e., the percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72). EDCE yielded higher LA values for small-lumen bronchi, whereas WA values were lower for thin-walled bronchi; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements. Therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies. (orig.)
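
    In the notation used above, the slice-wise coefficient of measurement variation reduces to a one-line formula:

```latex
CV_{10} \;=\; 100 \times \frac{\sigma_{10}}{\mu_{10}},
```

    where \sigma_{10} and \mu_{10} are the standard deviation and mean of a measurement (LA or WA) over the 10 successive slices.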

  5. The comet assay as a rapid test in biomonitoring occupational exposure to DNA-damaging agents and effect of confounding factors

    DEFF Research Database (Denmark)

    Møller, P; Knudsen, Lisbeth E.; Loft, S

    2000-01-01

    appeared to have less power than the positive studies. Also, there were poor dose-response relationships in many of the biomonitoring studies. Many factors have been reported to produce effects by the comet assay, e.g., age, air pollution exposure, diet, exercise, gender, infection, residential radon...... be used as criteria for the selection of populations and that data on exercise, diet, and recent infections be registered before blood sampling. Samples from exposed and unexposed populations should be collected at the same time to avoid seasonal variation. In general, the comet assay is considered...... exposure, smoking, and season. Until now, the use of the comet assay has been hampered by the uncertainty of the influence of confounding factors. We argue that none of the confounding factors are unequivocally positive in the majority of the studies. We recommend that age, gender, and smoking status...

  6. A Method for Analyzing the Dynamic Response of a Structural System with Variable Mass, Damping and Stiffness

    Directory of Open Access Journals (Sweden)

    Mike D.R. Zhang

    2001-01-01

    In this paper, a method for analyzing the dynamic response of a structural system with variable mass, damping and stiffness is first presented. The dynamic equations of the structural system with variable mass and stiffness are derived according to the whole working process of a bridge bucket unloader. At the end of the paper, an engineering numerical example is given.

  7. Feasibility of wavelet expansion methods to treat the energy variable

    International Nuclear Information System (INIS)

    Van Rooijen, W. F. G.

    2012-01-01

    This paper discusses the use of the Discrete Wavelet Transform (DWT) to implement a functional expansion of the energy variable in neutron transport. The motivation of the work is to investigate the possibility of adapting the expansion level of the neutron flux in a material region to the complexity of the cross section in that region. If such an adaptive treatment is possible, 'simple' material regions (e.g., moderator regions) require little effort, while a detailed treatment is used for 'complex' regions (e.g., fuel regions). Our investigations show that in fact adaptivity cannot be achieved. The most fundamental reason is that in a multi-region system, the energy dependence of the cross section in a material region does not imply that the neutron flux in that region has a similar energy dependence. If it is chosen to sacrifice adaptivity, then the DWT method can be very accurate, but the complexity of such a method is higher than that of an equivalent hyper-fine group calculation. The conclusion is thus that, unfortunately, the DWT approach is not very practical. (authors)
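
    For orientation, here is a minimal PyWavelets example of the kind of multilevel expansion and level-dependent truncation at stake; the "flux" is a synthetic stand-in, not transport data, and the wavelet and levels are arbitrary choices.

```python
# Minimal illustration of a multilevel discrete wavelet expansion and
# truncation, the kind of energy-variable treatment the paper evaluates.
# The signal below is synthetic (smooth trend plus a narrow "resonance").
import numpy as np
import pywt

e = np.linspace(0, 1, 1024)                                  # energy-like axis
flux = np.exp(-e) + 0.3 * np.exp(-((e - 0.6) / 0.01) ** 2)   # smooth + peak

coeffs = pywt.wavedec(flux, "db4", level=5)                  # multilevel DWT
# Keep the coarse approximation plus the two coarsest detail levels only:
truncated = coeffs[:3] + [np.zeros_like(c) for c in coeffs[3:]]
flux_hat = pywt.waverec(truncated, "db4")

err = np.linalg.norm(flux_hat[: len(flux)] - flux) / np.linalg.norm(flux)
print(f"relative L2 error after truncation: {err:.3e}")
```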

  8. Variability in clinical data is often more useful than the mean: illustration of concept and simple methods of assessment

    NARCIS (Netherlands)

    Zwinderman, A. H.; Cleophas, T. J.

    2005-01-01

    BACKGROUND: Clinical investigators, although they are generally familiar with testing differences between averages, have difficulty testing differences between variabilities. OBJECTIVE: To give examples of situations where variability is more relevant than averages and to describe simple methods for

  9. Influence of management history and landscape variables on soil organic carbon and soil redistribution

    Science.gov (United States)

    Venteris, E.R.; McCarty, G.W.; Ritchie, J.C.; Gish, T.

    2004-01-01

    Controlled studies to investigate the interaction between crop growth, soil properties, hydrology, and management practices are common in agronomy. These sites (much as with real world farmland) often have complex management histories and topographic variability that must be considered. In 1993 an interdisiplinary study was started for a 20-ha site in Beltsville, MD. Soil cores (271) were collected in 1999 in a 30-m grid (with 5-m nesting) and analyzed as part of the site characterization. Soil organic carbon (SOC) and 137Cesium (137Cs) were measured. Analysis of aerial photography from 1992 and of farm management records revealed that part of the site had been maintained as a swine pasture and the other portion as cropped land. Soil properties, particularly soil redistribution and SOC, show large differences in mean values between the two areas. Mass C is 0.8 kg m -2 greater in the pasture area than in the cropped portion. The pasture area is primarily a deposition site, whereas the crop area is dominated by erosion. Management influence is suggested, but topographic variability confounds interpretation. Soil organic carbon is spatially structured, with a regionalized variable of 120 m. 137Cs activity lacks spatial structure, suggesting disturbance of the profile by animal activity and past structures such as swine shelters and roads. Neither SOC nor 137Cs were strongly correlated to terrain parameters, crop yields, or a seasonal soil moisture index predicted from crop yields. SOC and 137Cs were weakly correlated (r2 ???0.2, F-test P-value 0.001), suggesting that soil transport controls, in part, SOC distribution. The study illustrates the importance of past site history when interpreting the landscape distribution of soil properties, especially those strongly influenced by human activity. Confounding variables, complex soil hydrology, and incomplete documentation of land use history make definitive interpretations of the processes behind the spatial distributions

  10. Quantifying the Relative Contributions of Forest Change and Climatic Variability to Hydrology in Large Watersheds: A Critical Review of Research Methods

    Directory of Open Access Journals (Sweden)

    Xiaohua Wei

    2013-06-01

    Full Text Available Forest change and climatic variability are two major drivers for influencing change in watershed hydrology in forest–dominated watersheds. Quantifying their relative contributions is important to fully understand their individual effects. This review paper summarizes the progress on quantifying the relative contributions of forest or land cover change and climatic variability to hydrology in large watersheds using available case studies. It compared pros and cons of various research methods, identified research challenges and proposed future research priorities. Our synthesis shows that the relative hydrological effects of forest changes and climatic variability are largely dependent on their own change magnitudes and watershed characteristics. In some severely disturbed watersheds, impacts of forest changes or land use changes can be as important as those from climatic variability. This paper provides a brief review on eight selected research methods for this type of research. Because each method or technique has its own strengths and weaknesses, combining two or more methods is a more robust approach than using any single method alone. Future research priorities include conducting more case studies, refining research methods, and considering mechanism-based research using landscape ecology and geochemistry approaches.

  11. Control selection and confounding factors: A lesson from a Japanese case-control study to examine acellular pertussis vaccine effectiveness.

    Science.gov (United States)

    Ohfuji, Satoko; Okada, Kenji; Nakano, Takashi; Ito, Hiroaki; Hara, Megumi; Kuroki, Haruo; Hirota, Yoshio

    2017-08-24

    When using a case-control study design to examine vaccine effectiveness, both the selection of control subjects and the consideration of potential confounders are important issues for ensuring accurate results. In this report, we describe our experience from a case-control study conducted to evaluate the effectiveness of acellular pertussis vaccine combined with diphtheria-tetanus toxoids (DTaP vaccine). Newly diagnosed pertussis cases and age- and sex-matched friend controls were enrolled, and the history of DTaP vaccination was compared between groups. Logistic regression models were used to calculate odds ratios (ORs) and 95% confidence intervals (CIs) of vaccination for development of pertussis. After adjustment for potential confounders, four doses of DTaP vaccination showed a lower OR for pediatrician-diagnosed pertussis (OR=0.11; 95% CI, 0.01-0.99). The decreasing OR with four doses of vaccination was more pronounced for laboratory-confirmed pertussis (OR=0.07; 95% CI, 0.01-0.82). In addition, a positive association with pertussis was observed in subjects with a history of steroid treatment (OR=5.67) and in those with recent contact with a person with a lasting cough (OR=4.12). When using a case-control study to evaluate the effectiveness of vaccines, particularly those for uncommon infectious diseases such as pertussis, the use of friend controls may be optimal because they share a similar likelihood of exposure to the pathogen as the cases. In addition, to assess vaccine effectiveness as accurately as possible, the effects of confounding should be adequately controlled for with matching or analysis techniques. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
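
    A minimal sketch of the analysis shape: adjusted odds ratios (OR) with 95% CIs from a logistic model. The data and variable names below are invented; note also that the actual study used matched controls, which would ordinarily call for conditional logistic regression rather than the plain logit shown here.

```python
# Hypothetical sketch of adjusted odds ratios from a logistic model.
# Data and column names are invented, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),           # 1 = pertussis case (invented)
    "four_doses": rng.integers(0, 2, n),     # 1 = four DTaP doses
    "steroid": rng.integers(0, 2, n),        # potential confounder
    "cough_contact": rng.integers(0, 2, n),  # potential confounder
})

fit = smf.logit("case ~ four_doses + steroid + cough_contact", data=df).fit(disp=0)
ors = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```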

  12. (Super Variable Costing-Throughput Costing)

    OpenAIRE

    Çakıcı, Cemal

    2006-01-01

    (Super Variable Costing-Throughput Costing) The aim of this study is to explain the super-variable costing method, a new subject in cost and management accounting, and to demonstrate how it works in practice. In short, super-variable costing can be defined as a costing method which uses only direct material costs in calculating product costs and treats all other costs (direct labor and overhead) as period costs or operating costs. By using the super-variable costing method, product costs are...

  13. Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.

    Science.gov (United States)

    Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F

    2015-05-01

    Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method
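
    The repeatability summaries quoted above reduce to a few lines of computation. This sketch, with simulated volumes rather than the study's data, shows proportional differences, their mean and SD, a one-sample t-test on the bias, and a variance-ratio F-test between two conditions.

```python
# Sketch of the repeatability summaries: proportional differences between
# repeated volume measurements, their mean and SD, a t-test on the bias, and
# a variance-ratio F-test between two conditions. Volumes are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
v1 = rng.lognormal(mean=6, sigma=0.5, size=17)      # first-scan volumes (mm^3)
v2 = v1 * (1 + rng.normal(0.01, 0.055, size=17))    # repeat-scan volumes

prop_diff = 100 * (v2 - v1) / ((v1 + v2) / 2)       # proportional difference [%]
print(f"mean={prop_diff.mean():.2f}%, SD={prop_diff.std(ddof=1):.2f}%")
print(stats.ttest_1samp(prop_diff, 0.0))            # is the bias nonzero?

# Variance-ratio F-test comparing SDs from two acquisition conditions:
other = rng.normal(-3, 8, size=17)                  # e.g., a reduced-dose arm
F = prop_diff.var(ddof=1) / other.var(ddof=1)
p = 2 * min(stats.f.cdf(F, 16, 16), stats.f.sf(F, 16, 16))
print(f"F={F:.2f}, two-sided p={p:.3f}")
```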

  14. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Science.gov (United States)

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including in addition demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but...

  15. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
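
    As context, the core decision rule in single-limit acceptance sampling by variables is the k-method shown below. The plan constants (n, k) are assumed to come from a standard table or calculator; the values here are made up for illustration.

```python
# Illustrative k-method decision rule for acceptance sampling by variables
# with a single upper specification limit (USL): accept the lot when
# (USL - xbar) / s >= k. Plan constants (n, k) are assumed given; the values
# below are invented.
import numpy as np

def accept_lot(measurements, usl, k):
    """Return True if the sample passes the variables acceptance criterion."""
    x = np.asarray(measurements, dtype=float)
    xbar, s = x.mean(), x.std(ddof=1)
    return (usl - xbar) / s >= k

rng = np.random.default_rng(3)
sample = rng.normal(9.0, 0.4, size=20)   # n = 20 measured units (simulated)
print(accept_lot(sample, usl=10.0, k=1.8))
```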

  16. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables in this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability than those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method...
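
    A simplified sketch of the iterate-to-the-global-minimum idea, not the authors' exact UVE implementation: repeatedly drop the least reliable variables, record RMSECV after each round, and finally keep the variable set at the global RMSECV minimum. The reliability score here is a jackknife-style stability of PLS coefficients across CV folds, an assumption standing in for the paper's UVE criterion.

```python
# Simplified GME-UVE-style loop (illustrative, not the authors' algorithm).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def rmsecv(X, y, n_comp=2, folds=5):
    errs = []
    for tr, te in KFold(folds, shuffle=True, random_state=0).split(X):
        pls = PLSRegression(n_components=min(n_comp, X.shape[1])).fit(X[tr], y[tr])
        errs.append((pls.predict(X[te]).ravel() - y[te]) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(errs))))

def reliability(X, y, n_comp=2, folds=5):
    # Stability of PLS coefficients across CV folds: |mean| / std.
    B = [PLSRegression(n_components=min(n_comp, X.shape[1]))
         .fit(X[tr], y[tr]).coef_.ravel()
         for tr, _ in KFold(folds, shuffle=True, random_state=0).split(X)]
    B = np.asarray(B)
    return np.abs(B.mean(0)) / (B.std(0) + 1e-12)

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 50))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=60)

keep = np.arange(X.shape[1])
history = [(rmsecv(X, y), keep)]
while keep.size > 5:  # drop the least reliable ~10% each round
    rel = reliability(X[:, keep], y)
    keep = keep[np.argsort(rel)[max(1, keep.size // 10):]]
    history.append((rmsecv(X[:, keep], y), keep))

best_rmse, best_vars = min(history, key=lambda h: h[0])  # global minimum
print(f"global minimum RMSECV={best_rmse:.3f} with {best_vars.size} variables")
```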

  17. A statistical, task-based evaluation method for three-dimensional x-ray breast imaging systems using variable-background phantoms

    International Nuclear Information System (INIS)

    Park, Subok; Jennings, Robert; Liu Haimo; Badano, Aldo; Myers, Kyle

    2010-01-01

    Purpose: For the last few years, the development and optimization of three-dimensional (3D) x-ray breast imaging systems, such as digital breast tomosynthesis (DBT) and computed tomography, have drawn much attention from the medical imaging community, in both academia and industry. However, there is still much room for understanding how best to optimize and evaluate such devices over a large space of many different system parameters and geometries. Current evaluation methods, which work well for 2D systems, do not incorporate the depth information provided by 3D imaging systems. Therefore, it is critical to develop a statistically sound evaluation method to investigate the usefulness of including depth and background-variability information in the assessment and optimization of 3D systems. Methods: In this paper, we present a mathematical framework for the statistical assessment of planar and 3D x-ray breast imaging systems. Our method is based on statistical decision theory, in particular, making use of the ideal linear observer called the Hotelling observer. We also present a physical phantom that consists of spheres of different sizes and materials for producing an ensemble of randomly varying backgrounds to be imaged for a given patient class. Lastly, we demonstrate our evaluation method in comparing laboratory mammography and three-angle DBT systems for signal detection tasks using the phantom's projection data. We compare the variable phantom case to that of a phantom of the same dimensions filled with water, which we call the uniform phantom, based on the performance of the Hotelling observer as a function of signal size and intensity. Results: Detectability trends calculated using the variable and uniform phantom methods are different from each other for both mammography and DBT systems. Conclusions: Our results indicate that measuring the system's detection performance with consideration of background variability may lead to differences in system performance
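
    For a signal-known-exactly task, the Hotelling observer reduces to the template w = K^-1 * ds and the detectability SNR^2 = ds' K^-1 ds, where ds is the mean signal image and K the background covariance. The sketch below estimates these from an ensemble of simulated variable backgrounds; dimensions and signal values are invented for illustration.

    ```python
    # Minimal sketch of Hotelling-observer detectability estimated from a
    # sample covariance of (simulated) variable-background images.
    import numpy as np

    rng = np.random.default_rng(3)
    n_img, n_pix = 500, 64                      # 8x8 images, flattened
    mixing = rng.normal(size=(n_pix, n_pix))    # induces pixel correlations
    backgrounds = rng.normal(0.0, 1.0, (n_img, n_pix)) @ mixing * 0.1
    signal = np.zeros(n_pix)
    signal[27:29] = 0.5                         # difference of mean images, ds

    K = np.cov(backgrounds, rowvar=False) + 1e-6 * np.eye(n_pix)  # regularized
    w = np.linalg.solve(K, signal)              # Hotelling template K^-1 ds
    snr2 = signal @ w                           # SNR^2 = ds' K^-1 ds
    print("Hotelling detectability SNR:", np.sqrt(snr2))
    ```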

  18. Internal Variability and Disequilibrium Confound Estimates of Climate Sensitivity From Observations

    Science.gov (United States)

    Marvel, Kate; Pincus, Robert; Schmidt, Gavin A.; Miller, Ron L.

    2018-02-01

    An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future. Here we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2 radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long-term sensitivity.

  19. A variable pressure method for characterizing nanoparticle surface charge using pore sensors.

    Science.gov (United States)

    Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff

    2012-04-03

    A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.

  20. Intra- and interobserver variability of MRI-based volume measurements of the hippocampus and amygdala using the manual ray-tracing method

    International Nuclear Information System (INIS)

    Achten, E.; Deblaere, K.; Damme, F. van; Kunnen, M.; Wagter, C. de; Boon, P.; Reuck, J. de

    1998-01-01

    We studied the intra- and interobserver variability of volume measurements of the hippocampus (HC) and the amygdala as applied to the detection of HC atrophy in patients with complex partial seizures (CPE), measuring the volumes of the HC and amygdala of 11 normal volunteers and 12 patients with presumed CPE, using the manual ray-tracing method. Two independent observers performed these measurements twice each using home-made software. The intra- and interobserver variability of the absolute volumes and of the normalised left-to-right volume differences (δV) between the HC (δV_HC), the amygdala (δV_A) and the sum of both (δV_HCA) were assessed. In our mainly right-handed normals, the right HC and amygdala were on average 0.05 and 0.03 ml larger, respectively, than on the left. The interobserver variability for volume measurements in normal subjects was 1.80 ml for the HC and 0.82 ml for the amygdala; the intraobserver variability was roughly one third of these values. The interobserver variability coefficient in normals was 3.6% for δV_HCA, 4.7% for δV_HC and 7.3% for δV_A. The intraobserver variability coefficient was 3.4% for δV_HCA, 4.2% for δV_HC and 5.6% for δV_A. The variability in patients was the same for volume differences less than 5% either side of the interval for normality, but was higher when large volume differences were encountered, which is probably due to the lack of thresholding and/or normalisation. Cutoff values for lateralisation with the δV were defined. No intra- or interobserver lateralisation differences were encountered with δV_HCA and δV_HC. From these observations we conclude that the manual ray-tracing method is a robust method for lateralisation in patients with TLE. Due to its higher variability, this method is less suited to measuring absolute volumes. (orig.)

  2. Systems, methods, and software for determining spatially variable distributions of the dielectric properties of a heterogeneous material

    Science.gov (United States)

    Farrington, Stephen P.

    2018-05-15

    Systems, methods, and software for measuring the spatially variable relative dielectric permittivity of materials along a linear or otherwise configured sensor element, and more specifically the spatial variability of soil moisture in one dimension as inferred from the dielectric profile of the soil matrix surrounding a linear sensor element. Various methods provided herein combine advances in the processing of time domain reflectometry data with innovations in physical sensing apparatuses. These advancements enable high temporal (and thus spatial) resolution of electrical reflectance continuously along an insulated waveguide that is permanently emplaced in contact with adjacent soils. The spatially resolved reflectance is directly related to impedance changes along the waveguide that are dominated by electrical permittivity contrast due to variations in soil moisture. Various methods described herein are thus able to monitor soil moisture in profile with high spatial resolution.

  3. A comparison on parameter-estimation methods in multiple regression analysis with existence of multicollinearity among independent variables

    Directory of Open Access Journals (Sweden)

    Hukharnsusatrue, A.

    2005-11-01

    The objective of this research is to compare multiple regression coefficient estimation methods in the presence of multicollinearity among independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), when the restrictions are true and when they are not. The study used the Monte Carlo simulation method; the experiment was repeated 1,000 times under each situation. The results are summarized as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of a standard deviation equal to 3 with small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies directly with (from most to least influence) the level of correlation, the standard deviation and the number of independent variables, and inversely with the sample size. CASE 2: The restrictions are not true. In all cases, the RRR method provides the smallest AMSE, except when the standard deviation is equal to 1 and the error of the restrictions is equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or middle and the sample size is large, while for small sample sizes the RL method provides the smallest AMSE. In addition, when the error of the restrictions increases, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than

  4. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    Science.gov (United States)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
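
    One of the known simplex-projection algorithms that the duality argument connects to is the classical sort-based method, sketched below for the standard simplex {x : x >= 0, sum(x) = 1}.

    ```python
    # Minimal sketch of the sort-based Euclidean projection onto the
    # standard simplex (Held-type algorithm).
    import numpy as np

    def project_simplex(v):
        u = np.sort(v)[::-1]                     # sort descending
        css = np.cumsum(u)
        # largest index rho with u_rho + (1 - cumsum_rho)/rho > 0 (0-indexed)
        rho = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
        theta = (1 - css[rho]) / (rho + 1)       # common shift
        return np.maximum(v + theta, 0)

    x = project_simplex(np.array([0.3, 1.2, -0.5, 0.4]))
    print(x, x.sum())                            # nonnegative, sums to 1
    ```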

  5. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters and may change during the variable reduction process. In these methods, variable reduction is performed on the variables ranked in descending order of a given variable property. The methods start with full-spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. We therefore denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as the predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain

  6. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial-correlation-based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to partial-correlation-based variable selection: (a) whether the method is sensitive to the normality assumption, and (b) whether the method is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study the method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of the partial-correlation-based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh-dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of the TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly used regularization methods for variable selection.
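
    A minimal sketch of the thresholding step: partial correlations between each predictor and the response, given the remaining predictors, can be read off the precision matrix of the joint sample covariance and then compared against a cut-off. The threshold below is illustrative; the paper derives it from the sample size and dimension.

    ```python
    # Minimal sketch of variable selection by thresholded partial correlation.
    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

    Z = np.column_stack([X, y])
    prec = np.linalg.inv(np.cov(Z, rowvar=False))     # precision matrix of (X, y)
    # partial correlation of x_j and y given all remaining predictors:
    pcor = -prec[:-1, -1] / np.sqrt(prec[:-1, :-1].diagonal() * prec[-1, -1])
    selected = np.nonzero(np.abs(pcor) > 0.1)[0]      # illustrative threshold
    print("partial correlations:", pcor.round(2))
    print("selected predictors:", selected)           # should recover 0 and 3
    ```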

  7. Comparative performance of different stochastic methods to simulate drug exposure and variability in a population.

    Science.gov (United States)

    Tam, Vincent H; Kabbara, Samer

    2006-10-01

    Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
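
    The three parameterizations can be contrasted with a toy one-compartment IV-bolus model in which AUC(0-infinity) = Dose/CL and CL = K*V; all distributional values below are invented for illustration and are not the study's estimates.

    ```python
    # Minimal sketch contrasting the three MCS parameterizations compared in
    # the study: resampling (1) K and V, (2) CL and V, or (3) AUC itself.
    import numpy as np

    rng = np.random.default_rng(5)
    dose, n = 500.0, 10_000                       # mg, simulated subjects
    K  = rng.lognormal(np.log(0.2), 0.3, n)       # elimination rate (1/h)
    V  = rng.lognormal(np.log(30.0), 0.2, n)      # volume (L)
    CL = rng.lognormal(np.log(6.0), 0.3, n)       # clearance (L/h)
    auc = rng.lognormal(np.log(dose / 6.0), 0.3, n)   # AUC drawn directly

    print("1) from K and V :", np.mean(dose / (K * V)))   # AUC = Dose/(K*V)
    print("2) from CL and V:", np.mean(dose / CL))        # AUC = Dose/CL
    print("3) from AUC     :", np.mean(auc))
    # The three population means differ because the variability is imposed on
    # different quantities, which is the point the study makes.
    ```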

  8. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon percent of full power load demand. A manually-actuated "droop mode" of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount wherein the control is switched from reactor control rods exclusively to feedwater flow.

  9. Hepatic fat quantification using the two-point Dixon method and fat color maps based on non-alcoholic fatty liver disease activity score.

    Science.gov (United States)

    Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu

    2017-04-01

    The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI fat fraction (MRI-FF) obtained with the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and the MR spectroscopy fat fraction was used to estimate the corrected MRI-FF for the hepatic multi-peaks of fat. Then, a color FF map was generated with the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on the corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and the MR spectroscopy fat fraction, and between the corrected MRI-FF and histological steatosis, were strong (R² = 0.90 and R² = 0.88, respectively). Liver fat variability significantly increased with visual fat uniformity grade using both of the maps (ρ = 0.67-0.69; both P values were significant). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
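
    The two-point Dixon computation itself is compact: water and fat images follow from the in-phase (IP) and opposed-phase (OP) signals, and the fat fraction is F/(W + F). The sketch below uses synthetic magnitude values in place of MR images.

    ```python
    # Minimal sketch of the two-point Dixon water/fat decomposition and the
    # resulting fat-fraction map (synthetic stand-ins for MR images).
    import numpy as np

    ip = np.array([[200.0, 180.0], [220.0, 210.0]])   # in-phase signal
    op = np.array([[160.0, 150.0], [120.0, 200.0]])   # opposed-phase signal

    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    ff = fat / (water + fat)            # equivalent to (ip - op) / (2 * ip)
    print(np.round(100 * ff, 1))        # fat fraction in percent, per pixel
    ```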

  10. Intratumoral heterogeneity as a confounding factor in clonogenic assays for tumour radioresponsiveness

    International Nuclear Information System (INIS)

    Britten, R.A.; Evans, A.J.; Allalunis-Turner, M.J.; Franko, A.J.; Pearcey, R.G.

    1996-01-01

    The level of intra-tumoral heterogeneity of cellular radiosensitivity within primary cultures of three carcinomas of the cervix has been established. All three cultures contained clones that varied by as much as 3-fold in their clinically relevant radiosensitivity (SF2). The level of intra-tumoral heterogeneity observed in these cervical tumour cultures was sufficient to be a major confounding factor to the use of pre-treatment assessments of radiosensitivity to predict for clinical radioresponsiveness. Mathematical modeling of the relative elimination of the tumour clones during fractionated radiotherapy indicates that, in two of the three biopsy samples, the use of pre-treatment derived SF2 values from the heterogeneous tumour sample would significantly overestimate radioresponsiveness. We conclude that assays of cellular radiosensitivity that identify the radiosensitivity of the most radioresistant clones and measure their relative abundance could potentially increase the effectiveness of SF2 values as a predictive marker of radioresponsiveness

  11. Combined Pulmonary Fibrosis and Emphysema in Scleroderma-Related Lung Disease Has a Major Confounding Effect on Lung Physiology and Screening for Pulmonary Hypertension.

    Science.gov (United States)

    Antoniou, K M; Margaritopoulos, G A; Goh, N S; Karagiannis, K; Desai, S R; Nicholson, A G; Siafakas, N M; Coghlan, J G; Denton, C P; Hansell, D M; Wells, A U

    2016-04-01

    To assess the prevalence of combined pulmonary fibrosis and emphysema (CPFE) in systemic sclerosis (SSc) patients with interstitial lung disease (ILD) and the effect of CPFE on the pulmonary function tests used to evaluate the severity of SSc-related ILD and the likelihood of pulmonary hypertension (PH). High-resolution computed tomography (HRCT) scans were obtained in 333 patients with SSc-related ILD and were evaluated for the presence of emphysema and the extent of ILD. The effects of emphysema on the associations between pulmonary function variables and the extent of SSc-related ILD as visualized on HRCT and echocardiographic evidence of PH were quantified. Emphysema was present in 41 (12.3%) of the 333 patients with SSc-related ILD, in 26 (19.7%) of 132 smokers, and in 15 (7.5%) of 201 lifelong nonsmokers. When the extent of fibrosis was taken into account, emphysema was associated with significant additional differences from the expected values for diffusing capacity for carbon monoxide (DLco) (average reduction of 24.1%). Emphysema had a greater effect than echocardiographically determined PH on the FVC/DLco ratio, regardless of whether it was analyzed as a continuous variable or using a threshold value of 1.6 or 2.0. Among patients with SSc-related ILD, emphysema is sporadically present in nonsmokers and is associated with a low pack-year history in smokers. The confounding effect of CPFE on measures of gas exchange has major implications for the construction of screening algorithms for PH in patients with SSc-related ILD. © 2016, American College of Rheumatology.

  12. The Bayesian group lasso for confounded spatial data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize the predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  13. Invited Commentary: Using Financial Credits as Instrumental Variables for Estimating the Causal Relationship Between Income and Health.

    Science.gov (United States)

    Pega, Frank

    2016-05-01

    Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Evaluating disease management programme effectiveness: an introduction to instrumental variables.

    Science.gov (United States)

    Linden, Ariel; Adams, John L

    2006-04-01

    This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.
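
    A minimal sketch of the IV estimator with a single binary instrument is shown below on simulated data, where an unmeasured confounder biases naive OLS but the Wald/two-stage least squares estimate recovers the true treatment effect. The instrument, effect sizes, and sample size are invented for illustration.

    ```python
    # Minimal sketch of instrumental variable estimation (Wald/2SLS) versus
    # naive OLS under hidden confounding (simulated data).
    import numpy as np

    rng = np.random.default_rng(6)
    n = 20_000
    u = rng.normal(size=n)                       # hidden confounder (e.g., severity)
    z = rng.integers(0, 2, n)                    # binary instrument (e.g., zip-code area)
    treat = (0.8 * z + u + rng.normal(size=n) > 0.9).astype(float)
    cost = 10.0 - 2.0 * treat + 3.0 * u + rng.normal(size=n)   # true effect = -2

    ols = np.polyfit(treat, cost, 1)[0]          # naive OLS slope
    iv = np.cov(z, cost)[0, 1] / np.cov(z, treat)[0, 1]   # Wald/2SLS estimate
    print(f"OLS estimate: {ols:.2f} (confounded), IV estimate: {iv:.2f}")
    ```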

  15. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks at each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  16. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional, due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH study). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO), were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, with reproducibility between 99.3% and 100%. However, apart from the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings

  18. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
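
    For reference, the PSD of a CARMA(p, q) process is the rational function sigma^2 |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2, which a partial-fraction expansion decomposes into the sum of Lorentzians mentioned above. The sketch below evaluates it on a frequency grid with illustrative (not fitted) coefficients.

    ```python
    # Minimal sketch: evaluate a CARMA(2, 1) power spectral density as a
    # rational function of frequency (coefficients are illustrative).
    import numpy as np

    sigma = 1.0
    alpha = [1.0, 0.6, 0.04]     # AR polynomial coefficients, highest order first
    beta = [2.0, 1.0]            # MA polynomial coefficients, highest order first

    f = np.logspace(-3, 1, 200)  # frequency grid
    s = 2j * np.pi * f
    psd = sigma**2 * np.abs(np.polyval(beta, s))**2 / np.abs(np.polyval(alpha, s))**2
    print(psd[:3])               # low-frequency end of the power spectrum
    ```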

  20. Environmental lead exposure is associated with visit-to-visit systolic blood pressure variability in the US adults.

    Science.gov (United States)

    Faramawi, Mohammed F; Delongchamp, Robert; Lin, Yu-Sheng; Liu, Youcheng; Abouelenien, Saly; Fischbach, Lori; Jadhav, Supriya

    2015-04-01

    The association between environmental lead exposure and blood pressure variability, an important risk factor for cardiovascular disease, is unexplored and unknown. The objective of the study was to test the hypothesis that lead exposure is associated with blood pressure variability. American participants 17 years of age or older from the National Health and Nutrition Examination Survey III were included in the analysis. Participants' blood lead concentrations, expressed as micrograms per deciliter, were determined. The standard deviations of visit-to-visit systolic and diastolic blood pressure were calculated to determine blood pressure variability. Multivariable regression analyses adjusted for age, gender, race, smoking and socioeconomic status were employed. The participants' mean age and mean blood lead concentration were 42.72 years and 3.44 mcg/dl, respectively. Systolic blood pressure variability was significantly associated with environmental lead exposure after adjusting for the effect of the confounders. The unadjusted and adjusted means of visit-to-visit systolic blood pressure variability were 3.44 and 3.33, respectively, with a β coefficient for lead exposure of 0.07, significantly associated with systolic blood pressure variability. Screening adults with fluctuating blood pressure for lead exposure could be warranted.

  1. A Novel Method for Lithium-Ion Battery Online Parameter Identification Based on Variable Forgetting Factor Recursive Least Squares

    Directory of Open Access Journals (Sweden)

    Zizhou Lao

    2018-05-01

    For model-based state of charge (SOC) estimation methods, the battery model parameters change with temperature, SOC, and so forth, causing the estimation error to increase. Constantly updating model parameters during battery operation, also known as online parameter identification, can effectively solve this problem. In this paper, a lithium-ion battery is modeled using the Thevenin model. A variable forgetting factor (VFF) strategy is introduced to improve forgetting factor recursive least squares (FFRLS) to variable forgetting factor recursive least squares (VFF-RLS). A novel method based on VFF-RLS for the online identification of the Thevenin model is proposed. Experiments verified that VFF-RLS gives more stable online parameter identification results than FFRLS. Combined with an unscented Kalman filter (UKF) algorithm, a joint algorithm named VFF-RLS-UKF is proposed for SOC estimation. In a variable-temperature environment, a battery SOC estimation experiment was performed using the joint algorithm. The average error of the SOC estimation was as low as 0.595% in some experiments. Experiments showed that VFF-RLS can effectively track the changes in model parameters. The joint algorithm improved the SOC estimation accuracy compared to the method with a fixed forgetting factor.
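
    A minimal sketch of recursive least squares with a variable forgetting factor is given below: the factor is lowered when the prediction error is large, so the estimator tracks parameter changes, and pushed back toward 1 when the error is small. The specific lambda update is one simple scheme chosen for illustration, not necessarily the one proposed in the paper.

    ```python
    # Minimal sketch of recursive least squares with a variable forgetting
    # factor (VFF-RLS) for online parameter identification.
    import numpy as np

    def vff_rls(Phi, y, lam_min=0.95, g=0.5):
        n = Phi.shape[1]
        theta, P = np.zeros(n), 1e3 * np.eye(n)    # estimate and covariance
        for phi, yk in zip(Phi, y):
            e = yk - phi @ theta                   # a priori prediction error
            lam = max(lam_min, 1.0 - g * e * e)    # variable forgetting factor
            K = P @ phi / (lam + phi @ P @ phi)    # gain
            theta = theta + K * e
            P = (P - np.outer(K, phi @ P)) / lam
        return theta

    rng = np.random.default_rng(7)
    Phi = rng.normal(size=(500, 2))                # regressors (e.g., current terms)
    y = Phi @ np.array([1.5, -0.7]) + 0.01 * rng.normal(size=500)
    print(vff_rls(Phi, y))                         # should approach [1.5, -0.7]
    ```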

  2. Parasitism can be a confounding factor in assessing the response of zebra mussels to water contamination

    International Nuclear Information System (INIS)

    Minguez, Laëtitia; Buronfosse, Thierry; Beisel, Jean-Nicolas; Giambérini, Laure

    2012-01-01

    Biological responses measured in aquatic organisms to monitor environmental pollution can also be affected by various biotic and abiotic factors. Among these environmental factors, parasitism has often been neglected, even though infection by parasites is very frequent. In the present field investigation, the parasite infra-communities and zebra mussel biological responses were studied up- and downstream of a waste water treatment plant in northeast France. In both sites, mussels were infected by ciliates and/or intracellular bacteria, but prevalence rates and infection intensities differed according to the habitat. Differences in biological responses were observed in relation to site quality and infection status. Parasitism affected both sites but seemed to depend mainly on environmental conditions. The influence of parasites is not constant, but it remains important to consider it as a potential confounding factor in ecotoxicological studies. This study also emphasizes the value of integrative indexes for synthesizing data sets. Highlights: ► Study of potential bias associated with the use of infected zebra mussels in ecotoxicological studies. ► Presence of infected mussels on banks and channels, up- and downstream of a waste water treatment plant. ► Parasitism influence on biological responses dependent on mussel population history. ► Integrative index, an interesting tool to synthesize the set of biological data. - Parasitism influence on the host physiology would be strongly dependent on environmental conditions but remains a potential confounding factor in ecotoxicological studies.

  3. Marital well-being and depression in Chinese marriage: Going beyond satisfaction and ruling out critical confounders.

    Science.gov (United States)

    Cao, Hongjian; Zhou, Nan; Fang, Xiaoyi; Fine, Mark

    2017-09-01

    Based on data obtained from 203 Chinese couples during the early years of marriage and utilizing the actor-partner interdependence model, this study examined the prospective associations between different aspects of marital well-being (i.e., marital satisfaction, instability, commitment, and closeness) and depressive symptoms (assessed 2 years later) while controlling for critical intrapersonal (i.e., neuroticism and self-esteem) and contextual (i.e., stressful life events) confounders. Results indicated that (a) when considering different aspects of marital well-being as predictors of depressive symptoms separately, each aspect was significantly associated with spouses' own subsequent depressive symptoms; (b) when examining various aspects of marital well-being simultaneously, only husbands' commitment, husbands' instability, and wives' instability were significantly associated with their own subsequent depressive symptoms above and beyond the other aspects; and (c) the associations between husbands' commitment, husbands' instability, and wives' instability and their own subsequent depressive symptoms remained significant even after controlling for potential major intrapersonal and contextual confounders. Such findings (a) provide evidence that the marital discord model of depression may apply to Chinese couples, (b) highlight the importance of going beyond marital (dis)satisfaction when examining the association between marital well-being and depression, and (c) demonstrate that marital well-being can account for unique variance in depressive symptoms above and beyond an array of intrapersonal and contextual risk factors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. The concept of attributes and preventions of the variables that influence the pipeline risk in the Muhlbauer Method

    Energy Technology Data Exchange (ETDEWEB)

    Schafer, Alexandro G. [Universidade Federal do Pampa (UNIPAMPA), Bage, RS (Brazil)

    2009-07-01

    There are several methods for risk assessment and risk management applied to pipelines, among them Muhlbauer's method. Muhlbauer is an internationally recognized authority on pipeline risk management. The purpose of this model is to evaluate public exposure to risk and to identify ways to manage that risk. The assessment is made by assigning quantitative values to the several items that influence pipeline risk. Because the ultimate goal of the risk assessment is to provide a means of risk management, it is sometimes useful to make a distinction between two types of risk variables: the risk evaluator can categorize each index risk variable as either an attribute or a prevention. This paper addresses the definition of attributes and preventions in the Muhlbauer basic model of risk assessment and also presents a classification of the variables that influence risk according to those two categories. (author)

  5. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The amounts estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance, basal area and commercial volume was satisfactory for applying the method in forest inventories for management plans in the Amazon.
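
    In the quadrants (point-centred quarter) method, the distance to the nearest tree is recorded in each of the four quadrants around every sampling point, and density follows from the mean point-to-tree distance. The sketch below uses the classical Cottam-Curtis estimator with invented distances; per-hectare basal area and volume then follow by multiplying the density by the corresponding per-tree means.

    ```python
    # Minimal sketch of the point-centred quarter (quadrants) density estimate.
    import numpy as np

    # rows = sample points, columns = the four quadrants (distances in metres)
    d = np.array([[4.2, 6.1, 3.8, 5.0],
                  [7.3, 2.9, 5.5, 4.4],
                  [3.1, 4.8, 6.6, 5.9]])

    mean_d = d.mean()                       # mean point-to-tree distance (m)
    trees_per_m2 = 1.0 / mean_d**2          # mean area per tree = mean_d^2
    print("density (trees/ha):", 10_000 * trees_per_m2)
    # Basal area or volume per hectare follows by multiplying this density by
    # the mean basal area or volume of the measured trees.
    ```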

  6. Electromagnetic variable degrees of freedom actuator systems and methods

    Science.gov (United States)

    Montesanti, Richard C [Pleasanton, CA; Trumper, David L [Plaistow, NH; Kirtley, Jr., James L.

    2009-02-17

    The present invention provides a variable reluctance actuator system and method that can be adapted for simultaneous rotation and translation of a moving element by applying a normal-direction magnetic flux on the moving element. In a beneficial example arrangement, the moving element includes a swing arm that carries a cutting tool at a set radius from an axis of rotation so as to produce a rotary fast tool servo that provides tool motion in a direction substantially parallel to the surface normal of a workpiece at the point of contact between the cutting tool and the workpiece. An actuator rotates the swing arm such that the cutting tool moves toward and away from a mounted rotating workpiece in a controlled manner in order to machine the workpiece. Position sensors provide rotation and displacement information for the swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.

  7. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    Monitoring daily health condition is necessary for preventing stress syndrome. In this study, a method is proposed for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability in real time and continuously. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated for assessing mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were recalculated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. In this method, the mental and physiological condition is assessed using only the 20 most recent heart beats, so it can be regarded as a real-time assessment method.
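
    A minimal sketch of the two indexes on a synthetic RR-interval series is given below: instantaneous HR from the latest interval, and the ratio of extreme points (local maxima and minima of the series) to the number of beats, recomputed over a sliding 20-beat window.

    ```python
    # Minimal sketch: instantaneous HR and NEP-to-beats ratio over a sliding
    # 20-beat window of RR intervals (synthetic data).
    import numpy as np

    rng = np.random.default_rng(8)
    rr = 0.8 + 0.05 * np.sin(np.arange(300) / 5.0) + 0.02 * rng.normal(size=300)

    def extreme_points(x):
        """Count local maxima and minima of a 1-D series."""
        interior = x[1:-1]
        return np.sum((interior > x[:-2]) & (interior > x[2:]) |
                      (interior < x[:-2]) & (interior < x[2:]))

    win = 20
    for k in range(win, len(rr) + 1):            # recompute at every beat
        segment = rr[k - win:k]
        hr = 60.0 / segment[-1]                  # instantaneous HR (bpm)
        nep_ratio = extreme_points(segment) / win
    print(f"last window: HR = {hr:.1f} bpm, NEP ratio = {nep_ratio:.2f}")
    ```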

  8. New Methods for Prosodic Transcription: Capturing Variability as a Source of Information

    Directory of Open Access Journals (Sweden)

    Jennifer Cole

    2016-06-01

    Understanding the role of prosody in encoding linguistic meaning and in shaping phonetic form requires the analysis of prosodically annotated speech drawn from a wide variety of speech materials. Yet obtaining accurate and reliable prosodic annotations for even small datasets is challenging due to the time and expertise required. We discuss several factors that make prosodic annotation difficult and impact its reliability, all of which relate to 'variability': in the patterning of prosodic elements (features and structures) as they relate to the linguistic and discourse context, in the acoustic cues for those prosodic elements, and in the parameter values of the cues. We propose two novel methods for prosodic transcription that capture variability as a source of information relevant to the linguistic analysis of prosody. The first is 'Rapid Prosody Transcription' (RPT), which can be performed by non-experts using a simple set of unary labels to mark prominence and boundaries based on immediate auditory impression. Inter-transcriber variability is used to calculate continuous-valued prosody 'scores' that are assigned to each word and represent the perceptual salience of its prosodic features or structure. RPT can be used to model the relative influence of top-down factors and acoustic cues in prosody perception, and to model prosodic variation across many dimensions, including language variety, speech style, or speaker's affect. The second proposed method is the identification of individual cues to the contrastive prosodic elements of an utterance. Cue specification provides a link between the contrastive symbolic categories of prosodic structures and the continuous-valued parameters in the acoustic signal, and offers a framework for investigating how factors related to the grammatical and situational context influence the phonetic form of spoken words and phrases. While cue specification as a transcription tool has not yet been explored as

  9. Gas permeation measurement under defined humidity via constant volume/variable pressure method

    KAUST Repository

    Jan Roman, Pauls

    2012-02-01

    Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and the optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. Conventional polymeric membrane materials can also vary in their permeation behaviour owing to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
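
    In the constant volume/variable pressure evaluation, the steady-state downstream pressure rise dp/dt gives the molar flow through the ideal gas law, and permeability = flux x thickness / driving pressure. The sketch below applies this standard relation with invented apparatus values; the Barrer conversion factor is the usual 1 Barrer = 3.348e-16 mol*m/(m^2*s*Pa).

    ```python
    # Minimal sketch of permeability from a constant volume/variable pressure
    # (pressure increase) measurement, with invented apparatus values.
    import numpy as np

    R, T = 8.314, 303.15                   # J/(mol K), K
    V = 25e-6                              # downstream volume (m^3)
    A = 2.0e-4                             # membrane area (m^2)
    l = 50e-6                              # membrane thickness (m)
    dp_feed = 2.0e5                        # feed-to-permeate pressure difference (Pa)

    t = np.linspace(0, 600, 50)            # time (s)
    p = 5.0 + 0.8 * t + np.random.default_rng(9).normal(0, 2.0, t.size)  # Pa

    dpdt = np.polyfit(t, p, 1)[0]          # steady-state slope dp/dt (Pa/s)
    flux = V * dpdt / (R * T * A)          # molar flux, mol/(m^2 s)
    perm = flux * l / dp_feed              # permeability, mol*m/(m^2*s*Pa)
    print(f"permeability = {perm:.3e} mol*m/(m^2*s*Pa) "
          f"= {perm / 3.348e-16:.1f} Barrer")
    ```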

  10. The Multi-Attribute Group Decision-Making Method Based on Interval Grey Trapezoid Fuzzy Linguistic Variables.

    Science.gov (United States)

    Yin, Kedong; Wang, Pengyu; Li, Xuemei

    2017-12-13

    With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.

  11. Statistical Metadata Analysis of the Variability of Latency, Device Transfer Time, and Coordinate Position from Smartphone-Recorded Infrasound Data

    Science.gov (United States)

    Garces, E. L.; Garces, M. A.; Christe, A.

    2017-12-01

    The RedVox infrasound recorder app uses microphones and barometers in smartphones to record infrasound, low-frequency sound below the threshold of human hearing. We study a device's metadata, which include position, latency time, the differences between the device's internal times and the server times, and the machine time, searching for patterns and possible errors or discontinuities in these scaled parameters. We highlight metadata variability through scaled multivariate displays (histograms, distribution curves, scatter plots), all created and organized through software development in Python. This project is helpful in ascertaining variability and honing the accuracy of smartphones, aiding the emergence of portable devices as viable geophysical data collection instruments. It can also improve the app and cloud service by increasing efficiency and accuracy, allowing better documentation and forecasting of drastic natural events such as tsunamis, earthquakes, volcanic eruptions, storms, rocket launches, and meteor impacts; recorded data can later be used for studies and analysis in a variety of professions. We expect our final results to produce insight into how to counteract problematic issues in data mining and improve accuracy in smartphone data collection. By eliminating lurking variables and minimizing the effect of confounding variables, we hope to discover efficient processes to reduce superfluous precision, unnecessary errors, and data artifacts. These methods should conceivably be transferable to other areas of software development, data analytics, and statistics-based experiments, setting a precedent for smartphone metadata studies based on geophysical rather than societal data. The results should facilitate the rise of civilian-accessible, hand-held, data-gathering mobile sensor networks and yield more straightforward data mining techniques.

  12. High Levels of Sample-to-Sample Variation Confound Data Analysis for Non-Invasive Prenatal Screening of Fetal Microdeletions.

    Directory of Open Access Journals (Sweden)

    Tianjiao Chu

    Our goal was to test the hypothesis that inter-individual genomic copy number variation in control samples is a confounding factor in the non-invasive prenatal detection of fetal microdeletions via the sequence-based analysis of maternal plasma DNA. The Database of Genomic Variants (DGV) was used to determine the "Genomic Variants Frequency" (GVF) for each 50 kb region in the human genome. Whole genome sequencing of fifteen karyotypically normal maternal plasma and six CVS DNA control samples was performed. The coefficient of variation of relative read counts (cv.RTC) for these samples was determined for each 50 kb region. Maternal plasma from two pregnancies affected with a chromosome 5p microdeletion was also sequenced, and analyzed using the GCREM algorithm. We found a strong correlation between high variance in read counts and GVF amongst controls. Consequently, we were unable to confirm the presence of the microdeletion via sequencing of maternal plasma samples obtained from two sequential affected pregnancies. Caution should be exercised when performing NIPT for microdeletions. It is vital to develop our understanding of the factors that impact the sensitivity and specificity of these approaches. In particular, benign copy number variation amongst controls is a major confounder, and its effects should be corrected bioinformatically.
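
    As an illustration of the kind of per-bin dispersion statistic described here, the sketch below computes a coefficient of variation of relative read counts across control samples for fixed-width genomic bins. The array layout, bin width and function name are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def cv_relative_read_counts(counts):
    """cv.RTC per genomic bin. `counts` is a (samples x bins) array of raw
    read counts, one row per control sample, one column per 50 kb bin
    (layout assumed for this sketch)."""
    counts = np.asarray(counts, dtype=float)
    # Normalize each sample by its total reads -> relative read counts.
    rtc = counts / counts.sum(axis=1, keepdims=True)
    mean = rtc.mean(axis=0)
    sd = rtc.std(axis=0, ddof=1)
    # Bins with zero mean coverage are undefined; report them as NaN.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(mean > 0, sd / mean, np.nan)

# Demo: 15 control samples and 1000 bins of simulated Poisson counts.
rng = np.random.default_rng(0)
print(cv_relative_read_counts(rng.poisson(200, size=(15, 1000)))[:5])
```

    Bins whose cv.RTC is high across controls would then be flagged or down-weighted before testing for a fetal microdeletion signal.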

  13. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    Science.gov (United States)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, compared against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  14. Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables

    Directory of Open Access Journals (Sweden)

    Jéssica Bassi da Silva

    2018-03-01

    The process of mucoadhesion has been widely studied using a wide variety of methods, which are influenced by instrumental variables and experiment design, making comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and gave different adhesion results depending on whether a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It is suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.

  15. Examination of the Relationship between Oral Health and Arterial Sclerosis without Genetic Confounding through the Study of Older Japanese Twins.

    Directory of Open Access Journals (Sweden)

    Yuko Kurushima

    Although researchers have recently demonstrated a relationship between oral health and arterial sclerosis, the genetic contribution to this relationship has been ignored even though genetic factors are expected to have some effect on various diseases. The aim of this study was to evaluate oral health as a significant risk factor related to arterial sclerosis after eliminating genetic confounding through the study of older Japanese twins. Medical and dental surveys were conducted individually for 106 Japanese twin pairs over the age of 50 years. Maximal carotid intima-media thickness (IMT-Cmax) was measured as a surrogate marker of arterial sclerosis; IMT-Cmax > 1.0 mm was diagnosed as arterial sclerosis. All of the twins were examined for the number of remaining teeth, masticatory performance, and periodontal status. We evaluated each measurement related to IMT-Cmax and arterial sclerosis using generalized estimating equations analysis adjusted for potential risk factors. For non-smoking monozygotic twins, a regression analysis using a "between-within" model was conducted to evaluate the relationship between IMT-Cmax and the number of teeth as the environmental factor, controlling for genetic and familial confounding. We examined 91 monozygotic and 15 dizygotic twin pairs (males: 42, females: 64) with a mean (± standard deviation) age of 67.4 ± 10.0 years. Of all the oral health-related measurements collected, only the number of teeth was significantly related to arterial sclerosis (odds ratio: 0.72, 95% confidence interval: 0.52-0.99 per five teeth). Regression analysis showed a significant association between IMT-Cmax and the number of teeth as an environmental factor (p = 0.037). Analysis of monozygotic twins older than 50 years of age showed that having fewer teeth could be a significant environmental factor related to arterial sclerosis, even after controlling for genetic and familial confounding.

  16. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    Science.gov (United States)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. Pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data were selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by a PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
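
    The synergy-interval idea lends itself to a compact brute-force sketch: split the spectrum into intervals, fit a PLS model on every combination of a few intervals, and keep the combination with the lowest cross-validated error. The sketch below, written against scikit-learn, is a simplified stand-in for the published siPLS workflow; preprocessing (MSC, mean centering) is assumed to have been applied to X beforehand, and all names are illustrative.

```python
from itertools import combinations

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def sipls_search(X, y, n_intervals=20, n_combine=3, n_components=5, cv=5):
    """Rank every combination of n_combine spectral intervals by the RMSECV
    of a PLS model fitted on just those intervals; return the best."""
    idx = np.array_split(np.arange(X.shape[1]), n_intervals)
    best_rmsecv, best_combo = np.inf, None
    for combo in combinations(range(n_intervals), n_combine):
        cols = np.concatenate([idx[i] for i in combo])
        pls = PLSRegression(n_components=min(n_components, len(cols)))
        pred = cross_val_predict(pls, X[:, cols], y, cv=cv)
        rmsecv = float(np.sqrt(np.mean((np.ravel(y) - np.ravel(pred)) ** 2)))
        if rmsecv < best_rmsecv:
            best_rmsecv, best_combo = rmsecv, combo
    return best_rmsecv, best_combo
```

    With 20 intervals and combinations of 3 intervals this is 1140 model fits, which is entirely tractable for a 40-sample calibration set.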

  17. Lung lesion doubling times: values and variability based on method of volume determination

    International Nuclear Information System (INIS)

    Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory

    2008-01-01

    Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with two or more thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated, including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23 to 2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs derived using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs
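
    The doubling times themselves follow from the standard exponential-growth relation DT = Δt · ln 2 / ln(V2/V1), where V1 and V2 are the lesion volumes on two scans separated by Δt. A minimal worked example (function name and numbers are illustrative only):

```python
import math

def doubling_time(v1, v2, dt_days):
    """Volume doubling time (days) from two volume measurements taken
    dt_days apart, assuming exponential growth."""
    if v2 <= v1:
        raise ValueError("lesion volume did not increase between scans")
    return dt_days * math.log(2) / math.log(v2 / v1)

# A lesion growing from 500 to 800 mm^3 over 90 days doubles in ~133 days.
print(round(doubling_time(500.0, 800.0, 90.0)))
```

    Because DT depends only on the ratio V2/V1, any systematic difference between volume determination methods propagates directly into the computed doubling time, which is consistent with the variability the authors report.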

  18. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for assessment of water resources, maxima or minima runoff, etc.) as well as a new one characterizing an intra-annual function or intra-annual runoff distribution. For this aim, a linear model has been developed which has two coefficients connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter characterizing the intensity of synoptic and macro-synoptic fluctuations inside a year. Effective statistical methods have been developed for separating climate variability from climate change and extracting homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For assessing the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the detected climate changes over the area and spatial modeling. For determination of homogeneous regions with the same

  19. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction method is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, power capability prediction results for different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
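
    The state-of-charge idea in such a model can be sketched directly: if the main capacitance C(V) is piecewise linear in voltage, the stored charge is the integral of C over voltage, and SOC is the fraction of usable charge between the voltage limits. The breakpoints below are invented for illustration; they are not the paper's parameters.

```python
import numpy as np

# Hypothetical piecewise-linear capacitance map (voltage in V, capacitance in F).
V_PTS = np.array([0.0, 1.0, 2.0, 2.7])
C_PTS = np.array([280.0, 310.0, 360.0, 400.0])

def charge(v):
    """Q(v) = integral of C(u) du from 0 to v; the trapezoid rule is exact
    for a piecewise-linear C."""
    grid = np.linspace(0.0, v, 201)
    c = np.interp(grid, V_PTS, C_PTS)
    return float(((c[1:] + c[:-1]) / 2.0 * np.diff(grid)).sum())

def soc(v, v_min=0.0, v_max=2.7):
    """State of charge as the fraction of usable charge between the limits."""
    return (charge(v) - charge(v_min)) / (charge(v_max) - charge(v_min))

print(f"SOC at 1.8 V: {soc(1.8):.2f}")
```

    A Kalman-filter state observer, as described in the abstract, would then correct this open-loop SOC estimate against terminal voltage measurements.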

  1. General method and exact solutions to a generalized variable-coefficient two-dimensional KdV equation

    International Nuclear Information System (INIS)

    Chen, Yong; Shanghai Jiao-Tong Univ., Shanghai; Chinese Academy of Sciences, Beijing

    2005-01-01

    A general method to uniformly construct exact solutions in terms of special functions for nonlinear partial differential equations is presented by means of a more general ansatz and symbolic computation. Making use of the general method, we can successfully obtain the solutions found by the method proposed by Fan (J. Phys. A, 36 (2003) 7009) and find other new and more general solutions, which include polynomial solutions, exponential solutions, rational solutions, triangular periodic wave solutions, soliton solutions, soliton-like solutions and Jacobi and Weierstrass doubly periodic wave solutions. A general variable-coefficient two-dimensional KdV equation is chosen to illustrate the method. As a result, some new exact soliton-like solutions are obtained.
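
    Symbolic verification of candidate solutions is easy to reproduce in miniature. The snippet below checks the classic one-soliton solution of the constant-coefficient 1D KdV equation u_t + 6uu_x + u_xxx = 0 with SymPy; it is a toy stand-in for, not an implementation of, the variable-coefficient two-dimensional construction of the paper.

```python
import sympy as sp

x, t, c = sp.symbols("x t c", positive=True)

# Candidate soliton: u = (c/2) * sech^2( (sqrt(c)/2) * (x - c*t) ).
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

# Residual of u_t + 6*u*u_x + u_xxx; rewriting in exponentials lets
# simplify() cancel it completely.
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))  # expected output: 0
```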

  2. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may perform poorly, as they cannot account for the increase or decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevancy, conditional relevancy, and redundancy within a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
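
    A drastically simplified version of mutual-information-based input selection can be put together with scikit-learn. The paper's method uses conditional mutual information to account for redundancy among candidate inputs; the greedy sketch below only approximates that by penalizing correlation with already-selected inputs, and every name in it is illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def greedy_input_selection(X, y, names, k=5):
    """Pick k inputs by MI with the target, discounted by the maximum
    absolute correlation with inputs already picked (a crude stand-in
    for conditional-mutual-information redundancy handling)."""
    mi = mutual_info_regression(X, y, random_state=0)
    picked, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        def score(j):
            if not picked:
                return mi[j]
            r = max(abs(np.corrcoef(X[:, j], X[:, p])[0, 1]) for p in picked)
            return mi[j] * (1.0 - r)
        best = max(remaining, key=score)
        picked.append(best)
        remaining.remove(best)
    return [names[j] for j in picked]
```

    Candidate inputs here would be lagged demand plus the simulated weather variables (temperature, rainfall, and so on) produced by the stochastic weather generator.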

  3. The Effect of 4-week Different Training Methods on Some Fitness Variables in Youth Handball Players

    Directory of Open Access Journals (Sweden)

    Abdolhossein a Parnow

    2016-09-01

    Handball is a team sport in which main activities such as sprinting, arm throwing, and hitting are involved. This Olympic team sport requires a standard of preparation in order to complete sixty minutes of competitive play and to achieve success. This study, therefore, was done to determine the effect of 4-week different training methods on some physical fitness variables in youth handball players. Thirty high-school students participated in the study and were assigned to the Resistance Training (RT; n = 10: 16.75 ± 0.36 yr; 63.14 ± 4.19 kg; 174.8 ± 5.41 cm), Plyometric Training (PT; n = 10: 16.57 ± 0.26 yr; 65.52 ± 6.79 kg; 173.5 ± 5.44 cm), and Complex Training (CT; n = 10: 16.23 ± 0.50 yr; 58.43 ± 10.50 kg; 175.2 ± 8.19 cm) groups. Subjects were evaluated on anthropometric and physiological characteristics 48 hours before and after the 4-week protocol. Statistical analyses consisted of repeated-measures ANOVA and one-way ANOVA. Regarding pre- to post-test changes in the groups, data analysis showed that BF, strength, speed, agility, and explosive power were affected by the training protocols (P < 0.05). In conclusion, complex training had an advantageous effect on variables such as strength, explosive power, speed and agility in youth handball players compared with resistance and plyometric training, although positive effects of those training methods were also observed. Coaches and players, therefore, could consider complex training as an alternative to other training methods.

  4. The ad-libitum alcohol "taste test": secondary analyses of potential confounds and construct validity

    OpenAIRE

    Jones, Andrew; Button, Emily; Rose, Abigail K.; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt

    2015-01-01

    Rationale: Motivation to drink alcohol can be measured in the laboratory using an ad-libitum "taste test", in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. Objective: The objective of this study was to investigate variables that may compromise the validity of this paradigm and its construct validity. Methods: We re-analysed data from 12 studies from our laboratory that incorporated a...

  5. Assessing data quality and the variability of source data verification auditing methods in clinical research settings.

    Science.gov (United States)

    Houston, Lauren; Probst, Yasmine; Martin, Allison

    2018-05-18

    Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in clinical research settings. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total, 15 publications were included. The nature and extent of the SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated a ~40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined as "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings.

  6. Harnessing real world data from wearables and self-monitoring devices: feasibility, confounders and ethical considerations

    Directory of Open Access Journals (Sweden)

    Uttam Barick

    2016-07-01

    The increasing usage of smartphones has compelled mobile technology to become a universal part of everyday life. From wearable gadgets to sophisticated implantable medical devices, the advent of mobile technology has completely transformed healthcare delivery. Self-report measures enabled by mobile technology are increasingly becoming a more time- and cost-efficient method of assessing real world health outcomes. But amidst all the optimism, there are also concerns about adopting this technology, as regulations and ethical considerations on privacy legislation for end users are unclear. In general, the healthcare industry operates under stringent regulations and compliance requirements to ensure the safety and protection of patient information. Two of the most common regulations are the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. To harness the true potential of mobile technology to empower stakeholders and provide them a common platform which seamlessly integrates healthcare delivery and research, it is imperative that challenges and drawbacks in this sphere are identified and addressed. In this age of information and technology, no stone should be left unturned in ensuring that the human race has access to the best healthcare services without intrusion into confidentiality. This article is an overview of the role of tracking and self-monitoring devices in data collection for real world evidence/observational studies in the context of feasibility, confounders and ethical considerations.

  7. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    Science.gov (United States)

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are usually calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent large inter-subject variability. A joint graphical model with Stability Selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blinded to the other group when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issue that JGMSS ignores between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, and to compare it with JGMSS and the Fisher Z test, all three methods are applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.

  8. A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy

    Directory of Open Access Journals (Sweden)

    Yongxin Chou

    2017-01-01

    Base scale entropy analysis (BSEA) is a nonlinear method to analyze heart rate variability (HRV) signals. However, the time consumption of BSEA is too long, and it was unknown whether BSEA is suitable for analyzing pulse rate variability (PRV) signals. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA and sliding window iterative theory. Blood pressure signals of healthy young and old subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, BSEA and SWIBSEA were used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and the buffer cache space while yielding the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. Therefore, SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in some portable and wearable medical devices.
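
    For readers unfamiliar with base-scale entropy, the plain (non-iterative) computation can be sketched compactly: every m-point vector is symbolized against its own mean with a threshold proportional to its base scale (the RMS of successive differences), and the result is the Shannon entropy of the symbol words. The parameter choices m = 4 and a = 0.2 are common in the BSEA literature but are assumptions here; the SWIBSEA speed-up would come from updating the word counts incrementally as the window slides rather than recomputing them.

```python
from collections import Counter

import numpy as np

def base_scale_entropy(x, m=4, a=0.2):
    """Plain base-scale entropy of a rate series (bits)."""
    x = np.asarray(x, dtype=float)
    words = Counter()
    for i in range(len(x) - m + 1):
        v = x[i:i + m]
        bs = np.sqrt(np.mean(np.diff(v) ** 2))   # base scale of this vector
        mu = v.mean()
        sym = np.where(v > mu + a * bs, 0,
              np.where(v >= mu, 1,
              np.where(v >= mu - a * bs, 2, 3)))  # 4-symbol alphabet
        words[tuple(sym)] += 1
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
print(base_scale_entropy(60 + rng.normal(0, 2, 500)))  # demo on noise
```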

  9. Logic Learning Machine and standard supervised methods for Hodgkin's lymphoma prognosis using gene expression data and clinical variables.

    Science.gov (United States)

    Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco

    2018-03-01

    This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.

  10. Methods to quantify variable importance: implications for the analysis of noisy ecological data

    OpenAIRE

    Murray, Kim; Conner, Mary M.

    2009-01-01

    Determining the importance of independent variables is of practical relevance to ecologists and managers concerned with allocating limited resources to the management of natural systems. Although techniques that identify explanatory variables having the largest influence on the response variable are needed to design management actions effectively, the use of various indices to evaluate variable importance is poorly understood. Using Monte Carlo simulations, we compared six different indices c...

  11. Postnatal undernutrition in rats: attempts to develop alternative methods to food deprive pups without maternal behavioral alteration.

    Science.gov (United States)

    Codo, W; Carlini, E A

    1979-09-01

    Two methods were investigated as attempts to undernourish rat pups without the disturbances in maternal behavior that accompany the procedures used to date for this purpose. In the 1st method, a litter of 12 pups was raised by both a lactating mother and a "sensitized" female. The sensitized female was provided under the assumption that she could correct for the deficit in maternal care when 1 mother raises a large litter. The results showed that the pups raised by the 2 females were constantly removed by the females from each other's nests; the females engaged in constant fighting and showed altered maternal behavior. As a consequence the pups lost more weight than control underfed young. The 2nd method consisted of removing 6-8 nipples from virgin females which were mated 10 days later. After delivery these females raised litters of 6 pups. Their maternal behavior was equal to that of unoperated controls, and at weaning the pups had 20-50% less body weight. This method could be useful to study undernutrition effects on behavior, without confounding experimental variables.

  12. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters containing between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
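
    The contiguity constraint is what separates the CTSP from the plain TSP, and it already shows up in the simplest constructive heuristic: pick clusters one at a time, and exhaust each cluster before moving on. The sketch below implements only that greedy construction; the GRASP randomization, path relinking and VND local search of the paper would be layered on top of a tour built this way.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_ctsp(clusters, start=(0.0, 0.0)):
    """Visit clusters in nearest-centroid order; inside each cluster,
    chain vertices by nearest neighbour so the cluster stays contiguous."""
    def centroid(c):
        return (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))

    tour, pos = [], start
    unvisited = list(clusters)
    while unvisited:
        cluster = min(unvisited, key=lambda c: dist(pos, centroid(c)))
        unvisited.remove(cluster)
        rest = list(cluster)
        while rest:
            nxt = min(rest, key=lambda p: dist(pos, p))
            rest.remove(nxt)
            tour.append(nxt)
            pos = nxt
    return tour

clusters = [[(0, 1), (1, 1)], [(5, 5), (6, 5), (5, 6)], [(9, 0), (8, 1)]]
print(greedy_ctsp(clusters))
```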

  13. Solving (2 + 1)-dimensional sine-Poisson equation by a modified variable separated ordinary differential equation method

    International Nuclear Information System (INIS)

    Ka-Lin, Su; Yuan-Xi, Xie

    2010-01-01

    By introducing a more general auxiliary ordinary differential equation (ODE), a modified variable separated ordinary differential equation method is presented for solving the (2 + 1)-dimensional sine-Poisson equation. As a result, many explicit and exact solutions of the (2 + 1)-dimensional sine-Poisson equation are derived in a simple manner by this technique. (general)

  14. Development and validation of a new fallout transport method using variable spectral winds

    International Nuclear Information System (INIS)

    Hopkins, A.T.

    1984-01-01

    A new method was developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud

  15. A finite difference method for space fractional differential equations with variable diffusivity coefficient

    KAUST Repository

    Mustapha, K.; Furati, K.; Knio, Omar; Maitre, O. Le

    2017-06-03

    Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes the mathematical analysis of these models and the establishment of suitable numerical schemes substantially more difficult. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative when the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. Our finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order h are demonstrated, where h denotes the maximum space step size. The numerical tests illustrate the global O(h) accuracy of our scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
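
    Schemes of this family are typically built from Grünwald-Letnikov weights, which admit a simple, stable recurrence. The sketch below shows those weights and a first-order approximation of a left-sided fractional derivative on a uniform grid; it is a generic building block, not the paper's exact two-sided, variable-coefficient scheme.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights g_k = (-1)^k C(alpha, k), k = 0..n,
    via the recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def left_gl_derivative(u, alpha, h):
    """First-order approximation of the left-sided fractional derivative:
    D^alpha u(x_i) ~ h**(-alpha) * sum_k g_k * u_{i-k}."""
    g = gl_weights(alpha, len(u) - 1)
    out = [np.dot(g[:i + 1], u[i::-1]) for i in range(len(u))]
    return np.asarray(out) / h ** alpha

# Sanity check: for alpha close to 1 and u(x) = x, the result approaches 1.
x = np.linspace(0.0, 1.0, 101)
print(left_gl_derivative(x, 0.999, x[1] - x[0])[50])
```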

  17. A method for determining average beach slope and beach slope variability for U.S. sandy coastlines

    Science.gov (United States)

    Doran, Kara S.; Long, Joseph W.; Overbeck, Jacquelyn R.

    2015-01-01

    The U.S. Geological Survey (USGS) National Assessment of Hurricane-Induced Coastal Erosion Hazards compares measurements of beach morphology with storm-induced total water levels to produce forecasts of coastal change for storms impacting the Gulf of Mexico and Atlantic coastlines of the United States. The wave-induced water level component (wave setup and swash) is estimated by using modeled offshore wave height and period and measured beach slope (from dune toe to shoreline) through the empirical parameterization of Stockdon and others (2006). Spatial and temporal variability in beach slope leads to corresponding variability in predicted wave setup and swash. For instance, seasonal and storm-induced changes in beach slope can lead to differences on the order of 1 meter (m) in wave-induced water level elevation, making accurate specification of this parameter and its associated uncertainty essential to skillful forecasts of coastal change. A method for calculating spatially and temporally averaged beach slopes is presented here along with a method for determining total uncertainty for each 200-m alongshore section of coastline.
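
    The wave-induced water level component referenced above is computed from the Stockdon and others (2006) parameterization, in which the measured beach slope enters both the setup and swash terms. A compact version (the variable names are ours) illustrates the slope sensitivity that motivates the averaging method:

```python
import math

def stockdon_r2(h0, t0, beta):
    """2% exceedance runup (m): h0 deep-water wave height (m),
    t0 peak wave period (s), beta beach slope (dune toe to shoreline)."""
    l0 = 9.81 * t0 ** 2 / (2.0 * math.pi)          # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(h0 * l0)
    swash = math.sqrt(h0 * l0 * (0.563 * beta ** 2 + 0.004))
    return 1.1 * (setup + swash / 2.0)

# Seasonal spread in beach slope changes runup by order 1 m.
for beta in (0.04, 0.08, 0.12):
    print(f"beta={beta:.2f}  R2={stockdon_r2(3.0, 12.0, beta):.2f} m")
```

    Averaging slopes alongshore and in time, with an uncertainty attached to each 200-m section, keeps this sensitivity explicit in the coastal-change forecasts.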

  18. Surgery confounds biology: the predictive value of stage-, grade- and prostate-specific antigen for recurrence after radical prostatectomy as a function of surgeon experience.

    Science.gov (United States)

    Vickers, Andrew J; Savage, Caroline J; Bianco, Fernando J; Klein, Eric A; Kattan, Michael W; Secin, Fernando P; Guilloneau, Bertrand D; Scardino, Peter T

    2011-04-01

    Statistical models predicting cancer recurrence after surgery are based on biologic variables. We have shown previously that prostate cancer recurrence is related both to tumor biology and to surgical technique. Here, we evaluate the association between several biological predictors and biochemical recurrence across varying surgical experience. The study included two separate cohorts: 6,091 patients treated by open radical prostatectomy and an independent replication set of 2,298 patients treated laparoscopically. We calculated the odds ratios for the biological predictors of biochemical recurrence (stage, Gleason grade and prostate-specific antigen, PSA) and also the predictive accuracy (area under the curve, AUC) of a multivariable model, for subgroups of patients defined by the experience of their surgeon. In the open cohort, the odds ratios for Gleason score 8+ and advanced pathologic stage, though not for PSA or Gleason score 7, increased dramatically when patients treated by surgeons with lower levels of experience were excluded (Gleason 8+: odds ratios 5.6 overall vs. 13.0 for patients treated by surgeons with 1,000+ prior cases; locally advanced disease: odds ratios of 6.6 vs. 12.2, respectively). The AUC of the multivariable model was 0.750 for patients treated by surgeons with 50 or fewer cases compared to 0.849 for patients treated by surgeons with 500 or more. Although predictiveness was lower overall for the independent replication cohort, the main findings were replicated. Surgery confounds biology. Although our findings have no direct clinical implications, studies investigating biological variables as predictors of outcome after curative resection of cancer should consider the impact of surgeon-specific factors.

  19. Interindividual variability in the dose-specific effect of dopamine on carotid chemoreceptor sensitivity to hypoxia

    Science.gov (United States)

    Limberg, Jacqueline K.; Johnson, Blair D.; Holbein, Walter W.; Ranadive, Sushant M.; Mozer, Michael T.

    2015-01-01

    Human studies use varying levels of low-dose (1-4 μg·kg−1·min−1) dopamine to examine peripheral chemosensitivity, based on its known ability to blunt carotid body responsiveness to hypoxia. However, the effect of dopamine on the ventilatory response to hypoxia is highly variable between individuals. Thus we sought to determine 1) the dose-response relationship between dopamine and peripheral chemosensitivity, as assessed by the ventilatory response to hypoxia in a cohort of healthy adults, and 2) potential confounding cardiovascular responses at variable low doses of dopamine. Young, healthy adults (n = 30, age = 32 ± 1, 24 male/6 female) were given intravenous (iv) saline and a range of iv dopamine doses (1-4 μg·kg−1·min−1) prior to and throughout five hypoxic ventilatory response (HVR) tests. Subjects initially received iv saline, and after each HVR test the dopamine infusion rate was increased by 1 μg·kg−1·min−1. Tidal volume, respiratory rate, heart rate, blood pressure, and oxygen saturation were continuously measured. Dopamine significantly reduced the HVR at all doses, with a dose-dependent reduction in HVR in the high baseline-chemosensitivity group only (P < 0.05) but not in the low group (P > 0.05). Dopamine infusion also resulted in a reduction in blood pressure (3 μg·kg−1·min−1) and total peripheral resistance (1-4 μg·kg−1·min−1), driven primarily by subjects with low baseline chemosensitivity. In conclusion, we did not find a single dose of dopamine that elicited a nadir HVR in all subjects. Additionally, potentially confounding cardiovascular responses occur with dopamine infusion, which may limit its usage.

  20. VariableR Reclustering in Multiple Top Quark and W Boson Events

    Energy Technology Data Exchange (ETDEWEB)

    Hyde, Jeremy [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-14

    VariableR jet reclustering is an innovative technique that allows for the reconstruction of boosted objects over a wide range of kinematic regimes. This capability enables the efficient identification of events with multiple boosted top quarks, which is a typical signature of new physics processes such as the production of the supersymmetric partner of the gluon. In order to evaluate the performance of the algorithm, VariableR reclustered jets are compared with fixed-radius reclustered jets. The flexibility of the algorithm is tested by reconstructing both boosted top quarks and boosted W bosons. The VariableR reclustering method is found to be more efficient than the fixed-radius algorithm at identifying top quarks and W bosons in events with four top quarks, therefore enhancing the sensitivity of gluino searches.

  1. Glucose variability negatively impacts long-term functional outcome in patients with traumatic brain injury.

    Science.gov (United States)

    Matsushima, Kazuhide; Peng, Monica; Velasco, Carlos; Schaefer, Eric; Diaz-Arrastia, Ramon; Frankel, Heidi

    2012-04-01

    Significant glycemic excursions (so-called glucose variability) affect the outcomes of general critically ill patients but have not been well studied in patients with traumatic brain injury (TBI). The purpose of this study was to evaluate the impact of glucose variability on the long-term functional outcome of patients with TBI. A noncomputerized tight glucose control protocol was used in our intensivist-model surgical intensive care unit. The relationship between glucose variability and long-term (a median of 6 months after injury) functional outcome, defined by the extended Glasgow Outcome Scale (GOSE), was analyzed using ordinal logistic regression models. Glucose variability was defined by the SD and the percentage of excursion (POE) from the preset glucose range. A total of 109 patients with TBI under tight glucose control had long-term GOSE evaluated. In univariable analysis, there was a significant association between lower GOSE score and higher mean glucose, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL, but not POE 80 to 110. After adjusting for possible confounding variables in multivariable ordinal logistic regression models, higher SD, POE more than 60, POE 80 to 150, and a single episode of glucose less than 60 mg/dL remained significantly associated with lower GOSE score. Glucose variability was significantly associated with poorer long-term functional outcome in patients with TBI as measured by the GOSE score. Well-designed protocols to minimize glucose variability may be key to improving long-term functional outcome.
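
    The two variability metrics are straightforward to compute from a glucose series. The sketch below assumes POE is the percentage of measurements falling outside a preset band, which matches how the abstract uses labels like "POE 80 to 150" (the paper's exact definition may differ in detail).

```python
import numpy as np

def glucose_variability(glucose, low=80.0, high=150.0):
    """SD and percentage of excursion (POE) outside [low, high] mg/dL."""
    g = np.asarray(glucose, dtype=float)
    return {
        "mean": float(g.mean()),
        "sd": float(g.std(ddof=1)),
        f"POE {low:g}-{high:g}": 100.0 * float(np.mean((g < low) | (g > high))),
        "hypoglycemia<60": bool((g < 60.0).any()),
    }

print(glucose_variability([92, 130, 161, 74, 115, 148, 178, 96]))
```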

  2. Linking global climate and temperature variability to widespread amphibian declines putatively caused by disease.

    Science.gov (United States)

    Rohr, Jason R; Raffel, Thomas R

    2010-05-04

    The role of global climate change in the decline of biodiversity and the emergence of infectious diseases remains controversial, and the effect of climatic variability, in particular, has largely been ignored. For instance, it was recently revealed that the proposed link between climate change and widespread amphibian declines, putatively caused by the chytrid fungus Batrachochytrium dendrobatidis (Bd), was tenuous because it was based on a temporally confounded correlation. Here we provide temporally unconfounded evidence that global El Niño climatic events drive widespread amphibian losses in genus Atelopus via increased regional temperature variability, which can reduce amphibian defenses against pathogens. Of 26 climate variables tested, only factors associated with temperature variability could account for the spatiotemporal patterns of declines thought to be associated with Bd. Climatic predictors of declines became significant only after controlling for a pattern consistent with epidemic spread (by temporally detrending the data). This presumed spread accounted for 59% of the temporal variation in amphibian losses, whereas El Niño accounted for 59% of the remaining variation. Hence, we could account for 83% of the variation in declines with these two variables alone. Given that global climate change seems to increase temperature variability, extreme climatic events, and the strength of Central Pacific El Niño episodes, climate change might exacerbate worldwide enigmatic declines of amphibians, presumably by increasing susceptibility to disease. These results suggest that changes to temperature variability associated with climate change might be as significant to biodiversity losses and disease emergence as changes to mean temperature.

  3. The WFCAM multiwavelength Variable Star Catalog

    Science.gov (United States)

    Ferreira Lopes, C. E.; Dékány, I.; Catelan, M.; Cross, N. J. G.; Angeloni, R.; Leão, I. C.; De Medeiros, J. R.

    2015-01-01

    Context. Stellar variability in the near-infrared (NIR) remains largely unexplored. The exploitation of public science archives with data-mining methods offers a perspective for a time-domain exploration of the NIR sky. Aims: We perform a comprehensive search for stellar variability using the optical-NIR multiband photometric data in the public Calibration Database of the WFCAM Science Archive (WSA), with the aim of contributing to the general census of variable stars and of extending the current scarce inventory of accurate NIR light curves for a number of variable star classes. Methods: Standard data-mining methods were applied to extract and fine-tune time-series data from the WSA. We introduced new variability indices designed for multiband data with correlated sampling, and applied them for preselecting variable star candidates, i.e., light curves that are dominated by correlated variations, from noise-dominated ones. Preselection criteria were established by robust numerical tests for evaluating the response of variability indices to the colored noise characteristic of the data. We performed a period search using the string-length minimization method on an initial catalog of 6551 variable star candidates preselected by variability indices. Further frequency analysis was performed on positive candidates using three additional methods in combination, in order to cope with aliasing. Results: We find 275 periodic variable stars and an additional 44 objects with suspected variability with uncertain periods or apparently aperiodic variation. Only 44 of these objects had been previously known, including 11 RR Lyrae stars on the outskirts of the globular cluster M 3 (NGC 5272). We provide a preliminary classification of the new variable stars that have well-measured light curves, but the variability types of a large number of objects remain ambiguous. We classify most of the new variables as contact binary stars, but we also find several pulsating stars, among which
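
    The string-length minimization step mentioned in the Methods is simple enough to sketch in full: fold the light curve on each trial period, sort by phase, and take the period that minimizes the summed point-to-point length in the (phase, magnitude) plane. Conventions such as magnitude scaling vary between implementations; the choices below are illustrative.

```python
import numpy as np

def string_length(t, mag, period):
    """Summed point-to-point distance of the folded, phase-sorted curve."""
    phase = (t / period) % 1.0
    order = np.argsort(phase)
    p, m = phase[order], mag[order]
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)  # weight both axes equally
    return float(np.hypot(np.diff(p), np.diff(m)).sum())

def best_period(t, mag, periods):
    return periods[int(np.argmin([string_length(t, mag, p) for p in periods]))]

# Demo: recover a 0.7 d period from an irregularly sampled noisy sinusoid.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 200))
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.7) + rng.normal(0, 0.02, t.size)
print(best_period(t, mag, np.linspace(0.5, 1.0, 2001)))
```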

  4. Compromised Motor Dexterity Confounds Processing Speed Task Outcomes in Stroke Patients

    Directory of Open Access Journals (Sweden)

    Essie Low

    2017-09-01

    Most conventional measures of information processing speed require motor responses to facilitate performance. However, although not often addressed clinically, motor impairment, whether due to age or acquired brain injury, would be expected to confound the outcome measures of such tasks. The current study recruited 29 patients (20 stroke and 9 transient ischemic attack) with documented reduction in dexterity of the dominant hand, and 29 controls, to investigate the extent to which 3 commonly used processing speed measures with varying motor demands (a Visuo-Motor Reaction Time task, and the Wechsler Adult Intelligence Scale-IV Symbol Search and Coding subtests) may be measuring motor-related speed more so than cognitive speed. Analyses included correlations between indices of cognitive and motor speed obtained from two other tasks (Inspection Time and Pegboard tasks, respectively) and the three speed measures, followed by hierarchical regressions to determine the relative contribution of cognitive and motor speed indices toward task performance. Results revealed that speed outcomes on tasks with relatively high motor demands, such as Coding, largely reflected motor speed in individuals with reduced dominant hand dexterity. Thus, findings indicate the importance of employing measures with minimal motor requirements, especially when the assessment of speed is aimed at understanding cognitive rather than physical function.

  5. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    Science.gov (United States)

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation of the method is as follows: first, we extract the fractal component from HRV signals using the wavelet transform; next, we estimate the power spectrum distribution of the fractal component using an auto-regressive model, and we estimate the spectral exponent γ using the least squares method; finally, according to the formula D = 2 - (γ - 1)/2, we estimate the fractal dimension of the HRV signal. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
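
    The final step of the method is a one-liner once the spectral exponent is known. The sketch below estimates γ as the negative log-log slope of the power spectrum and applies D = 2 - (γ - 1)/2; Welch's periodogram plus a least-squares fit stands in for the wavelet-extracted fractal component and auto-regressive spectrum of the paper, and the band limits are illustrative.

```python
import numpy as np
from scipy.signal import welch

def fractal_dimension(rr, fs=4.0, fmin=0.01, fmax=0.4):
    """D = 2 - (gamma - 1)/2 with gamma from a 1/f^gamma fit to the PSD."""
    f, pxx = welch(rr, fs=fs, nperseg=min(256, len(rr)))
    band = (f >= fmin) & (f <= fmax) & (pxx > 0)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    gamma = -slope                       # PSD ~ f**(-gamma)
    return 2.0 - (gamma - 1.0) / 2.0

# Demo on white noise: gamma ~ 0, so D ~ 2.5. Real HRV use requires the
# fractal component extracted by the wavelet step described in the paper.
rng = np.random.default_rng(0)
print(round(fractal_dimension(rng.normal(size=1024)), 2))
```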

  6. Serotonin-1A receptors in major depression quantified using PET: controversies, confounds, and recommendations.

    Science.gov (United States)

    Shrestha, Saurav; Hirvonen, Jussi; Hines, Christina S; Henter, Ioline D; Svenningsson, Per; Pike, Victor W; Innis, Robert B

    2012-02-15

    The serotonin-1A (5-HT(1A)) receptor is of particular interest in human positron emission tomography (PET) studies of major depressive disorder (MDD). Of the eight studies investigating this issue in the brains of patients with MDD, four reported decreased 5-HT(1A) receptor density, two reported no change, and two reported increased 5-HT(1A) receptor density. While clinical heterogeneity may have contributed to these differing results, methodological factors by themselves could also explain the discrepancies. This review highlights several of these factors, including the use of the cerebellum as a reference region and the imprecision of measuring the concentration of parent radioligand in arterial plasma, the method otherwise considered to be the 'gold standard'. Other potential confounds also exist that could restrict or unexpectedly affect the interpretation of results. For example, the radioligand may be a substrate for an efflux transporter, such as P-gp, at the blood-brain barrier; furthermore, the binding of the radioligand to the receptor in various stages of cellular trafficking is unknown. Efflux transport and cellular trafficking may also be differentially expressed in patients compared to healthy subjects. We believe that, taken together, the existing disparate findings do not reliably answer the question of whether 5-HT(1A) receptors are altered in MDD or in subgroups of patients with MDD. In addition, useful meta-analysis is precluded because only one of the imaging centers acquired all the data necessary to address these methodological concerns. We recommend that in the future, individual centers acquire more thorough data capable of addressing methodological concerns, and that multiple centers collaborate to meaningfully pool their data for meta-analysis.

  7. Impact of menstruation on select hematology and clinical chemistry variables in cynomolgus macaques.

    Science.gov (United States)

    Perigard, Christopher J; Parrula, M Cecilia M; Larkin, Matthew H; Gleason, Carol R

    2016-06-01

    In preclinical studies with cynomolgus macaques, it is common to have one or more females presenting with menses. Published literature indicates that the blood lost during menses causes decreases in red blood cell mass variables (RBC, HGB, and HCT), which would be a confounding factor in the interpretation of drug-related effects on clinical pathology data, but no scientific data have been published to support this claim. This investigation was conducted to determine if the amount of blood lost during menses in cynomolgus macaques has an effect on routine hematology and serum chemistry variables. Ten female cynomolgus macaques (Macaca fascicularis), 5 to 6.5 years old, were observed daily during approximately 3 months (97 days) for the presence of menses. Hematology and serum chemistry variables were evaluated twice weekly. The results indicated that menstruation affects the erythrogram, including RBC, HGB, HCT, MCHC, MCV, reticulocyte count, and RDW; the leukogram, including neutrophil, lymphocyte, and monocyte counts; and chemistry variables, including GGT activity and the concentrations of total proteins, albumin, globulins, and calcium. The magnitude of the effect of menstruation on susceptible variables is dependent on the duration of the menstrual phase. Macaques with menstrual phases lasting ≥ 7 days are more likely to develop changes in variables related to chronic blood loss. In preclinical toxicology studies with cynomolgus macaques, interpretation of changes in several commonly evaluated hematology and serum chemistry variables requires adequate clinical observation and documentation concerning the presence and duration of menses. There is a concern that macaques with long menstrual cycles can develop iron deficiency anemia due to chronic menstrual blood loss.

  8. A new generalized expansion method and its application in finding explicit exact solutions for a generalized variable coefficients KdV equation

    International Nuclear Information System (INIS)

    Sabry, R.; Zahran, M.A.; Fan Engui

    2004-01-01

    A generalized expansion method is proposed to uniformly construct a series of exact solutions for general variable-coefficient non-linear evolution equations. The new approach admits the following types of solutions: (a) polynomial solutions, (b) exponential solutions, (c) rational solutions, (d) triangular periodic wave solutions, (e) hyperbolic and solitary wave solutions and (f) Jacobi and Weierstrass doubly periodic wave solutions. The efficiency of the method has been demonstrated by applying it to a generalized variable-coefficient KdV equation. As a result, a new and rich variety of exact explicit solutions has been found

  9. Developing a multipoint titration method with a variable dose implementation for anaerobic digestion monitoring.

    Science.gov (United States)

    Salonen, K; Leisola, M; Eerikäinen, T

    2009-01-01

    Determination of metabolites from an anaerobic digester with an acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line-compatible multipoint titration method. The titration procedure was improved in both speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information about the sample, and its performance is superior to that of traditional linear PI-controllers. In addition, a simplification representing polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for including the ionic strength effect through stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
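
    The abstract does not give the dosing law itself. As a minimal sketch of the idea (scale the next titrant dose by the locally observed buffering, so that steep parts of the titration curve receive small doses), the following Python function illustrates one possible PI-like variable-dose rule; the gain, target pH step, and dose limits are illustrative assumptions rather than the published algorithm:

```python
import numpy as np

def next_dose(dose_prev, dph_prev, k=0.8, dph_target=0.1,
              d_min=1e-3, d_max=0.5):
    """Sketch of a variable titrant-dose rule for multipoint titration.

    dose_prev: previous titrant dose (mL); dph_prev: pH change it caused.
    The dose is scaled by the locally observed buffering (dose per unit pH
    change), so flat regions of the curve receive large doses and steep
    regions receive small ones. Gains and limits are illustrative, not the
    paper's published values.
    """
    buffering = dose_prev / max(abs(dph_prev), 1e-6)  # mL per pH unit, locally
    dose = k * dph_target * buffering                  # aim at a fixed pH step
    return float(np.clip(dose, d_min, d_max))

# Example: the last 0.05 mL moved the pH by 0.32 units (a steep region),
# so the next dose is reduced accordingly.
print(next_dose(dose_prev=0.05, dph_prev=0.32))
```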

  10. Assessing the 2D Models of Geo-technological Variables in a Block of a Cuban Laterite Ore Body. Part IV Local Polynomial Method

    Directory of Open Access Journals (Sweden)

    Arístides Alejandro Legrá-Lobaina

    2016-10-01

    Full Text Available The local polynomial method is based on the assumption that the value of a variable U at any coordinate point P can be estimated through local polynomials fitted to nearby data. This investigation analyzes the possibility of modeling, in two dimensions, the thickness and the nickel, iron and cobalt concentrations in a block of Cuban laterite ores using the mentioned method. It also analyzes whether the results of modeling these variables depend on the estimation method that is used.
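
    As a concrete illustration of the idea, the sketch below estimates a variable U at a point P by a locally weighted first-degree polynomial fit of nearby samples. The Gaussian kernel, the bandwidth, and the synthetic drill-hole data are assumptions for the example; the paper's exact weighting scheme is not given in the record:

```python
import numpy as np

def local_poly_estimate(p, coords, values, bandwidth=100.0):
    """Local-linear estimate of a variable (e.g., thickness or Ni grade)
    at point p from samples at coords (n, 2) with measured values (n,)."""
    d = np.linalg.norm(coords - p, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)             # Gaussian kernel weights
    X = np.column_stack([np.ones(len(coords)), coords - p])
    sw = np.sqrt(w)                               # weighted least squares
    beta, *_ = np.linalg.lstsq(X * sw[:, None], values * sw, rcond=None)
    return beta[0]                                # fitted value at p

# Synthetic example: 200 sample points with a gentle spatial trend in Ni grade.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(200, 2))         # sample coordinates (m)
ni = 1.2 + 0.001 * pts[:, 0] + 0.05 * rng.normal(size=200)
print(local_poly_estimate(np.array([500.0, 500.0]), pts, ni))
```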

  11. Approximate Solutions of Delay Differential Equations with Constant and Variable Coefficients by the Enhanced Multistage Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    D. Olvera

    2015-01-01

    Full Text Available We expand the application of the enhanced multistage homotopy perturbation method (EMHPM) to solve delay differential equations (DDEs) with constant and variable coefficients. The EMHPM is based on a sequence of subintervals that provide approximate solutions requiring less CPU time than solutions computed with the MATLAB dde23 numerical integration algorithm. To address the accuracy of our proposed approach, we examine the solutions of several DDEs having constant and variable coefficients, finding good agreement with the corresponding numerical integration solutions.
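
    The EMHPM itself is not reproduced in the record. For readers who want a reference solution to compare against (the role dde23 plays in the abstract), the classical method of steps reduces a constant-delay DDE to a sequence of ODEs over subintervals of length tau; a minimal Python sketch, offered as a generic reference rather than the authors' method, is:

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_dde_steps(f, phi, tau, t_end, pts_per_seg=200):
    """Method-of-steps reference solver (not the EMHPM) for the scalar DDE
    y'(t) = f(t, y(t), y(t - tau)), with history y(t) = phi(t) for t <= 0."""
    ts, ys = np.array([0.0]), np.array([phi(0.0)])
    def delayed(t):  # delayed state, from the history or earlier segments
        return phi(t - tau) if t - tau <= 0 else np.interp(t - tau, ts, ys)
    t0 = 0.0
    while t0 < t_end:
        t1 = min(t0 + tau, t_end)  # on each segment the delay term is known
        sol = solve_ivp(lambda t, y: [f(t, y[0], delayed(t))], (t0, t1),
                        [ys[-1]], t_eval=np.linspace(t0, t1, pts_per_seg),
                        max_step=tau / 50)
        ts = np.concatenate([ts, sol.t[1:]])
        ys = np.concatenate([ys, sol.y[0][1:]])
        t0 = t1
    return ts, ys

# Example DDE with constant coefficients: y'(t) = -y(t - 1), y(t) = 1 for t <= 0.
ts, ys = solve_dde_steps(lambda t, y, yd: -yd, lambda t: 1.0, tau=1.0, t_end=5.0)
```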

  12. Phenology and growth adjustments of oil palm (Elaeis guineensis) to photoperiod and climate variability.

    Science.gov (United States)

    Legros, S; Mialet-Serra, I; Caliman, J-P; Siregar, F A; Clément-Vidal, A; Dingkuhn, M

    2009-11-01

    Oil palm flowering and fruit production show seasonal maxima whose causes are unknown. Drought periods confound these rhythms, making it difficult to analyse or predict dynamics of production. The present work aims to analyse phenological and growth responses of adult oil palms to seasonal and inter-annual climatic variability. Two oil palm genotypes planted in a replicated design at two sites in Indonesia underwent monthly observations during 22 months in 2006-2008. Measurements included growth of vegetative and reproductive organs, morphology and phenology. Drought was estimated from climatic water balance (rainfall minus potential evapotranspiration) and simulated fraction of transpirable soil water. Production history of the same plants for 2001-2005 was used for inter-annual analyses. Drought was absent at the equatorial Kandista site (0°55′N) but the Batu Mulia site (3°12′S) had a dry season with variable severity. Vegetative growth and leaf appearance rate fluctuated with drought level. Yield of fruit, a function of the number of female inflorescences produced, was negatively correlated with photoperiod at Kandista. Dual annual maxima were observed, supporting a recent theory of circadian control. The photoperiod-sensitive phases were estimated at 9 (or 9 + 12 × n) months before bunch maturity for a given phytomer. The main sensitive phase for drought effects was estimated at 29 months before bunch maturity, presumably associated with inflorescence sex determination. It is assumed that seasonal peaks of flowering in oil palm are controlled, even near the equator, by the photoperiod response within a phytomer. These patterns are confounded with drought effects that affect flowering (yield) with a long time lag. The resulting dynamics are complex, but if the present results are confirmed it will be possible to predict them with models.

  13. Instrumental variables estimates of peer effects in social networks.

    Science.gov (United States)

    An, Weihua

    2015-03-01

    Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, etc. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violations of exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimation and imprecise estimation of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.
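
    The core estimator behind such an analysis is two-stage least squares. The sketch below shows the mechanics on synthetic data; the variable names are illustrative, and the paper's design-based instruments (constructed from network structure) are not reproduced here:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z, controls):
    """Minimal 2SLS: instrument the endogenous peer variable, then regress."""
    n = len(y)
    ones = np.ones((n, 1))
    Z = np.column_stack([ones, z, controls])        # instruments + exogenous
    g1, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ g1                                  # stage 1: fitted values
    X2 = np.column_stack([ones, x_hat, controls])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)   # stage 2
    return beta[1]                                  # peer-effect estimate

# Synthetic check: true peer effect 0.4; the confounder u biases naive OLS.
rng = np.random.default_rng(0)
n = 1000
c = rng.normal(size=(n, 1))                         # observed control
z = rng.integers(0, 2, n).astype(float)             # instrument
u = rng.normal(size=n)                              # unobserved confounder
friend = 0.5 * z + 0.5 * u + rng.normal(size=n)     # endogenous peer behavior
y = 0.4 * friend + u + c[:, 0] + rng.normal(size=n)
print(two_stage_least_squares(y, friend, z, c))     # close to 0.4
```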

  14. Application of the vibration method for damage identification of a beam with a variable cross-sectional area

    Directory of Open Access Journals (Sweden)

    Zamorska Izabela

    2018-01-01

    Full Text Available The subject of the paper is an application of the non-destructive vibration method for identifying the location of two cracks occurring in a beam. The vibration method is based on knowledge of a certain number of vibration frequencies of an undamaged element and the knowledge of the same number of vibration frequencies of an element with a defect. The analyzed beam, with a variable cross-sectional area, has been described according to the Bernoulli-Euler theory. To determine the values of free vibration frequencies the analytical solution, with the help of the Green’s function method, has been used.

  15. Spatial Variability of Geriatric Depression Risk in a High-Density City: A Data-Driven Socio-Environmental Vulnerability Mapping Approach.

    Science.gov (United States)

    Ho, Hung Chak; Lau, Kevin Ka-Lun; Yu, Ruby; Wang, Dan; Woo, Jean; Kwok, Timothy Chi Yui; Ng, Edward

    2017-08-31

    Previous studies found a relationship between geriatric depression and social deprivation. However, most studies did not include environmental factors in the statistical models, introducing bias into estimates of geriatric depression risk, because the urban environment was found to have significant associations with mental health. We developed a cross-sectional study with a binomial logistic regression to examine the geriatric depression risk of a high-density city based on five social vulnerability factors and four environmental measures. We constructed a socio-environmental vulnerability index by including the significant variables to map the geriatric depression risk in Hong Kong, a high-density city characterized by a compact urban environment and high-rise buildings. Crude and adjusted odds ratios (ORs) of the variables were significantly different, indicating that both social and environmental variables should be included as confounding factors. In the comprehensive model controlled for all confounding factors, older adults with lower education had the highest geriatric depression risks (OR: 1.60 (1.21, 2.12)). A higher percentage of residential area and greater variation in building height within the neighborhood also contributed to geriatric depression risk in Hong Kong, while average building height had a negative association with geriatric depression risk. In addition, the socio-environmental vulnerability index showed that higher scores were associated with higher geriatric depression risk at the neighborhood scale. The results of the mapping and the cross-sectional model suggested that geriatric depression risk was associated with a compact living environment with low socio-economic conditions in historical urban areas in Hong Kong. In conclusion, our study found a significant difference in geriatric depression risk between unadjusted and adjusted models, suggesting the importance of including environmental factors in estimating geriatric depression risk. We also
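
    The modelling step described here, a binomial logistic regression whose exponentiated coefficients give adjusted odds ratios, can be sketched as follows. The variable names and data are synthetic stand-ins, not the study's measures:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_education": rng.integers(0, 2, n).astype(float),  # social factor
    "living_alone": rng.integers(0, 2, n).astype(float),   # social factor
    "pct_residential": rng.uniform(0, 1, n),               # environmental
    "bldg_height_var": rng.uniform(0, 1, n),               # environmental
    "mean_bldg_height": rng.uniform(0, 1, n),              # environmental
})
# Synthetic outcome with built-in education and residential-area effects.
lp = -1.5 + 0.47 * df["low_education"] + 0.8 * df["pct_residential"]
df["depressed"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(float)

X = sm.add_constant(df.drop(columns="depressed"))
fit = sm.Logit(df["depressed"], X).fit(disp=False)
print(np.exp(fit.params))        # adjusted odds ratios
print(np.exp(fit.conf_int()))    # 95% CIs on the OR scale
```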

  16. Observations of fast variable objects

    International Nuclear Information System (INIS)

    Alekseev, G.N.

    1978-01-01

    The problem of studying rapidly variable astronomical objects is considered. The basis of the method used in the experiment is a detailed photoelectric study of rapid variability along with spectroscopy of high time resolution. The power spectrum of the SS Cyg brightness oscillations and the autocorrelation function of the AX Mon brightness are analyzed as examples. To provide a reliable identification of the parameters of stellar active regions responsible for the rapid variability, an experiment using the 'synchronous spectroscopy' method is proposed. The method is based on the assumption that the random processes are stationary within a time scale of several hours. The block diagram of the experiment is described.

  17. Analysis of Within-Test Variability of Non-Destructive Test Methods to Evaluate Compressive Strength of Normal Vibrated and Self-Compacting Concretes

    Science.gov (United States)

    Nepomuceno, Miguel C. S.; Lopes, Sérgio M. R.

    2017-10-01

    Non-destructive tests (NDT) have been used in recent decades for the assessment of the in-situ quality and integrity of concrete elements. An important step in the application of NDT methods concerns the interpretation and validation of the test results. In general, interpretation of NDT results should involve three distinct phases leading to the development of conclusions: processing of collected data, analysis of within-test variability, and quantitative evaluation of the property under investigation. The analysis of within-test variability can provide valuable information, since it can be compared with the within-test variability expected for the NDT method in use, either to provide a measure of quality control or to detect the presence of abnormal circumstances during the in-situ application. This paper reports the analysis of the experimental results of within-test variability of NDT obtained for normal vibrated concrete and self-compacting concrete. The NDT reported include the surface hardness test, ultrasonic pulse velocity test, penetration resistance test, pull-off test, pull-out test and maturity test. The obtained results are discussed and conclusions are presented.

  18. Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.

    Science.gov (United States)

    Bell, Matthew C; Baum, William M

    2017-11-01

    Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.

  19. Confounding factors and genetic polymorphism in the evaluation of individual steroid profiling

    Science.gov (United States)

    Kuuranne, Tiia; Saugy, Martial; Baume, Norbert

    2014-01-01

    In the fight against doping, steroid profiling is a powerful tool to detect drug misuse with endogenous anabolic androgenic steroids. To establish sensitive and reliable models, the factors influencing profiling should be recognised. We performed an extensive literature review of the multiple factors that could influence the quantitative levels and ratios of endogenous steroids in urine matrix. For a comprehensive and scientific evaluation of the urinary steroid profile, it is necessary to define the target analytes as well as testosterone metabolism. The two main confounding factors, that is, endogenous and exogenous factors, are detailed to show the complex process of quantifying the steroid profile within WADA-accredited laboratories. Technical aspects are also discussed as they could have a significant impact on the steroid profile, and thus the steroid module of the athlete biological passport (ABP). The different factors impacting the major components of the steroid profile must be understood to ensure scientifically sound interpretation through the Bayesian model of the ABP. Not only should the statistical data be considered but also the experts in the field must be consulted for successful implementation of the steroidal module. PMID:24764553

  20. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    Science.gov (United States)

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
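
    Of the three selection methods named, VIP is the most compact to illustrate. The sketch below computes standard VIP scores from a fitted scikit-learn PLS model on synthetic "spectral" data, using the common VIP > 1 cutoff; the iPLS and UVE variants are not reproduced:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable Importance in Projection for a fitted univariate-y PLS model."""
    T = pls.transform(X)                     # (n, A) latent scores
    W = pls.x_weights_                       # (p, A) weight vectors
    q = pls.y_loadings_[0]                   # (A,) y-loadings
    p, A = W.shape
    ssy = (q ** 2) * np.einsum("ia,ia->a", T, T)   # y-variance per component
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    return np.sqrt(p * ((Wn ** 2) @ ssy) / ssy.sum())

# Synthetic stand-in for the FT-ICR data: 40 oils x 500 m/z variables,
# with TAN driven by the first five "acidic species" columns.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 500))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)
pls = PLSRegression(n_components=3).fit(X, y)
keep = vip_scores(pls, X) > 1.0             # retain informative variables
print(np.flatnonzero(keep)[:10])
```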

  1. Desire thinking as a confounder in the relationship between mindfulness and craving: Evidence from a cross-cultural validation of the Desire Thinking Questionnaire.

    Science.gov (United States)

    Chakroun-Baggioni, Nadia; Corman, Maya; Spada, Marcantonio M; Caselli, Gabriele; Gierski, Fabien

    2017-10-01

    Desire thinking and mindfulness have been associated with craving. The aim of the present study was to validate the French version of the Desire Thinking Questionnaire (DTQ) and to investigate the relationship between mindfulness, desire thinking and craving among a sample of university students. Four hundred and ninety six university students completed the DTQ and measures of mindfulness, craving and alcohol use. Results from confirmatory factor analyses showed that the two-factor structure proposed in the original DTQ exhibited suitable goodness-of-fit statistics. The DTQ also demonstrated good internal reliability, temporal stability and predictive validity. A set of linear regressions revealed that desire thinking had a confounding effect in the relationship between mindfulness and craving. The confounding role of desire thinking in the relationship between mindfulness and craving suggests that interrupting desire thinking may be a viable clinical option aimed at reducing craving. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Variable flexure-based fluid filter

    Science.gov (United States)

    Brown, Steve B.; Colston, Jr., Billy W.; Marshall, Graham; Wolcott, Duane

    2007-03-13

    An apparatus and method for filtering particles from a fluid comprises a fluid inlet, a fluid outlet, a variable size passage between the fluid inlet and the fluid outlet, and means for adjusting the size of the variable size passage for filtering the particles from the fluid. An inlet fluid flow stream is introduced to a fixture with a variable size passage. The size of the variable size passage is set so that the fluid passes through the variable size passage but the particles do not pass through the variable size passage.

  3. Methods for the Quasi-Periodic Variability Analysis in Blazars Y. Liu ...

    Indian Academy of Sciences (India)

    the variability analysis in blazars in optical and radio bands, to search for possible quasi-periodic signals. 2. Power spectral density (PSD). In statistical signal processing and physics, the power spectral density (PSD) is a positive real function of a frequency variable associated with a stationary stochastic process. Intuitively ...
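
    Because blazar monitoring data are unevenly sampled, the Lomb-Scargle periodogram is a standard way to estimate such a PSD; the sketch below is a generic illustration on synthetic data, not necessarily the estimator this article uses:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic unevenly sampled light curve with a ~50-day periodic component.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 2000, 300))          # observation epochs (days)
flux = np.sin(2 * np.pi * t / 50.0) + 0.5 * rng.normal(size=t.size)

freqs = np.linspace(0.001, 0.1, 2000)           # trial frequencies (1/day)
psd = lombscargle(t, flux - flux.mean(), 2 * np.pi * freqs, normalize=True)
print("candidate period (days):", 1.0 / freqs[np.argmax(psd)])
```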

  4. Detecting the quality of glycerol monolaurate: a method for using Fourier transform infrared spectroscopy with wavelet transform and modified uninformative variable elimination.

    Science.gov (United States)

    Chen, Xiaojing; Wu, Di; He, Yong; Liu, Shou

    2009-04-06

    Glycerol monolaurate (GML) products contain many impurities, such as lauric acid and glycerol. The GML content is an important quality indicator for GML production. A hybrid variable selection algorithm, a combination of wavelet transform (WT) technology and the modified uninformative variable elimination (MUVE) method, was proposed to extract useful information from Fourier transform infrared (FT-IR) transmission spectroscopy for the determination of GML content. FT-IR spectral data were first compressed by WT; the irrelevant variables in the compressed wavelet coefficients were then eliminated by MUVE. In the MUVE process, a simulated annealing (SA) algorithm was employed to search for the optimal cutoff threshold. After the WT-MUVE process, the variables for the calibration model were reduced from 7366 to 163. Finally, the retained variables were employed as inputs to a partial least squares (PLS) model to build the calibration model. For the prediction set, a correlation coefficient (r) of 0.9910 and a root mean square error of prediction (RMSEP) of 4.8617 were obtained. The prediction result was better than that of the PLS model with full-spectrum data. It was indicated that the proposed WT-MUVE method could not only make the prediction more accurate, but also make the calibration model more parsimonious. Furthermore, the reconstructed spectra represent the projection of the selected wavelet coefficients into the original domain, affording chemical interpretation of the predicted results. It is concluded that the FT-IR transmission spectroscopy technique with the proposed method is promising for the fast detection of GML content.
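
    A compact sketch of the two stages is given below: wavelet compression of each spectrum (with PyWavelets), followed by a UVE-style elimination in which variables whose regression-coefficient stability does not beat that of appended noise variables are dropped. The wavelet choice, decomposition level, and leave-one-out stability criterion are generic UVE conventions; the paper's simulated-annealing threshold search (MUVE) is not reproduced:

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def wt_compress(spectra, wavelet="db4", level=4):
    """WT step: represent each FT-IR spectrum by its wavelet coefficients."""
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level))
                     for s in spectra])

def uve_select(X, y, n_components=3):
    """UVE-style step: keep variables more 'stable' than appended noise."""
    n, p = X.shape
    rng = np.random.default_rng(0)
    Xa = np.hstack([X, 1e-10 * rng.normal(size=(n, p))])  # tiny noise block
    B = []
    for i in range(n):                                    # leave-one-out fits
        idx = np.r_[0:i, i + 1:n]
        pls = PLSRegression(n_components=n_components).fit(Xa[idx], y[idx])
        B.append(pls.coef_.ravel())
    B = np.array(B)
    stability = np.abs(B.mean(axis=0)) / (B.std(axis=0) + 1e-12)
    return stability[:p] > stability[p:].max()            # beat the worst noise

# Synthetic example: 30 samples x 512 wavelengths, signal in one band.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(30, 512))
y = 2.0 * spectra[:, 100] + 0.1 * rng.normal(size=30)
keep = uve_select(wt_compress(spectra), y)
print(keep.sum(), "wavelet coefficients retained")
```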

  5. Focus on variability : New tools to study intra-individual variability in developmental data

    NARCIS (Netherlands)

    van Geert, P; van Dijk, M

    2002-01-01

    In accordance with dynamic systems theory, we assume that variability is an important developmental phenomenon. However, the standard methodological toolkit of the developmental psychologist offers few instruments for the study of variability. In this article we will present several new methods that

  6. Measurement error, time lag, unmeasured confounding: Considerations for longitudinal estimation of the effect of a mediator in randomised clinical trials.

    Science.gov (United States)

    Goldsmith, K A; Chalder, T; White, P D; Sharpe, M; Pickles, A

    2018-06-01

    Clinical trials are expensive and time-consuming and so should also be used to study how treatments work, allowing for the evaluation of theoretical treatment models and refinement and improvement of treatments. These treatment processes can be studied using mediation analysis. Randomised treatment makes some of the assumptions of mediation models plausible, but the mediator-outcome relationship could remain subject to bias. In addition, mediation is assumed to be a temporally ordered longitudinal process, but estimation in most mediation studies to date has been cross-sectional and unable to explore this assumption. This study used longitudinal structural equation modelling of mediator and outcome measurements from the PACE trial of rehabilitative treatments for chronic fatigue syndrome (ISRCTN 54285094) to address these issues. In particular, autoregressive and simplex models were used to study measurement error in the mediator, different time lags in the mediator-outcome relationship, unmeasured confounding of the mediator and outcome, and the assumption of a constant mediator-outcome relationship over time. Results showed that allowing for measurement error and unmeasured confounding were important. Contemporaneous rather than lagged mediator-outcome effects were more consistent with the data, possibly due to the wide spacing of measurements. Assuming a constant mediator-outcome relationship over time increased precision.
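
    Schematically, the contrast examined here can be written as a longitudinal mediation model with autoregressive terms; the sketch below is generic (not the PACE analysis model itself), with unmeasured mediator-outcome confounding represented by correlated residuals:

```latex
% Generic autoregressive mediation sketch (illustrative, not the trial's model):
\begin{align}
  M_t &= \alpha_M\, M_{t-1} + a\, Z + \varepsilon_t, \\
  Y_t &= \alpha_Y\, Y_{t-1} + b\, M_t + c\, Z + u_t,
\end{align}
% where Z is randomised treatment and the product a b is the mediated effect.
% Replacing b M_t with b M_{t-1} gives a lagged rather than contemporaneous
% mediator-outcome effect, and allowing
% \operatorname{cov}(\varepsilon_t, u_t) \neq 0 represents unmeasured
% mediator-outcome confounding.
```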

  7. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    Directory of Open Access Journals (Sweden)

    Qihao Weng

    2013-03-01

    Full Text Available The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (occurring once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
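
    The runoff arithmetic underlying the model is compact enough to show directly: a composite CN is formed as an area-weighted combination of the V-I-S fractions, and direct runoff follows the standard SCS-CN relation. The per-cover CN values below are illustrative placeholders (the paper assigns them from vegetation and soil types):

```python
import numpy as np

def composite_cn(f_veg, f_imp, f_soil, cn_veg=61.0, cn_imp=98.0, cn_soil=77.0):
    """Area-weighted composite curve number from V-I-S fractions.
    Per-cover CN values are illustrative placeholders."""
    return f_veg * cn_veg + f_imp * cn_imp + f_soil * cn_soil

def scs_runoff(p_mm, cn):
    """Standard SCS-CN direct runoff (mm), with Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0                 # potential retention (mm)
    ia = 0.2 * s                             # initial abstraction
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

cn = composite_cn(f_veg=0.3, f_imp=0.6, f_soil=0.1)   # a dense urban pixel
print(scs_runoff(np.array([57.0, 81.0]), cn))         # the two storm scenarios
```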

  8. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
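
    The ES-K idea reduces to scoring every K-subset of columns by least squares, which is easy to sketch; the replica-exchange Monte Carlo and multiple-histogram machinery of AES-K is not reproduced here:

```python
import numpy as np
from itertools import combinations

def es_k(X, y, k):
    """K-sparse exhaustive search: rank every k-subset of predictors by its
    least-squares training error; the full ranking plays the role of a
    'density of states' over variable combinations."""
    results = []
    for subset in combinations(range(X.shape[1]), k):
        Xi = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
        mse = np.mean((y - Xi @ beta) ** 2)
        results.append((mse, subset))
    return sorted(results)

# Synthetic check: the true support {2, 7} should rank first for k = 2.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 12))
y = X[:, [2, 7]] @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=50)
print(es_k(X, y, k=2)[0])
```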

  9. Accounting for the Confound of Meninges in Segmenting Entorhinal and Perirhinal Cortices in T1-Weighted MRI.

    Science.gov (United States)

    Xie, Long; Wisse, Laura E M; Das, Sandhitsu R; Wang, Hongzhi; Wolk, David A; Manjón, Jose V; Yushkevich, Paul A

    2016-10-01

    Quantification of medial temporal lobe (MTL) cortices, including entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as for early diagnosis and monitoring of Alzheimer's disease. However, ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because of the adjacent meninges, which have similar intensity to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with a super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of the meninges is segmented as gray matter by existing algorithms but not by our algorithm. Cross-validation experiments demonstrate promising segmentation accuracy. Further, agreement between the volume and thickness measures from the proposed pipeline and those from the manual segmentations increases dramatically as a result of accounting for the confound of meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms compared with other techniques using T1w MRI. Although the performance of the proposed pipeline is inferior to that using T2-weighted MRI, which is optimized to image MTL sub-structures, the proposed pipeline could still provide important utility in analyzing many existing large datasets that only have T1w MRI available.

  10. Dynamic Variability of Isometric Action Tremor in Precision Pinching

    Directory of Open Access Journals (Sweden)

    Tim Eakin

    2012-01-01

    Full Text Available Evolutionary development of isometric force impulse frequencies, power, and the directional concordance of changes in oscillatory tremor during performance of a two-digit force regulation task was examined. Analyses compared a patient group having tremor confounding volitional force regulation with a control group having no neuropathological diagnosis. Dependent variables for tremor varied temporally and spatially, both within individual trials and across trials, across individuals, across groups, and between digits. Particularly striking findings were magnitude increases during approaches to cue markers and shifts in the concordance phase from pinching toward rigid sway patterns as the magnitude increased. Magnitudes were significantly different among trace line segments of the task and were characterized by differences in relative force required and by the task progress with respect to cue markers for beginning, reversing force change direction, or task termination. The main systematic differences occurred during cue marker approach and were independent of trial sequence order.

  11. A Method to Derive Monitoring Variables for a Cyber Security Test-bed of I and C System

    Energy Technology Data Exchange (ETDEWEB)

    Han, Kyung Soo; Song, Jae Gu; Lee, Joung Woon; Lee, Cheol Kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    In the IT field, monitoring techniques have been developed to protect the systems connected by networks from cyber attacks and incidents. For the development of monitoring systems for I and C cyber security, it is necessary to review the monitoring systems of the IT field and to derive cyber security-related monitoring variables from the proprietary operating information of the I and C systems. Tests for the development and application of these monitoring systems may cause adverse effects on the I and C systems. To analyze influences on the system and safely derive the intended variables, the construction of an I and C system test-bed should come first. This article proposes a method of deriving the variables that should be monitored through a monitoring system for cyber security as a part of an I and C test-bed. The surveillance features and the monitored variables of NMS (Network Management System), a monitoring technique in the IT field, are reviewed in Section 2. In Section 3, the monitoring variables for I and C cyber security are derived from the review of NMS and from an investigation of the information used in hacking techniques that can be practiced against I and C systems. The monitoring variables of NMS in the IT field and the information about the malicious behaviors used for hacking were derived as the variables expected to be monitored for I and C cyber security research. The derived monitoring variables were classified into the five functions of NMS for efficient management. For the cyber security of I and C systems, the vulnerabilities should be understood through penetration tests and similar assessments, and an assessment of influences on the actual system should be carried out. Thus, constructing a test-bed of I and C systems is necessary for a safety system in operation. In the future, it will be necessary to develop a logging and monitoring system for studies on the vulnerabilities of I and C systems with test-beds.

  12. A Method to Derive Monitoring Variables for a Cyber Security Test-bed of I and C System

    International Nuclear Information System (INIS)

    Han, Kyung Soo; Song, Jae Gu; Lee, Joung Woon; Lee, Cheol Kwon

    2013-01-01

    In the IT field, monitoring techniques have been developed to protect the systems connected by networks from cyber attacks and incidents. For the development of monitoring systems for I and C cyber security, it is necessary to review the monitoring systems of the IT field and to derive cyber security-related monitoring variables from the proprietary operating information of the I and C systems. Tests for the development and application of these monitoring systems may cause adverse effects on the I and C systems. To analyze influences on the system and safely derive the intended variables, the construction of an I and C system test-bed should come first. This article proposes a method of deriving the variables that should be monitored through a monitoring system for cyber security as a part of an I and C test-bed. The surveillance features and the monitored variables of NMS (Network Management System), a monitoring technique in the IT field, are reviewed in Section 2. In Section 3, the monitoring variables for I and C cyber security are derived from the review of NMS and from an investigation of the information used in hacking techniques that can be practiced against I and C systems. The monitoring variables of NMS in the IT field and the information about the malicious behaviors used for hacking were derived as the variables expected to be monitored for I and C cyber security research. The derived monitoring variables were classified into the five functions of NMS for efficient management. For the cyber security of I and C systems, the vulnerabilities should be understood through penetration tests and similar assessments, and an assessment of influences on the actual system should be carried out. Thus, constructing a test-bed of I and C systems is necessary for a safety system in operation. In the future, it will be necessary to develop a logging and monitoring system for studies on the vulnerabilities of I and C systems with test-beds.

  13. Remote sensing and avian influenza: A review of image processing methods for extracting key variables affecting avian influenza virus survival in water from Earth Observation satellites

    Science.gov (United States)

    Tran, Annelise; Goutard, Flavie; Chamaillé, Lise; Baghdadi, Nicolas; Lo Seen, Danny

    2010-02-01

    Recent studies have highlighted the potential role of water in the transmission of avian influenza (AI) viruses and the existence of often interacting variables that determine the survival rate of these viruses in water; the two main variables are temperature and salinity. Remote sensing has been used to map and monitor water bodies for several decades. In this paper, we review satellite image analysis methods used for water detection and characterization, focusing on the main variables that influence AI virus survival in water. Optical and radar imagery are useful for detecting water bodies at different spatial and temporal scales. Methods to monitor the temperature of large water surfaces are also available. Current methods for estimating other relevant water variables such as salinity, pH, turbidity and water depth are not presently considered to be effective.

  14. Interindividual variability in the dose-specific effect of dopamine on carotid chemoreceptor sensitivity to hypoxia.

    Science.gov (United States)

    Limberg, Jacqueline K; Johnson, Blair D; Holbein, Walter W; Ranadive, Sushant M; Mozer, Michael T; Joyner, Michael J

    2016-01-15

    Human studies use varying levels of low-dose (1-4 μg·kg(-1)·min(-1)) dopamine to examine peripheral chemosensitivity, based on its known ability to blunt carotid body responsiveness to hypoxia. However, the effect of dopamine on the ventilatory responses to hypoxia is highly variable between individuals. Thus we sought to determine 1) the dose-response relationship between dopamine and peripheral chemosensitivity, as assessed by the ventilatory response to hypoxia in a cohort of healthy adults, and 2) potential confounding cardiovascular responses at variable low doses of dopamine. Young, healthy adults (n = 30, age = 32 ± 1, 24 male/6 female) were given intravenous (iv) saline and a range of iv dopamine doses (1-4 μg·kg(-1)·min(-1)) prior to and throughout five hypoxic ventilatory response (HVR) tests. Subjects initially received iv saline, and after each HVR the dopamine infusion rate was increased by 1 μg·kg(-1)·min(-1). Tidal volume, respiratory rate, heart rate, blood pressure, and oxygen saturation were continuously measured. Dopamine significantly reduced HVR at all doses (P < 0.05). When subjects were grouped by baseline chemosensitivity, dopamine reduced HVR in the high group only (P < 0.05) and did not significantly reduce HVR in the low group (P > 0.05). Dopamine infusion also resulted in a reduction in blood pressure (3 μg·kg(-1)·min(-1)) and total peripheral resistance (1-4 μg·kg(-1)·min(-1)), driven primarily by subjects with low baseline chemosensitivity. In conclusion, we did not find a single dose of dopamine that elicited a nadir HVR in all subjects. Additionally, potential confounding cardiovascular responses occur with dopamine infusion, which may limit its usage. Copyright © 2016 the American Physiological Society.

  15. Statistical Analysis of Clinical Data on a Pocket Calculator, Part 2 Statistics on a Pocket Calculator, Part 2

    CERN Document Server

    Cleophas, Ton J

    2012-01-01

    The first part of this title contained all statistical tests relevant to starting clinical investigations, and included tests for continuous and binary data, power, sample size, multiple testing, variability, confounding, interaction, and reliability. The current Part 2 of this title reviews methods for handling missing data, manipulated data, multiple confounders, predictions beyond observation, uncertainty of diagnostic tests, and the problems of outliers. Also robust tests, non-linear modeling, goodness-of-fit testing, Bhattacharya models, item response modeling, superiority testing, variab

  16. The Relationship between Patient Satisfaction with Service Quality and Survival in Non-Small Cell Lung Cancer - Is Self-Rated Health a Potential Confounder?

    Directory of Open Access Journals (Sweden)

    Christopher G Lis

    other concerning your medical condition and treatment" (HR = 0.59; 95% CI: 0.36 to 0.94; p = 0.03). SRH appears to confound the PS-survival relationship in NSCLC. SRH should be used as a control/stratification variable in analyses involving PS as a predictor of clinical cancer outcomes.

  17. The Relationship between Patient Satisfaction with Service Quality and Survival in Non-Small Cell Lung Cancer - Is Self-Rated Health a Potential Confounder?

    Science.gov (United States)

    Lis, Christopher G; Patel, Kamal; Gupta, Digant

    2015-01-01

    concerning your medical condition and treatment" (HR = 0.59; 95% CI: 0.36 to 0.94; p = 0.03). SRH appears to confound the PS-survival relationship in NSCLC. SRH should be used as a control/stratification variable in analyses involving PS as a predictor of clinical cancer outcomes.

  18. Influence of GSTM1 and GSTT1 genotypes and confounding factors on the frequency of sister chromatid exchange and micronucleus among road construction workers.

    Science.gov (United States)

    Kumar, Anil; Yadav, Anita; Giri, Shiv Kumar; Dev, Kapil; Gautam, Sanjeev Kumar; Gupta, Ranjan; Aggarwal, Neeraj

    2011-07-01

    In the present study, we have investigated the influence of polymorphism of the GSTM1 and GSTT1 genes and of confounding factors such as age, sex, exposure duration and consumption habits on cytogenetic biomarkers. Frequencies of sister chromatid exchanges (SCEs), high frequency cells (HFC) and cytokinesis-blocked micronuclei (CBMN) were evaluated in peripheral blood lymphocytes of 115 occupationally exposed road construction workers and 105 unexposed individuals. The distribution of null and positive genotypes of the glutathione S-transferase genes was evaluated by multiplex PCR among control and exposed subjects. Increased frequencies of CBMN (7.03±2.08), SCE (6.95±1.76) and HFC (6.28±1.69) were found in exposed subjects compared to referents (CBMN - 3.35±1.10; SCE - 4.13±1.30 and HFC - 3.98±1.56). These results were statistically significant at p<0.05. When the effect of confounding factors on the frequency of the studied biomarkers was evaluated, a strong positive interaction was found. Individuals having GSTM1 and GSTT1 null genotypes had higher frequencies of CBMN, SCE and HFC. The association between GSTM1 and GSTT1 genotypes and the studied biomarkers was statistically significant at p<0.05. Our findings suggest that individuals having the null type of GST are more susceptible to cytogenetic damage from occupational exposure regardless of confounding factors. There is a significant effect of polymorphism of these genes on cytogenetic biomarkers, which are considered early effects of genotoxic carcinogens. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale.

    Science.gov (United States)

    Voutilainen, Ari; Pitkäaho, Taina; Kvist, Tarja; Vehviläinen-Julkunen, Katri

    2016-04-01

    To study the effects of scale type (visual analogue scale vs. Likert), item order (systematic vs. random), item non-response and patient-related characteristics (age, gender, subjective health, need for assistance with filling out the questionnaire and length of stay) on the results of patient satisfaction surveys. Although patient satisfaction is one of the most intensely studied issues in the health sciences, research information about the effects of possible instrument-related confounding factors on patient satisfaction surveys is scant. A quasi-experimental design was employed. A non-randomized sample of 150 surgical patients was gathered to minimize possible alterations in care quality. Data were collected in May-September 2014 from one tertiary hospital in Finland using the Revised Humane Caring Scale instrument. New versions of the instrument were created for the present purposes. In these versions, items were either in a visual analogue format or Likert-scaled, in systematic or random order. The data were analysed using an analysis of covariance and a paired samples t-test. The visual analogue scale items were less vulnerable to bias from confounding factors than were the Likert-scaled items. The visual analogue scale also avoided the ceiling effect better than Likert and the time needed to complete the visual analogue scale questionnaire was 28% shorter than that needed to complete the Likert-scaled questionnaire. The present results supported the use of visual analogue scale rather than Likert scaling in patient satisfaction surveys and stressed the need to account for as many potential confounding factors as possible. © 2015 John Wiley & Sons Ltd.

  20. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    International Nuclear Information System (INIS)

    Bakosi, Jozsef; Ristorcelli, Raymond J.

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  1. Effect of process variables on the Drucker-Prager cap model and residual stress distribution of tablets estimated by the finite element method.

    Science.gov (United States)

    Hayashi, Yoshihiro; Otoguro, Saori; Miura, Takahiro; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2014-01-01

    A multivariate statistical technique was applied to clarify the causal correlation between variables in the manufacturing process and the residual stress distribution of tablets. Theophylline tablets were prepared according to a Box-Behnken design using the wet granulation method. Water amounts (X1), kneading time (X2), lubricant-mixing time (X3), and compression force (X4) were selected as design variables. The Drucker-Prager cap (DPC) model was selected as the method for modeling the mechanical behavior of pharmaceutical powders. Simulation parameters, such as Young's modulus, Poisson rate, internal friction angle, plastic deformation parameters, and initial density of the powder, were measured. Multiple regression analysis demonstrated that the simulation parameters were significantly affected by process variables. The constructed DPC models were fed into the analysis using the finite element method (FEM), and the mechanical behavior of pharmaceutical powders during the tableting process was analyzed using the FEM. The results of this analysis revealed that the residual stress distribution of tablets increased with increasing X4. Moreover, an interaction between X2 and X3 also had an effect on shear and the x-axial residual stress of tablets. Bayesian network analysis revealed causal relationships between the process variables, simulation parameters, residual stress distribution, and pharmaceutical responses of tablets. These results demonstrated the potential of the FEM as a tool to help improve our understanding of the residual stress of tablets and to optimize process variables, which not only affect tablet characteristics, but also are risks of causing tableting problems.

  2. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which is the simple and effective principle of 'survival of the fittest' from Darwin's natural evolution theory, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, binary matrix sampling (BMS) strategy that gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lower root mean squares error of cross validation (RMSECV). The frequency of each variable appearing in the best 10% sub-models is computed. The higher the frequency is, the more important the variable is. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
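
    A condensed sketch of the two procedures is given below: an EDF schedule shrinks the variable pool while BMS draws random 0/1 variable combinations, and the variables most frequent in the best 10% of sub-models (by cross-validated RMSE) survive each round. Population sizes, the EDF decay rate, and the PLS settings are illustrative choices, not the published tuning:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def vcpa(X, y, n_runs=10, n_subsets=100, keep_final=20):
    rng = np.random.default_rng(0)
    pool = np.arange(X.shape[1])
    for run in range(n_runs):
        # EDF: the number of variables kept decays exponentially per run.
        target = max(keep_final, int(X.shape[1] * np.exp(-run / 3.0)))
        scores, masks = [], []
        for _ in range(n_subsets):                 # BMS: random 0/1 subsets
            mask = rng.random(pool.size) < 0.5
            if mask.sum() < 2:
                continue
            ncomp = int(min(3, mask.sum() - 1))
            rmsecv = -cross_val_score(
                PLSRegression(n_components=ncomp), X[:, pool[mask]], y,
                scoring="neg_root_mean_squared_error", cv=5).mean()
            scores.append(rmsecv)
            masks.append(mask)
        best = np.argsort(scores)[: max(1, len(scores) // 10)]
        freq = np.sum([masks[i] for i in best], axis=0)   # MPA frequency count
        pool = pool[np.argsort(freq)[::-1][:target]]      # keep most frequent
        if pool.size <= keep_final:
            break
    return pool

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 100))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=60)
print(vcpa(X, y))            # should concentrate on the first five variables
```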

  3. Elastic Stress Analysis of Rotating Functionally Graded Annular Disk of Variable Thickness Using Finite Difference Method

    Directory of Open Access Journals (Sweden)

    Mohammad Hadi Jalali

    2018-01-01

    Full Text Available Elastic stress analysis of a rotating variable-thickness annular disk made of functionally graded material (FGM) is presented. Elasticity modulus, density, and thickness of the disk are assumed to vary radially according to a power-law function. Radial stress, circumferential stress, and radial deformation of the rotating FG annular disk of variable thickness with clamped-clamped (C-C), clamped-free (C-F), and free-free (F-F) boundary conditions are obtained using the numerical finite difference method, and the effects of the graded index, thickness variation, and rotating speed on the stresses and deformation are evaluated. It is shown that using FG material could decrease the value of radial stress and increase the radial displacement in a rotating thin disk. It is also demonstrated that increasing the rotating speed can strongly increase the stress in the FG annular disk.
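
    For reference, a representative formulation of this class of problems is given below; the power-law exponents and the equilibrium equation are the standard ones for variable-thickness rotating disks and are written here as assumptions rather than quotations from the paper:

```latex
% Assumed power-law grading of modulus, density and thickness:
\begin{equation}
  E(r) = E_0\Bigl(\frac{r}{b}\Bigr)^{n_1}, \qquad
  \rho(r) = \rho_0\Bigl(\frac{r}{b}\Bigr)^{n_2}, \qquad
  h(r) = h_0\Bigl(\frac{r}{b}\Bigr)^{n_3},
\end{equation}
% Equilibrium of a rotating variable-thickness disk (standard form):
\begin{equation}
  \frac{\mathrm{d}}{\mathrm{d}r}\bigl(h(r)\, r\, \sigma_r\bigr)
  - h(r)\, \sigma_\theta + \rho(r)\, h(r)\, \omega^{2} r^{2} = 0,
\end{equation}
% discretized radially by central finite differences and closed with the
% C-C, C-F or F-F boundary conditions named above.
```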

  4. Beat to beat variability in cardiovascular variables: noise or music?

    Science.gov (United States)

    Appel, M. L.; Berger, R. D.; Saul, J. P.; Smith, J. M.; Cohen, R. J.

    1989-01-01

    Cardiovascular variables such as heart rate, arterial blood pressure, stroke volume and the shape of electrocardiographic complexes all fluctuate on a beat to beat basis. These fluctuations have traditionally been ignored or, at best, treated as noise to be averaged out. The variability in cardiovascular signals reflects the homeodynamic interplay between perturbations to cardiovascular function and the dynamic response of the cardiovascular regulatory systems. Modern signal processing techniques provide a means of analyzing beat to beat fluctuations in cardiovascular signals, so as to permit a quantitative, noninvasive or minimally invasive method of assessing closed loop hemodynamic regulation and cardiac electrical stability. This method promises to provide a new approach to the clinical diagnosis and management of alterations in cardiovascular regulation and stability.

  5. Diffusion-weighted magnetic resonance imaging in the characterization of testicular germ cell neoplasms: Effect of ROI methods on apparent diffusion coefficient values and interobserver variability

    Energy Technology Data Exchange (ETDEWEB)

    Tsili, Athina C., E-mail: a_tsili@yahoo.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Ntorkou, Alexandra, E-mail: alexdorkou@hotmail.com [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Astrakas, Loukas, E-mail: astrakas@uoi.gr [Department of Medical Physics, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Xydis, Vasilis, E-mail: vxydis@cc.uoi.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Tsampalas, Stavros, E-mail: stamp@gmail.com [Department of Urology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Sofikitis, Nikolaos, E-mail: akrosnin@hotmail.com [Department of Urology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Argyropoulou, Maria I., E-mail: margyrop@cc.uoi.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece)

    2017-04-15

    Highlights: • Seminomas have a lower mean ADC compared to NSGCNs. • Round ROI is accurate in characterizing TGCNs. • ROI shape has no significant effect on interobserver variability. - Abstract: Introduction: To evaluate the difference in apparent diffusion coefficient (ADC) measurements obtained at diffusion-weighted (DW) magnetic resonance imaging with differently shaped regions of interest (ROIs) in testicular germ cell neoplasms (TGCNs), the diagnostic ability of differently shaped ROIs in differentiating seminomas from nonseminomatous germ cell neoplasms (NSGCNs), and the interobserver variability. Materials and methods: Thirty-three TGCNs were retrospectively evaluated. Patients underwent MR examinations, including DWI on a 1.5-T MR system. Two observers measured mean tumor ADCs using four distinct ROI methods: round, square, freehand and multiple small, round ROIs. The intraclass correlation coefficient was analyzed to assess interobserver variability. Statistical analysis was used to compare mean ADC measurements among observers, methods and histologic types. Results: All ROI methods showed excellent interobserver agreement, with excellent correlation (P < 0.001). Multiple, small ROIs provided the lowest mean ADC in TGCNs. Seminomas had a lower mean ADC compared to NSGCNs for each ROI method (P < 0.001). Round ROI proved the most accurate method in characterizing TGCNs. Conclusion: Interobserver agreement in ADC measurement is excellent, irrespective of the ROI shape. Multiple, small round ROIs and round ROI proved the more accurate methods for ADC measurement in the characterization of TGCNs and in the differentiation between seminomas and NSGCNs, respectively.

  6. Sequence variation does not confound the measurement of plasma PfHRP2 concentration in African children presenting with severe malaria

    Directory of Open Access Journals (Sweden)

    Ramutton Thiranut

    2012-08-01

    Full Text Available Background: Plasmodium falciparum histidine-rich protein PFHRP2 measurement is used widely for diagnosis, and more recently for severity assessment, in falciparum malaria. The Pfhrp2 gene is highly polymorphic, with deletion of the entire gene reported in both laboratory and field isolates. These issues potentially confound the interpretation of PFHRP2 measurements. Methods: Studies designed to detect deletion of Pfhrp2 and its paralog Pfhrp3 were undertaken with samples from patients in seven countries contributing to the largest hospital-based severe malaria trial (AQUAMAT). The quantitative relationship between sequence polymorphism and PFHRP2 plasma concentration was examined in samples from selected sites in Mozambique and Tanzania. Results: There was no evidence for deletion of either Pfhrp2 or Pfhrp3 in the 77 samples with the lowest PFHRP2 plasma concentrations across the seven countries. Pfhrp2 sequence diversity was very high, with no haplotypes shared among the 66 samples sequenced. There was no correlation between Pfhrp2 sequence length or repeat type and PFHRP2 plasma concentration. Conclusions: These findings indicate that sequence polymorphism is not a significant cause of variation in PFHRP2 concentration in plasma samples from African children. This justifies the further development of plasma PFHRP2 concentration as a method for assessing African children who may have severe falciparum malaria. The data also add to the existing evidence base supporting the use of rapid diagnostic tests based on PFHRP2 detection.

  7. Accounting for genetic and environmental confounds in associations between parent and child characteristics : a systematic review of children-of-twins studies

    OpenAIRE

    McAdams, Tom A; Neiderhiser, Jenae M; Rijsdijk, Fruhling V; Narusyte, Jurgita; Lichtenstein, Paul; Eley, Thalia C

    2014-01-01

    Parental psychopathology, parenting style, and the quality of intrafamilial relationships are all associated with child mental health outcomes. However, most research can say little about the causal pathways underlying these associations. This is because most studies are not genetically informative and are therefore not able to account for the possibility that associations are confounded by gene-environment correlation. That is, biological parents not only provide a rearing environment for th...

  8. Globally exponential stability and periodic solutions of CNNs with variable coefficients and variable delays

    International Nuclear Information System (INIS)

    Liu Haifei; Wang Li

    2006-01-01

    In this Letter, by using the inequality method and the Lyapunov functional method, we analyze the global exponential stability and the existence of periodic solutions of a class of cellular neural networks with delays and variable coefficients. Some simple and new sufficient conditions ensuring the existence, uniqueness, and global exponential stability of periodic solutions for cellular neural networks with variable coefficients and delays are obtained. In addition, one example is worked out to illustrate our theory.
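
    For context, delayed cellular neural networks of the type analyzed in this record (and its duplicate listing below) are commonly written in the following generic form, with variable coefficients c_i(t), a_ij(t), b_ij(t), variable delays τ_ij(t), activation functions f_j, and external inputs I_i(t). This is a standard textbook formulation, not necessarily the exact system of the Letter:

```latex
\frac{dx_i(t)}{dt} = -c_i(t)\,x_i(t)
  + \sum_{j=1}^{n} a_{ij}(t)\, f_j\big(x_j(t)\big)
  + \sum_{j=1}^{n} b_{ij}(t)\, f_j\big(x_j(t-\tau_{ij}(t))\big)
  + I_i(t), \qquad i = 1,\dots,n.
```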

  9. Globally exponential stability and periodic solutions of CNNs with variable coefficients and variable delays

    Energy Technology Data Exchange (ETDEWEB)

    Liu Haifei [School of Management and Engineering, Nanjing University, Nanjing 210093 (China)]. E-mail: hfliu80@126.com; Wang Li [School of Management and Engineering, Nanjing University, Nanjing 210093 (China)

    2006-09-15

    In this Letter, by using the inequality method and the Lyapunov functional method, we analyze the global exponential stability and the existence of periodic solutions of a class of cellular neural networks with delays and variable coefficients. Some simple and new sufficient conditions ensuring the existence, uniqueness, and global exponential stability of periodic solutions for cellular neural networks with variable coefficients and delays are obtained. In addition, one example is worked out to illustrate our theory.

  10. A probabilistic method for species sensitivity distributions taking into account the inherent uncertainty and variability of effects to estimate environmental risk.

    Science.gov (United States)

    Gottschalk, Fadri; Nowack, Bernd

    2013-01-01

    This article presents a method of probabilistically computing species sensitivity distributions (SSD) that is well-suited to cope with distinct data scarcity and variability. First, a probability distribution that reflects the uncertainty and variability of sensitivity is modeled for each species considered. These single species sensitivity distributions are then combined to create an SSD for a particular ecosystem. A probabilistic estimation of the risk is carried out by combining the probability of critical environmental concentrations with the probability of organisms being impacted negatively by these concentrations. To evaluate the performance of the method, we developed SSD and risk calculations for the aquatic environment exposed to triclosan. The case studies showed that the probabilistic results reflect the empirical information well, and the method provides a valuable alternative or supplement to more traditional methods for calculating SSDs based on averaging raw data and/or on using theoretical distributional forms. A comparison and evaluation with single SSD values (5th-percentile [HC5]) revealed the robustness of the proposed method. Copyright © 2012 SETAC.
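
    A minimal sketch of the probabilistic idea described above, assuming log-normal forms for both the per-species sensitivities and the environmental concentrations (the distribution choices, species count, and parameter values are invented for illustration; they are not the triclosan data of the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical per-species sensitivity distributions (log-normal EC50s, ug/L);
# each tuple is (mu, sigma) of the log-concentration for one species.
species_params = [(1.2, 0.6), (0.4, 0.9), (2.0, 0.5), (0.9, 1.1), (1.5, 0.7)]

# Step 1: one sensitivity distribution per species (uncertainty + variability).
per_species = [rng.lognormal(mu, sigma, n) for mu, sigma in species_params]

# Step 2: combine into an SSD by pooling; HC5 is its 5th percentile.
ssd = np.concatenate(per_species)
hc5 = np.percentile(ssd, 5)

# Step 3: probabilistic risk = P(environmental concentration > sensitivity),
# with an assumed exposure distribution (also log-normal here).
exposure = rng.lognormal(-0.5, 0.8, n)
risk = (exposure > rng.choice(ssd, size=n)).mean()

print(f"HC5 = {hc5:.2f} ug/L, P(exposure exceeds sensitivity) = {risk:.3f}")
```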

  11. Anxiety disorders are associated with reduced heart rate variability: A meta-analysis

    Directory of Open Access Journals (Sweden)

    John eChalmers

    2014-07-01

    Full Text Available Background: Anxiety disorders increase risk of future cardiovascular disease (CVD) and mortality, even after controlling for confounds including smoking, lifestyle, and socioeconomic status, and irrespective of a history of medical disorders. While impaired vagal function, indicated by reductions in heart rate variability (HRV), may be one mechanism linking anxiety disorders to CVD, prior studies have reported inconsistent findings, highlighting the need for meta-analysis. Method: Studies comparing resting-state HRV recordings in patients with an anxiety disorder as a primary diagnosis and healthy controls were considered for meta-analysis. Results: Meta-analyses were based on 36 articles, including 2086 patients with an anxiety disorder and 2294 controls. Overall, anxiety disorders were characterised by lower HRV (high frequency: Hedges' g = -0.29, 95% CI: -0.41 to -0.17, p < 0.001; time domain: Hedges' g = -0.45, 95% CI: -0.57 to -0.33, p < 0.001) than controls. Panic Disorder (n = 447), Post-Traumatic Stress Disorder (n = 192), Generalized Anxiety Disorder (n = 68), and Social Anxiety Disorder (n = 90), but not Obsessive Compulsive Disorder (n = 40), displayed reductions in high-frequency HRV relative to controls (all ps < .001). Conclusions: Anxiety disorders are associated with reduced HRV, findings associated with a small to moderate effect size. Findings have important implications for the future physical health and wellbeing of patients, highlighting a need for comprehensive cardiovascular risk reduction.
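
    Hedges' g, the effect-size metric pooled in this meta-analysis, is Cohen's d with a small-sample bias correction. A self-contained sketch (the group statistics below are illustrative placeholders, not the study's data):

```python
import numpy as np

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp             # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # bias-correction factor J
    return j * d

# Illustrative high-frequency HRV power (ms^2): patients vs. controls.
print(f"g = {hedges_g(520.0, 310.0, 2086, 610.0, 320.0, 2294):.2f}")
```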

  12. A modified variable-coefficient projective Riccati equation method and its application to (2 + 1)-dimensional simplified generalized Broer-Kaup system

    International Nuclear Information System (INIS)

    Liu Qing; Zhu Jiamin; Hong Bihai

    2008-01-01

    A modified variable-coefficient projective Riccati equation method is proposed and applied to a (2 + 1)-dimensional simplified and generalized Broer-Kaup system. It is shown that the method presented by Huang and Zhang [Huang DJ, Zhang HQ. Chaos, Solitons and Fractals 2005;23:601] is a special case of our method. The results obtained in this paper include many new formal solutions besides all the solutions found by Huang and Zhang.

  13. Influence of mercury exposure on blood pressure, resting heart rate and heart rate variability in French Polynesians: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Valera Beatriz

    2011-11-01

    Full Text Available Abstract Background: Populations whose diet is rich in seafood are highly exposed to contaminants such as mercury, which could affect cardiovascular risk factors. Objective: To assess the associations between mercury and blood pressure (BP), resting heart rate (HR), and HR variability (HRV) among French Polynesians. Methods: Data were collected among 180 adults (≥ 18 years) and 101 teenagers (12-17 years). HRV was measured using a two-hour ambulatory electrocardiogram (Holter) and BP was measured using a standardized protocol. The association between mercury and HRV and BP parameters was studied using analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Results: Among teenagers, the high frequency (HF) decreased between the 2nd and 3rd tertile (380 vs. 204 ms², p = 0.03) and a similar pattern was observed for the square root of the mean squared differences of successive R-R intervals (rMSSD) (43 vs. 30 ms, p = 0.005) after adjusting for confounders. In addition, the ratio of low to high frequency (LF/HF) increased between the 2nd and 3rd tertile (2.3 vs. 3.0, p = 0.04). Among adults, the standard deviation of R-R intervals (SDNN) tended to decrease between the 1st and 2nd tertile (84 vs. 75 ms, p = 0.069) after adjusting for confounders. Furthermore, diastolic BP tended to increase between the 2nd and 3rd tertile (86 vs. 91 mm Hg, p = 0.09). No significant difference was observed in resting HR or pulse pressure (PP). Conclusions: Mercury was associated with decreased HRV among French Polynesian teenagers, while no significant association was observed with resting HR, BP, or PP among teenagers or adults.

  14. A moving mesh method with variable relaxation time

    OpenAIRE

    Soheili, Ali Reza; Stockie, John M.

    2006-01-01

    We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time \\tau is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter \\tau. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...

  15. Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method.

    Directory of Open Access Journals (Sweden)

    Haoshi Zhang

    Full Text Available The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapping HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Since a time increment as short as 10 s would produce indented time courses of the four measures, a 1-min time increment (4-min overlap) was suggested for the analysis of mHRV in this study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and further investigation would be worthwhile.
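
    A minimal sketch of the windowing scheme described above: time-domain HRV measures computed over 5-minute windows that advance in 1-minute increments, so consecutive windows overlap by 4 minutes. The synthetic RR-interval series and the choice of SDNN/RMSSD as the tracked measures are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def momentary_hrv(rr_ms, window_s=300.0, step_s=60.0):
    """Slide a 5-min window over an RR-interval series (ms) in 1-min steps,
    returning (window start time [s], SDNN [ms], RMSSD [ms]) per window."""
    t = np.cumsum(rr_ms) / 1000.0          # beat times in seconds
    out = []
    start = 0.0
    while start + window_s <= t[-1]:
        rr = rr_ms[(t >= start) & (t < start + window_s)]
        if rr.size > 2:
            sdnn = rr.std(ddof=1)
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
            out.append((start, sdnn, rmssd))
        start += step_s
    return np.array(out)

# Synthetic RR series: ~800 ms beats with mild variability, ~20 minutes long.
rng = np.random.default_rng(0)
rr = 800 + 40 * np.sin(np.linspace(0, 8 * np.pi, 1500)) + rng.normal(0, 15, 1500)
for start, sdnn, rmssd in momentary_hrv(rr)[:3]:
    print(f"t={start:5.0f}s  SDNN={sdnn:5.1f} ms  RMSSD={rmssd:5.1f} ms")
```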

  16. Soil variability in engineering applications

    Science.gov (United States)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties. These can be measured by in-field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. On the contrary, the variability is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. The preceding spatial variations must be managed by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measurements used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) developed the first pioneering attempts to describe and manage the inherent variability in geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to fractal theory. In the same years, Vanmarcke (1983) proposed the random field theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM).

  17. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
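
    A toy illustration of the first screening stage described above (not the SCREEN code itself): rank many candidate inputs of an expensive model by the correlation of sampled inputs with the computed output, using relatively few runs. The stand-in "computer code" and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_inputs = 60, 12                 # few runs, many candidate inputs

# Sampled input combinations (stand-in for selected code-input sets).
X = rng.uniform(-1, 1, size=(n_runs, n_inputs))

# Stand-in "computer code": output depends strongly on x0, weakly on x3.
y = 3.0 * X[:, 0] + 0.5 * X[:, 3] ** 2 + rng.normal(0, 0.1, n_runs)

# Rank inputs by absolute correlation with the output (first screening stage;
# the full method follows with stagewise regression on the survivors).
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_inputs)])
ranking = np.argsort(-np.abs(corr))
print("inputs ranked by sensitivity-importance:", ranking[:5])
```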

  18. Pseudo-variables method to calculate HMA relaxation modulus through low-temperature induced stress and strain

    International Nuclear Information System (INIS)

    Canestrari, Francesco; Stimilli, Arianna; Bahia, Hussain U.; Virgili, Amedeo

    2015-01-01

    Highlights: • Proposal of a new method to analyze low-temperature cracking of bituminous mixtures. • Reliability of the relaxation modulus master curve modeling through Prony series. • Suitability of the pseudo-variables approach for a closed-form solution. - Abstract: Thermal cracking is a critical failure mode for asphalt pavements. The relaxation modulus is the major viscoelastic property that controls the development of thermally induced tensile stresses. Therefore, accurate determination of the relaxation modulus is fundamental for designing long-lasting pavements. This paper proposes a reliable analytical solution for constructing the relaxation modulus master curve by measuring the stress and strain thermally induced in asphalt mixtures. The solution, based on Boltzmann's Superposition Principle and pseudo-variables concepts, accounts for the time and temperature dependency of the modulus of bituminous materials, avoiding complex integral transformations. The applicability of the solution is demonstrated by testing a reference mixture using the Asphalt Thermal Cracking Analyzer (ATCA) device. By applying thermal loadings on restrained and unrestrained asphalt beams, ATCA allows the determination of several parameters, but is still unable to provide reliable estimations of relaxation properties. Without them the measurements from ATCA cannot be used in modeling of pavement behavior. Thus, the proposed solution successfully integrates the ATCA experimental data. The same methodology can be applied to all test methods that concurrently measure stress and strain. The statistical parameters used to evaluate the goodness of fit show strong correlation between theoretical and experimental results, demonstrating the accuracy of this mathematical approach
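
    For reference, the two ingredients named in the abstract are commonly written as follows: a Prony-series representation of the relaxation modulus, and Boltzmann's superposition integral relating thermally induced stress to the strain history. These are generic textbook forms; the paper's exact parameterization may differ:

```latex
E(t) = E_\infty + \sum_{i=1}^{N} E_i \, e^{-t/\rho_i},
\qquad
\sigma(t) = \int_{0}^{t} E(t-\xi)\,\frac{d\varepsilon(\xi)}{d\xi}\,d\xi ,
```

    where E_∞ is the long-term (equilibrium) modulus and E_i, ρ_i are the Prony coefficients and relaxation times fitted to the master curve.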

  19. DESIGNS FOR MIXTURE AND PROCESS VARIABLES APPLIED IN TABLET FORMULATIONS

    NARCIS (Netherlands)

    DUINEVELD, C. A. A.; Smilde, A. K.; Doornbos, D. A.

    1993-01-01

    Although there are several methods for the construction of a design for process variables and mixture variables, there are not very many methods which are suitable to combine mixture and process variables in one design. Some of the methods which are feasible will be shown. These methods will be

  20. Fourier transform methods for calculating action variables and semiclassical eigenvalues for coupled oscillator systems

    International Nuclear Information System (INIS)

    Eaker, C.W.; Schatz, G.C.; De Leon, N.; Heller, E.J.

    1984-01-01

    Two methods for calculating the good action variables and semiclassical eigenvalues for coupled oscillator systems are presented, both of which relate the actions to the coefficients appearing in the Fourier representation of the normal coordinates and momenta. The two methods differ in that one is based on the exact expression for the actions together with the EBK semiclassical quantization condition while the other is derived from the Sorbie--Handy (SH) approximation to the actions. However, they are also very similar in that the actions in both methods are related to the same set of Fourier coefficients and both require determining the perturbed frequencies in calculating actions. These frequencies are also determined from the Fourier representations, which means that the actions in both methods are determined from information entirely contained in the Fourier expansion of the coordinates and momenta. We show how these expansions can very conveniently be obtained from fast Fourier transform (FFT) methods and that numerical filtering methods can be used to remove spurious Fourier components associated with the finite trajectory integration duration. In the case of the SH based method, we find that the use of filtering enables us to relax the usual periodicity requirement on the calculated trajectory. Application to two standard Henon--Heiles models is considered and both are shown to give semiclassical eigenvalues in good agreement with previous calculations for nondegenerate and 1:1 resonant systems. In comparing the two methods, we find that although the exact method is quite general in its ability to be used for systems exhibiting complex resonant behavior, it converges more slowly with increasing trajectory integration duration and is more sensitive to the algorithm for choosing perturbed frequencies than the SH based method
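
    A minimal numerical sketch of the FFT step described above: Fourier-analyze a trajectory's coordinate time series, filter out spurious small components arising from the finite integration duration, and read off the dominant perturbed frequencies. The "trajectory" here is a synthetic two-frequency signal, not a Henon--Heiles integration:

```python
import numpy as np

# Synthetic "trajectory" coordinate: two incommensurate frequencies.
dt, n = 0.01, 2 ** 14
t = np.arange(n) * dt
q = 1.0 * np.cos(2 * np.pi * 0.70 * t) + 0.3 * np.cos(2 * np.pi * 1.13 * t)

# Fast Fourier transform of the coordinate time series.
coeffs = np.fft.rfft(q) / n
freqs = np.fft.rfftfreq(n, dt)

# Numerical filtering: zero out spurious components from finite duration.
amps = np.abs(coeffs)
amps[amps < 0.01 * amps.max()] = 0.0

# Surviving peaks cluster around the perturbed frequencies (0.70, 1.13 Hz).
print("recovered frequencies:", np.round(freqs[np.nonzero(amps)], 3))
```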

  1. Association of Body Mass Index with Depression, Anxiety and Suicide-An Instrumental Variable Analysis of the HUNT Study.

    Directory of Open Access Journals (Sweden)

    Johan Håkon Bjørngaard

    Full Text Available While high body mass index is associated with an increased risk of depression and anxiety, cumulative evidence indicates that it is a protective factor for suicide. The associations from conventional observational studies of body mass index with mental health outcomes are likely to be influenced by reverse causality or confounding by ill-health. In the present study, we investigated the associations between offspring body mass index and parental anxiety, depression, and suicide in order to avoid problems with reverse causality and confounding by ill-health. We used data from 32,457 mother-offspring and 27,753 father-offspring pairs from the Norwegian HUNT study. Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale, and suicide death from national registers. Associations between offspring and own body mass index and symptoms of anxiety and depression and suicide mortality were estimated using logistic and Cox regression. Causal effect estimates were obtained with a two-sample instrumental variable approach using offspring body mass index as an instrument for parental body mass index. Both own and offspring body mass index were positively associated with depression, while the results did not indicate any substantial association between body mass index and anxiety. Although precision was low, suicide mortality was inversely associated with own body mass index, and the results from the analysis using offspring body mass index supported these results. Adjusted odds ratios per standard deviation of body mass index from the instrumental variable analysis were 1.22 (95% CI: 1.05, 1.43) for depression and 1.10 (95% CI: 0.95, 1.27) for anxiety, and the instrumental-variable-estimated hazard ratio for suicide was 0.69 (95% CI: 0.30, 1.63). The present study's results indicate that suicide mortality is inversely associated with body mass index. We also found support for a positive association between body mass index and depression, but not
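
    The instrumental variable logic sketched above can be illustrated with a Wald-ratio estimator: regress the exposure on the instrument, regress the outcome on the instrument, and take the ratio of the two slopes (in a genuine two-sample design the slopes come from different samples). Everything below, including the data and effect sizes, is simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30_000

# Instrument: offspring BMI (z-scored); parental BMI is correlated with it
# through shared genes, here with an assumed first-stage slope of 0.25.
z = rng.normal(0, 1, n)
x = 0.25 * z + rng.normal(0, 1, n)             # parental BMI (exposure)
u = rng.normal(0, 1, n)                        # confounding by ill-health
y = 0.20 * x + 0.5 * u + rng.normal(0, 1, n)   # depression score (outcome)

def slope(a, b):
    """OLS slope of b on a (with intercept)."""
    return np.polyfit(a, b, 1)[0]

gamma = slope(z, x)   # first stage: instrument -> exposure
beta = slope(z, y)    # reduced form: instrument -> outcome
print(f"Wald IV estimate = {beta / gamma:.3f} (true effect 0.20)")
```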

  2. Climate Variability Structures Plant Community Dynamics in Mediterranean Restored and Reference Tidal Wetlands

    Directory of Open Access Journals (Sweden)

    Dylan E. Chapple

    2017-03-01

    Full Text Available In Mediterranean regions and other areas with variable climates, interannual weather variability may impact ecosystem dynamics, and by extension ecological restoration projects. Conditions at reference sites, which are often used to evaluate restoration projects, may also be influenced by weather variability, confounding interpretations of restoration outcomes. To better understand the influence of weather variability on plant community dynamics, we explore change in a vegetation dataset collected between 1990 and 2005 at a historic tidal wetland reference site and a nearby tidal wetland restoration project initiated in 1976 in California’s San Francisco (SF) Bay. To determine the factors influencing reference and restoration trajectories, we examine changes in plant community identity in relation to annual salinity levels in the SF Bay, annual rainfall, and tidal channel structure. Over the entire study period, both sites experienced significant directional change away from the 1990 community. Community change was accelerated following low salinity conditions that resulted from strong El Niño events in 1994–1995 and 1997–1998. Overall rates of change were greater at the restoration site and driven by a combination of dominant and sub-dominant species, whereas change at the reference site was driven by sub-dominant species. Sub-dominant species first appeared at the restoration site in 1996 and incrementally increased during each subsequent year, whereas sub-dominant species cover at the reference site peaked in 1999 and subsequently declined. Our results show that frequent, long-term monitoring is needed to adequately capture plant community dynamics in variable Mediterranean ecosystems and demonstrate the need for expanding restoration monitoring and timing restoration actions to match weather conditions.

  3. Optimization method to determine mass transfer variables in a PWR crud deposition risk assessment tool

    International Nuclear Information System (INIS)

    Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny

    2016-01-01

    A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach was to utilize a multilevel strategy that targets different model parameters: it first changes the major-order variables, the mass transfer inputs, and then calibrates the minor-order variables, the crud source terms, according to available plant data. In this manner, the mass transfer inputs are effectively treated as 'dependent' on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit, the difference between the runs being the number of model runs using BOA allowed for adjusting the crud source terms, thereby reducing the uncertainty in calibration. The results of the first case showed that the current best-estimate values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the run limit of BOA was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best-estimate values. (author)

  4. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    Energy Technology Data Exchange (ETDEWEB)

    Bhaskar, Roy, E-mail: imbhaskarall@gmail.com [Indian Institute of Technology (India); University of Connecticut, Farmington, CT (United States); Ghatak, Sobhendu [Indian Institute of Technology (India)

    2013-10-15

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincaré plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.

  5. Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients

    International Nuclear Information System (INIS)

    Bhaskar, Roy; Ghatak, Sobhendu

    2013-01-01

    Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincaré plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients
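
    The lagged Poincaré analysis used in the two records above plots each inter-beat interval against the interval m beats later and summarizes the point cloud by its dispersions across and along the identity line (SD1, SD2). A minimal sketch with a synthetic RR series; the lag choices and data are illustrative:

```python
import numpy as np

def lagged_poincare(rr_ms, lag=1):
    """SD1/SD2 of the Poincaré plot of RR[i] vs. RR[i+lag] (ms)."""
    x, y = rr_ms[:-lag], rr_ms[lag:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)  # across the identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)  # along the identity line
    return sd1, sd2

rng = np.random.default_rng(3)
rr = 820 + rng.normal(0, 35, 2000)              # synthetic RR intervals (ms)
for m in (1, 2, 5):                             # lagged plots at several lags
    sd1, sd2 = lagged_poincare(rr, lag=m)
    print(f"lag {m}: SD1={sd1:5.1f} ms  SD2={sd2:5.1f} ms  SD1/SD2={sd1/sd2:.2f}")
```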

  6. Design method of a power management strategy for variable battery capacities range-extended electric vehicles to improve energy efficiency and cost-effectiveness

    International Nuclear Information System (INIS)

    Du, Jiuyu; Chen, Jingfu; Song, Ziyou; Gao, Mingming; Ouyang, Minggao

    2017-01-01

    Energy management strategy and battery capacity are the primary factors determining the energy efficiency of range-extended electric buses (REEBs). To improve the energy efficiency of the REEBs developed by Tsinghua University, an optimal design method for a global-optimization-based strategy is investigated. It is real-time and adaptive to the variable traction battery capacities of series REEBs. For simulation, the physical models of the REEB and its key components are established. The optimal strategy is first extracted, via the power split ratio (PSR), from REEB simulation results obtained with a dynamic programming (DP) algorithm. The power distribution map is obtained by a series of simulations for variable battery capacity options. The control laws for developing the optimal strategy are achieved by cluster regression on the power distribution data. To verify the effect of the proposed energy management strategy, the characteristics of the powertrain, energy efficiency, operating cost, and computing time are ultimately analyzed. Simulation results show that the energy efficiency of the global-optimization-based strategy presented in this paper is similar to that of the DP strategy. The overall energy efficiency can therefore be significantly improved compared with that of the CDCS strategy, and operating costs can be substantially reduced. The feasibility of candidate control strategies is thereby assessed via the employment of variable parameters. - Highlights: • An analysis method for powertrain energy efficiency and power distribution is proposed. • The power distribution rules of the strategy with variable battery capacities are achieved. • A parametric method for the proposed PSR-RB strategy is presented. • The energy efficiency of the powertrain is analyzed by a flow analysis method. • The energy management strategy is global-optimization-based and real-time.

  7. Emittance measurements by variable quadrupole method

    International Nuclear Information System (INIS)

    Toprek, D.

    2005-01-01

    The beam emittance is a measure of both the beam size and the beam divergence, so its value cannot be measured directly. If the beam size is measured at different locations or under different focusing conditions, such that different parts of the phase-space ellipse are probed by the beam size monitor, the beam emittance can be determined. An emittance measurement can be performed by different methods. Here we consider the varying-quadrupole-setting method.
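
    In the standard quadrupole-scan formulation (a textbook form consistent with, but not quoted from, this record), the measured squared beam size downstream of the quadrupole is quadratic in the transfer-matrix elements, and the emittance follows from the fitted beam-matrix elements:

```latex
\sigma_{\mathrm{meas}}^{2}(k)
  = R_{11}^{2}(k)\,\sigma_{11}
  + 2R_{11}(k)R_{12}(k)\,\sigma_{12}
  + R_{12}^{2}(k)\,\sigma_{22},
\qquad
\varepsilon = \sqrt{\sigma_{11}\sigma_{22}-\sigma_{12}^{2}},
```

    where k is the quadrupole strength, R(k) is the transfer matrix from the quadrupole to the monitor, and σ_ij are the beam-matrix elements at the quadrupole entrance; fitting σ²_meas(k) over several settings k yields σ11, σ12, and σ22.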

  8. Method for assessing coal-floor water-inrush risk based on the variable-weight model and unascertained measure theory

    Science.gov (United States)

    Wu, Qiang; Zhao, Dekang; Wang, Yang; Shen, Jianjun; Mu, Wenping; Liu, Honglei

    2017-11-01

    Water inrush from coal-seam floors greatly threatens mining safety in North China and is a complex process controlled by multiple factors. This study presents a mathematical assessment system for coal-floor water-inrush risk based on the variable-weight model (VWM) and unascertained measure theory (UMT). In contrast to the traditional constant-weight model (CWM), which assigns a fixed weight to each factor, the VWM varies with the factor-state value. The UMT employs the confidence principle, which is more effective in ordered partition problems than the maximum membership principle adopted in earlier mathematical theories. The method is applied to the Datang Tashan Coal Mine in North China. First, eight main controlling factors are selected to construct the comprehensive evaluation index system. Subsequently, an incentive-penalty variable-weight model is built to calculate the variable weights of each factor. Then, the VWM-UMT model is established using the quantitative risk-grade division of each factor according to the UMT. On this basis, the risk of coal-floor water inrush in Tashan Mine No. 8 is divided into five grades. For comparison, the CWM is also adopted for the risk assessment, and a map of the differences between the two methods is obtained. Finally, verification against water-inrush points indicates that the VWM-UMT model is powerful and more feasible and reasonable. The model has great potential and practical significance for future engineering applications.
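
    The contrast with constant weights can be made concrete. In a common variable-weight synthesizing formulation (a generic form; the paper's exact incentive-penalty state functions may differ), the weight of factor j is rescaled by a state function of the current factor values and then renormalized:

```latex
w_j(x_1,\dots,x_m) =
\frac{w_j^{0}\, S_j(x_1,\dots,x_m)}{\sum_{k=1}^{m} w_k^{0}\, S_k(x_1,\dots,x_m)},
```

    where w_j^0 are the constant base weights and S_j is a state variable-weight function that rewards or penalizes extreme factor states, so a factor near a dangerous state can dominate the composite score even if its base weight is small.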

  9. Price variability and marketing method in non-ferrous metals: Slade's analysis revisited

    NARCIS (Netherlands)

    Gilbert, C.L.; Ferretti, F.

    2002-01-01

    We examine the impact of the pricing regime on price variability with reference to the non-ferrous metals industry. Theoretical arguments are ambiguous, but suggest that the extent of monopoly power is more important than the pricing regime as a determinant of variability. Slade (Quart. J. Econ. 106

  10. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion ... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...

  11. Confounding Problems in Multifactor AOV When Using Several Organismic Variables of Limited Reliability

    Science.gov (United States)

    Games, Paul A.

    1975-01-01

    A brief introduction is presented on how multiple regression and linear model techniques can handle data analysis situations that most educators and psychologists think of as appropriate for analysis of variance. (Author/BJG)

  12. Collective variables and dissipation

    International Nuclear Information System (INIS)

    Balian, R.

    1984-09-01

    This is an introduction to some basic concepts of non-equilibrium statistical mechanics. We emphasize in particular the relevant entropy relative to a given set of collective variables, the meaning of the projection method in the Liouville space, its use to establish the generalized transport equations for these variables, and the interpretation of dissipation in the framework of information theory

  13. Sources of variability in the determination by evaporation method of gross alpha activity in water samples

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, A.; Corbacho, J.A. [LARUEX, Caceres (Spain). Environmental Radioactivity Lab.

    2013-07-01

    Determining the gross alpha activity concentration of water samples is one way to screen for waters whose radionuclide content is so high that their consumption could imply surpassing the Total Indicative Dose as defined in European Directive 98/83/EC. One of the most commonly used methods to prepare the sources to measure gross alpha activity in water samples is desiccation. Its main advantages are the simplicity of the procedure, the low cost of source preparation, and the possibility of simultaneously determining the gross beta activity. The preparation of the source, the construction of the calibration curves, and the measurement procedure itself involve, however, various factors that may introduce sufficient variability into the results to significantly affect the screening process. Here we identify the main sources of this variability and propose specific procedures to follow in the desiccation process that will reduce the uncertainties and ensure that the result is indeed representative of the sum of the activities of the alpha emitters present in the sample. (orig.)

  14. Spatial Variability of Geriatric Depression Risk in a High-Density City: A Data-Driven Socio-Environmental Vulnerability Mapping Approach

    Directory of Open Access Journals (Sweden)

    Hung Chak Ho

    2017-08-01

    Full Text Available Previous studies found a relationship between geriatric depression and social deprivation. However, most studies did not include environmental factors in the statistical models, introducing a bias into estimates of geriatric depression risk, because the urban environment has been found to have significant associations with mental health. We developed a cross-sectional study with a binomial logistic regression to examine the geriatric depression risk of a high-density city based on five social vulnerability factors and four environmental measures. We constructed a socio-environmental vulnerability index by including the significant variables to map the geriatric depression risk in Hong Kong, a high-density city characterized by a compact urban environment and high-rise buildings. Crude and adjusted odds ratios (ORs) of the variables were significantly different, indicating that both social and environmental variables should be included as confounding factors. For the comprehensive model controlled for all confounding factors, older adults with lower education had the highest geriatric depression risk (OR: 1.60 (1.21, 2.12)). A higher percentage of residential area and greater variation in building height within the neighborhood also contributed to geriatric depression risk in Hong Kong, while average building height had a negative association with geriatric depression risk. In addition, the socio-environmental vulnerability index showed that higher scores were associated with higher geriatric depression risk at the neighborhood scale. The results of the mapping and the cross-sectional model suggested that geriatric depression risk was associated with a compact living environment with low socio-economic conditions in historical urban areas of Hong Kong. In conclusion, our study found a significant difference in geriatric depression risk between unadjusted and adjusted models, suggesting the importance of including environmental factors in estimating geriatric depression risk.
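
    The crude-versus-adjusted contrast described above can be reproduced in a few lines: fit a logistic regression with and without the confounders and exponentiate the coefficients. A sketch on simulated data; the variable names and effect sizes are invented, not the study's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 5_000

low_edu = rng.binomial(1, 0.4, n)                     # social factor
build_var = rng.normal(0, 1, n) + 0.5 * low_edu       # environmental factor
logit = -2.0 + 0.4 * low_edu + 0.3 * build_var
depress = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # binary outcome

# Crude model: exposure only.
crude = sm.Logit(depress, sm.add_constant(low_edu)).fit(disp=0)
# Adjusted model: exposure plus the environmental confounder.
X = sm.add_constant(np.column_stack([low_edu, build_var]))
adj = sm.Logit(depress, X).fit(disp=0)

print(f"crude OR    = {np.exp(crude.params[1]):.2f}")
print(f"adjusted OR = {np.exp(adj.params[1]):.2f}")
```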

  15. Limitations of a metabolic network-based reverse ecology method for inferring host-pathogen interactions.

    Science.gov (United States)

    Takemoto, Kazuhiro; Aie, Kazuki

    2017-05-25

    Host-pathogen interactions are important in a wide range of research fields. Given the importance of metabolic crosstalk between hosts and pathogens, a metabolic network-based reverse ecology method was proposed to infer these interactions. However, the validity of this method remains unclear because of the various explanations presented and the influence of potentially confounding factors that have thus far been neglected. We re-evaluated the importance of the reverse ecology method for evaluating host-pathogen interactions while statistically controlling for confounding effects, using oxygen requirement, genome, metabolic network, and phylogeny data. Our data analyses showed that host-pathogen interactions were more strongly influenced by genome size, primary network parameters (e.g., number of edges), oxygen requirement, and phylogeny than by the reverse ecology-based measures. These results indicate the limitations of the reverse ecology method; however, they do not discount the importance of adopting reverse ecology approaches altogether. Rather, we highlight the need to develop more suitable methods for inferring host-pathogen interactions and to conduct more careful examinations of the relationships between metabolic networks and host-pathogen interactions.

  16. A New Multidisciplinary Design Optimization Method Accounting for Discrete and Continuous Variables under Aleatory and Epistemic Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong-Zhong Huang

    2012-02-01

    Full Text Available Various uncertainties are inevitable in complex engineered systems and must be carefully treated in design activities. Reliability-Based Multidisciplinary Design Optimization (RBMDO) has been receiving increasing attention in the past decades to facilitate designing fully coupled systems while also achieving a desired reliability considering uncertainty. In this paper, a new formulation of multidisciplinary design optimization, namely RFCDV (random/fuzzy/continuous/discrete variables) Multidisciplinary Design Optimization (RFCDV-MDO), is developed within the framework of Sequential Optimization and Reliability Assessment (SORA) to deal with multidisciplinary design problems in which both aleatory and epistemic uncertainties are present. In addition, a hybrid discrete-continuous algorithm is put forth to efficiently solve problems in which both discrete and continuous design variables exist. The effectiveness and computational efficiency of the proposed method are demonstrated via a mathematical problem and a pressure vessel design problem.

  17. The impact of obstructive sleep apnea variability measured in-lab versus in-home on sample size calculations

    Directory of Open Access Journals (Sweden)

    Levendowski Daniel

    2009-01-01

    Full Text Available Abstract Background: When conducting a treatment intervention, it is assumed that variability associated with measurement of the disease can be controlled sufficiently to reasonably assess the outcome. In this study we investigate the variability of the Apnea-Hypopnea Index obtained by polysomnography and by in-home portable recording in untreated mild to moderate obstructive sleep apnea (OSA) patients at a four- to six-month interval. Methods: Thirty-seven adult patients serving as placebo controls underwent a baseline polysomnography and in-home sleep study followed by a second set of studies under the same conditions. The polysomnography studies were acquired and scored at three independent American Academy of Sleep Medicine accredited sleep laboratories. The in-home studies were acquired by the patient and scored using validated auto-scoring algorithms. The initial in-home study was conducted on average two months prior to the first polysomnography; the follow-up polysomnography and in-home studies were conducted approximately five to six months after the initial polysomnography. Results: When comparing the test-retest Apnea-Hypopnea Index (AHI) and apnea index (AI), the in-home results were more highly correlated (r = 0.65 and 0.68) than the comparable PSG results (r = 0.56 and 0.58). The in-home results provided approximately 50% less test-retest variability than the comparable polysomnography AHI and AI values. Both the overall polysomnography AHI and AI showed a substantial bias toward increased severity upon retest (8 and 6 events/hr, respectively) while the in-home bias was essentially zero. The in-home percentage of time supine showed a better correlation compared to polysomnography (r = 0.72 vs. 0.43). Patients biased toward more time supine during the initial polysomnography; no trends in time supine for in-home studies were noted. Conclusion: Night-to-night variability in sleep-disordered breathing can be a confounding factor in assessing
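
    The title's link between measurement variability and sample size can be made explicit with the standard two-sample formula n = 2((z_{1-α/2} + z_{1-β})·σ/δ)² per group, where σ is the test-retest standard deviation of the AHI change and δ the treatment effect to detect. A sketch; the σ values below are illustrative placeholders, not the study's estimates:

```python
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Two-sample sample size for detecting a mean AHI change of delta."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# Illustrative test-retest SDs of AHI change (events/hr): PSG vs. in-home.
for label, sigma in [("PSG", 12.0), ("in-home", 8.5)]:
    print(f"{label:8s}: n per group ~ {n_per_group(sigma, delta=5.0):.0f}")
```

    Halving the measurement variability roughly quarters the required sample size, which is why the lower in-home test-retest variability reported above matters for trial design.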

  18. Gaia DR2 documentation Chapter 7: Variability

    Science.gov (United States)

    Eyer, L.; Guy, L.; Distefano, E.; Clementini, G.; Mowlavi, N.; Rimoldini, L.; Roelens, M.; Audard, M.; Holl, B.; Lanzafame, A.; Lebzelter, T.; Lecoeur-Taïbi, I.; Molnár, L.; Ripepi, V.; Sarro, L.; Jevardat de Fombelle, G.; Nienartowicz, K.; De Ridder, J.; Juhász, Á.; Molinaro, R.; Plachy, E.; Regibo, S.

    2018-04-01

    This chapter of the Gaia DR2 documentation describes the models and methods used on the 22 months of data to produce the Gaia variable star results for Gaia DR2. The variability processing and analysis was based mostly on the calibrated G and integrated BP and RP photometry. The variability analysis approach to the Gaia data has been described in Eyer et al. (2017), and the Gaia DR2 results are presented in Holl et al. (2018). Detailed methods on specific topics will be published in a number of separate articles. Variability behaviour in the colour magnitude diagram is presented in Gaia Collaboration et al. (2018c).

  19. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    Science.gov (United States)

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, like the homotopy perturbation method.
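
    For reference, the differential transform underlying the DTM maps a function to its scaled Taylor coefficients about t₀, and the solution is recovered from the inverse transform (standard definitions, not specific to this paper):

```latex
Y(k) = \frac{1}{k!}\left[\frac{d^{k}y(t)}{dt^{k}}\right]_{t=t_0},
\qquad
y(t) = \sum_{k=0}^{\infty} Y(k)\,(t-t_0)^{k},
```

    so a DDE reduced on each step to an ODE yields a recurrence for Y(k), and the truncated series (optionally improved by Laplace-Padé resummation) approximates y(t) on that step.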

  20. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic