Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
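The inflation mechanism this abstract describes is easy to reproduce. Below is a minimal sketch, not the authors' simulation code: assumed setup is linear regression, a standard-normal confounder, a median split, n = 500, and a true exposure effect of zero, so every rejection is a Type-I error.

```python
import math
import numpy as np

def ols_t_stat(X, y):
    """t statistic for the first column of X in an OLS fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / math.sqrt(cov[1, 1])  # coefficient on the exposure

rng = np.random.default_rng(0)
n, reps, crit = 500, 200, 1.96  # normal critical value is adequate at this dof
rejections = {"continuous": 0, "dichotomized": 0}
for _ in range(reps):
    c = rng.normal(size=n)      # confounder
    x = c + rng.normal(size=n)  # exposure, driven by the confounder
    y = c + rng.normal(size=n)  # outcome, driven by the confounder only
    c_bin = (c > np.median(c)).astype(float)  # median split of the confounder
    if abs(ols_t_stat(np.column_stack([x, c]), y)) > crit:
        rejections["continuous"] += 1
    if abs(ols_t_stat(np.column_stack([x, c_bin]), y)) > crit:
        rejections["dichotomized"] += 1

rate_cont = rejections["continuous"] / reps
rate_bin = rejections["dichotomized"] / reps
print(rate_cont, rate_bin)  # raw adjustment stays near 5%; the median split does not
```

With confounding this strong, the residual confounding left by the median split pushes the rejection rate toward 100%, consistent with the abstract's worst scenarios.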
Confounding of three binary-variables counterfactual model
Liu, Jingwei; Hu, Shuang
2011-01-01
Confounding of the three binary-variables counterfactual model is discussed in this paper. Depending on the relationship between the control variable and the covariate, we investigate three counterfactual models: one in which the control variable is independent of the covariate, one in which the control variable affects the covariate, and one in which the covariate affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions...
Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.
Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H
2016-06-01
Background Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results Methods to control for unmeasured confounding in the design phase of a study are case only designs (e.g., case-crossover, case-time control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include, negative control method, perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for interpretation of the study results.
Schroeder, Krista; Jia, Haomiao; Smaldone, Arlene
Propensity score (PS) methods are increasingly being employed by researchers to reduce bias arising from confounder imbalance when using observational data to examine intervention effects. The purpose of this study was to examine PS theory and methodology and compare application of three PS methods (matching, stratification, weighting) to determine which best improves confounder balance. Baseline characteristics of a sample of 20,518 school-aged children with severe obesity (of whom 1,054 received an obesity intervention) were assessed prior to PS application. Three PS methods were then applied to the data to determine which showed the greatest improvement in confounder balance between the intervention and control group. The effect of each PS method on the outcome variable (body mass index percentile change at one year) was also examined. SAS 9.4 and Comprehensive Meta-analysis statistical software were used for analyses. Prior to PS adjustment, the intervention and control groups differed significantly on seven of 11 potential confounders. PS matching removed all differences. PS stratification and weighting both removed one difference but created two new differences. Sensitivity analyses did not change these results. Body mass index percentile at 1 year decreased in both groups. The size of the decrease was smaller in the intervention group, and the estimate of the decrease varied by PS method. Selection of a PS method should be guided by insight from statistical theory and simulation experiments, in addition to observed improvement in confounder balance. For this data set, PS matching worked best to correct confounder imbalance. Because each method varied in correcting confounder imbalance, we recommend that multiple PS methods be compared for ability to improve confounder balance before implementation in evaluating treatment effects in observational data.
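The matching workflow this abstract compares can be sketched compactly. The following is a toy illustration under assumed conditions (a single simulated confounder, logistic propensity model, 1:1 nearest-neighbour matching with replacement), not the study's SAS pipeline; the balance diagnostic is the standardized mean difference the authors rely on.

```python
import numpy as np

def fit_logistic(X, y, iters=50):
    """Logistic regression via Newton-Raphson; returns coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1 - p)
        b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

def smd(a, b):
    """Standardized mean difference of a covariate between two groups."""
    return abs(a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

rng = np.random.default_rng(1)
n = 4000
c = rng.normal(size=n)                                           # confounder
t = (rng.random(n) < 1 / (1 + np.exp(-1.5 * c))).astype(float)   # treatment depends on c
b = fit_logistic(c[:, None], t)
ps = 1.0 / (1.0 + np.exp(-(b[0] + b[1] * c)))                    # estimated propensity score

# 1:1 nearest-neighbour matching on the propensity score, with replacement
treated = np.where(t == 1)[0]
controls = np.where(t == 0)[0]
matched = controls[np.argmin(np.abs(ps[treated][:, None] - ps[controls][None, :]), axis=1)]

before = smd(c[treated], c[controls])
after = smd(c[treated], c[matched])
print(before, after)  # matching shrinks the imbalance on c
```

A conventional rule of thumb treats an SMD below 0.1 as acceptable balance; the same check would be repeated for every measured confounder.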
Detection rates of geckos in visual surveys: Turning confounding variables into useful knowledge
Lardner, Bjorn; Rodda, Gordon H.; Yackel Adams, Amy A.; Savidge, Julie A.; Reed, Robert N.
2016-01-01
Transect surveys without some means of estimating detection probabilities generate population size indices prone to bias because survey conditions differ in time and space. Knowing what causes such bias can help guide the collection of relevant survey covariates, correct the survey data, anticipate situations where bias might be unacceptably large, and elucidate the ecology of target species. We used negative binomial regression to evaluate confounding variables for gecko (primarily Hemidactylus frenatus and Lepidodactylus lugubris) counts on 220-m-long transects surveyed at night, primarily for snakes, on 9,475 occasions. Searchers differed in gecko detection rates by up to a factor of six. The worst and best headlamps differed by a factor of at least two. Strong winds had a negative effect potentially as large as those of searchers or headlamps. More geckos were seen during wet weather conditions, but the effect size was small. Compared with a detection nadir during waxing gibbous (nearly full) moons above the horizon, we saw 28% more geckos during waning crescent moons below the horizon. A sine function suggested that we saw 24% more geckos at the end of the wet season than at the end of the dry season. Fluctuations on a longer timescale also were verified. Disturbingly, corrected data exhibited strong short-term fluctuations that covariates apparently failed to capture. Although some biases can be addressed with measured covariates, others will be difficult to eliminate as a significant source of error in long-term monitoring programs.
Bak, Thomas
2016-01-01
Within the current debates on cognitive reserve, cognitive aging and dementia, showing increasingly a positive effect of mental, social and physical activities on health in older age, bilingualism remains one of the most controversial issues. Some reasons for it might be social or even ideological. However, one of the most important genuine problems facing bilingualism research is the high number of potential confounding variables. Bilingual communities often differ from monolingual ones in a...
Adolescent sleep disturbance and school performance: the confounding variable of socioeconomics.
Pagel, James F; Forister, Natalie; Kwiatkowki, Carol
2007-02-15
To assess how selected socioeconomic variables known to affect school performance alter the association between reported sleep disturbance and poor school performance in a contiguous middle school/high school population. A school district/college IRB-approved questionnaire was distributed in science and health classes in middle school and high school. This questionnaire included a frequency-scaled pediatric sleep disturbance questionnaire for completion by students and a permission and demographic questionnaire for completion by parents (completed questionnaires n = 238, with 69.3% including GPA). Sleep complaints occurred at high frequency in this sample: sleep-onset insomnia (60% >1×/wk; 21.2% every night), sleepiness during the day (45.7% >1×/wk; 15.2% every night), and difficulty concentrating (54.6% >1×/wk; 12.9% always). Students with lower grade point averages (GPAs) were more likely to have restless/aching legs when trying to fall asleep, difficulty concentrating during the day, snoring every night, difficulty waking in the morning, sleepiness during the day, and falling asleep in class. Lower reported GPAs were significantly associated with lower household incomes. After statistically controlling for income, restless legs, sleepiness during the day, and difficulty with concentration continued to significantly affect school performance. This study provides additional evidence indicating that sleep disturbances occur at high frequencies in adolescents and significantly affect daytime performance, as measured by GPA. The socioeconomic variable of household income also significantly affects GPA. After statistically controlling for age and household income, the number and type of sleep variables noted to significantly affect GPA are altered but persistent in demonstrating significant effects on school performance.
Shalaumova, Yu V; Varaksin, A N; Panov, V G
2016-01-01
We analysed how to account for concomitant variables (confounders), which introduce systematic error into estimates of the effect of risk factors on an outcome variable. The analysis showed that standardization is an effective method for reducing the bias of risk estimates. We propose an algorithm implementing standardization based on stratification, which minimizes the difference between the distributions of confounders across risk-factor groups. To automate the standardization procedure, software was developed and made available on the website of the Institute of Industrial Ecology, UB RAS. Using this software and numerical modelling, we determined the conditions under which stratification-based standardization is applicable for the case of a normally distributed response and confounder with a linear relationship between them. Comparing the standardization results with statistical methods (logistic regression and analysis of covariance) on a problem in human ecology showed that the approaches agree closely when the applicability conditions of the statistical methods are met exactly; standardization is less sensitive to violations of those conditions.
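The core idea of standardization can be shown without any software package. A minimal illustration of direct standardization with hypothetical counts (not the Institute's stratification algorithm): two groups have different age structures, so their crude rates differ even though their stratum-specific rates are identical; weighting by a common reference distribution removes the artefact.

```python
# Direct standardization: stratum-specific rates weighted by a reference
# distribution of the confounder (here, age group). All numbers are invented.
ref_weights = {"young": 0.5, "old": 0.5}        # reference confounder distribution

# (events, persons) per stratum; the groups differ only in age structure
exposed   = {"young": (10, 1000), "old": (90, 3000)}
unexposed = {"young": (30, 3000), "old": (30, 1000)}

def crude_rate(group):
    events = sum(e for e, n in group.values())
    persons = sum(n for e, n in group.values())
    return events / persons

def standardized_rate(group, weights):
    return sum(weights[s] * e / n for s, (e, n) in group.items())

print(crude_rate(exposed), crude_rate(unexposed))          # differ: 0.025 vs 0.015
print(standardized_rate(exposed, ref_weights),
      standardized_rate(unexposed, ref_weights))           # equal: 0.02 and 0.02
```

The crude comparison suggests an effect where none exists; after standardizing both groups to the same age distribution, the rates coincide.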
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Hallager, Dennis Winge; Hansen, Lars Valentin; Dragsted, Casper Rokkjær; Peytz, Nina; Gehrchen, Martin; Dahl, Benny
2016-05-01
Cross-sectional analyses on a consecutive, prospective cohort. To evaluate the ability of the Scoliosis Research Society (SRS)-Schwab Adult Spinal Deformity Classification to group patients by widely used health-related quality-of-life (HRQOL) scores and examine possible confounding variables. The SRS-Schwab Adult Spinal Deformity Classification includes sagittal modifiers considered important for HRQOL and the clinical impact of the classification has been validated in patients from the International Spine Study Group database; however, equivocal results were reported for the Pelvic Tilt modifier and potential confounding variables were not evaluated. Between March 2013 and May 2014, all adult spinal deformity patients from our outpatient clinic with sufficient radiographs were prospectively enrolled. Analyses of HRQOL variance and post hoc analyses were performed for each SRS-Schwab modifier. Age, history of spine surgery, and aetiology of spinal deformity were considered potential confounders and their influence on the association between SRS-Schwab modifiers and aggregated Oswestry Disability Index (ODI) scores was evaluated with multivariate proportional odds regressions. P values were adjusted for multiple testing. Two hundred ninety-two of 460 eligible patients were included for analyses. The SRS-Schwab Classification significantly discriminated HRQOL scores between normal and abnormal sagittal modifier classifications. Individual grade comparisons showed equivocal results; however, Pelvic Tilt grade + versus + + did not discriminate patients according to any HRQOL score. All modifiers showed significant proportional odds for worse aggregated ODI scores with increasing grade levels and the effects were robust to confounding. However, age group and aetiology had individual significant effects. The SRS-Schwab sagittal modifiers reliably grouped patients graded 0 versus + / + + according to the most widely used HRQOL scores and the
Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn
2018-05-01
Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
Street, Nathan Lee
2017-01-01
Teacher value-added measures (VAM) are designed to provide information regarding teachers' causal impact on the academic growth of students while controlling for exogenous variables. While some researchers contend VAMs successfully and authentically measure teacher causality on learning, others suggest VAMs cannot adequately control for exogenous…
DEFF Research Database (Denmark)
Cichosz, Simon Lebech; Frystyk, Jan; Tarnow, Lise
2017-01-01
BACKGROUND: We have recently shown how the combination of information from continuous glucose monitor (CGM) and heart rate variability (HRV) measurements can be used to construct an algorithm for prediction of hypoglycemia in both bedbound and active patients with type 1 diabetes (T1D). Questions … with CGM and a Holter device while they performed normal daily activities. CAN was diagnosed using two cardiac reflex tests: (1) deep breathing and (2) orthostatic hypotension and end organ symptoms. Early CAN was defined as the presence of one abnormal reflex test and severe CAN was defined as two…
Predictive modelling using neuroimaging data in the presence of confounds.
Rao, Anil; Monteiro, Joao M; Mourao-Miranda, Janaina
2017-04-15
When training predictive models from neuroimaging data, we typically have available non-imaging variables such as age and gender that affect the imaging data but which we may be uninterested in from a clinical perspective. Such variables are commonly referred to as 'confounds'. In this work, we firstly give a working definition for confound in the context of training predictive models from samples of neuroimaging data. We define a confound as a variable which affects the imaging data and has an association with the target variable in the sample that differs from that in the population-of-interest, i.e., the population over which we intend to apply the estimated predictive model. The focus of this paper is the scenario in which the confound and target variable are independent in the population-of-interest, but the training sample is biased due to a sample association between the target and confound. We then discuss standard approaches for dealing with confounds in predictive modelling such as image adjustment and including the confound as a predictor, before deriving and motivating an Instance Weighting scheme that attempts to account for confounds by focusing model training so that it is optimal for the population-of-interest. We evaluate the standard approaches and Instance Weighting in two regression problems with neuroimaging data in which we train models in the presence of confounding, and predict samples that are representative of the population-of-interest. For comparison, these models are also evaluated when there is no confounding present. In the first experiment we predict the MMSE score using structural MRI from the ADNI database with gender as the confound, while in the second we predict age using structural MRI from the IXI database with acquisition site as the confound. Considered over both datasets we find that none of the methods for dealing with confounding gives more accurate predictions than a baseline model which ignores confounding, although
Extraction Methods, Variability Encountered in
Bodelier, P.L.E.; Nelson, K.E.
2014-01-01
Synonyms Bias in DNA extraction methods; Variation in DNA extraction methods Definition The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition
Vanderweele, Tyler J; Arah, Onyebuchi A
2011-01-01
Uncontrolled confounding in observational studies gives rise to biased effect estimates. Sensitivity analysis techniques can be useful in assessing the magnitude of these biases. In this paper, we use the potential outcomes framework to derive a general class of sensitivity-analysis formulas for outcomes, treatments, and measured and unmeasured confounding variables that may be categorical or continuous. We give results for additive, risk-ratio and odds-ratio scales. We show that these results encompass a number of more specific sensitivity-analysis methods in the statistics and epidemiology literature. The applicability, usefulness, and limits of the bias-adjustment formulas are discussed. We illustrate the sensitivity-analysis techniques that follow from our results by applying them to 3 different studies. The bias formulas are particularly simple and easy to use in settings in which the unmeasured confounding variable is binary with constant effect on the outcome across treatment levels.
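The simplest special case of the bias formulas this abstract derives, for a binary unmeasured confounder U with a constant risk ratio on the outcome and no U-exposure interaction, can be written directly. This is a hedged sketch of that one special case, with invented inputs; the paper's results cover far more general settings:

```python
def bias_factor_rr(rr_ud, p1, p0):
    """Bias factor on the risk-ratio scale for a binary unmeasured confounder U.
    rr_ud: risk ratio of U on the outcome; p1, p0: prevalence of U among the
    exposed and unexposed. Assumes no U-exposure interaction on the RR scale."""
    return (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)

def adjusted_rr(observed_rr, rr_ud, p1, p0):
    """Observed risk ratio corrected for the hypothesized unmeasured confounder."""
    return observed_rr / bias_factor_rr(rr_ud, p1, p0)

# If U is equally prevalent in both exposure groups it cannot confound: factor = 1.
print(bias_factor_rr(3.0, 0.4, 0.4))   # 1.0
# A strong, imbalanced U can largely explain away a moderate observed association.
print(adjusted_rr(1.5, 3.0, 0.6, 0.2)) # ≈ 0.95
```

Varying `rr_ud`, `p1`, and `p0` over plausible ranges gives the kind of sensitivity analysis the paper formalizes: how strong and how imbalanced would U have to be to move the adjusted estimate to the null?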
Carbonell, F; Bellec, P; Shmuel, A
2014-02-01
The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
Instrumental variable methods in comparative safety and effectiveness research.
Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian
2010-06-01
Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will be often underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
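The basic IV idea can be demonstrated on a toy linear model (a sketch under assumed conditions: a valid instrument, linear effects, one endogenous exposure; not a template for a real safety study). The ratio estimator below is algebraically equivalent to two-stage least squares with a single instrument.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 20000, 0.5                  # true causal effect of x on y
u = rng.normal(size=n)                # unmeasured confounder
z = rng.normal(size=n)                # instrument: affects x, not y directly, independent of u
x = z + u + rng.normal(size=n)
y = beta * x + u + rng.normal(size=n)

# Naive OLS slope is biased upward because u drives both x and y
ols = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]

# IV (Wald) estimator: ratio of reduced-form to first-stage covariances
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(round(ols, 2), round(iv, 2))  # OLS lands well above 0.5; IV recovers ~0.5
```

The price of this unbiasedness is variance: the IV estimate uses only the variation in `x` induced by `z`, which is why the abstract warns that IV analyses of very rare outcomes will often be underpowered.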
Measuring the surgical 'learning curve': methods, variables and competency.
Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran
2014-03-01
To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
Confounding adjustment through front-door blocking in longitudinal studies
Directory of Open Access Journals (Sweden)
Arvid Sjölander
2013-03-01
A common aim of epidemiological research is to estimate the causal effect of a particular exposure on a particular outcome. Towards this end, observed associations are often ‘adjusted’ for potential confounding variables. When the potential confounders are unmeasured, explicit adjustment becomes unfeasible. It has been demonstrated that causal effects can be estimated even in the presence of unmeasured confounding, utilizing a method called ‘front-door blocking’. In this paper we generalize this method to longitudinal studies. We demonstrate that the method of front-door blocking poses a number of challenging statistical problems, analogous to the famous problems associated with the method of ‘back-door blocking’.
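Before the longitudinal generalization, the point-treatment version of front-door blocking (Pearl's front-door formula) is easy to verify numerically. A sketch on a toy discrete model with invented probability tables: U confounds X and Y but is unmeasured, while a mediator M carries all of X's effect, so the formula recovers the interventional distribution from purely observational quantities.

```python
# Structural model satisfying the front-door criterion:
#   U -> X, U -> Y (U unmeasured);  X -> M -> Y (M mediates all of X's effect)
p_u = {0: 0.5, 1: 0.5}

def p_x_u(x, u):   # P(X=x | U=u)
    return (0.8 if u else 0.2) if x else (0.2 if u else 0.8)

def p_m_x(m, x):   # P(M=m | X=x): M depends on X only
    return (0.9 if x else 0.1) if m else (0.1 if x else 0.9)

p_y1_mu = {(0, 0): 0.1, (1, 0): 0.6, (0, 1): 0.4, (1, 1): 0.9}  # P(Y=1 | M, U)

# Observational quantities, with U marginalized out (all an analyst could see)
p_x = {x: sum(p_u[u] * p_x_u(x, u) for u in (0, 1)) for x in (0, 1)}

def p_y1_given_mx(m, x):
    num = sum(p_u[u] * p_x_u(x, u) * p_y1_mu[(m, u)] for u in (0, 1))
    return num / p_x[x]

def front_door(x):
    """P(Y=1 | do(X=x)) from observational data via the front-door formula."""
    return sum(p_m_x(m, x) * sum(p_x[xp] * p_y1_given_mx(m, xp) for xp in (0, 1))
               for m in (0, 1))

def truth(x):
    """Ground-truth interventional probability, using the full model including U."""
    return sum(p_u[u] * sum(p_m_x(m, x) * p_y1_mu[(m, u)] for m in (0, 1))
               for u in (0, 1))

for x in (0, 1):
    print(x, round(front_door(x), 6), round(truth(x), 6))  # the two columns agree
```

With these tables the formula reproduces the true interventional probabilities (0.30 and 0.70) exactly, despite U never being observed; the paper's contribution is extending this logic to time-varying exposures.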
Directed acyclic graphs (DAGs): an aid to assess confounding in dental research.
Merchant, Anwar T; Pitiphat, Waranuch
2002-12-01
Confounding, a special type of bias, occurs when an extraneous factor is associated with the exposure and independently affects the outcome. In order to get an unbiased estimate of the exposure-outcome relationship, we need to identify potential confounders, collect information on them, design appropriate studies, and adjust for confounding in data analysis. However, it is not always clear which variables to collect information on and adjust for in the analyses. Inappropriate adjustment for confounding can even introduce bias where none existed. Directed acyclic graphs (DAGs) provide a method to select potential confounders and minimize bias in the design and analysis of epidemiological studies. DAGs have been used extensively in expert systems and robotics. Robins (1987) introduced the application of DAGs in epidemiology to overcome shortcomings of traditional methods to control for confounding, especially as they related to unmeasured confounding. DAGs provide a quick and visual way to assess confounding without making parametric assumptions. We introduce DAGs, starting with definitions and rules for basic manipulation, stressing more on applications than theory. We then demonstrate their application in the control of confounding through examples of observational and cross-sectional epidemiological studies.
Directory of Open Access Journals (Sweden)
Correa Londoño Guillermo
2013-08-01
Full Text Available Part of the total variability in an experimental study can be explained by factors that are assigned and/or controlled by the researcher and that are of primary interest to him or her. Likewise, experiments usually involve factors that, despite their secondary character, also affect the response. The mechanism most commonly used to control the effect of secondary factors is blocking. There are, however, situations in which the secondary source of variation is recognized only after the experiment has started, and/or in which its levels do not form categories that would allow homogeneous experimental units to be grouped; in such cases, the use of covariates may be considered to serve the same objectives as blocking. To apply an adequate correction by means of analysis of covariance, two conditions must be satisfied: viability and pertinence. Viability refers to the possibility of explaining part of the variability of the response as a function of the covariate, through a regression model. Pertinence concerns the adequacy of the applied correction, ensuring that removing the effect of the covariate does not also remove part of the effect of the treatments. Viability is usually evaluated with the support of a statistical program; pertinence, on the other hand, requires a conceptual approach.
Variable selection by lasso-type methods
Directory of Open Access Journals (Sweden)
Sohail Chand
2011-09-01
Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can do consistent variable selection. In this paper, we provide an explanation of how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
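As a rough sketch of the rescaling idea (a generic textbook construction, not the authors' algorithm): the adaptive lasso can be computed by dividing each predictor column by its adaptive weight, running an ordinary lasso, and scaling the coefficients back. All parameter values below are assumptions for illustration.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=500):
    """Plain coordinate-descent lasso for (1/2n)||y - Xb||^2 + alpha*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_sq[j]
    return beta

def adaptive_lasso(X, y, alpha, gamma=1.0):
    """Adaptive lasso via rescaling: scale columns by the adaptive weights,
    run an ordinary lasso, then scale the coefficients back."""
    b_init, *_ = np.linalg.lstsq(X, y, rcond=None)    # pilot OLS estimate
    w = 1.0 / np.maximum(np.abs(b_init), 1e-8) ** gamma
    beta_scaled = lasso_cd(X / w, y, alpha)
    return beta_scaled / w

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, 0.0, 1.5, 0.0])
y = X @ beta_true + 0.5 * rng.normal(size=n)
b_hat = adaptive_lasso(X, y, alpha=0.1)
print(b_hat)   # nulls shrunk to (near) zero, signals nearly unbiased
```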
Gait variability: methods, modeling and meaning
Directory of Open Access Journals (Sweden)
Hausdorff Jeffrey M
2005-07-01
Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The Current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
Directory of Open Access Journals (Sweden)
Varaksin Anatoly
2014-03-01
Full Text Available Methods for analysing research data that include concomitant variables (confounders) associated with both the response and the factor under study are considered. There are two usual ways to take such variables into account: first, at the stage of planning the experiment, and second, when analysing the received data. Despite the equal effectiveness of these approaches, there are strong reasons to restrict the use of regression methods, such as ANCOVA, for accounting for confounders. The authors consider standardization by stratification to be a reliable method of accounting for the effect of confounding factors, as opposed to the widely implemented application of logistic regression and covariance analysis. A program for automating the standardization procedure is proposed; it is available at the site of the Institute of Industrial Ecology.
Constrained variable projection method for blind deconvolution
International Nuclear Information System (INIS)
Cornelio, A; Piccolomini, E Loli; Nagy, J G
2012-01-01
This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness of recovering both the blurring operator and the true image makes the problem very difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
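Variable projection exploits the separable structure: for each trial value of the nonlinear parameter, the linear coefficients are eliminated by an inner least-squares solve, leaving a lower-dimensional outer problem. A toy one-parameter sketch (an exponential-decay fit, not the paper's deconvolution problem; all values assumed):

```python
import numpy as np

# Toy separable NLS: y(t) ~ c * exp(-t/tau); c is linear, tau nonlinear.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 5.0, 100)
y = 3.0 * np.exp(-t / 1.7) + 0.01 * rng.normal(size=t.size)

def projected_residual(tau):
    phi = np.exp(-t / tau)              # model basis for this tau
    c = (phi @ y) / (phi @ phi)         # inner linear least-squares solve
    return np.sum((y - c * phi) ** 2), c

# Outer problem: 1-D search over the nonlinear parameter only.
taus = np.linspace(0.5, 4.0, 400)
errs = [projected_residual(tau)[0] for tau in taus]
tau_hat = taus[int(np.argmin(errs))]
print(tau_hat, projected_residual(tau_hat)[1])
```

In the paper's setting the outer minimization would be done by Gauss-Newton rather than a grid scan, but the elimination of the linear variables is the same idea.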
Chiba, Yasutaka
2014-01-01
Questions of mediation are often of interest in reasoning about mechanisms, and methods have been developed to address these questions. However, these methods make strong assumptions about the absence of confounding. Even if exposure is randomized, there may be mediator-outcome confounding variables. Inference about direct and indirect effects is particularly challenging if these mediator-outcome confounders are affected by the exposure because in this case these effects are not identified irrespective of whether data is available on these exposure-induced mediator-outcome confounders. In this paper, we provide a sensitivity analysis technique for natural direct and indirect effects that is applicable even if there are mediator-outcome confounders affected by the exposure. We give techniques for both the difference and risk ratio scales and compare the technique to other possible approaches. PMID:25580387
Mismeasurement and the resonance of strong confounders: uncorrelated errors.
Marshall, J R; Hastrup, J L
1996-05-15
Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate--that it could bias estimates of other study variables, and that the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
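The resonance effect described above can be reproduced in a few lines: a null variable correlated with a strongly confounding, error-measured covariate retains an apparent effect even after adjustment. A simulation sketch with assumed parameter values, roughly matching the 40% error-variance scenario:

```python
import numpy as np

# A strong confounder C is measured with error as C*; the null variable X,
# correlated with C, appears to affect the outcome after adjusting for C*.
rng = np.random.default_rng(2)
n = 20000
c = rng.normal(size=n)                        # true confounder
x = 0.9 * c + rng.normal(size=n)              # null study variable
y = 1.5 * c + rng.normal(size=n)              # outcome depends on C only
cstar = c + np.sqrt(2.0 / 3.0) * rng.normal(size=n)   # 40% of var(C*) is error

design = np.column_stack([np.ones(n), x, cstar])
b = np.linalg.lstsq(design, y, rcond=None)[0]
print(b[1])   # coefficient of the null variable: biased away from zero
```

Adjusting for the true confounder c instead of cstar would drive the coefficient of x back to zero, which is exactly the contrast the paper quantifies.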
Risk assessment of groundwater level variability using variable Kriging methods
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging and the third methodology calculates the probability of the groundwater level to fall below a predefined minimum value that could cause significant problems in groundwater resources availability, by means of Indicator Kriging. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be the optimal, which in combination with Kriging methodologies provides the most accurate cross validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit regarding a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
Collective variables method in relativistic theory
International Nuclear Information System (INIS)
Shurgaya, A.V.
1983-01-01
Classical theory of an N-component field is considered. Within the framework of generalized Hamiltonian dynamics, the method of collective variables is developed, accurately accounting for the conservation laws that follow from invariance under the homogeneous Lorentz group. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group contain no interaction-dependent components, and their action on the field function reduces to a geometrical one. The interaction is completely contained in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field functions, and their canonically conjugate momenta. In this way the conservation laws following from the requirement of Lorentz invariance are accurately taken into account.
'Mechanical restraint-confounders, risk, alliance score'
DEFF Research Database (Denmark)
Deichmann Nielsen, Lea; Bech, Per; Hounsgaard, Lise
2017-01-01
AIM: To clinically validate a new, structured short-term risk assessment instrument called the Mechanical Restraint-Confounders, Risk, Alliance Score (MR-CRAS), with the intended purpose of supporting the clinicians' observation and assessment of the patient's readiness to be released from mechanical restraint. METHODS: The content and layout of MR-CRAS and its user manual were evaluated using face validation by forensic mental health clinicians, content validation by an expert panel, and pilot testing within two closed forensic mental health inpatient units. RESULTS: The three sub-scales (Confounders, Risk, and a parameter of Alliance) showed excellent content validity. The clinical validations also showed that MR-CRAS was perceived and experienced as a comprehensible, relevant, comprehensive, and useable risk assessment instrument. CONCLUSIONS: MR-CRAS contains 18 clinically valid items...
Resting-state FMRI confounds and cleanup
Murphy, Kevin; Birn, Rasmus M.; Bandettini, Peter A.
2013-01-01
The goal of resting-state functional magnetic resonance imaging (FMRI) is to investigate the brain’s functional connections by using the temporal similarity between blood oxygenation level dependent (BOLD) signals in different regions of the brain “at rest” as an indicator of synchronous neural activity. Since this measure relies on the temporal correlation of FMRI signal changes between different parts of the brain, any non-neural activity-related process that affects the signals will influence the measure of functional connectivity, yielding spurious results. To understand the sources of these resting-state FMRI confounds, this article describes the origins of the BOLD signal in terms of MR physics and cerebral physiology. Potential confounds arising from motion, cardiac and respiratory cycles, arterial CO2 concentration, blood pressure/cerebral autoregulation, and vasomotion are discussed. Two classes of techniques to remove confounds from resting-state BOLD time series are reviewed: 1) those utilising external recordings of physiology and 2) data-based cleanup methods that only use the resting-state FMRI data itself. Further methods that remove noise from functional connectivity measures at a group level are also discussed. For successful interpretation of resting-state FMRI comparisons and results, noise cleanup is an often over-looked but essential step in the analysis pipeline. PMID:23571418
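The first class of cleanup techniques amounts to regressing recorded nuisance time courses out of each time series before computing correlations. A minimal sketch with synthetic signals (the regressors and amplitudes are invented for illustration, not real physiological recordings):

```python
import numpy as np

# Two ROIs with independent "neural" signals plus a shared physiological
# confound; regressing the recorded confound out removes the spurious
# correlation between them.
rng = np.random.default_rng(3)
T = 500
nuis = rng.normal(size=T)                 # recorded nuisance trace
roi1 = rng.normal(size=T) + 2.0 * nuis    # ROI time series contaminated
roi2 = rng.normal(size=T) + 2.0 * nuis    # by the same confound

def clean(ts, regressors):
    """Project nuisance regressors (plus a constant) out of a time series."""
    N = np.column_stack([np.ones(len(ts)), regressors])
    beta, *_ = np.linalg.lstsq(N, ts, rcond=None)
    return ts - N @ beta

r_before = np.corrcoef(roi1, roi2)[0, 1]
r_after = np.corrcoef(clean(roi1, nuis), clean(roi2, nuis))[0, 1]
print(r_before, r_after)   # spurious correlation largely removed
```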
DATA COLLECTION METHOD FOR PEDESTRIAN MOVEMENT VARIABLES
Directory of Open Access Journals (Sweden)
Hajime Inamura
2000-01-01
Full Text Available The need for tools for the design and evaluation of pedestrian areas, subway stations, entrance halls, shopping malls, escape routes, stadiums, etc. leads to the necessity of a pedestrian model. One modeling approach is the Microscopic Pedestrian Simulation Model. To develop and calibrate a microscopic pedestrian simulation model, a number of variables need to be considered. As the first step of model development, data were collected using video, and the coordinates of head paths were extracted through image processing. A number of variables can be gathered to describe the behavior of pedestrians from different points of view. This paper describes how to obtain variables from video recording and simple image processing that can represent pedestrian movement and its variables
Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.
2017-06-01
Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still imposes unresolved challenges with respect to normalization, visualization and modeling the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell’s volume during the cell cycle. cgCorrect can be used for both data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on both simulated data and single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells (and to quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells). We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
Emittance measurements by variable quadrupole method
International Nuclear Information System (INIS)
Toprek, D.
2005-01-01
The beam emittance is a measure of both the beam size and the beam divergence; its value cannot be measured directly. If the beam size is measured at different locations, or under different focusing conditions such that different parts of the phase-space ellipse are probed by the beam size monitor, the beam emittance can be determined. An emittance measurement can be performed by different methods. Here we consider the varying-quadrupole-setting method.
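In the quadrupole-scan method, the measured beam size squared is a quadratic function of the quad strength, and a least-squares fit recovers the beam matrix and hence the emittance. A noiseless sketch under a thin-lens-quad-plus-drift model (all beam and lattice values below are assumed, not from the paper):

```python
import numpy as np

# Thin-lens quad + drift of length L: R11 = 1 - L*k, R12 = L, so at the screen
#   sigma^2(k) = (1-L*k)^2 * s11 + 2*(1-L*k)*L * s12 + L^2 * s22,
# and a linear fit in the three coefficients recovers the beam matrix and
# the emittance eps = sqrt(s11*s22 - s12^2).
s11, s12, s22 = 4e-6, -1e-6, 1e-6                # true beam matrix (assumed)
eps_true = np.sqrt(s11 * s22 - s12 ** 2)
L = 2.0                                          # drift length [m]
ks = np.linspace(-2.0, 2.0, 15)                  # quad strengths [1/m]
a = 1.0 - L * ks                                 # R11 for each setting
sig2 = a**2 * s11 + 2 * a * L * s12 + L**2 * s22  # "measured" sizes squared

A = np.column_stack([a**2, 2 * a * L, np.full_like(ks, L**2)])
s11f, s12f, s22f = np.linalg.lstsq(A, sig2, rcond=None)[0]
eps_fit = np.sqrt(s11f * s22f - s12f ** 2)
print(eps_fit)
```

With real, noisy size measurements the same fit applies; only the assertion tolerance would loosen.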
Assessment of Confounding in Studies of Delay and Survival
DEFF Research Database (Denmark)
Tørring, Marie Louise; Vedsted, Peter; Frydenberg, Morten
BACKGROUND: Whether longer time to diagnosis (diagnostic delay) in patients with cancer symptoms is directly and independently associated with poor prognosis cannot be determined in randomised controlled trials. Analysis of observational data is therefore necessary. Many previous studies... 1) Clarify which factors are considered confounders or intermediate variables in the literature. 2) Assess how and to what extent these factors bias survival estimates. CONSIDERATIONS: As illustrated in Figure 1, symptoms of cancer may alert patients, GPs, and hospital doctors differently and influence both delay and survival time in different ways. We therefore assume that the impact of confounding factors depends on the type of delay studied (e.g., patient delay, GP delay, referral delay, or treatment delay). MATERIALS & METHODS: The project includes systematic review and methodological developments...
Marine oils: Complex, confusing, confounded?
Directory of Open Access Journals (Sweden)
Benjamin B. Albert
2016-09-01
Full Text Available Marine oils gained prominence following the report that Greenland Inuits who consumed a high-fat diet rich in long-chain n-3 polyunsaturated fatty acids (PUFAs) also had low rates of cardiovascular disease. Marine n-3 PUFAs have since become a billion dollar industry, which will continue to grow based on current trends. However, recent systematic reviews question the health benefits of marine oil supplements, particularly in the prevention of cardiovascular disease. Marine oils constitute an extremely complex dietary intervention for a number of reasons: (i) the many chemical compounds they contain; (ii) the many biological processes affected by n-3 PUFAs; (iii) their tendency to deteriorate and form potentially toxic primary and secondary oxidation products; and (iv) inaccuracy in the labelling of consumer products. These complexities may confound the clinical literature, limiting the ability to make substantive conclusions for some key health outcomes. Thus, there is a pressing need for clinical trials using marine oils whose composition has been independently verified and demonstrated to be minimally oxidised. Without such data, it is premature to conclude that n-3 PUFA rich supplements are ineffective.
Confounding Underlies the Apparent Month of Birth Effect in Multiple Sclerosis
Fiddes, Barnaby; Wason, James; Kemppinen, Anu; Ban, Maria; Compston, Alastair; Sawcer, Stephen
2013-01-01
Objective Several groups have reported apparent association between month of birth and multiple sclerosis. We sought to test the extent to which such studies might be confounded by extraneous variables such as year and place of birth. Methods Using national birth statistics from 2 continents, we assessed the evidence for seasonal variations in birth rate and tested the extent to which these are subject to regional and temporal variation. We then established the age and regional origin distrib...
Environmental confounding in gene-environment interaction studies.
Vanderweele, Tyler J; Ko, Yi-An; Mukherjee, Bhramar
2013-07-01
We show that, in the presence of uncontrolled environmental confounding, joint tests for the presence of a main genetic effect and gene-environment interaction will be biased if the genetic and environmental factors are correlated, even if there is no effect of either the genetic factor or the environmental factor on the disease. When environmental confounding is ignored, such tests will in fact reject the joint null of no genetic effect with a probability that tends to 1 as the sample size increases. This problem with the joint test vanishes under gene-environment independence, but it still persists if estimating the gene-environment interaction parameter itself is of interest. Uncontrolled environmental confounding will bias estimates of gene-environment interaction parameters even under gene-environment independence, but it will not do so if the unmeasured confounding variable itself does not interact with the genetic factor. Under gene-environment independence, if the interaction parameter without controlling for the environmental confounder is nonzero, then there is gene-environment interaction either between the genetic factor and the environmental factor of interest or between the genetic factor and the unmeasured environmental confounder. We evaluate several recently proposed joint tests in a simulation study and discuss the implications of these results for the conduct of gene-environment interaction studies.
Using ecological propensity score to adjust for missing confounders in small area studies.
Wang, Yingbo; Pirani, Monica; Hansell, Anna L; Richardson, Sylvia; Blangiardo, Marta
2017-11-09
Small area ecological studies are commonly used in epidemiology to assess the impact of area level risk factors on health outcomes when data are only available in an aggregated form. However, the resulting estimates are often biased due to unmeasured confounders, which typically are not available from the standard administrative registries used for these studies. Extra information on confounders can be provided through external data sets such as surveys or cohorts, where the data are available at the individual level rather than at the area level; however, such data typically lack the geographical coverage of administrative registries. We develop a framework of analysis which combines ecological and individual level data from different sources to provide an adjusted estimate of area level risk factors which is less biased. Our method (i) summarizes all available individual level confounders into an area level scalar variable, which we call ecological propensity score (EPS), (ii) implements a hierarchical structured approach to impute the values of EPS whenever they are missing, and (iii) includes the estimated and imputed EPS into the ecological regression linking the risk factors to the health outcome. Through a simulation study, we show that integrating individual level data into small area analyses via EPS is a promising method to reduce the bias intrinsic in ecological studies due to unmeasured confounders; we also apply the method to a real case study to evaluate the effect of air pollution on coronary heart disease hospital admissions in Greater London. © The Author 2017. Published by Oxford University Press.
Probabilistic Power Flow Method Considering Continuous and Discrete Variables
Directory of Open Access Journals (Sweden)
Xuexia Zhang
2017-04-01
Full Text Available This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
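The Monte Carlo baseline against which such methods are compared rests on jointly sampling continuous and discrete variables. A sketch of the sampling step only; the distributions, parameters, and toy aggregation below are assumptions for illustration, not the paper's IEEE test cases.

```python
import numpy as np
from math import gamma

# Mixed variable types: normal load, non-normal (Weibull) wind generation,
# and a discrete binomial count of fuel-cell units in service.
rng = np.random.default_rng(42)
N = 200_000
load = rng.normal(50.0, 5.0, N)          # MW, continuous, normal
wind = 10.0 * rng.weibull(2.0, N)        # MW, continuous, non-normal
fc = 2.0 * rng.binomial(5, 0.6, N)       # MW, discrete: 5 units of 2 MW each
net = load - wind - fc                   # toy net demand

# Analytic mean for a sanity check: E[Weibull(2)] = Gamma(1.5).
analytic_mean = 50.0 - 10.0 * gamma(1.5) - 2.0 * 5 * 0.6
print(net.mean(), analytic_mean)
```

A cumulant-based method would propagate moments of these inputs analytically instead of sampling, which is where its efficiency advantage comes from.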
A Streamlined Artificial Variable Free Version of Simplex Method
Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad
2015-01-01
This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...
Two methods for studying the X-ray variability
Yan, Shu-Ping; Ji, Li; Méndez, Mariano; Wang, Na; Liu, Siming; Li, Xiang-Dong
2016-01-01
The X-ray aperiodic variability and quasi-periodic oscillations (QPOs) are important tools for studying the structure of the accretion flow of X-ray binaries. However, the origin of the complex X-ray variability of X-ray binaries remains unsolved. We propose two methods for studying the X-ray ...
The functional variable method for solving the fractional Korteweg ...
Indian Academy of Sciences (India)
The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, and notably control theory ... In §3, the functional variable method is applied for finding exact solutions.
Extensions of von Neumann's method for generating random variables
International Nuclear Information System (INIS)
Monahan, J.F.
1979-01-01
Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions
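Von Neumann's comparison-based method for the exponential distribution can be stated in a few lines; the sketch below follows the standard textbook algorithm, using only uniforms and comparisons (no logarithms).

```python
import random

def vn_exponential(rng: random.Random) -> float:
    """Von Neumann's method for Exp(1): repeatedly draw uniforms
    U1 >= U2 >= ... until the first ascent; if the descending run has odd
    length, accept U1 as the fractional part, otherwise add 1 to the
    integer part and retry."""
    z = 0                          # integer part (number of rejected rounds)
    while True:
        u1 = rng.random()
        prev, run = u1, 1
        while True:
            u = rng.random()
            if u < prev:           # sequence still descending
                prev, run = u, run + 1
            else:                  # first ascent ends the run
                break
        if run % 2 == 1:           # odd run length: accept
            return z + u1
        z += 1

rng = random.Random(123)
sample = [vn_exponential(rng) for _ in range(50_000)]
print(sum(sample) / len(sample))   # sample mean, close to 1 for Exp(1)
```

The acceptance step works because, conditional on U1 = x, the probability that the descending run has odd length is exactly e^(-x).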
Variable identification in group method of data handling methodology
Energy Technology Data Exchange (ETDEWEB)
Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)
2011-07-01
The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network - ANN methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables with which to train an ANN, resulting in the best estimate of the monitored variable. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN Reactor Data Acquisition System is composed of 58 variables (process and nuclear variables). As the GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the real input variables used in the Monitoring and Diagnosis System were not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)
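A single GMDH layer can be sketched as follows. This is a simplified generic version (pairwise quadratic candidate nodes scored on a validation split, the "external criterion"), not the IPEN system's implementation; all data below are synthetic.

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """One GMDH layer: fit a quadratic polynomial y ~ f(x_i, x_j) for every
    pair of inputs on training data, score each candidate node on separate
    validation data, and keep the best nodes for the next layer."""
    def design(X, i, j):
        a, b = X[:, i], X[:, j]
        return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])
    cands = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        w, *_ = np.linalg.lstsq(design(X_tr, i, j), y_tr, rcond=None)
        mse = np.mean((design(X_va, i, j) @ w - y_va) ** 2)   # external criterion
        cands.append((mse, (i, j), w))
    cands.sort(key=lambda t: t[0])
    return cands[:keep]

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = 1.0 + 2.0 * X[:, 0] * X[:, 1]      # depends on variables 0 and 1 only
best = gmdh_layer(X[:200], y[:200], X[200:], y[200:])
print(best[0][1])   # the surviving node names the relevant variable pair
```

Tracing which original inputs feed the surviving nodes, layer by layer, is exactly the variable-identification task the abstract describes.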
Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.
Pizer, Steven D
2016-04-01
To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
The functional variable method for finding exact solutions of some ...
Indian Academy of Sciences (India)
Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...
International Nuclear Information System (INIS)
Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang
2015-01-01
Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase-optimisation and variable-mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and for the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.
Energy Technology Data Exchange (ETDEWEB)
Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)
2015-09-08
Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase-optimisation and variable-mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the sinc-DVR, but also preserves its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and for the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.
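The LDVR/FE-DVR scheme itself is involved, but the core DVR idea, a grid representation whose kinetic-energy matrix is known analytically, can be sketched with the simpler sinc-DVR of Colbert and Miller, which the abstract names as a competitor. Here it is applied to the harmonic oscillator with hbar = m = omega = 1.

```python
import numpy as np

# Sinc-DVR (Colbert–Miller) for the 1-D harmonic oscillator, hbar = m = omega = 1.
n, L = 201, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
i = np.arange(n)
diff = i[:, None] - i[None, :]
T = np.where(diff == 0, np.pi**2 / 3.0,
             2.0 * (-1.0) ** diff / np.where(diff == 0, 1, diff) ** 2)
T /= 2.0 * dx**2                      # analytic kinetic-energy matrix
H = T + np.diag(0.5 * x**2)           # Hamiltonian on the grid
E = np.linalg.eigvalsh(H)[:4]
print(np.round(E, 6))                 # should approach 0.5, 1.5, 2.5, 3.5
```

With a modest uniform grid the low eigenvalues converge essentially to machine precision, which is the grid efficiency that the paper's phase optimisation seeks to match for singular potentials.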
Effect decomposition in the presence of an exposure-induced mediator-outcome confounder
VanderWeele, Tyler J.; Vansteelandt, Stijn; Robins, James M.
2014-01-01
Methods from causal mediation analysis have generalized the traditional approach to direct and indirect effects in the epidemiologic and social science literature by allowing for interaction and non-linearities. However, the methods from the causal inference literature have themselves been subject to a major limitation in that the so-called natural direct and indirect effects that are employed are not identified from data whenever there is a variable that is affected by the exposure, which also confounds the relationship between the mediator and the outcome. In this paper we describe three alternative approaches to effect decomposition that give quantities that can be interpreted as direct and indirect effects, and that can be identified from data even in the presence of an exposure-induced mediator-outcome confounder. We describe a simple weighting-based estimation method for each of these three approaches, illustrated with data from perinatal epidemiology. The methods described here can shed light on pathways and questions of mediation even when an exposure-induced mediator-outcome confounder is present. PMID:24487213
New complex variable meshless method for advection—diffusion problems
International Nuclear Information System (INIS)
Wang Jian-Fei; Cheng Yu-Min
2013-01-01
In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection-diffusion problems is developed based on the improved complex variable moving least-squares (ICVMLS) approximation. The equivalent functional of two-dimensional advection-diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection-diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection-diffusion problems, with good convergence, accuracy, and computational efficiency.
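The ICVMM itself is meshless, but the model problem it targets can be sketched with an ordinary central-difference solve of the steady 1-D advection-diffusion equation, checked against the exact solution. The parameter values are chosen for the example, not taken from the paper.

```python
import numpy as np

# Steady 1-D advection–diffusion  c u' = D u''  on [0, 1], u(0)=0, u(1)=1,
# discretised with central differences (not the paper's meshless ICVMM).
c, D, N = 1.0, 0.1, 400
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for k in range(N - 1):
    A[k, k] = -2.0 * D / h**2
    if k > 0:
        A[k, k - 1] = D / h**2 + c / (2 * h)
    if k < N - 2:
        A[k, k + 1] = D / h**2 - c / (2 * h)
b[-1] = -(D / h**2 - c / (2 * h)) * 1.0   # u(1) = 1 boundary term
u = np.concatenate([[0.0], np.linalg.solve(A, b), [1.0]])

Pe = c / D                                # Peclet number
exact = (np.exp(Pe * x) - 1.0) / (np.exp(Pe) - 1.0)
max_err = np.max(np.abs(u - exact))
print(max_err)
```

The cell Peclet number c*h/(2D) is well below 1 here, so the central scheme is stable; meshless methods like the ICVMM aim at the same equation without a structured grid.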
Error response test system and method using test mask variable
Gender, Thomas K. (Inventor)
2006-01-01
An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
Improvement of the variable storage coefficient method with water surface gradient as a variable
The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...
Directory of Open Access Journals (Sweden)
Johanna M Walz
Full Text Available Vascular endothelial growth factor-A (VEGF-A) is intensively investigated in various medical fields. However, comparing VEGF-A measurements is difficult because sample acquisition and pre-analytic procedures differ between studies. We therefore investigated which variables act as confounders of VEGF-A measurements. Following a standardized protocol, blood was taken at three clinical sites from six healthy participants (one male and one female participant at each center) twice one week apart. The following pre-analytical parameters were varied in order to analyze their impact on VEGF-A measurements: analyzing center, anticoagulant (EDTA vs. PECT / CTAD), cannula (butterfly vs. neonatal), type of centrifuge (swing-out vs. fixed-angle), time before and after centrifugation, filling level (completely filled vs. half-filled tubes) and analyzing method (ELISA vs. multiplex bead array). Additionally, intrapersonal variations over time and sex differences were explored. Statistical analysis was performed using a linear regression model. The following parameters were identified as statistically significant independent confounders of VEGF-A measurements: analyzing center, anticoagulant, centrifuge, analyzing method and sex of the proband. The following parameters were not significant confounders in our data set: intrapersonal variation over one week, cannula, time before and after centrifugation and filling level of collection tubes. VEGF-A measurement results can be affected significantly by the identified pre-analytical parameters. We recommend the use of CTAD anticoagulant, a standardized type of centrifuge and one central laboratory using the same analyzing method for all samples.
Walz, Johanna M; Boehringer, Daniel; Deissler, Heidrun L; Faerber, Lothar; Goepfert, Jens C; Heiduschka, Peter; Kleeberger, Susannah M; Klettner, Alexa; Krohne, Tim U; Schneiderhan-Marra, Nicole; Ziemssen, Focke; Stahl, Andreas
2016-01-01
Vascular endothelial growth factor-A (VEGF-A) is intensively investigated in various medical fields. However, comparing VEGF-A measurements is difficult because sample acquisition and pre-analytic procedures differ between studies. We therefore investigated which variables act as confounders of VEGF-A measurements. Following a standardized protocol, blood was taken at three clinical sites from six healthy participants (one male and one female participant at each center) twice one week apart. The following pre-analytical parameters were varied in order to analyze their impact on VEGF-A measurements: analyzing center, anticoagulant (EDTA vs. PECT / CTAD), cannula (butterfly vs. neonatal), type of centrifuge (swing-out vs. fixed-angle), time before and after centrifugation, filling level (completely filled vs. half-filled tubes) and analyzing method (ELISA vs. multiplex bead array). Additionally, intrapersonal variations over time and sex differences were explored. Statistical analysis was performed using a linear regression model. The following parameters were identified as statistically significant independent confounders of VEGF-A measurements: analyzing center, anticoagulant, centrifuge, analyzing method and sex of the proband. The following parameters were not significant confounders in our data set: intrapersonal variation over one week, cannula, time before and after centrifugation and filling level of collection tubes. VEGF-A measurement results can be affected significantly by the identified pre-analytical parameters. We recommend the use of CTAD anticoagulant, a standardized type of centrifuge and one central laboratory using the same analyzing method for all samples.
CONFOUNDING STRUCTURE OF TWO-LEVEL NONREGULAR FACTORIAL DESIGNS
Institute of Scientific and Technical Information of China (English)
Ren Junbai
2012-01-01
In design theory, the alias structure of regular fractional factorial designs is elegantly described with group theory. However, this approach cannot be applied to nonregular designs directly. For an arbitrary nonregular design, a natural question is how to describe the confounding relations between its effects: is there any inner structure similar to that of regular designs? The aim of this article is to answer this basic question. Using the coefficients of the indicator function, the confounding structure of nonregular fractional factorial designs is obtained in the form of linear constraints on the values of effects. A method to estimate the sparse significant effects in an arbitrary nonregular design is given through an example.
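For a regular fraction, the confounding relations the abstract refers to can be read directly from the design columns. The sketch below constructs a 2^(3-1) fraction with generator I = ABC and verifies that the main effect A shares a column with the BC interaction; nonregular designs, the paper's subject, instead give fractional correlations between effect columns.

```python
import numpy as np
from itertools import product

# 2^(3-1) fraction with generator I = ABC: enumerate the full 2^3 cube and
# keep the runs where A*B*C = +1.
full = np.array(list(product([-1, 1], repeat=3)))
frac = full[full.prod(axis=1) == 1]          # 4 runs
A, B, C = frac.T

# In this fraction the main effect A is fully aliased with the BC interaction:
print(np.array_equal(A, B * C))              # identical contrast columns

# The normalised cross-product matrix of effect columns reveals the
# confounding pattern; a nonregular design would show fractional entries
# instead of only 0 and +/-1.
cols = np.column_stack([A, B, C, A * B, A * C, B * C])
corr = cols.T @ cols / len(frac)
print(corr)
```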
Recursive form of general limited memory variable metric methods
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Vlček, Jan
2013-01-01
Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365
The variability of piezoelectric measurements. Material and measurement method contributions
International Nuclear Information System (INIS)
Stewart, M.; Cain, M.
2002-01-01
The variability of piezoelectric materials measurements has been investigated in order to separate the contribution of intrinsic instrumental variability from that of variability in the materials themselves. The work has pinpointed several areas where weaknesses in the measurement methods result in high variability, and it also shows that good correlation between piezoelectric parameters allows simpler measurement methods to be used. The Berlincourt method has been shown to be unreliable when testing thin discs; however, when testing thicker samples there is a good correlation between this and other methods. The high-field and low-field permittivity correlate well, so tolerances on low-field measurements would predict high-field performance. In trying to identify microstructural origins of samples that behave differently from others within a batch, no direct evidence was found to suggest that outliers originate from differences in either microstructure or crystallography. Some of the samples chosen as maximum outliers showed pin-holes, probably from electrical breakdown during poling, even though these defects would ordinarily be detrimental to piezoelectric output. (author)
Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.
Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert
2016-08-01
We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). Many jobs contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves sampling the lifting tasks performed by a worker over a shift, calculating the Frequency Independent Lift Index (FILI) for each sampled lift, and aggregating the FILI values into six categories. The Composite Lift Index (CLI) equation is then used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures allow practitioners to systematically apply the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
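A sketch of the FILI computation and the category-aggregation step described above. The multiplier formulas are the standard RNLE ones; the coupling multiplier, the sampled FILI values, and the six category bounds are assumed for illustration and should be taken from the NIOSH tables and the actual job in practice.

```python
# Frequency-Independent Lifting Index (FILI) per the Revised NIOSH Lifting
# Equation; CM below defaults to 1.0 (good coupling) and is an assumption.
LC = 23.0  # load constant, kg

def firwl(H, V, D, A, CM=1.0):
    """Frequency-independent recommended weight limit (metric units: cm, deg)."""
    HM = min(1.0, 25.0 / H)            # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)   # vertical multiplier
    DM = 0.82 + 4.5 / D                # distance multiplier
    AM = 1.0 - 0.0032 * A              # asymmetry multiplier
    return LC * HM * VM * DM * AM * CM

def fili(load_kg, **task):
    return load_kg / firwl(**task)

# One sampled lift: 15 kg at H=40 cm, V=30 cm, D=50 cm, 45 deg asymmetry.
li = fili(15.0, H=40.0, V=30.0, D=50.0, A=45.0)
print(round(li, 2))

# VLI preprocessing: aggregate sampled FILI values into six categories
# (the sample values and category bounds here are invented).
samples = [0.4, 0.7, 1.1, 1.3, 2.2, 0.9, 1.8, 2.9]
edges = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
counts = [sum(lo < s <= hi for s in samples)
          for lo, hi in zip([0.0] + edges[:-1], edges)]
print(counts)
```

The category frequencies would then feed the CLI equation to produce the VLI; that final aggregation step is omitted here.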
Assessment of hip dysplasia and osteoarthritis: Variability of different methods
International Nuclear Information System (INIS)
Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren
2010-01-01
Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.
Assessment of hip dysplasia and osteoarthritis: Variability of different methods
Energy Technology Data Exchange (ETDEWEB)
Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld (Orthopedic Research Unit, Univ. Hospital of Aarhus, Aarhus (Denmark)), e-mail: a_troelsen@hotmail.com; Roemer, Lone (Dept. of Radiology, Univ. Hospital of Aarhus, Aarhus (Denmark)); Kring, Soeren (Dept. of Orthopedic Surgery, Aabenraa Hospital, Aabenraa (Denmark))
2010-03-15
Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.
Energy Technology Data Exchange (ETDEWEB)
Cessenat, M.; Genta, P.
1996-12-31
We use a method based on separation of variables for solving a system of first-order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution that is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. Thus we generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)
Chaos synchronization using single variable feedback based on backstepping method
International Nuclear Information System (INIS)
Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang
2004-01-01
In recent years, the backstepping method has been developed in the field of nonlinear control for problems such as controller design, observer design, and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. This method has several advantages for synchronizing chaotic systems: (a) the synchronization error converges exponentially; (b) information about only one variable of the master system is needed; (c) it presents a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Roessler system demonstrate that this method is very effective.
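Backstepping design is more involved than fits here, but single-variable synchronization itself can be illustrated with the classic Pecora-Carroll drive on the Lorenz system, where the slave receives only the master's x signal. This is not the paper's controller, just a demonstration that one transmitted variable can suffice.

```python
# Master–slave Lorenz synchronization: the slave's (y, z) subsystem is
# driven by the master's x variable alone (Pecora–Carroll scheme).
s, r, b, dt = 10.0, 28.0, 8.0 / 3.0, 0.002

xm, ym, zm = 1.0, 1.0, 20.0      # master state
ys, zs = -5.0, 5.0               # slave (y, z) state, different initial condition

err0 = abs(ym - ys) + abs(zm - zs)
for _ in range(50_000):          # Euler integration to t = 100
    dxm = s * (ym - xm)
    dym = xm * (r - zm) - ym
    dzm = xm * ym - b * zm
    # slave copies the (y, z) equations but uses the transmitted xm
    dys = xm * (r - zs) - ys
    dzs = xm * ys - b * zs
    xm += dt * dxm; ym += dt * dym; zm += dt * dzm
    ys += dt * dys; zs += dt * dzs

err1 = abs(ym - ys) + abs(zm - zs)
print(err0, err1)                # err1 should be many orders of magnitude smaller
```

For this drive the error dynamics admit the Lyapunov function e_y^2 + e_z^2 whose derivative is negative definite, so the error decays exponentially, matching property (a) in the abstract.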
A streamlined artificial variable free version of simplex method.
Directory of Open Access Journals (Sweden)
Syed Inayatullah
Full Text Available This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis that is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-variable-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.
A streamlined artificial variable free version of simplex method.
Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad
2015-01-01
This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis that is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-variable-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.
Variable scaling method and Stark effect in hydrogen atom
International Nuclear Information System (INIS)
Choudhury, R.K.R.; Ghosh, B.
1983-09-01
By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique following Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)
Variable importance and prediction methods for longitudinal problems with missing variables.
Directory of Open Access Journals (Sweden)
Iván Díaz
Full Text Available We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that use only a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance area under the curve (AUC) for a receiver-operator curve (ROC). Thus, given that (1) our VIM
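The paper's targeted-MLE VIM is model-free; as a rough intuition-builder only, the sketch below computes a much simpler permutation importance for a linear predictor on simulated data. All variable indices and coefficients are invented.

```python
import numpy as np

# Permutation importance: how much does validation MSE rise when one
# column is shuffled? (A simple stand-in for the paper's targeted-MLE VIM.)
rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 4))
y = 3.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(size=n)

Xtr, Xva, ytr, yva = X[:1000], X[1000:], y[:1000], y[1000:]
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(1000), Xtr]), ytr, rcond=None)
predict = lambda M: np.column_stack([np.ones(len(M)), M]) @ beta
base_mse = np.mean((predict(Xva) - yva) ** 2)

importance = []
for j in range(4):
    Xp = Xva.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((predict(Xp) - yva) ** 2) - base_mse)
print([round(v, 2) for v in importance])   # column 0 should dominate
```

Columns that carry no signal show near-zero importance, while the strong predictor dominates; the paper's VIM adds model-free definitions, causal interpretation, and double robustness on top of this basic idea.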
Correction of confounding bias in non-randomized studies by appropriate weighting.
Schmoor, Claudia; Gall, Christine; Stampf, Susanne; Graf, Erika
2011-03-01
In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
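The reweighting interpretation described above can be made concrete with inverse-probability-of-treatment weighting on simulated data; the data-generating model and effect sizes are assumptions for illustration.

```python
import numpy as np

# Inverse-probability-of-treatment weighting: reweight so the confounder
# distribution is balanced across arms, then compare weighted outcome means.
rng = np.random.default_rng(3)
n = 100_000
x = rng.normal(size=n)                        # measured confounder
p = 1.0 / (1.0 + np.exp(-x))                  # true propensity score
t = rng.binomial(1, p)                        # treatment depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)    # true treatment effect = 2

naive = y[t == 1].mean() - y[t == 0].mean()   # confounded comparison

w = t / p + (1 - t) / (1 - p)                 # IPT weights (true PS, for clarity)
ipw = (np.sum(w * t * y) / np.sum(w * t)
       - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
print(round(naive, 2), round(ipw, 2))
```

The naive contrast is biased upward because treated subjects have larger x, while the weighted contrast recovers the true effect; in practice the propensity score would itself be estimated, e.g. by logistic regression.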
Wind resource in metropolitan France: assessment methods, variability and trends
International Nuclear Information System (INIS)
Jourdier, Benedicte
2015-01-01
France has one of the largest wind potentials in Europe, yet it is far from fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming to predict the future electricity production. Any over-estimation in the assessment process jeopardizes the project's profitability. This has been the case in recent years, when wind farm managers noticed that their farms produced less than expected. The under-production problem calls into question both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. The first part investigates the errors linked to the assessment methods, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and to ensure it is better taken into account in the future. (author) [fr
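The Weibull-modelling step mentioned above can be sketched with the common moment-based shape estimator k ≈ (σ/μ)^(-1.086); the wind-speed sample here is simulated, not French measurement data.

```python
import math, random

# Moment-based Weibull fit for wind speeds: draw a Weibull sample by
# inverse-CDF sampling, then recover shape k and scale c from the moments.
random.seed(42)
k_true, c_true = 2.0, 8.0        # shape, scale (m/s)
v = [c_true * (-math.log(1.0 - random.random())) ** (1.0 / k_true)
     for _ in range(200_000)]

mean = sum(v) / len(v)
sigma = math.sqrt(sum((x - mean) ** 2 for x in v) / len(v))
k_hat = (sigma / mean) ** -1.086           # empirical shape estimator
c_hat = mean / math.gamma(1.0 + 1.0 / k_hat)   # from E[v] = c * Gamma(1 + 1/k)
print(round(k_hat, 2), round(c_hat, 2))
```

The approximation recovers the true parameters closely here; the thesis's point is that small errors in such fits, and in vertical extrapolation, can compound into material over-estimates of energy yield.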
Poppers, Kaposi's sarcoma, and HIV infection: empirical example of a strong confounding effect?
Morabia, A
1995-01-01
Are there empirical examples of strong confounding effects? Textbooks usually show examples of weak confounding or use hypothetical examples of strong confounding to illustrate the paradoxical consequences of not separating out the effect of the studied exposure from that of a second factor acting as a confounder. HIV infection is a candidate strong confounder of the spuriously high association reported between consumption of poppers, a sexual stimulant, and risk of Kaposi's sarcoma in the early phase of the AIDS epidemic. To examine this hypothesis, assumptions must be made on the prevalence of HIV infection among cases of Kaposi's sarcoma and on the prevalence of heavy popper consumption according to HIV infection in cases and controls. Results show that HIV infection may have confounded the poppers-Kaposi's sarcoma association. However, it cannot be ruled out that HIV did not qualify as a confounder because it was either an intermediate variable or an effect modifier of the association between popper inhalation and Kaposi's sarcoma. This example provides a basis to discuss the mechanism by which confounding occurs as well as the practical importance of confounding in epidemiologic research.
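The arithmetic of a strong confounder can be reproduced with invented 2x2 counts: within each HIV stratum, poppers and Kaposi's sarcoma are unassociated (odds ratio 1), yet the crude table shows a strong spurious association.

```python
# Hypothetical counts illustrating how a strong confounder (HIV status)
# can manufacture a crude poppers–KS association that vanishes on
# stratification; the numbers are invented for the arithmetic, not data.
def odds_ratio(a, b, c, d):
    """a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls."""
    return (a * d) / (b * c)

# HIV-positive stratum: poppers common, KS common, no association within.
hiv_pos = dict(a=80, b=40, c=20, d=10)     # OR = 1.0
# HIV-negative stratum: poppers rare, KS rare, no association within.
hiv_neg = dict(a=2, b=100, c=8, d=400)     # OR = 1.0

crude = odds_ratio(hiv_pos["a"] + hiv_neg["a"], hiv_pos["b"] + hiv_neg["b"],
                   hiv_pos["c"] + hiv_neg["c"], hiv_pos["d"] + hiv_neg["d"])
print(odds_ratio(**hiv_pos), odds_ratio(**hiv_neg), round(crude, 1))
```

The crude odds ratio exceeds 8 even though both stratum-specific odds ratios are exactly 1, which is the kind of strong confounding the paper argues HIV could have produced.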
Lutz, Sharon M; Thwing, Annie; Schmiege, Sarah; Kroehl, Miranda; Baker, Christopher D; Starling, Anne P; Hokanson, John E; Ghosh, Debashis
2017-07-19
In mediation analysis, if unmeasured confounding is present, the estimates of the direct and mediated effects may be over- or underestimated. Most methods for the sensitivity analysis of unmeasured confounding in mediation have focused on the mediator-outcome relationship. The Umediation R package enables the user to simulate unmeasured confounding of the exposure-mediator, exposure-outcome, and mediator-outcome relationships in order to see how the results of the mediation analysis would change in the presence of unmeasured confounding. We apply the Umediation package to the Genetic Epidemiology of Chronic Obstructive Pulmonary Disease (COPDGene) study to examine the role of unmeasured confounding due to population stratification in the effect of a single nucleotide polymorphism (SNP) in the CHRNA5/3/B4 locus on pulmonary function decline as mediated by cigarette smoking. Umediation is a flexible R package that examines the role of unmeasured confounding in mediation analysis, allowing for normally distributed or Bernoulli distributed exposures, outcomes, mediators, measured confounders, and unmeasured confounders. Umediation also accommodates multiple measured confounders, multiple unmeasured confounders, and a mediator-exposure interaction on the outcome. Umediation is available as an R package at https://github.com/SharonLutz/Umediation. A tutorial on how to install and use the Umediation package is available in Additional file 1.
Modeling intraindividual variability with repeated measures data methods and applications
Hershberger, Scott L
2013-01-01
This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical example
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, where we need the past slip rates, leading to huge computational costs. This is a major reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys the RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. The smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half-space
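To make the memory-variable trick concrete: for a standard linear solid, the hereditary (convolution) integral for stress can be replaced by a single memory variable obeying a first-order ODE, so no history needs to be stored. The sketch below only illustrates that equivalence; the moduli g0 and g_inf, the relaxation time tau, and the sinusoidal strain history are invented for the demo, not taken from the abstract's earthquake model.

```python
import numpy as np

# Standard linear solid (SLS): relaxation modulus G(t) = g_inf + (g0 - g_inf) e^{-t/tau}
g0, g_inf, tau = 2.0, 1.0, 0.5   # illustrative moduli and relaxation time
dt = 1e-3
t = np.arange(0.0, 4.0, dt)
strain = np.sin(t)               # illustrative loading history, strain(0) = 0

# (1) Hereditary integral: sigma(t) = int_0^t G(t - s) d(strain)(s)
#     -- needs the whole loading history at every step (O(N^2) work overall).
G = g_inf + (g0 - g_inf) * np.exp(-t / tau)
dstrain = np.diff(strain, prepend=0.0)
sigma_conv = np.convolve(G, dstrain)[:len(t)]

# (2) Memory variable zeta with  tau * dzeta/dt = (g0 - g_inf)*strain - zeta,
#     so that sigma = g0*strain - zeta: no history storage, O(N) work.
zeta = np.zeros(len(t))
decay = np.exp(-dt / tau)
for i in range(1, len(t)):
    # exact exponential update assuming strain is constant over one step
    zeta[i] = decay * zeta[i - 1] + (1.0 - decay) * (g0 - g_inf) * strain[i]
sigma_mem = g0 * strain - zeta

# The two stress histories agree up to discretization error.
print(float(np.abs(sigma_conv - sigma_mem).max()))
```

The first-order update is why the method's cost stays of the same order as the elastic case: each step needs only the previous value of the memory variable.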
Directory of Open Access Journals (Sweden)
Corey Sparks
2009-07-01
This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in their reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.
Sensitivity analysis for the effects of multiple unmeasured confounders.
Groenwold, Rolf H H; Sterne, Jonathan A C; Lawlor, Debbie A; Moons, Karel G M; Hoes, Arno W; Tilling, Kate
2016-09-01
Observational studies are prone to (unmeasured) confounding. Sensitivity analysis of unmeasured confounding typically focuses on a single unmeasured confounder. The purpose of this study was to assess the impact of multiple (possibly weak) unmeasured confounders. Simulation studies were performed based on parameters estimated from the British Women's Heart and Health Study, including 28 measured confounders and assuming no effect of ascorbic acid intake on mortality. In addition, 25, 50, or 100 unmeasured confounders were simulated, with various mutual correlations and correlations with measured confounders. The correlated unmeasured confounders did not need to be strongly associated with exposure and outcome to substantially bias the exposure-outcome association of interest, provided that there are sufficiently many unmeasured confounders. Correlations between unmeasured confounders, in addition to the strength of their relationship with exposure and outcome, are key drivers of the magnitude of unmeasured confounding and should be considered in sensitivity analyses. However, if the unmeasured confounders are correlated with measured confounders, the bias yielded by unmeasured confounders is partly removed through adjustment for the measured confounders. Discussions of the potential impact of unmeasured confounding in observational studies, and sensitivity analyses to examine this, should focus on the potential for the joint effect of multiple unmeasured confounders to bias results. Copyright © 2016 Elsevier Inc. All rights reserved.
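The abstract's central claim — that many individually weak but mutually correlated unmeasured confounders can produce substantial bias — is easy to reproduce in a toy simulation. All parameter values below are illustrative, not those estimated from the British Women's Heart and Health Study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 50      # subjects, unmeasured confounders
rho = 0.3            # mutual correlation among the confounders

# Equicorrelated confounders built from a shared latent factor.
shared = rng.standard_normal((n, 1))
U = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * rng.standard_normal((n, k))

# Each confounder is only weakly related to exposure and outcome.
beta = 0.05
exposure = U @ np.full(k, beta) + rng.standard_normal(n)
outcome = U @ np.full(k, beta) + rng.standard_normal(n)  # true exposure effect is zero

# Naive slope of outcome on exposure; U is unmeasured, so no adjustment is possible.
slope = np.cov(exposure, outcome)[0, 1] / np.var(exposure)
print(round(slope, 2))   # clearly positive despite a true effect of zero
```

With rho = 0 the same betas produce almost no bias; the correlation is what lets fifty weak confounders act like one strong one.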
Lindmark, Anita; de Luna, Xavier; Eriksson, Marie
2018-05-10
To estimate direct and indirect effects of an exposure on an outcome from observed data, strong assumptions about unconfoundedness are required. Since these assumptions cannot be tested using the observed data, a mediation analysis should always be accompanied by a sensitivity analysis of the resulting estimates. In this article, we propose a sensitivity analysis method for parametric estimation of direct and indirect effects when the exposure, mediator, and outcome are all binary. The sensitivity parameters consist of the correlations between the error terms of the exposure, mediator, and outcome models. These correlations are incorporated into the estimation of the model parameters and identification sets are then obtained for the direct and indirect effects for a range of plausible correlation values. We take the sampling variability into account through the construction of uncertainty intervals. The proposed method is able to assess sensitivity to both mediator-outcome confounding and confounding involving the exposure. To illustrate the method, we apply it to a mediation study based on the data from the Swedish Stroke Register (Riksstroke). An R package that implements the proposed method is available. Copyright © 2018 John Wiley & Sons, Ltd.
Hypnotics and mortality – confounding by disease and socioeconomic position
DEFF Research Database (Denmark)
Kriegbaum, Margit; Hendriksen, Carsten; Vass, Mikkel
2015-01-01
Purpose The aim of this cohort study of 10 527 Danish men was to investigate the extent to which the association between hypnotics and mortality is confounded by several markers of disease and living conditions. Methods Exposure was purchases of hypnotics 1995–1999 (“low users” (150 or less defined......% confidence intervals (CI). Results When covariates were entered one at a time, the changes in HR estimates showed that psychiatric disease, socioeconomic position and substance abuse reduced the excess risk by 17–36% in the low user group and by 45–52% in the high user group. Somatic disease, intelligence...... point at psychiatric disease, substance abuse and socioeconomic position as potential confounding factors partly explaining the association between use of hypnotics and all-cause mortality....
Directory of Open Access Journals (Sweden)
Sandvik Leiv
2011-04-01
Abstract Background The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Methods Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
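As a minimal illustration of the recommended analysis: the "Welch U test" is the two-sample T test with Welch's unequal-variance adjustment, available in SciPy as `ttest_ind(..., equal_var=False)`. The event-count data below are made up:

```python
import numpy as np
from scipy import stats

# Event counts per individual in two groups, outcomes restricted to {0, 1, 2, 3}.
group_a = np.array([0] * 20 + [1] * 15 + [2] * 10 + [3] * 5)
group_b = np.array([0] * 10 + [1] * 15 + [2] * 15 + [3] * 10)

# Difference between the means as the effect measure, tested with Welch's T test.
diff = group_b.mean() - group_a.mean()
t_stat, p_value = stats.ttest_ind(group_b, group_a, equal_var=False)
print(f"difference in means = {diff:.2f}, p = {p_value:.3f}")
```

A Welch-type confidence interval for the mean difference can be built from the same statistic and its Satterthwaite degrees of freedom.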
Quantitative assessment of unobserved confounding is mandatory in nonrandomized intervention studies
Groenwold, R H H; Hak, E; Hoes, A W
OBJECTIVE: In nonrandomized intervention studies, unequal distribution of patient characteristics in the groups under study may hinder comparability of prognosis and therefore lead to confounding bias. Our objective was to review methods to control for observed confounding, as well as unobserved
Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not to be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can be potentially applied for various scientific researches. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Method for curing polymers using variable-frequency microwave heating
Lauf, Robert J.; Bible, Don W.; Paulauskas, Felix L.
1998-01-01
A method for curing polymers (11) incorporating a variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34). By varying the frequency of the microwave signal, non-uniformities within the cavity (34) are minimized, thereby achieving a more uniform cure throughout the workpiece (36). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. The furnace cavity (34) may be adapted to be used to cure materials defining a continuous sheet or which require compressive forces during curing.
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence, the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, it then takes an extremely large number of iterations to converge. For some applications it is thus reasonable to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., from whichever iteration we need). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is developed in order to minimize the inconvenience of wearing. The ECG signal is detected using a potential-measurement instrumentation system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communication unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and the detection of the R-peak is especially important. R-peak detection generally uses a fixed threshold value, which leads to errors in peak detection when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage, which includes differentiation and a Hilbert transform, is used as the signal preprocessing algorithm. Thereafter, a variable threshold method, which is more accurate and efficient than a fixed-threshold method, is used to detect the R-peaks. R-peak detection on the MIT-BIH databases and on long-term real-time ECG recordings is performed in this research in order to evaluate the performance of the method.
Feasibility of wavelet expansion methods to treat the energy variable
International Nuclear Information System (INIS)
Van Rooijen, W. F. G.
2012-01-01
This paper discusses the use of the Discrete Wavelet Transform (DWT) to implement a functional expansion of the energy variable in neutron transport. The motivation of the work is to investigate the possibility of adapting the expansion level of the neutron flux in a material region to the complexity of the cross section in that region. If such an adaptive treatment is possible, 'simple' material regions (e.g., moderator regions) require little effort, while a detailed treatment is used for 'complex' regions (e.g., fuel regions). Our investigations show that in fact adaptivity cannot be achieved. The most fundamental reason is that in a multi-region system, the energy dependence of the cross section in a material region does not imply that the neutron flux in that region has a similar energy dependence. If it is chosen to sacrifice adaptivity, then the DWT method can be very accurate, but the complexity of such a method is higher than that of an equivalent hyper-fine group calculation. The conclusion is thus that, unfortunately, the DWT approach is not very practical. (authors)
International Nuclear Information System (INIS)
Nazareth, J. L.
1979-01-01
1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: R^n → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n^2)/2 + O(n) storage locations where n is the problem dimension
Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!
Vetter, Thomas R; Mascha, Edward J
2017-09-01
Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate about its corresponding or underlying, but unknown, population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument. In the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a variable that correlates (positively or negatively) with both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem with any observational (nonrandomized) study. Ignoring confounding in an observational study will often result in a "distorted" or incorrect estimate of the association between the exposure and the outcome.
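The confounding mechanism described here can be demonstrated with a small simulation: an intervention with zero true effect looks strongly "effective" in observational data when a covariate drives both treatment assignment and outcome, while randomization removes the distortion. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
severity = rng.standard_normal(n)        # confounder: disease severity

# Observational data: sicker patients are more likely to receive the intervention.
p_treat = 1.0 / (1.0 + np.exp(-2.0 * severity))
treated_obs = rng.random(n) < p_treat

# The outcome depends on severity only; the intervention truly does nothing.
outcome = severity + rng.standard_normal(n)
naive_obs = outcome[treated_obs].mean() - outcome[~treated_obs].mean()

# Randomized trial: treatment independent of severity, so confounding vanishes.
treated_rct = rng.random(n) < 0.5
naive_rct = outcome[treated_rct].mean() - outcome[~treated_rct].mean()

print(round(naive_obs, 2), round(naive_rct, 2))  # large spurious "effect" vs ~0
```

The spurious observational difference here is entirely attributable to the imbalance in severity between the treated and untreated groups.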
Carotta: Revealing Hidden Confounder Markers in Metabolic Breath Profiles
Directory of Open Access Journals (Sweden)
Anne-Christin Hauschild
2015-06-01
Computational breath analysis is a growing research area aiming at identifying volatile organic compounds (VOCs) in human breath to assist next-generation medical diagnostics. While inexpensive and non-invasive bioanalytical technologies for metabolite detection in exhaled air and bacterial/fungal vapor exist, and the first studies on the power of supervised machine learning methods for profiling of the resulting data were conducted, we lack methods to extract hidden data features emerging from confounding factors. Here, we present Carotta, a new cluster analysis framework dedicated to uncovering such hidden substructures by sophisticated unsupervised statistical learning methods. We study the power of transitivity clustering and hierarchical clustering to identify groups of VOCs with similar expression behavior over most patient breath samples and/or groups of patients with a similar VOC intensity pattern. This enables the discovery of dependencies between metabolites. On the one hand, this allows us to eliminate the effect of potential confounding factors hindering disease classification, such as smoking. On the other hand, we may also identify VOCs associated with disease subtypes or concomitant diseases. Carotta is an open source software with an intuitive graphical user interface promoting data handling, analysis and visualization. The back-end is designed to be modular, allowing for easy extensions with plugins in the future, such as new clustering methods and statistics. It does not require much prior knowledge or technical skills to operate. We demonstrate its power and applicability by means of one artificial dataset. We also apply Carotta exemplarily to a real-world example dataset on chronic obstructive pulmonary disease (COPD). While the artificial data are utilized as a proof of concept, we will demonstrate how Carotta finds candidate markers in our real dataset associated with confounders rather than the primary disease (COPD)
Electromagnetic variable degrees of freedom actuator systems and methods
Montesanti, Richard C [Pleasanton, CA; Trumper, David L [Plaistow, NH; Kirtley, Jr., James L.
2009-02-17
The present invention provides a variable reluctance actuator system and method that can be adapted for simultaneous rotation and translation of a moving element by applying a normal-direction magnetic flux on the moving element. In a beneficial example arrangement, the moving element includes a swing arm that carries a cutting tool at a set radius from an axis of rotation so as to produce a rotary fast tool servo that provides a tool motion in a direction substantially parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. An actuator rotates a swing arm such that a cutting tool moves toward and away from a mounted rotating workpiece in a controlled manner in order to machine the workpiece. Position sensors provide rotation and displacement information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
Larter, K F; Rees, B B
2017-06-01
In many experiments, euthanasia, or humane killing, of animals is necessary. Some methods of euthanasia cause death through cessation of respiratory or cardiovascular systems, causing oxygen levels of blood and tissues to drop. For experiments where the goal is to measure the effects of environmental low oxygen (hypoxia), the choice of euthanasia technique, therefore, may confound the results. This study examined the effects of four euthanasia methods commonly used in fish biology (overdose of MS-222, overdose of clove oil, rapid cooling and blunt trauma to the head) on variables known to be altered during hypoxia (haematocrit, plasma cortisol, blood lactate and blood glucose) or reflecting gill damage (trypan blue exclusion) and energetic status (ATP, ADP and ATP:ADP) in Gulf killifish Fundulus grandis after 24 h exposure to well-aerated conditions (normoxia, 7·93 mg O 2 l -1 , c. 150 mm Hg or c. 20 kPa) or reduced oxygen levels (0·86 mg O 2 l -1 , c. 17 mm Hg or c. 2·2 kPa). Regardless of oxygen treatment, fish euthanized by an overdose of MS-222 had higher haematocrit and lower gill ATP:ADP than fish euthanized by other methods. The effects of 24 h hypoxic exposure on these and other variables, however, were equivalent among methods of euthanasia (i.e. there were no significant interactions between euthanasia method and oxygen treatment). The choice of an appropriate euthanasia method, therefore, will depend upon the magnitude of the treatment effects (e.g. hypoxia) relative to potential artefacts caused by euthanasia on the variables of interest. © 2017 The Fisheries Society of the British Isles.
Role of environmental confounding in the association between FKBP5 and first-episode psychosis
Directory of Open Access Journals (Sweden)
Olesya eAjnakina
2014-07-01
Background: Failure to account for the etiological diversity that typically occurs in psychiatric cohorts may increase the potential for confounding, as a proportion of genetic variance will be specific to exposures that have variable distribution in cases. This study investigated whether minimizing the potential for such confounding strengthened the evidence for a genetic candidate currently unsupported at the genome-wide level. Methods: 291 first-episode psychosis cases from South London, UK, and 218 unaffected controls were evaluated for a functional polymorphism at the rs1360780 locus in FKBP5. The relationship between FKBP5 and psychosis was modelled using logistic regression. Cannabis use (Cannabis Experiences Questionnaire) and parental separation (Childhood Experience of Care and Abuse Questionnaire) were modelled as confounders in the analysis. Results: Association at rs1360780 was not detected until the effects of the two environmental factors had been adjusted for in the model (OR=2.81, 95% CI 1.23-6.43, p=0.02). A statistical interaction between rs1360780 and parental separation was confirmed by stratified tests (OR=2.8, p=0.02 vs. OR=0.89, p=0.80). The genetic main effect was directionally consistent with findings in other (stress-related) clinical phenotypes. Moreover, the variation in effect magnitude was explained by the level of power associated with different cannabis constructs used in the model (r=0.95). Conclusions: Our results suggest that the extent to which genetic variants in FKBP5 can influence susceptibility to psychosis may depend on the other etiological factors involved. This finding requires further validation in other large independent cohorts. Potentially this work could have translational implications, as the ability to discriminate between genetic etiologies, based on a case-by-case understanding of exposure history, would confer an important clinical advantage that would benefit the delivery of personalizable treatment
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies
DEFF Research Database (Denmark)
Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary K.
2015-01-01
Both polygenicity (many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from...
Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H
2017-01-01
Substantial variability between different antibody titration methods prompted the development and introduction of uniform methods in 2008. To determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing, proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by its standard method: gel or tube by uniform or other methods at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during the 5-year period. The 3 most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.
Field calculations. Part I: Choice of variables and methods
International Nuclear Information System (INIS)
Turner, L.R.
1981-01-01
Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel, and the numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable.
Multisample adjusted U-statistics that account for confounding covariates.
Satten, Glen A; Kong, Maiying; Datta, Somnath
2018-06-19
Multisample U-statistics encompass a wide class of test statistics that allow the comparison of 2 or more distributions. U-statistics are especially powerful because they can be applied to both numeric and nonnumeric data, eg, ordinal and categorical data where a pairwise similarity or distance-like measure between categories is available. However, when comparing the distribution of a variable across 2 or more groups, observed differences may be due to confounding covariates. For example, in a case-control study, the distribution of exposure in cases may differ from that in controls entirely because of variables that are related to both exposure and case status and are distributed differently among case and control participants. We propose to use individually reweighted data (ie, using the stratification score for retrospective data or the propensity score for prospective data) to construct adjusted U-statistics that can test the equality of distributions across 2 (or more) groups in the presence of confounding covariates. Asymptotic normality of our adjusted U-statistics is established and a closed form expression of their asymptotic variance is presented. The utility of our approach is demonstrated through simulation studies, as well as in an analysis of data from a case-control study conducted among African-Americans, comparing whether the similarity in haplotypes (ie, sets of adjacent genetic loci inherited from the same parent) occurring in a case and a control participant differs from the similarity in haplotypes occurring in 2 control participants. Copyright © 2018 John Wiley & Sons, Ltd.
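A toy version of the idea — not the authors' estimator — can be sketched with the Mann-Whitney-type kernel 1(a > b): each cross-group pair is weighted by inverse-propensity weights computed from a confounding covariate. Here the true propensities are known by construction; in practice they would be estimated, e.g. via the stratification or propensity score:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
z = rng.random(n) < 0.5                        # binary confounding covariate
case = rng.random(n) < np.where(z, 0.8, 0.2)   # case status driven by z
x = rng.standard_normal(n) + z                 # variable of interest; depends on z only

def u_stat(a, b, wa=None, wb=None):
    """Two-sample U-statistic with kernel 1(a > b), optionally pair-reweighted."""
    wa = np.ones(len(a)) if wa is None else wa
    wb = np.ones(len(b)) if wb is None else wb
    kernel = (a[:, None] > b[None, :]).astype(float)
    return float((wa[:, None] * wb[None, :] * kernel).sum() / (wa.sum() * wb.sum()))

# Unadjusted: cases appear stochastically larger, purely through z.
u_naive = u_stat(x[case], x[~case])

# Adjusted: reweight individuals by the inverse propensity of their group given z.
p = np.where(z, 0.8, 0.2)                      # true propensity (known by construction)
w = np.where(case, 1.0 / p, 1.0 / (1.0 - p))
u_adj = u_stat(x[case], x[~case], w[case], w[~case])

print(round(u_naive, 2), round(u_adj, 2))      # far from 0.5 vs close to 0.5
```

Because x differs between cases and controls only through z, the reweighted statistic returns to its null value of 0.5 while the naive one does not.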
International Nuclear Information System (INIS)
Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.
2007-01-01
In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and reaches a higher-quality final solution than the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas.
A moving mesh method with variable relaxation time
Soheili, Ali Reza; Stockie, John M.
2006-01-01
We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...
Confounding in statistical mediation analysis: What it is and how to address it.
Valente, Matthew J; Pelham, William E; Smyth, Heather; MacKinnon, David P
2017-11-01
Psychology researchers are often interested in mechanisms underlying how randomized interventions affect outcomes such as substance use and mental health. Mediation analysis is a common statistical method for investigating psychological mechanisms that has benefited from exciting new methodological improvements over the last 2 decades. One of the most important new developments is methodology for estimating causal mediated effects using the potential outcomes framework for causal inference. Potential outcomes-based methods developed in epidemiology and statistics have important implications for understanding psychological mechanisms. We aim to provide a concise introduction to and illustration of these new methods and emphasize the importance of confounder adjustment. First, we review the traditional regression approach for estimating mediated effects. Second, we describe the potential outcomes framework. Third, we define what a confounder is and how the presence of a confounder can provide misleading evidence regarding mechanisms of interventions. Fourth, we describe experimental designs that can help rule out confounder bias. Fifth, we describe new statistical approaches to adjust for measured confounders of the mediator-outcome relation and sensitivity analyses to probe effects of unmeasured confounders on the mediated effect. All approaches are illustrated with application to a real counseling intervention dataset. Counseling psychologists interested in understanding the causal mechanisms of their interventions can benefit from incorporating the most up-to-date techniques into their mediation analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Design Method of Active Disturbance Rejection Variable Structure Control System
Directory of Open Access Journals (Sweden)
Yun-jie Wu
2015-01-01
Based on lines cluster approaching theory and inspired by the traditional exponent reaching law method, a new control method, lines cluster approaching mode control (LCAMC), is designed to improve the parameter simplicity and structure optimization of the control system. The design guidelines and mathematical proofs are also given. To further improve the tracking performance and the rejection of white noise, we connect the active disturbance rejection control (ADRC) method with the LCAMC method to create the extended state observer based lines cluster approaching mode control (ESO-LCAMC) method. Taking a traditional servo control system as an example, two control schemes are constructed and two kinds of comparison are carried out. Computer simulation results show that the LCAMC method, having better tracking performance than the traditional sliding mode control (SMC) system, makes the servo system track the command signal quickly and accurately in spite of persistent equivalent disturbances, and that the ESO-LCAMC method further reduces the tracking error and filters the white noise added to the system states. Simulation results verify the robust property and comprehensive performance of the control schemes.
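The traditional exponent reaching law that inspires the paper's method drives a sliding surface s with ds/dt = -ε·sign(s) - k·s. A minimal simulation sketch (the gains, time step, and initial condition below are illustrative, not values from the paper):

```python
import numpy as np

# Exponent reaching law for a sliding surface s:
#   ds/dt = -eps * sign(s) - k * s
# The -k*s term gives fast exponential approach far from the surface;
# the -eps*sign(s) term guarantees reaching in finite time.
eps, k, dt = 0.5, 2.0, 1e-3     # illustrative gains and step size
s = 5.0                          # illustrative initial surface value
history = [s]
for _ in range(5000):            # 5 s of simulated time (forward Euler)
    s += dt * (-eps * np.sign(s) - k * s)
    history.append(s)
# s is driven to a small neighbourhood of zero, where the sign term
# produces the chattering that reaching-law designs try to limit.
```

The residual chatter amplitude scales with ε·dt, which is one motivation for replacement laws such as the lines cluster approach described in the abstract.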
Variable-mesh method of solving differential equations
Van Wyk, R.
1969-01-01
A multistep predictor-corrector method for the numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in the step-by-step solution of differential equations.
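A minimal sketch of the predictor-corrector idea, using the classical Adams-Bashforth 2-step predictor with a trapezoidal (Adams-Moulton) corrector on an illustrative test problem; the specific scheme and step-size strategy of the 1969 report are not reproduced here:

```python
import math

def f(t, y):
    return -y                    # test problem y' = -y, exact solution e^(-t)

h, t = 0.01, 0.0                 # fixed step for clarity
y = 1.0
f_prev = f(t, y)                 # derivative at the previous mesh point
# bootstrap the two-step method with one midpoint (RK2) step
y = y + h * f(t + h / 2, y + h / 2 * f_prev)
t += h

while t < 1.0 - 1e-12:
    f_n = f(t, y)
    # Adams-Bashforth 2-step predictor
    y_pred = y + h / 2 * (3 * f_n - f_prev)
    # Adams-Moulton (trapezoidal) corrector
    y = y + h / 2 * (f_n + f(t + h, y_pred))
    # |corrector - predictor| estimates the local error and is the
    # natural input to step-size selection criteria of the kind the
    # abstract describes; the step is kept fixed here for brevity.
    f_prev = f_n
    t += h
```

At t = 1 the computed y approximates e^(-1) to second-order accuracy in h.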
Sarvari, S. M. Hosseini
2017-09-01
The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, such that the direction cosines of radiative rays remain invariant within each control volume; the directions of discrete ordinates are then changed locally on passing from one control volume to the next, according to Snell's law of refraction. The results are compared with previous studies in this field. Despite its simplicity, the variable discrete ordinate method shows good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.
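The per-interface direction update is just Snell's law applied to the direction cosine. A sketch of that single step (the function name is illustrative; a full discrete ordinates solver is not attempted):

```python
import math

def refract(mu_in, n_in, n_out):
    """Refract a discrete-ordinate direction at a control-volume face.

    mu_in is the cosine of the angle between the ray and the face
    normal.  Snell's law n_in * sin(t_in) = n_out * sin(t_out) gives
    the refracted cosine; total internal reflection returns None.
    """
    sin_out = n_in / n_out * math.sqrt(1.0 - mu_in ** 2)
    if sin_out > 1.0:
        return None              # totally internally reflected
    return math.sqrt(1.0 - sin_out ** 2)

# A ray at 30 degrees entering a denser volume (n: 1.0 -> 1.5)
mu_out = refract(math.cos(math.radians(30.0)), 1.0, 1.5)
# The same ray leaving a dense volume at grazing incidence is trapped.
trapped = refract(math.cos(math.radians(80.0)), 1.5, 1.0)
```

Applying this update at every control-volume boundary is what makes the ordinate directions "variable" while keeping them fixed inside each volume.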
Apparatus and method for variable angle slant hole collimator
Lee, Seung Joon; Kross, Brian J.; McKisson, John E.
2017-07-18
A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.
Combustion engine variable compression ratio apparatus and method
Lawrence, Keith E [Peoria, IL]; Strawbridge, Bryan E [Dunlap, IL]; Dutart, Charles H [Washington, IL]
2006-06-06
An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.
Biasogram: visualization of confounding technical bias in gene expression data
DEFF Research Database (Denmark)
Krzystanek, Marcin; Szallasi, Zoltan Imre; Eklund, Aron Charles
2013-01-01
Gene expression profiles of clinical cohorts can be used to identify genes that are correlated with a clinical variable of interest such as patient outcome or response to a particular drug. However, expression measurements are susceptible to technical bias caused by variation in extraneous factors such as RNA quality and array hybridization conditions. If such technical bias is correlated with the clinical variable of interest, the likelihood of identifying false positive genes is increased. Here we describe a method to visualize an expression matrix as a projection of all genes onto a plane defined by a clinical variable and a technical nuisance variable. The resulting plot indicates the extent to which each gene is correlated with the clinical variable or the technical variable. We demonstrate this method by applying it to three clinical trial microarray data sets, one of which identified genes that may...
Fast analytical method for the addition of random variables
International Nuclear Information System (INIS)
Senna, V.; Milidiu, R.L.; Fleming, P.V.; Salles, M.R.; Oliveria, L.F.S.
1983-01-01
Using the minimal cut sets representation of a fault tree, a new approach to the method of moments is proposed in order to estimate confidence bounds on the top event probability. The method utilizes two or three moments either to fit a distribution (the normal and lognormal families) or to evaluate bounds from standard inequalities (e.g. Markov, Tchebycheff, etc.). Examples indicate that the results obtained by the lognormal family are in good agreement with those obtained by Monte Carlo simulation.
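Both ingredients of the abstract, fitting a lognormal to two moments and bounding a tail with Tchebycheff's inequality, fit in a few lines. The numerical moments below are illustrative, not from the paper:

```python
import math

def lognormal_from_moments(mean, var):
    """Match a lognormal to a given mean and variance (moment fitting).

    If X ~ Lognormal(mu, sigma), then E[X] = exp(mu + sigma^2/2) and
    Var[X] = (exp(sigma^2) - 1) * E[X]^2; inverting gives mu, sigma.
    """
    sigma2 = math.log(1.0 + var / mean ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

def chebyshev_upper_bound(mean, var, alpha):
    """Distribution-free bound: P(X >= mean + k*sd) <= 1/k^2 = alpha."""
    return mean + math.sqrt(var / alpha)

m, v = 1e-4, 4e-9                       # illustrative top-event moments
mu, sigma = lognormal_from_moments(m, v)
p95_lognormal = math.exp(mu + 1.6449 * sigma)   # lognormal 95th percentile
p95_chebyshev = chebyshev_upper_bound(m, v, 0.05)
```

As expected, the distribution-free Tchebycheff bound is looser than the percentile of a fitted lognormal, which is why moment fitting gives tighter confidence bounds when the family is appropriate.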
Tang, Zheng-Zheng; Chen, Guanhua; Alekseyenko, Alexander V
2016-09-01
Recent advances in sequencing technology have made it possible to obtain high-throughput data on the composition of microbial communities and to study the effects of dysbiosis on the human host. Analysis of pairwise intersample distances quantifies the association between the microbiome diversity and covariates of interest (e.g. environmental factors, clinical outcomes, treatment groups). In the design of these analyses, multiple choices for distance metrics are available. Most distance-based methods, however, use a single distance and are underpowered if the distance is poorly chosen. In addition, distance-based tests cannot flexibly handle confounding variables, which can result in excessive false-positive findings. We derive presence-weighted UniFrac to complement the existing UniFrac distances for more powerful detection of the variation in species richness. We develop PERMANOVA-S, a new distance-based method that tests the association of microbiome composition with any covariates of interest. PERMANOVA-S improves the commonly used Permutation Multivariate Analysis of Variance (PERMANOVA) test by allowing flexible confounder adjustments and ensembling multiple distances. We conducted extensive simulation studies to evaluate the performance of different distances under various patterns of association. Our simulation studies demonstrate that the power of the test relies on how well the selected distance captures the nature of the association. The PERMANOVA-S unified test combines multiple distances and achieves good power regardless of the patterns of the underlying association. We demonstrate the usefulness of our approach by reanalyzing several real microbiome datasets. miProfile software is freely available at https://medschool.vanderbilt.edu/tang-lab/software/miProfile. Contact: z.tang@vanderbilt.edu or g.chen@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
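The baseline PERMANOVA test that PERMANOVA-S builds on can be sketched directly from a distance matrix: compute a pseudo-F from between- and within-group sums of squared distances, then compare it with label permutations. This is the standard single-distance test (Anderson's formulation), not the authors' multi-distance, confounder-adjusted extension; the one-dimensional data are illustrative:

```python
import numpy as np

def pseudo_f(d2, labels):
    """PERMANOVA pseudo-F from a matrix of squared distances d2."""
    n = len(labels)
    groups = np.unique(labels)
    ss_total = d2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = d2[np.ix_(idx, idx)]
        ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 20), rng.normal(2, 1, 20)])
labels = np.array([0] * 20 + [1] * 20)
d2 = (x[:, None] - x[None, :]) ** 2      # squared Euclidean distances

f_obs = pseudo_f(d2, labels)
perm_f = [pseudo_f(d2, rng.permutation(labels)) for _ in range(499)]
p_value = (1 + sum(f >= f_obs for f in perm_f)) / 500.0
```

Because only the distance matrix enters the statistic, the same machinery applies to UniFrac or any other ecological distance; the abstract's point is that power then hinges on choosing (or ensembling) the right distance.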
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximizes the performance of a wireless sensor. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...
Landau, Sabine; Emsley, Richard; Dunn, Graham
2018-06-01
Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Estimates of causal mediation effects derived by approach (A) will be biased if one of
Variational method for objective analysis of scalar variable and its ...
Indian Academy of Sciences (India)
In this study, real-time data have been used to compare the standard and triangle methods for the objective analysis of a scalar variable.
Confounding and exposure measurement error in air pollution epidemiology
Sheppard, L.; Burnett, R.T.; Szpiro, A.A.; Kim, J.Y.; Jerrett, M.; Pope, C.; Brunekreef, B.|info:eu-repo/dai/nl/067548180
2012-01-01
Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
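The optimization model the abstract describes, a finite-state, finite-action, infinite-horizon discounted Markov decision process, is classically solved by value iteration. A minimal sketch with an illustrative two-state, two-action problem (the transition probabilities and yields are made up, not taken from the shrub model):

```python
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: expected immediate yield.
# The two states might represent low/high plant vigor, the two actions
# rest/defoliate -- purely illustrative numbers.
P = np.array([
    [[0.9, 0.1], [0.3, 0.7]],    # action 0: "rest"
    [[0.5, 0.5], [0.1, 0.9]],    # action 1: "defoliate"
])
R = np.array([[0.0, 1.0],
              [2.0, 3.0]])
gamma = 0.95                     # discount factor on future yields

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)      # Q[a, s]: action values
    V_new = Q.max(axis=0)        # Bellman optimality update
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)        # optimal action in each state
```

Value iteration converges geometrically at rate gamma; the resulting stationary policy is what the study compares against fixed defoliation strategies.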
Tétreault, Louis-François; Perron, Stéphane; Smargiassi, Audrey
2013-10-01
This review assessed the confounding effect of one traffic-related exposure (noise or air pollutants) on the association between the other exposure and cardiovascular outcomes. A systematic review was conducted with the databases Medline and Embase. The confounding effects in studies were assessed by using change in the estimate with a 10 % cutoff point. The influence on the change in the estimate of the quality of the studies, the exposure assessment methods and the correlation between road noise and air pollution was also assessed. Nine publications were identified. For most studies, the specified confounders produced changes in estimates below the 10 % cutoff; for both noise and pollutants, the quality of the study and of the exposure assessment did not seem to influence the confounding effects. Results from this review suggest that confounding of cardiovascular effects by noise or air pollutants is low, though with further improvements in exposure assessment, the situation may change. More studies using pollution indicators specific to road traffic are needed to properly assess whether noise and air pollution are subject to confounding.
Methods for Analyzing Electric Load Shape and its Variability
Energy Technology Data Exchange (ETDEWEB)
Price, Philip
2010-05-12
Current methods of summarizing and analyzing electric load shape are discussed briefly and compared. Simple rules of thumb for graphical display of load shapes are suggested. We propose a set of parameters that quantitatively describe the load shape in many buildings. Using the example of a linear regression model to predict load shape from time and temperature, we show how quantities such as the load's sensitivity to outdoor temperature, and the effectiveness of demand response (DR), can be quantified. Examples are presented using real building data.
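The regression idea can be sketched with synthetic data: model load as a schedule term plus a cooling-degree term, and read the temperature sensitivity off the fitted coefficient. The data-generating model, base temperature, and coefficients below are illustrative assumptions, not the report's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic hourly observations: load = base + occupancy effect
# + 3.5 kW per degree C above an 18 C cooling base, plus noise.
temp = rng.uniform(5.0, 35.0, 2000)
hour = rng.integers(0, 24, 2000)
occupied = ((hour >= 8) & (hour < 18)).astype(float)
cooling = np.maximum(temp - 18.0, 0.0)
load = 50.0 + 20.0 * occupied + 3.5 * cooling + rng.normal(0.0, 2.0, 2000)

# Ordinary least squares: the coefficient on `cooling` is the load's
# sensitivity to outdoor temperature, one of the proposed shape
# parameters; the coefficient on `occupied` captures the schedule.
X = np.column_stack([np.ones_like(temp), occupied, cooling])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
temperature_sensitivity = beta[2]
```

The same fit applied before and after a demand-response event would quantify DR effectiveness as a shift in the recovered coefficients.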
The Variability and Evaluation Method of Recycled Concrete Aggregate Properties
Directory of Open Access Journals (Sweden)
Zhiqing Zhang
2017-01-01
With the same sources and regeneration techniques, a recycled aggregate's (RA's) properties may still display large variations, and sets with similar values of a single property index may differ considerably in overall quality. How, then, can the whole property of RA be evaluated accurately? Eight groups of RAs from pavement and building sources were used to investigate a method for evaluating the holistic characteristics of RA. After testing, the parameters of the aggregates were analyzed. The physical and mechanical property data show distinct dispersion and instability; thus, the whole characteristics are difficult to express through any single property parameter. The Euclidean distance can express the similarity of samples: the closer the distance, the more similar the property. The standard variance of the whole-property Euclidean distances for the two types of RA is Sk=7.341 and Sk=2.208, respectively, which shows that the property of building RA fluctuates greatly, while pavement RA is more stable. There are certain correlations among the apparent density, water absorption, and crushed value of RAs, and the Mahalanobis distance method can directly evaluate the whole property using its parameters (mean, variance, and covariance) and can provide a grade evaluation model for RAs.
Bernhardt, Jase; Carleton, Andrew M.
2018-05-01
The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations of both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings, notably LULC changes and greenhouse gas emissions, continues.
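The discrepancy between the two averaging methods is easy to reproduce with a synthetic asymmetric daily temperature curve (the Gaussian-shaped afternoon peak below is an illustrative assumption, not station data from the study):

```python
import numpy as np

# One synthetic day: a sharp afternoon peak near 15:00 over a cool
# base of 10 C.  The curve is deliberately asymmetric in time.
hours = np.arange(24)
temps = 10.0 + 12.0 * np.exp(-((hours - 15.0) ** 2) / 18.0)

hourly_mean = temps.mean()                        # average of 24 readings
twice_daily = (temps.max() + temps.min()) / 2.0   # (Tmax + Tmin) / 2

# For a symmetric curve the two would agree; for this skewed curve
# the midrange overstates the mean, and it is exactly this kind of
# difference that the study regresses on surface/atmosphere variables.
method_difference = twice_daily - hourly_mean
```

Because Tmax and Tmin ignore how long the day sits near each extreme, any asymmetry (a brief spike, a long cool night) shifts the midrange away from the true time average.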
Partial differential equations with variable exponents variational methods and qualitative analysis
Radulescu, Vicentiu D
2015-01-01
Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive meth
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Jones, Andrew; Button, Emily; Rose, Abigail K; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt
2016-03-01
Motivation to drink alcohol can be measured in the laboratory using an ad-libitum 'taste test', in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. The objective of this study was to investigate variables that may compromise the validity of this paradigm and to assess its construct validity. We re-analysed data from 12 studies from our laboratory that incorporated an ad-libitum taste test. We considered time of day and participants' awareness of the purpose of the taste test as potential confounding variables. We examined whether gender, typical alcohol consumption, subjective craving, scores on the Alcohol Use Disorders Identification Test and perceived pleasantness of the drinks predicted ad-libitum consumption (construct validity). We included 762 participants (462 female). Participant awareness and time of day were not related to ad-libitum alcohol consumption. Males drank significantly more alcohol than females, and typical alcohol consumption (p = 0.04), craving and perceived pleasantness predicted ad-libitum alcohol consumption. The construct validity of the taste test was supported by relationships between ad-libitum consumption and typical alcohol consumption, craving and pleasantness ratings of the drinks. The ad-libitum taste test is a valid method for the assessment of alcohol intake in the laboratory.
Some confounding factors in the study of mortality and occupational exposures
International Nuclear Information System (INIS)
Gilbert, E.S.
1982-01-01
With the recent interest in the study of occupational exposures, the impact of certain selective biases in the groups studied is a matter of some concern. In this paper, data from the Hanford nuclear facility population (southeastern Washington State, 1947-1976), which includes many radiation workers, are used to illustrate a method for examining the effect on mortality of such potentially confounding variables as calendar year, length of time since entering the industry, employment status, length of employment, job category, and initial employment year. The analysis, which is based on the Mantel-Haenszel procedure as adapted for a prospective study, differs from most previous studies of occupational variables which have relied primarily on comparing standardized mortality ratios (utilizing an external control) for various subgroups of the population. Results of this analysis confirm other studies in that reduced death rates are observed for early years of follow-up and for those with higher socioeconomic status (as indicated by job category). In addition, workers employed less than two years and especially terminated workers are found to have elevated death rates as compared with the remainder of the study population. It is important that such correlations be taken into account in planning and interpreting analyses of the effects of occupational exposure
International Nuclear Information System (INIS)
Millwater, Harry; Singh, Gulshan; Cortina, Miguel
2012-01-01
There are many methods to identify the important variable out of a set of random variables, i.e., “inter-variable” importance; however, to date there are no comparable methods to identify the “region” of importance within a random variable, i.e., “intra-variable” importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. - Highlights: ► Probabilistic sensitivity methodology. ► Determines the “region” of importance within random variables such as left tail, near tail, center, right tail, etc. ► Uses the Score Function approach to reuse the samples, hence, negligible cost. ► No restrictions on the random variable types or limit states.
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data for seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the meteorological variables using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the meteorological variables. In 1999, 2001 and 2002 one of the meteorological variables was weakly influenced predominantly by the ozone concentrations. However, for the year 2000 the model did not indicate such an influence of the ozone concentrations on the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
Fowler, Mike S; Ruokolainen, Lasse
2013-01-01
The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let
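The AR(1) generator referred to in the abstract can be sketched in a few lines. The variance-normalized form below (a common convention, assumed here rather than taken from the paper) keeps the stationary variance at 1 so that series of different colour remain comparable:

```python
import numpy as np

def ar1_series(n, kappa, rng):
    """Coloured environmental noise from an AR(1) process.

    x[t] = kappa * x[t-1] + sqrt(1 - kappa**2) * eps[t] has stationary
    variance 1 for |kappa| < 1.  kappa > 0 gives red (positively
    autocorrelated) noise, kappa < 0 blue, kappa = 0 white.
    """
    eps = rng.normal(0.0, 1.0, n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = kappa * x[t - 1] + np.sqrt(1.0 - kappa ** 2) * eps[t]
    return x

rng = np.random.default_rng(42)
red = ar1_series(100_000, 0.7, rng)
lag1 = np.corrcoef(red[:-1], red[1:])[0, 1]    # should approximate kappa
```

With Gaussian innovations the marginal distribution is normal; the paper's point concerns the sample-level distribution shape (Skewness, Kurtosis) of finite series, which becomes increasingly variable as colour strengthens and which spectral mimicry is used to control.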
Directory of Open Access Journals (Sweden)
Yuanyuan Yu
2017-12-01
Full Text Available Abstract Background Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for alternative methods. Methods Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The “do-calculus” was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performances of different strategies. Results Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility and the optimal
International Nuclear Information System (INIS)
Le Coq, G.; Boudsocq, G.; Raymond, P.
1983-03-01
The Control Variable Method is extended to multidimensional fluid flow transient computations. In this paper the basic principles of the method are given. The method uses a fully implicit space discretization and is based on the decomposition of the momentum flux tensor into scalar, vectorial, and tensorial terms. Finally, some computations of viscous-driven and buoyancy-driven flow in a cavity are presented
Variable selection methods in PLS regression - a comparison study on metabolomics data
DEFF Research Database (Denmark)
Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach
Due to the high number of variables in data sets (both raw data and after peak picking) the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related. Variable selection (or removal of irrelevant...... different strategies for variable selection on the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with Jack-knifing [2] was applied to data in order to achieve variable selection prior...... The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions with special attention to the metabolism of lignans using an LC-MS based metabolomics approach. References 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when......
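The jack-knife idea behind the second selection strategy can be illustrated in a simplified form. The sketch below is stdlib Python and is not the PLSR implementation from the study: for clarity it applies leave-one-out re-estimation to simple univariate regression slopes, keeping a variable only when the slope exceeds three jack-knife standard errors; the data and cut-off are illustrative assumptions.

```python
import math
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def jackknife_se(xs, ys):
    """Jack-knife standard error of the slope from leave-one-out re-estimates."""
    n = len(xs)
    loo = [slope(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]) for i in range(n)]
    mean_loo = statistics.fmean(loo)
    return math.sqrt((n - 1) / n * sum((b - mean_loo) ** 2 for b in loo))

rng = random.Random(1)
n = 200
x_rel = [rng.gauss(0, 1) for _ in range(n)]   # truly related to y
x_irr = [rng.gauss(0, 1) for _ in range(n)]   # pure noise
y = [2.0 * xr + rng.gauss(0, 1) for xr in x_rel]

selected = []
for name, xs in [("relevant", x_rel), ("irrelevant", x_irr)]:
    ratio = abs(slope(xs, y)) / jackknife_se(xs, y)
    if ratio > 3.0:                            # conservative cut-off
        selected.append(name)
```

The same leave-one-out logic extends to PLSR loadings, which is the setting the study actually compares against Sparse PLSR.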
Harris, John Richardson; Caporaso, George J; Sampayan, Stephen E
2013-10-22
A system and method for producing modulated electrical signals. The system uses a variable resistor having a photoconductive wide bandgap semiconductor material construction whose conduction response to changes in amplitude of incident radiation is substantially linear throughout a non-saturation region to enable operation in non-avalanche mode. The system also includes a modulated radiation source, such as a modulated laser, for producing amplitude-modulated radiation to direct upon the variable resistor and modulate its conduction response. A voltage source and an output port are both operably connected to the variable resistor so that an electrical signal may be produced at the output port by way of the variable resistor, either generated by activation of the variable resistor or propagating through the variable resistor. In this manner, the electrical signal is modulated by the variable resistor so as to have a waveform substantially similar to the amplitude-modulated radiation.
The relationship between glass ceiling and power distance as a cultural variable by a new method
Naide Jahangirov; Guler Saglam Ari; Seymur Jahangirov; Nuray Guneri Tosunoglu
2015-01-01
Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. The gender variable is taken as a moderator in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate ...
Directory of Open Access Journals (Sweden)
Said Broumi
2015-03-01
Full Text Available The interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method with increasingly extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute value takes the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is defined, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficient to the ideal solution is computed for each alternative. Finally, an illustrative example is given to illustrate the decision-making steps and the effectiveness of the proposed method.
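The ranking mechanics that the extension builds on can be shown with classical crisp TOPSIS. The following stdlib-Python sketch is the ordinary crisp method, not the interval neutrosophic extension; the decision matrix and weights are made up, with the first alternative dominating the others on both benefit criteria:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.
    matrix: rows = alternatives, columns = criteria (crisp numbers).
    benefit[j] is True when larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalise each column, then apply criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the positive ideal
        d_neg = math.dist(row, anti)    # distance to the negative ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# three alternatives, two benefit criteria; the first row dominates
scores = topsis([[9, 8], [5, 6], [3, 2]],
                weights=[0.6, 0.4],
                benefit=[True, True])
```

A dominating alternative coincides with the positive ideal, so its closeness coefficient is exactly 1; the proposed method replaces these crisp distances with distances between interval neutrosophic uncertain linguistic variables.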
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controllers that use PID parameters require a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. They are implemented on the hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
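The core PSO-PID loop can be sketched without the Grey-Taguchi DOE layer. This is a hedged stdlib-Python toy, not the authors' hydraulic-system setup: a plain PSO searches PID gains that minimise the integral-of-squared-error of a unit step response on an assumed first-order plant; the plant time constant, search bounds, and PSO coefficients are all illustrative.

```python
import math
import random

def step_cost(kp, ki, kd, dt=0.01, steps=600):
    """Integral-of-squared-error cost of a unit step response for a
    first-order plant dy/dt = (-y + u)/tau, simulated with forward Euler."""
    tau, y, integ, prev_err, cost = 0.5, 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        cost += err * err * dt
    return cost if math.isfinite(cost) else float("inf")

def pso_tune(n_particles=20, iters=40, seed=3):
    """Plain PSO over (kp, ki, kd); kd is kept small so the explicit
    Euler simulation stays numerically stable."""
    lo, hi = [0.0, 0.0, 0.0], [10.0, 10.0, 0.2]
    rng = random.Random(seed)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [step_cost(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            c = step_cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

gains, cost = pso_tune()
```

The study's contribution is choosing the velocity limit and weight distribution factor of this loop via the Variable Weight Grey-Taguchi DOE rather than fixing them by hand as done here.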
A Comparison of Methods to Test Mediation and Other Intervening Variable Effects
MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil
2010-01-01
A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
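Two of the method families compared above can be sketched numerically: the product-of-coefficients (first-order Sobel) test and the joint-significance test. This stdlib-Python illustration uses hypothetical path estimates, not values from the study:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """First-order Sobel z statistic for the mediated (indirect) effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

def joint_significance(a, se_a, b, se_b, crit=1.96):
    """Joint-significance test: both paths must individually exceed crit."""
    return abs(a / se_a) > crit and abs(b / se_b) > crit

# hypothetical path estimates: X -> M (a) and M -> Y controlling for X (b)
z = sobel_z(a=0.40, se_a=0.10, b=0.30, se_b=0.12)
both = joint_significance(0.40, 0.10, 0.30, 0.12)
```

The joint-significance test, which the abstract identifies as the best balance of Type I error and power, needs only the two individual path z ratios rather than a standard error for the product.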
Propulsion and launching analysis of variable-mass rockets by analytical methods
D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh
2013-01-01
In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least square method (LSM) were applied and their results are compared with the numerical solution. Excellent agreement between the analytical and numerical results is observed, which reveals that the analytical methods are effective and convenient. Also a paramet...
Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong
2017-12-28
Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for alternative methods. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performances of different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the equivalent bias to the one containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility and the optimal strategy was to adjust for the parent nodes of outcome, which
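The inverse probability weighting idea can be demonstrated on a toy data set. The sketch below is stdlib Python and much simpler than the study's IPW-based-MSM: it assumes a single binary confounder, estimates the propensity of exposure within each confounder stratum empirically, and compares the crude and IPW-weighted risk differences when the true causal effect of exposure is zero.

```python
import random

rng = random.Random(7)
n = 50000
data = []
for _ in range(n):
    c = rng.random() < 0.5                 # binary confounder
    x = rng.random() < (0.8 if c else 0.2) # C raises the exposure probability
    y = rng.random() < (0.1 + 0.3 * c)     # C raises the outcome; X has no effect
    data.append((c, x, y))

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

# crude risk difference: confounded by C
crude = mean(y for c, x, y in data if x) - mean(y for c, x, y in data if not x)

# inverse probability weights from the empirical propensity P(X=1 | C)
p1 = mean(x for c, x, y in data if c)
p0 = mean(x for c, x, y in data if not c)

def weight(c, x):
    p = p1 if c else p0
    return 1.0 / (p if x else 1.0 - p)

w_y1 = sum(weight(c, x) * y for c, x, y in data if x) / sum(weight(c, x) for c, x, y in data if x)
w_y0 = sum(weight(c, x) * y for c, x, y in data if not x) / sum(weight(c, x) for c, x, y in data if not x)
ipw = w_y1 - w_y0
```

The crude difference is markedly positive despite a null causal effect, while the weighted difference is near zero, which is the bias-removal property the abstract attributes to IPW when G-admissibility holds.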
Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling
DEFF Research Database (Denmark)
Zimmermann, Ralf; Bertram, Anna
2018-01-01
Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...
DEFF Research Database (Denmark)
Burgess, Stephen; Thompson, Simon G; Thompson, Grahame
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...
Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection
DEFF Research Database (Denmark)
Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald
2013-01-01
The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length
DEFF Research Database (Denmark)
Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny
2014-01-01
Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...
International Nuclear Information System (INIS)
Qin Maochang; Fan Guihong
2008-01-01
There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter, which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method
Comparison of different calibration methods suited for calibration problems with many variables
DEFF Research Database (Denmark)
Holst, Helle
1992-01-01
This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...
Using traditional methods and indigenous technologies for coping with climate variability
Stigter, C.J.; Zheng Dawei,; Onyewotu, L.O.Z.; Mei Xurong,
2005-01-01
In agrometeorology and management of meteorology related natural resources, many traditional methods and indigenous technologies are still in use or being revived for managing low external inputs sustainable agriculture (LEISA) under conditions of climate variability. This paper starts with the
International Nuclear Information System (INIS)
Bosevski, T.
1971-01-01
The polynomial interpolation of neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. Solutions of this system are the needed values of flux for chosen values of space and energy variables. The proposed improved method for solving the neutron transport problem, including the mathematical formalism, is simple and efficient since the number of needed input data is decreased in treating both the spatial and energy variables. The mathematical method based on this approach gives more stable solutions with significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to results of other very precise calculations. The proposed method was better in terms of convergence rate, computing time, and required computer memory. Discretization of variables enabled direct comparison of theoretical and experimental results
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to handle the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
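The sequential deletion of unimportant variables described above follows a generic backward-elimination pattern. The sketch below is stdlib Python and not the paper's Random Forest pipeline: it wraps any scoring function (here a made-up toy score standing in for cross-validated forecast accuracy) and greedily drops the variable whose removal helps the score most.

```python
def backward_eliminate(variables, score, min_vars=1):
    """Greedy backward elimination: repeatedly drop the variable whose
    removal most improves the score; stop when every removal hurts or
    min_vars is reached. `score(subset)` returns a value, higher is better."""
    current = list(variables)
    best = score(current)
    while len(current) > min_vars:
        candidates = [(score([v for v in current if v != drop]), drop)
                      for drop in current]
        cand_best, drop = max(candidates)
        if cand_best < best:
            break                        # every removal hurts: stop
        best, current = cand_best, [v for v in current if v != drop]
    return current, best

# toy score: only variables "a" and "b" matter; extras carry a small penalty
def toy_score(subset):
    gain = sum(1.0 for v in subset if v in ("a", "b"))
    return gain - 0.01 * len(subset)

kept, score_val = backward_eliminate(["a", "b", "c", "d", "e"], toy_score)
```

In the paper's setting, `score` would be the Random Forest forecasting accuracy on held-out data and the variables would be the atmospheric and reservoir predictors ranked by factor analysis.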
Model reduction method using variable-separation for stochastic saddle point problems
Jiang, Lijian; Li, Qiuqi
2018-02-01
In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
Latent variable method for automatic adaptation to background states in motor imagery BCI
Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei
2018-02-01
Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variabilities in background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method that is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation-maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of the asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by posterior probabilities of background states at the prediction stage.
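The expectation-maximization machinery for a discrete latent variable can be shown in its simplest form. The following stdlib-Python sketch is a generic two-component 1-D Gaussian mixture fitted by EM, a stand-in for (not a reproduction of) the paper's background-state model; the synthetic data and initialisation are assumptions.

```python
import math
import random

def em_gmm2(xs, iters=100):
    """EM for a two-component 1-D Gaussian mixture with a discrete
    latent state z in {0, 1}."""
    mu = [min(xs), max(xs)]        # spread the initial means apart
    sd = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each x
        resp = []
        for x in xs:
            dens = [pi[k] / (sd[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                    for k in range(2)]
            s = dens[0] + dens[1]
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(max(var, 1e-6))
    return pi, mu, sd

rng = random.Random(5)
xs = ([rng.gauss(-2.0, 0.5) for _ in range(300)]
      + [rng.gauss(3.0, 0.8) for _ in range(300)])
pi, mu, sd = em_gmm2(xs)
```

In the BCI setting the observations are feature vectors rather than scalars and the latent state indexes background conditions such as eyes open versus closed, but the E/M alternation is the same.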
DEFF Research Database (Denmark)
Rørbye, Christina; Nørgaard, Mogens; Nilas, Lisbeth
2005-01-01
BACKGROUND: The aim of the study was to compare satisfaction with medical and surgical abortion and to identify potential confounders affecting satisfaction. METHODS: 1033 women with gestational age (GA) < or = 63 days had either a medical (600 mg mifepristone followed by 1 mg gemeprost) or a sur...
van der Meer, Hedwig A.; Speksnijder, Caroline M.; Engelbert, Raoul; Lobbezoo, Frank; Nijhuis – van der Sanden, Maria W G; Visscher, Corine M.
OBJECTIVES: The objective of this observational study was to establish the possible presence of confounders on the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. METHODS: Several subtypes of headaches were
Meer, H.A. van der; Speksnijder, C.M.; Engelbert, R.H.; Lobbezoo, F.; Nijhuis-Van der Sanden, M.W.G.; Visscher, C.M.
2017-01-01
OBJECTIVES: The objective of this observational study was to establish the possible presence of confounders on the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. MATERIALS AND METHODS: Several subtypes of headaches
Bollen, Kenneth A
2007-06-01
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal (formative) indicators rests on several claims: (a) A latent variable exists apart from the model when there are effect (reflective) indicators but not when there are causal (formative) indicators, (b) causal (formative) indicators need not have the same consequences, (c) causal (formative) indicators are inherently subject to interpretational confounding, and (d) a researcher cannot detect interpretational confounding when using causal (formative) indicators. This article shows that each claim is false. Rather, interpretational confounding is more a problem of structural misspecification of a model combined with an underidentified model that leaves these misspecifications undetected. Interpretational confounding does not occur if the model is correctly specified whether a researcher has causal (formative) or effect (reflective) indicators. It is the validity of a model not the type of indicator that determines the potential for interpretational confounding. Copyright 2007 APA, all rights reserved.
International Nuclear Information System (INIS)
Proriol, J.
1994-01-01
Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
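The simplest of the five selection methods above, the F-test, ranks each variable by a one-way ANOVA F statistic between the classes. This stdlib-Python sketch is an illustrative toy (synthetic two-class data, not high energy physics events):

```python
import random
import statistics

def f_score(x_class0, x_class1):
    """One-way ANOVA F statistic for a single variable and two classes:
    between-group variance over pooled within-group variance."""
    n0, n1 = len(x_class0), len(x_class1)
    m0, m1 = statistics.fmean(x_class0), statistics.fmean(x_class1)
    grand = (n0 * m0 + n1 * m1) / (n0 + n1)
    between = n0 * (m0 - grand) ** 2 + n1 * (m1 - grand) ** 2   # df = 1
    within = (sum((x - m0) ** 2 for x in x_class0)
              + sum((x - m1) ** 2 for x in x_class1)) / (n0 + n1 - 2)
    return between / within

rng = random.Random(2)
n = 500
# "signal" separates the classes; "noise" does not
signal0 = [rng.gauss(0.0, 1.0) for _ in range(n)]
signal1 = [rng.gauss(1.0, 1.0) for _ in range(n)]
noise0 = [rng.gauss(0.0, 1.0) for _ in range(n)]
noise1 = [rng.gauss(0.0, 1.0) for _ in range(n)]

ranking = sorted([("signal", f_score(signal0, signal1)),
                  ("noise", f_score(noise0, noise1))],
                 key=lambda t: t[1], reverse=True)
```

Variables at the top of such a ranking would then be fed to the neural network, mirroring how each selection method in the comparison supplies its own variable subset.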
International Nuclear Information System (INIS)
Tang, Bo; He, Yinnian; Wei, Leilei; Zhang, Xindong
2012-01-01
In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, this method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. -- Highlights: ► Study of fractional differential equations with variable coefficients plays a role in applied physical sciences. ► It is shown that the proposed algorithm is effective for solving fractional differential equations with variable coefficients. ► The obtained solutions may give insight into many considerable physical processes.
Aygunes, Gunes
2017-07-01
The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literature relates venture capitalists' quality to countries' venture capital investments. The aim of this paper is to give the relationship between venture capital investment and macroeconomic variables via a statistical computation method. We investigate the countries and macroeconomic variables. By using the statistical computation method, we derive the correlation between venture capital investments and macroeconomic variables. According to the logistic regression model (logit regression or logit model), the macroeconomic variables are correlated with each other in three groups. Venture capitalists regard correlations as an indicator. Finally, we give the correlation matrix of our results.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
A stochastic Galerkin method for the Euler equations with Roe variable transformation
Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan
2014-01-01
The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2003-01-01
Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true...... exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from a epidemiological study performed in the Faroe Islands to investigate the adverse...
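The attenuation described above is easy to demonstrate numerically. In the classical errors-in-variables model the observed slope shrinks by the reliability ratio lambda = var(X) / (var(X) + var(E)). The stdlib-Python sketch below uses made-up parameter values (true slope 2, unit exposure and error variances, so lambda = 0.5), not the Faroe Islands data:

```python
import random
import statistics

rng = random.Random(11)
n = 20000
true_slope = 2.0
x_true = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [true_slope * x + rng.gauss(0.0, 1.0) for x in x_true]
# non-differential measurement error with variance 1 added to the exposure
x_obs = [x + rng.gauss(0.0, 1.0) for x in x_true]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

b_true_x = ols_slope(x_true, y)   # close to the true slope of 2
b_obs_x = ols_slope(x_obs, y)     # attenuated toward 2 * lambda = 1
```

When other covariates correlated with the true exposure are in the model, the attenuation factor involves the conditional variance of the true exposure given those covariates, which is the complication the abstract highlights for confounder effects.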
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
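The predict / constrain / correct cycle described above can be illustrated in heavily simplified form for a scalar random-walk state with a nonnegativity bound. The actual IGCC dynamic model and sensor suite are not public; everything below is an illustrative stand-in:

```python
def ekf_step(x, P, z, Q, R, lower=0.0):
    """One predict / constrain / correct cycle for a scalar random-walk
    state with a physical lower bound.

    A toy stand-in for the pipeline in the abstract: the dynamic model
    predicts, the prediction is preemptively constrained to stay
    feasible, and the sensed output then corrects the estimate."""
    # predict with the (identity) dynamic model; Q is process noise
    x_pred, P_pred = x, P + Q
    # preemptive constraining: clip the estimate to its physical bound
    x_pred = max(x_pred, lower)
    # measurement correction with measurement noise R
    K = P_pred / (P_pred + R)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return max(x_new, lower), P_new

x, P = -0.5, 1.0                         # infeasible initial estimate
x, P = ekf_step(x, P, z=0.3, Q=0.01, R=0.1)
print(x, P)                              # estimate pulled toward z, variance shrinks
```

In the patent's setting the state is a vector, the dynamics are nonlinear, and the constraining acts on both the state estimates and the covariance matrix, but the cycle has this same shape.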
VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS
Directory of Open Access Journals (Sweden)
V. V. Dolotov
2015-01-01
In the frame of cadastral beach evaluation, a volumetric method for an index of natural variability is proposed. It is based on spatial calculations using the Cut-Fill method and on volume accounting of both the common beach contour and specific areas at each survey time.
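The Cut-Fill computation underlying such an index can be sketched with plain numpy; `cut_fill_volume` and the 2x2 elevation grids below are illustrative stand-ins for the GIS operation:

```python
import numpy as np

def cut_fill_volume(dem_before, dem_after, cell_area):
    """Volume change between two gridded beach surfaces, mimicking the
    GIS Cut-Fill operation: per-cell elevation difference times cell
    area, split into accretion (fill) and erosion (cut)."""
    diff = (dem_after - dem_before) * cell_area
    fill = float(diff[diff > 0].sum())       # accreted volume
    cut = float(-diff[diff < 0].sum())       # eroded volume
    return fill, cut, fill - cut             # gross fill, gross cut, net

# illustrative 2x2 grids of beach elevations (m), 4 m2 cells
before = np.array([[1.0, 1.0], [1.0, 1.0]])
after = np.array([[1.5, 1.0], [0.8, 1.0]])
print(cut_fill_volume(before, after, cell_area=4.0))
```

Repeating this between consecutive surveys gives the per-epoch volumes from which a variability index can be built.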
Control Method for Variable Speed Wind Turbines to Support Temporary Primary Frequency Control
DEFF Research Database (Denmark)
Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan
2014-01-01
This paper develops a control method for variable speed wind turbines (VSWTs) to support temporary primary frequency control of the power system. The control method contains two parts: (1) up-regulate support control when a frequency drop event occurs; (2) down-regulate support control when a frequen...
A design method of compensators for multi-variable control system with PID controllers 'CHARLY'
International Nuclear Information System (INIS)
Fujiwara, Toshitaka; Yamada, Katsumi
1985-01-01
A systematic design method of compensators for a multi-variable control system having usual PID controllers in its loops is presented in this paper. The method is able to determine the main manipulating variable corresponding to each controlled variable through a sensitivity analysis in the frequency domain; to tune the PID controllers sufficiently to realize adequate control actions using a search technique for minimum values of cost functionals; to design compensators improving the control performance; and to simulate the total system to confirm the designed compensators. In the compensator design phase, the state variable feedback gain is obtained by means of optimal regulator theory for the composite system of plant and PID controllers. Transfer-function-type compensators, the configurations of which were previously given, are then designed to approximate the frequency responses of the above-mentioned state feedback system. An example is illustrated for convenience. (author)
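The optimal-regulator step is the mathematically crisp part of the procedure. For a scalar plant the algebraic Riccati equation solves in closed form, which gives a compact illustration; this is a one-dimensional stand-in, not the CHARLY design itself:

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """State-feedback gain for the scalar plant x' = a*x + b*u with cost
    J = integral(q*x**2 + r*u**2).

    The algebraic Riccati equation 2*a*p - p**2*b**2/r + q = 0 has the
    positive root below; the gain is K = b*p/r. A one-dimensional
    stand-in for the optimal-regulator step of the design method."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

K = scalar_lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
print(K, 1.0 - K)       # K = 1 + sqrt(2); closed-loop pole a - b*K < 0
```

For the multivariable plant-plus-PID composite system the same idea requires solving a matrix Riccati equation, after which the transfer-function compensators approximate the resulting state feedback.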
A Novel Flood Forecasting Method Based on Initial State Variable Correction
Directory of Open Access Journals (Sweden)
Kuang Li
2017-12-01
The influence of initial state variables on flood forecasting accuracy with conceptual hydrological models is analyzed in this paper, and a novel flood forecasting method based on correction of initial state variables is proposed. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events of 11 watersheds in south China are forecasted and verified, and important issues concerning ISVC application are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks. It can significantly improve flood forecasting accuracy in most cases.
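A toy version of the ISVC loop can be sketched with a linear reservoir in place of a real conceptual hydrological model and a bare-bones particle swarm over a single initial state; all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(s0, rain, k=0.3):
    """Toy linear-reservoir model standing in for a conceptual
    hydrological model: storage fills with rain, drains at rate k."""
    s, flows = s0, []
    for r in rain:
        s += r
        q = k * s
        s -= q
        flows.append(q)
    return np.array(flows)

rain = rng.uniform(0.0, 5.0, 10)
obs = simulate(20.0, rain)            # synthetic "measured" flows (true s0 = 20)

def objective(s0):
    # residual over the initial period of the event, as in ISVC
    return float(np.sum((simulate(s0, rain[:5]) - obs[:5]) ** 2))

# bare-bones particle swarm over the single initial state variable s0
pos = rng.uniform(0.0, 50.0, 30)
vel = np.zeros(30)
pbest = pos.copy()
pval = np.array([objective(p) for p in pos])
gbest = pbest[pval.argmin()]
for _ in range(40):
    r1, r2 = rng.random(30), rng.random(30)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([objective(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]
print(gbest)                          # should land near the true s0 = 20
```

In the paper the state vector of the conceptual model has several components and the corrected states then drive the forecast beyond the initial period; the swarm mechanics are the same.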
International Nuclear Information System (INIS)
Sumer, Kutluk Kagan; Goktas, Ozlem; Hepsag, Aycan
2009-01-01
In this study, we used ARIMA, seasonal ARIMA (SARIMA) and, alternatively, a regression model with a seasonal latent variable to forecast electricity demand, using data belonging to 'Kayseri and Vicinity Electricity Joint-Stock Company' over the 1997:1-2005:12 period. The study compares the forecasting performance of the ARIMA and SARIMA methods with that of the model with a seasonal latent variable. The results indicate that the ARIMA and SARIMA models are unsuccessful in forecasting electricity demand. The regression model with a seasonal latent variable gives more successful results than the ARIMA and SARIMA models because it can also account for seasonal fluctuations and structural breaks.
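The abstract does not specify the exact regression; one common reading of a "seasonal latent variable" is a set of monthly indicator variables alongside a trend, which can be sketched on synthetic data (the model form is an assumption, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic monthly demand: linear trend + sinusoidal seasonality + noise
t = np.arange(96)                     # 8 years of monthly data
month = t % 12
y = 100.0 + 0.5 * t + 10.0 * np.sin(2 * np.pi * month / 12) + rng.normal(0, 1, 96)

# design matrix: trend plus 12 monthly indicator ("seasonal dummy") variables
X = np.column_stack([t] + [(month == m).astype(float) for m in range(12)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef
rmse = float(np.sqrt(np.mean((y - fitted) ** 2)))
print(rmse)                           # close to the noise level (sigma = 1)
```

The twelve dummies absorb any monthly seasonal pattern exactly, which is the advantage the abstract attributes to this model class over plain ARIMA.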
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
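The core accept/reject rule in single-sided acceptance sampling by variables is the k-method; a sketch with an illustrative acceptability constant follows (real plans take k and the sample size from tables such as ANSI/ASQ Z1.9, which this does not reproduce):

```python
import statistics

def accept_lot(measurements, usl, k):
    """Single-sided acceptance sampling by variables (k-method): accept
    the lot when (USL - mean) / s >= k. In practice the acceptability
    constant k and sample size come from a plan table (e.g. ANSI/ASQ
    Z1.9); k = 2.0 below is purely illustrative."""
    m = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (usl - m) / s >= k

sample = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.0, 9.9]
print(accept_lot(sample, usl=11.0, k=2.0))   # True
print(accept_lot(sample, usl=10.0, k=2.0))   # False
```

Because it uses the measured mean and standard deviation rather than a pass/fail count, a variables plan typically needs far fewer samples than the attributes plan the assessment sought to replace.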
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.
Fritz, Matthew S; Kenny, David A; MacKinnon, David P
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
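The mediator measurement-error attenuation described above can be reproduced in a few lines of simulation. True paths are a = b = 0.5 (so the true mediated effect is 0.25) and the reliability of the observed mediator is an assumed 0.5; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# single-mediator model X -> M -> Y with a = b = 0.5 (true ab = 0.25),
# no confounding, but the mediator is measured with error (reliability 0.5)
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.5 * m + rng.normal(size=n)
m_obs = m + rng.normal(size=n)            # unreliable mediator measurement

a = float(np.cov(x, m_obs)[0, 1] / np.var(x))         # a-path: unbiased
design = np.column_stack([np.ones(n), x, m_obs])
b_hat = float(np.linalg.lstsq(design, y, rcond=None)[0][2])  # b-path: attenuated
print(a * b_hat)                          # well below the true 0.25
```

Adding an omitted mediator-to-outcome confounder to the same simulation would push the estimate in the opposite direction, which is exactly the interplay the article examines.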
Kuramoto, S. Janet; Stuart, Elizabeth A.
2013-01-01
Despite that randomization is the gold standard for estimating causal relationships, many questions in prevention science are left to be answered through non-experimental studies often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscores sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
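One widely used single-number sensitivity summary, the E-value of VanderWeele and Ding, conveys the flavor of such analyses. It is offered here purely as an illustration; the paper reviews seven techniques and this one is not necessarily among them:

```python
import math

def e_value(rr):
    """E-value (VanderWeele & Ding): the minimum strength of association,
    on the risk-ratio scale, that an unmeasured confounder would need
    with both exposure and outcome to fully explain away an observed
    risk ratio rr."""
    rr = max(rr, 1.0 / rr)        # protective estimates are inverted first
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(2.0))               # 3.414..., i.e. a fairly strong confounder
```

A large E-value relative to plausible confounder strengths supports the kind of robustness conclusion the maternal-suicide example reaches.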
A method to forecast quantitative variables relating to nuclear public acceptance
International Nuclear Information System (INIS)
Ohnishi, T.
1992-01-01
A methodology is proposed for forecasting the future trend of quantitative variables profoundly related to the public acceptance (PA) of nuclear energy. The social environment influencing PA is first modeled by breaking it down into a finite number of fundamental elements, and the interactive formulae between the quantitative variables that characterize each element are then determined using the actual values of the variables in the past. Inputting the estimated values of exogenous variables into these formulae, the forecast values of the endogenous variables can finally be obtained. Using this method, the problem of nuclear PA in Japan is treated as an example, where the context is considered to comprise a public sector together with the general social environment and socio-psychology. The public sector is broken down into three elements: the general public, the inhabitants living around nuclear facilities, and the activists of anti-nuclear movements, whereas the social environment and socio-psychological factors are broken down into several elements, such as news media and psychological factors. Twenty-seven endogenous and seven exogenous variables are introduced to quantify these elements. After quantitatively formulating the interactions between them and extrapolating the exogenous variables into the future, estimates are made of the growth or attenuation of the endogenous variables, such as the pro- and anti-nuclear fractions in public opinion polls and the frequency of occurrence of anti-nuclear movements. (author)
Resistance Torque Based Variable Duty-Cycle Control Method for a Stage II Compressor
Zhong, Meipeng; Zheng, Shuiying
2017-07-01
The resistance torque of a piston stage II compressor fluctuates strongly over each rotational period, which can negatively affect the working performance of the compressor. To restrain these fluctuations, a variable duty-cycle control method based on the resistance torque is proposed. A dynamic model of a stage II compressor is set up, and the resistance torque and other characteristic parameters are acquired as the control targets. A variable duty-cycle control method is then applied to track the resistance torque, thereby improving the working performance of the compressor. Simulation results show that the compressor, driven by the proposed method, requires lower current, while the rotating speed and the output torque remain comparable to those of traditional variable-frequency control methods. A variable duty-cycle control system was developed, and the experimental results prove that the proposed method can reduce the specific power, input power, and working noise of the compressor by 0.97 kW·m⁻³·min⁻¹, 0.09 kW and 3.10 dB, respectively, under the same conditions of a discharge pressure of 2.00 MPa and a discharge volume of 0.095 m³/min. The proposed variable duty-cycle control method tracks the resistance torque dynamically and improves the working performance of a stage II compressor. It can be applied to other compressors and can provide theoretical guidance for them.
A survey of variable selection methods in two Chinese epidemiology journals
Directory of Open Access Journals (Sweden)
Lynn Henry S
2010-09-01
Background: Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods: Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified, whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results: Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions: The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.
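For concreteness, the kind of stepwise routine the survey flags as problematic can be sketched as backward elimination on t-statistics (thresholds and data are illustrative; this shows the practice being criticized, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # only x0, x3 matter

def backward_eliminate(X, y, t_crit=2.0):
    """Backward stepwise elimination by t-statistic: repeatedly drop the
    weakest predictor until every |t| exceeds t_crit (roughly the 5%
    level). The repeated testing is exactly what makes the procedure's
    final p-values untrustworthy."""
    cols = list(range(X.shape[1]))
    while cols:
        A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        sigma2 = resid @ resid / (len(y) - A.shape[1])
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
        t = np.abs(beta[1:] / se[1:])        # skip the intercept
        if t.min() >= t_crit:
            break
        cols.pop(int(t.argmin()))            # drop the weakest predictor
    return cols

print(backward_eliminate(X, y))              # typically keeps columns 0 and 3
```

With strong true signals the routine looks reliable, which helps explain its persistence; its failures (inflated significance, unstable selections) appear with weaker signals and correlated predictors.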
Confounding and exposure measurement error in air pollution epidemiology.
Sheppard, Lianne; Burnett, Richard T; Szpiro, Adam A; Kim, Sun-Young; Jerrett, Michael; Pope, C Arden; Brunekreef, Bert
2012-06-01
Studies in air pollution epidemiology may suffer from some specific forms of confounding and exposure measurement error. This contribution discusses these, mostly in the framework of cohort studies. Evaluation of potential confounding is critical in studies of the health effects of air pollution. The association between long-term exposure to ambient air pollution and mortality has been investigated using cohort studies in which subjects are followed over time with respect to their vital status. In such studies, control for individual-level confounders such as smoking is important, as is control for area-level confounders such as neighborhood socio-economic status. In addition, there may be spatial dependencies in the survival data that need to be addressed. These issues are illustrated using the American Cancer Society Cancer Prevention II cohort. Exposure measurement error is a challenge in epidemiology because inference about health effects can be incorrect when the measured or predicted exposure used in the analysis is different from the underlying true exposure. Air pollution epidemiology rarely if ever uses personal measurements of exposure for reasons of cost and feasibility. Exposure measurement error in air pollution epidemiology comes in various dominant forms, which are different for time-series and cohort studies. The challenges are reviewed and a number of suggested solutions are discussed for both study domains.
Investigating the Idoho oil spillage into Lagos: Some confounding ...
African Journals Online (AJOL)
... caused by these spillages must consider the socio-economic characteristics of the population as this may reveal a true picture of the event and facilitate proper interpretation of the result. Keywords: Toxicity, Idoho Oil Spillage, Confounders, Socio economic factors. Nigerian Journal of Health and Biomedical Sciences Vol.
Quantification and variability in colonic volume with a novel magnetic resonance imaging method
DEFF Research Database (Denmark)
Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke
2015-01-01
Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous over conventional methods in its low variability, high spatial resolution, and its absence of contrast-enhancing agents and irradiation. Hence, the method is suitable...
Directory of Open Access Journals (Sweden)
Thomas Frisell
BACKGROUND: Research has consistently found lower cognitive ability to be related to increased risk for violent and other antisocial behaviour. Since this association has remained when adjusting for childhood socioeconomic position, ethnicity, and parental characteristics, it is often assumed to be causal, potentially mediated through school adjustment problems and conduct disorder. Socioeconomic differences are notoriously difficult to quantify, however, and it is possible that the association between intelligence and delinquency suffers substantial residual confounding. METHODS: We linked longitudinal Swedish total population registers to study the association of general cognitive ability (intelligence) at age 18 (the Conscript Register, 1980-1993) with the incidence proportion of violent criminal convictions (the Crime Register, 1973-2009) among all men born in Sweden 1961-1975 (N = 700,514). Using probit regression, we controlled for measured childhood socioeconomic variables, and further employed sibling comparisons (family pedigree data from the Multi-Generation Register) to adjust for shared familial characteristics. RESULTS: Cognitive ability in early adulthood was inversely associated with having been convicted of a violent crime (β = -0.19, 95% CI: -0.19; -0.18); the association remained when adjusting for childhood socioeconomic factors (β = -0.18, 95% CI: -0.18; -0.17). The association was somewhat lower within half-brothers raised apart (β = -0.16, 95% CI: -0.18; -0.14), within half-brothers raised together (β = -0.13, 95% CI: -0.15; -0.11), and lower still in full-brother pairs (β = -0.10, 95% CI: -0.11; -0.09). The attenuation among half-brothers raised together and full brothers was too strong to be attributed solely to attenuation from measurement error. DISCUSSION: Our results suggest that the association between general cognitive ability and violent criminality is confounded partly by factors shared by...
Handling stress may confound murine gut microbiota studies
Directory of Open Access Journals (Sweden)
Cary R. Allen-Blevins
2017-01-01
Background: Accumulating evidence indicates interactions between human milk composition, particularly sugars (human milk oligosaccharides, or HMO), the gut microbiota of human infants, and behavioral effects. Some HMO secreted in human milk cannot be endogenously digested by the human infant but can be metabolized by certain species of gut microbiota, including Bifidobacterium longum subsp. infantis (B. infantis), a species sensitive to host stress (Bailey & Coe, 2004). Exposure to gut bacteria like B. infantis during critical neurodevelopmental windows in early life appears to have behavioral consequences; however, environmental, physical, and social stress during this period can also have behavioral and microbial consequences. While rodent models are a useful method for determining causal relationships between HMO, gut microbiota, and behavior, murine studies of gut microbiota usually employ oral gavage, a technique stressful to the mouse. Our aim was to develop a less-invasive technique for HMO administration to remove the potential confound of gavage stress. Under the hypothesis that stress affects gut microbiota, particularly B. infantis, we predicted the pups receiving a prebiotic solution in a less-invasive manner would have the highest amount of Bifidobacteria in their gut. Methods: This study was designed to test two methods, active and passive, of solution administration to mice and the effects on their gut microbiome. Neonatal C57BL/6J mice housed in a specific-pathogen-free facility received increasing doses of fructooligosaccharide (FOS) solution or deionized, distilled water. Gastrointestinal (GI) tracts were collected from five dams, six sires, and 41 pups over four time points. Seven fecal pellets from unhandled pups and two pellets from unhandled dams were also collected. Quantitative real-time polymerase chain reaction (qRT-PCR) was used to quantify and compare the amount of Bifidobacterium, Bacteroides, Bacteroidetes, and...
A sizing method for stand-alone PV installations with variable demand
Energy Technology Data Exchange (ETDEWEB)
Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica Para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada, Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)
2008-05-15
The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method, given that the characterization will solely depend on the distribution function of the daily global irradiation on the tilted surface, H_gβi. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter (σ_LLP) as well as two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of system failures (σ_f). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)
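A crude Monte-Carlo illustration of the LLP concept for a toy SAPV follows; the lognormal daily irradiation and every numeric parameter are assumptions for illustration, not the paper's statistical characterization:

```python
import numpy as np

def loss_of_load_probability(pv_area_m2, batt_kwh, n_days=6000, seed=3):
    """Monte-Carlo LLP estimate for a toy stand-alone PV system.

    Daily irradiation is drawn from a lognormal distribution as a crude
    stand-in for the daily-irradiation distribution functions the method
    relies on; demand cycles through twelve monthly values. Panel
    efficiency (0.15) and every other number are illustrative."""
    rng = np.random.default_rng(seed)
    monthly_demand = np.array([6.0, 6, 5, 5, 4, 4, 4, 4, 5, 5, 6, 6])  # kWh/day
    demand = monthly_demand[np.arange(n_days) % 12]
    irradiation = rng.lognormal(mean=1.3, sigma=0.4, size=n_days)      # kWh/m2/day
    soc, failures = batt_kwh / 2.0, 0
    for e_in, e_out in zip(pv_area_m2 * 0.15 * irradiation, demand):
        soc = min(soc + e_in - e_out, batt_kwh)   # charge, capped at capacity
        if soc < 0.0:                             # demand unmet -> loss of load
            failures += 1
            soc = 0.0
    return failures / n_days

print(loss_of_load_probability(10.0, 5.0))    # larger battery gives lower LLP:
print(loss_of_load_probability(10.0, 20.0))
```

Sweeping panel area and battery capacity over such a simulation traces out the iso-LLP sizing curves that the analytical method replaces with closed-form expressions.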
Wegner, Franz
2016-01-01
This text presents the mathematical concepts of Grassmann variables and the method of supersymmetry to a broad audience of physicists interested in applying these tools to disordered and critical systems, as well as related topics in statistical physics. Based on many courses and seminars held by the author, one of the pioneers in this field, the reader is given a systematic and tutorial introduction to the subject matter. The algebra and analysis of Grassmann variables is presented in part I. The mathematics of these variables is applied to a random matrix model, path integrals for fermions, dimer models and the Ising model in two dimensions. Supermathematics - the use of commuting and anticommuting variables on an equal footing - is the subject of part II. The properties of supervectors and supermatrices, which contain both commuting and Grassmann components, are treated in great detail, including the derivation of integral theorems. In part III, supersymmetric physical models are considered. While supersym...
KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method
International Nuclear Information System (INIS)
Westley, G.W.
1975-01-01
1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user
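The variable metric idea, refining an approximate inverse Hessian at every iteration, can be shown on an unconstrained 2-D quadratic using the standard BFGS update. KEELE additionally handles linear equality and inequality constraints, which this sketch omits:

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x with a variable metric method:
# H approximates the inverse Hessian and is refined by the BFGS update.
# (KEELE additionally projects onto linear constraints, omitted here.)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x = np.zeros(2)
H = np.eye(2)                           # initial metric: identity
for _ in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        break
    d = -H @ g                          # search direction
    alpha = -(g @ d) / (d @ A @ d)      # exact line search (f is quadratic)
    s = alpha * d
    x_new = x + s
    yv = grad(x_new) - g
    rho = 1.0 / (yv @ s)
    I = np.eye(2)
    H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
        + rho * np.outer(s, s)
    x = x_new
print(x, np.linalg.solve(A, b))         # both ~ [0.2, 0.4]
```

On a quadratic with exact line search this converges in at most n iterations, with H approaching the true inverse Hessian, which is the behavior the variable metric procedure exploits on general smooth functions.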
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models built with the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, prediction performance comparable to that of GA and SPA.
The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy
International Nuclear Information System (INIS)
Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A.; Rijn, Rick R. van; Henneman, Onno D.F.; Heijmans, Jarom; Reitsma, Johannes B.
2006-01-01
The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported of a new radiographic scoring system, ''the Leech method'', for assessing faecal loading. To assess intra- and interobserver variability and determine diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered as suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found between two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the other scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)
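The area under the ROC curve reported above has a direct rank interpretation (the Mann-Whitney statistic); here is a minimal sketch with made-up Leech-style scores:

```python
def auc_mann_whitney(cases, controls):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen case scores higher than a
    randomly chosen control, counting ties as one half."""
    wins = 0.0
    for c in cases:
        for k in controls:
            wins += 1.0 if c > k else (0.5 if c == k else 0.0)
    return wins / (len(cases) * len(controls))

# made-up Leech-style faecal-loading scores (higher = more loaded)
print(auc_mann_whitney(cases=[9, 12, 10, 8], controls=[7, 9, 6, 8]))  # 0.875
```

On this interpretation the study's AUC of 0.68 means a randomly chosen constipated child outscored a randomly chosen control only 68% of the time, which is why the authors judge the discrimination poor.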
A method based on a separation of variables in magnetohydrodynamics (MHD)
International Nuclear Information System (INIS)
Cessenat, M.; Genta, P.
1996-01-01
We use a method based on a separation of variables for solving a system of first-order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. We thus generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)
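Schematically, the separated ansatz described above takes the following generic form; this is a reconstruction for orientation only, since the abstract does not reproduce the actual 'metric separation equation', so λ and the functional forms are placeholders:

```latex
% Generic separated form: dependence on \varphi_1 and \tau factorizes,
% the remaining dependence being carried by \varphi_2, \varphi_3.
u(\varphi_1,\varphi_2,\varphi_3,\tau)
  \;=\; F(\varphi_1)\, G(\tau)\, W(\varphi_2,\varphi_3),
\qquad
\frac{1}{G}\frac{\mathrm{d}G}{\mathrm{d}\tau} \;=\; \lambda
  \;=\; -\frac{1}{F}\frac{\mathrm{d}F}{\mathrm{d}\varphi_1}.
% The sign of the separation constant \lambda (growing, stationary, or
% decaying G) is what a relation of the 'metric separation' type governs.
```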
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
Approaches for developing a sizing method for stand-alone PV systems with variable demand
Energy Technology Data Exchange (ETDEWEB)
Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada. Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)
2008-05-15
Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic system (SAPV). Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between variables of interest to the sizing problem, are one of these approaches. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method that has been designed for systems with variable monthly energy demands. Following previous approaches, the method proposed is based on the concept of loss of load probability (LLP), a parameter that is used to characterize system design. The method includes information on the standard deviation of loss of load probability (σ_LLP) and on two new parameters: annual number of system failures (f) and standard deviation of annual number of failures (σ_f). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP < 10^-2. We demonstrate that reliability depends not only on the sizing variables and the distribution function of solar radiation, but also on the minimum total solar radiation reaching the receiver surface at a given location with a given monthly average clearness index.
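The central sizing quantity, loss of load probability, can be illustrated with a minimal daily energy-balance loop; the lossless battery model and all numerical values below are illustrative assumptions, not the authors' method.

```python
def loss_of_load_probability(generation, demand, capacity):
    """LLP = total unmet demand / total demand over the simulated period.
    `generation` and `demand` are per-day energies, `capacity` is the battery
    size, all in the same unit (e.g. kWh). Simplified: no conversion losses."""
    soc = capacity        # state of charge; start with a full battery
    deficit = 0.0
    for g, d in zip(generation, demand):
        soc = min(capacity, soc + g - d)
        if soc < 0.0:     # battery exhausted: record the unserved energy
            deficit += -soc
            soc = 0.0
    return deficit / sum(demand)

# Hypothetical daily energies (kWh): three sunny days, three overcast days.
gen = [5.0, 5.0, 5.0, 1.0, 1.0, 1.0]
dem = [3.0] * 6
llp = loss_of_load_probability(gen, dem, capacity=2.0)
```

Running such a balance over many synthetic years also yields the spread statistics (σ_LLP, f, σ_f) the abstract refers to.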
Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors
International Nuclear Information System (INIS)
Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk
2010-01-01
A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
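The regression step can be sketched with synthetic data; the generating model, coefficient values and noise level below are assumptions for illustration only, not the study's fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic surrogates for the three predictors named in the abstract:
pat = rng.normal(0.25, 0.03, n)   # pulse arrival time (s), hypothetical values
hr = rng.normal(70.0, 10.0, n)    # heart rate (bpm)
tdb = rng.normal(0.30, 0.05, n)   # dicrotic-notch timing parameter "TDB" (s)
# Hypothetical generating model: systolic BP falls with PAT, rises with HR.
sbp = 160.0 - 150.0 * pat + 0.4 * hr - 20.0 * tdb + rng.normal(0.0, 2.0, n)

X = np.column_stack([np.ones(n), pat, hr, tdb])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)    # multiple-regression fit
predicted = X @ beta
r = np.corrcoef(predicted, sbp)[0, 1]             # correlation of fit vs "measured" BP
```

Comparing `r` against the correlation from a PAT-only design matrix reproduces the abstract's point: the two confounders raise the achievable correlation.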
A fast collocation method for a variable-coefficient nonlocal diffusion model
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
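The O(N log N) per-iteration cost in such schemes typically comes from exploiting Toeplitz-like structure in the stiffness matrix. A generic sketch of the underlying trick (circulant embedding plus FFT), not the authors' specific scheme:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix T (first column c, first row r, c[0] == r[0])
    by x in O(N log N): embed T in a 2N-point circulant, whose matvec is a
    pointwise product in Fourier space."""
    n = len(x)
    # First column of the circulant embedding: [c, filler, r[n-1], ..., r[1]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense O(N^2) product.
rng = np.random.default_rng(1)
n = 64
c = rng.normal(size=n)
r = np.concatenate([[c[0]], rng.normal(size=n - 1)])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
x = rng.normal(size=n)
err = np.max(np.abs(toeplitz_matvec(c, r, x) - T @ x))
```

Inside a Krylov iteration, each matvec then costs O(N log N) and only the generating column/row needs storing, giving the O(N) memory figure.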
Directory of Open Access Journals (Sweden)
Jovković Biljana
2012-12-01
The aim of this paper is to present the procedure of audit sampling using variable sampling methods for conducting tests of income from insurance premiums in the insurance company 'Takovo'. Since incomes from vehicle insurance (VI) and third-party vehicle insurance (TPVI) premiums have the dominant share of the insurance company's income, the application of this method is shown in the audit examination of these incomes. To investigate the applicability of these methods in testing the income of other insurance companies, we also implement the method of variable sampling in the audit testing of premium income from the three leading insurance companies in Serbia: 'Dunav', 'DDOR' and 'Delta Generali' Insurance.
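The textbook form of such a test, classical mean-per-unit variable sampling, can be sketched as follows; the sample values and population size are hypothetical, not the companies' figures.

```python
import math
import statistics

def mean_per_unit_estimate(sample_values, population_size, z=1.96):
    """Mean-per-unit variable sampling: project the sample mean onto the
    population and attach a ~95% precision interval. A textbook sketch,
    not the exact procedure applied in the cited audits."""
    n = len(sample_values)
    mean = statistics.mean(sample_values)
    sd = statistics.stdev(sample_values)
    point = population_size * mean
    precision = z * population_size * sd / math.sqrt(n)
    return point, point - precision, point + precision

# Hypothetical premium amounts sampled from a population of 10,000 policies.
sample = [120.0, 95.0, 130.0, 110.0, 105.0, 125.0, 90.0, 115.0, 100.0, 110.0]
point, low, high = mean_per_unit_estimate(sample, population_size=10_000)
```

The auditor then compares the recorded premium income against the interval `[low, high]` to decide whether the balance is fairly stated at the chosen confidence level.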
Bollen, Kenneth A.
2007-01-01
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal…
A method to standardize gait and balance variables for gait velocity.
Iersel, M.B. van; Olde Rikkert, M.G.M.; Borm, G.F.
2007-01-01
Many gait and balance variables depend on gait velocity, which seriously hinders the interpretation of gait and balance data derived from walks at different velocities. However, as far as we know there is no widely accepted method to correct for effects of gait velocity on other gait and balance
Directory of Open Access Journals (Sweden)
Hongwu Zhang
2011-08-01
In this article, we study a Cauchy problem for an elliptic equation with variable coefficients. It is well known that such a problem is severely ill-posed; i.e., the solution does not depend continuously on the Cauchy data. We propose a modified quasi-boundary value regularization method to solve it. Convergence estimates are established under two a priori assumptions on the exact solution. A numerical example is given to illustrate our proposed method.
DEFF Research Database (Denmark)
Garcia-Aymerich, J.; Lange, P.; Serra, I.
2008-01-01
this type of confounding. We sought to assess the presence of time-dependent confounding in the association between physical activity and COPD development and course by comparing risk estimates between standard statistical methods and MSMs. METHODS: By using the population-based cohort Copenhagen City Heart...
Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn
2016-11-01
It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
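In a linear setting, the index of moderated mediation is the product of the exposure-by-moderator coefficient in the mediator model and the mediator coefficient in the outcome model. A sketch on synthetic data, with all generating values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.binomial(1, 0.5, n).astype(float)                   # randomized exposure
w = rng.normal(size=n)                                      # moderator
m = 0.5 * x + 0.3 * w + 0.4 * x * w + rng.normal(size=n)    # a-path moderated by w
y = 0.2 * x + 0.6 * m + rng.normal(size=n)                  # b-path (true index 0.24)

def ols(design, response):
    return np.linalg.lstsq(design, response, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, x, w, x * w]), m)    # mediator model
b = ols(np.column_stack([ones, x, m]), y)           # outcome model
index_mod_med = a[3] * b[2]   # (X*W coefficient on a-path) * (M coefficient on b-path)
```

The paper's point is about when this product remains unbiased despite unmeasured common causes; the sketch only shows what the index itself is.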
Prenatal Paracetamol Exposure and Wheezing in Childhood: Causation or Confounding?
Directory of Open Access Journals (Sweden)
Enrica Migliore
Several studies have reported an increased risk of wheezing in the children of mothers who used paracetamol during pregnancy. We evaluated to what extent this association is explained by confounding. We investigated the association between maternal paracetamol use in the first and third trimester of pregnancy and ever wheezing or recurrent wheezing/asthma in infants in the NINFEA cohort study. Risk ratios (RR) and 95% confidence intervals (CI) were estimated after adjustment for confounders, including maternal infections and antibiotic use during pregnancy. The prevalence of maternal paracetamol use was 30.6% during the first and 36.7% during the third trimester of pregnancy. The prevalence of ever wheezing and recurrent wheezing/asthma was 16.9% and 5.6%, respectively. After full adjustment, the RR for ever wheezing decreased from 1.25 [1.07-1.47] to 1.10 [0.94-1.30] in the first, and from 1.26 [1.08-1.47] to 1.10 [0.93-1.29] in the third trimester. A similar pattern was observed for recurrent wheezing/asthma. Duration of maternal paracetamol use was not associated with either outcome. Further analyses of paracetamol use for three non-infectious disorders (sciatica, migraine, and headache) revealed no increased risk of wheezing in children. The association between maternal paracetamol use during pregnancy and infant wheezing is mainly, if not completely, explained by confounding.
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis
2016-09-08
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.
Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise
2012-04-01
Fabric's smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stage of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute the epipole by using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure, and then we show how variable homography combined with epipolar geometry can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method of quality control for important industrial fabrics.
A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method
Directory of Open Access Journals (Sweden)
Aiqian Zhang
2012-05-01
A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from a large number of molecular descriptors.
A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System
Directory of Open Access Journals (Sweden)
Xu Hui-xiang
2016-01-01
To overcome the disadvantages of traditional variable step-size MPPT algorithms, we propose power-based variable step-size tracking that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy mitigates the voltage fluctuation caused by the perturb-and-observe method while introducing the advantages of the constant-voltage method and simplifying the circuit topology. Based on a theoretical derivation, the output power of the photovoltaic modules is controlled by changing the duty cycle of the main switch. The scheme achieves stable maximum power output, effectively reduces energy loss due to power fluctuation, and improves inversion efficiency [3,4]. Experimental results from a prototype confirm the theoretical derivation and the MPPT tracking curve.
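A generic power-based variable step-size P&O loop, with a toy power-voltage curve standing in for the module, can be sketched as follows; the gain `k`, step limits and curve shape are illustrative assumptions, not the paper's design.

```python
def pv_power(v):
    # Toy power-voltage curve with a single maximum power point at 17 V.
    return max(0.0, 100.0 - 0.5 * (v - 17.0) ** 2)

def track_mpp(v0, k=0.2, v_min=0.0, v_max=30.0, steps=200):
    """Perturb-and-observe with a power-based variable step: the step size is
    proportional to |dP/dV|, so the tracker moves fast far from the maximum
    power point and settles (small steps, small oscillation) near it."""
    v_prev, p_prev = v0, pv_power(v0)
    v = v0 + 0.5                       # initial perturbation
    for _ in range(steps):
        p = pv_power(v)
        dv, dp = v - v_prev, p - p_prev
        slope = dp / dv if dv else 0.0
        step = min(1.0, max(0.01, k * abs(slope)))   # variable step from |dP/dV|
        v_prev, p_prev = v, p
        v = v + step if slope > 0 else v - step
        v = min(v_max, max(v_min, v))  # respect the operating-voltage window
    return v

v_mpp = track_mpp(10.0)
```

In a converter the perturbed quantity would be the duty cycle of the main switch rather than the voltage directly, as the abstract describes.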
International Nuclear Information System (INIS)
Xu, Yuenong; Smooke, M.D.
1993-01-01
In this paper we present a primitive variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame sheet approximation. The numerical solution agrees well with experimental data in the major chemical species. The comparison of three sets of numerical results indicates that the stream function-vorticity solution using the approximate boundary conditions reported in the previous calculations predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive variable and stream function-vorticity solutions. The primitive variable Newton's method converges much faster than the other two methods. Because much less computer memory is required for the block-line tridiagonal solver compared to a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation.
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
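One of the reviewed families, look-up-table (LUT) inversion of a radiative transfer model, can be sketched with a toy reflectance model; the functional form and all parameter values are purely illustrative stand-ins, not a real RTM.

```python
import math

def toy_rtm(lai, r_soil=0.25, r_inf=0.05, k=0.6):
    """Toy 'radiative transfer model': red reflectance decays from the bare-soil
    value toward an asymptote as leaf area index (LAI) grows. Illustrative
    functional form only."""
    return r_inf + (r_soil - r_inf) * math.exp(-k * lai)

# Forward step: tabulate simulated reflectance over the variable of interest.
lut = [(i / 10.0, toy_rtm(i / 10.0)) for i in range(0, 81)]   # LAI from 0 to 8

def invert_lut(observed_reflectance):
    """Inversion step: return the LAI whose simulated reflectance is closest
    to the observation (a minimal cost function; real chains use richer ones)."""
    return min(lut, key=lambda entry: abs(entry[1] - observed_reflectance))[0]

lai_hat = invert_lut(toy_rtm(3.0))   # recover the LAI behind a simulated observation
```

Hybrid methods, in the review's terminology, replace the nearest-neighbour lookup with a machine learning regressor trained on the same RTM simulations.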
Zwinderman, A. H.; Cleophas, T. J.
2005-01-01
BACKGROUND: Clinical investigators, although they are generally familiar with testing differences between averages, have difficulty testing differences between variabilities. OBJECTIVE: To give examples of situations where variability is more relevant than averages and to describe simple methods for
Selecting minimum dataset soil variables using PLSR as a regressive multivariate method
Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.
2017-04-01
Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
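Variable selection with PLSR is commonly based on Variable Importance in Projection (VIP) scores, where variables with VIP > 1 are treated as influential. A minimal PLS1 (NIPALS) implementation with the standard VIP formula, run on synthetic stand-in data rather than the soil dataset:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a single response: returns the x-weights W,
    x-scores T and y-loadings q needed for VIP."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    n, p = X.shape
    W = np.zeros((p, n_components))
    T = np.zeros((n, n_components))
    q = np.zeros(n_components)
    for a in range(n_components):
        w = X.T @ y
        w = w / np.linalg.norm(w)
        t = X @ w
        p_load = X.T @ t / (t @ t)
        q[a] = (y @ t) / (t @ t)
        X = X - np.outer(t, p_load)   # deflate X and y before the next component
        y = y - q[a] * t
        W[:, a], T[:, a] = w, t
    return W, T, q

def vip_scores(W, T, q):
    """VIP_j = sqrt(p * sum_a ssy_a * (w_ja/||w_a||)^2 / sum_a ssy_a), where
    ssy_a is the y-variance explained by component a."""
    p, n_comp = W.shape
    ssy = np.array([q[a] ** 2 * (T[:, a] @ T[:, a]) for a in range(n_comp)])
    wn = (W ** 2) / (W ** 2).sum(axis=0)
    return np.sqrt(p * (wn * ssy).sum(axis=1) / ssy.sum())

# Synthetic stand-in: 8 "soil variables", only the first two drive "yield".
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=60)
W, T, q = pls1_nipals(X, y, 2)
vip = vip_scores(W, T, q)
```

On such data the two informative variables get VIP well above 1 while the noise variables fall below it, which is the screening logic behind a PLSR-based minimum dataset.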
Read margin analysis of crossbar arrays using the cell-variability-aware simulation method
Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon
2018-02-01
This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array, because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
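A stripped-down variability-aware Monte Carlo, covering only cell-to-cell resistance variation and ignoring array size and sneak paths (which the paper's simulator does model), can be sketched as follows; all device values are assumptions.

```python
import math
import random
import statistics

def read_margin_samples(n_trials=5000, r_lrs=1e4, r_hrs=1e6, sigma=0.15,
                        v_read=0.3, seed=0):
    """Monte Carlo over cell variability: low- and high-resistance states are
    lognormal around nominal values, and the read margin is the normalized
    read-current difference between the two states."""
    rng = random.Random(seed)
    margins = []
    for _ in range(n_trials):
        rl = r_lrs * math.exp(rng.gauss(0.0, sigma))
        rh = r_hrs * math.exp(rng.gauss(0.0, sigma))
        i_l, i_h = v_read / rl, v_read / rh
        margins.append((i_l - i_h) / i_l)
    return margins

margins = read_margin_samples()
mean_margin = statistics.mean(margins)
fail_rate = sum(m < 0.9 for m in margins) / len(margins)   # margin-threshold yield
```

Extending this to a full crossbar means replacing the single-cell divider with the word-line/bit-line network, which is exactly where circuit simulators become slow and the paper's dedicated simulator pays off.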
Biological variables for the site survey of surface ecosystems - existing data and survey methods
International Nuclear Information System (INIS)
Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt
2000-06-01
In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is
International Nuclear Information System (INIS)
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, functional ones being dependent. The second feature is that the probability distribution of functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables to both model the dependency between variables and their link to another variable, called co-variate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables to simultaneously visualize the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or meta model, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the meta model. Finally, a new approximation approach for expensive codes with functional outputs has been
Propulsion and launching analysis of variable-mass rockets by analytical methods
Directory of Open Access Journals (Sweden)
D.D. Ganji
2013-09-01
In this study, applications of several analytical methods to the nonlinear equation governing the launch of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM), and least square method (LSM) were applied, and their results are compared with a numerical solution. Excellent agreement between the analytical and numerical results is observed, which reveals that the analytical methods are effective and convenient. A parametric study is also performed covering the effects of exhaust velocity (Ce), fuel burn rate (BR), and cylindrical rocket diameter (d) on the motion of a sample rocket, with contours plotted to show the sensitivity of these parameters. The main results indicate that rocket velocity and altitude increase with increasing Ce and BR and decrease with increasing rocket diameter and drag coefficient.
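The variable-mass launch dynamics summarized above can be sketched numerically. Below is a minimal forward-Euler integration of the vertical motion, m dv/dt = Ce*BR - m*g - 0.5*rho*Cd*A*v*|v|; all parameter values (Ce, BR, diameter, drag coefficient, initial mass) are illustrative assumptions, not values from the paper:

```python
import math

def simulate_rocket(m0=12.0, burn_rate=0.1, ce=2000.0, d=0.1,
                    cd=0.3, rho=1.2, g=9.81, dt=0.01):
    """Forward-Euler integration of vertical variable-mass rocket motion:
    m dv/dt = Ce*BR - m*g - 0.5*rho*Cd*A*v*|v|.
    All parameter values are illustrative, not taken from the paper."""
    area = math.pi * d ** 2 / 4.0          # cylindrical rocket cross section
    m, v, h = m0, 0.0, 0.0
    while m > 0.2 * m0:                    # integrate until near burnout
        thrust = ce * burn_rate            # exhaust velocity * burn rate
        drag = 0.5 * rho * cd * area * v * abs(v)
        a = (thrust - drag) / m - g
        v += a * dt
        h += v * dt
        m -= burn_rate * dt
    return v, h                            # burnout velocity and altitude
```

Raising `ce` or `burn_rate` increases burnout velocity and altitude, while a larger diameter `d` (hence drag area) lowers them, matching the parametric trends reported.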
A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results
Larsen, Curtis E.; Irvine, Tom
2013-01-01
A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
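The spectral quantities underlying the narrow-band (Rayleigh) approximation reviewed above can be computed directly from a PSD. The sketch below uses one common form of the Rayleigh damage-rate expression for an S-N curve N = A*S^(-m) in stress amplitude; the flat PSD and the S-N constants are illustrative assumptions, and the reviewed corrections (Wirsching-Light, Dirlik, and so on) are not reproduced here:

```python
import numpy as np
from math import gamma

def spectral_moment(freq, psd, n):
    """n-th spectral moment m_n = integral of f**n * G(f) df (trapezoid rule)."""
    y = freq ** n * psd
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freq)))

def rayleigh_damage_rate(freq, psd, m_slope, A):
    """Narrow-band (Rayleigh) expected damage rate for an S-N curve
    N = A * S**(-m) written in stress amplitude S:
        E[D]/T = (nu0 / A) * (sqrt(2*m0))**m * Gamma(1 + m/2),
    where nu0 = sqrt(m2/m0) is the expected zero up-crossing rate."""
    m0 = spectral_moment(freq, psd, 0)
    m2 = spectral_moment(freq, psd, 2)
    nu0 = np.sqrt(m2 / m0)
    return nu0 / A * (np.sqrt(2.0 * m0)) ** m_slope * gamma(1.0 + m_slope / 2.0)
```

For a narrow-band process this rate times the exposure duration gives the Miner damage sum; the broader-band corrections surveyed in the review adjust exactly this baseline.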
The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy
Energy Technology Data Exchange (ETDEWEB)
Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)
2006-01-01
The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiographs twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)
Use of a variable tracer infusion method to determine glucose turnover in humans
International Nuclear Information System (INIS)
Molina, J.M.; Baron, A.D.; Edelman, S.V.; Brechtel, G.; Wallace, P.; Olefsky, J.M.
1990-01-01
The single-compartment pool fraction model, when used with the hyperinsulinemic glucose clamp technique to measure rates of glucose turnover, sometimes underestimates true rates of glucose appearance (Ra), resulting in negative values for hepatic glucose output (HGO). We focused our attention on isotope discrimination and model error as possible explanations for this underestimation. We found no difference in [3-3H]glucose specific activity in samples obtained simultaneously from the femoral artery and vein (2,400 +/- 455 vs. 2,454 +/- 522 dpm/mg) in 6 men during a hyperinsulinemic euglycemic clamp study in which insulin was infused at 40 mU.m-2.min-1 for 3 h; therefore, isotope discrimination did not occur. We compared the ability of a constant (0.6 microCi/min) vs. a variable tracer infusion method (tracer added to the glucose infusate) to measure non-steady-state Ra during hyperinsulinemic clamp studies. Plasma specific activity fell during the constant tracer infusion studies but did not change from baseline during the variable tracer infusion studies. By maintaining a constant plasma specific activity, the variable tracer infusion method eliminates uncertainty about changes in glucose pool size. This overcomes modeling error and more accurately measures non-steady-state Ra (P < 0.001 by analysis of variance vs. the constant infusion method). In conclusion, underestimation of Ra determined isotopically during hyperinsulinemic clamp studies is largely due to modeling error that can be overcome by use of the variable tracer infusion method. This method allows more accurate determination of Ra and HGO under non-steady-state conditions.
The relationship between glass ceiling and power distance as a cultural variable by a new method
Directory of Open Access Journals (Sweden)
Naide Jahangirov
2015-12-01
The glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life, and culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operated as part of a non-profit private university in Ankara, Turkey. The relationship between the variables was revealed by a new method developed as an addition to the correlation analysis in the survey. The analysis revealed that the female staff perceived the glass ceiling and power distance more intensely than the male staff. In addition, a medium-level relationship was found between power distance and glass ceiling perception among female staff.
Confounding factors in determining causal soil moisture-precipitation feedback
Tuttle, Samuel E.; Salvucci, Guido D.
2017-07-01
Identification of causal links in the land-atmosphere system is important for construction and testing of land surface and general circulation models. However, the land and atmosphere are highly coupled and linked by a vast number of complex, interdependent processes. Statistical methods, such as Granger causality, can help to identify feedbacks from observational data, independent of the different parameterizations of physical processes and spatiotemporal resolution effects that influence feedbacks in models. However, statistical causal identification methods can easily be misapplied, leading to erroneous conclusions about feedback strength and sign. Here, we discuss three factors that must be accounted for in determination of causal soil moisture-precipitation feedback in observations and model output: seasonal and interannual variability, precipitation persistence, and endogeneity. The effect of neglecting these factors is demonstrated in simulated and observational data. The results show that long-timescale variability and precipitation persistence can have a substantial effect on detected soil moisture-precipitation feedback strength, while endogeneity has a smaller effect that is often masked by measurement error and thus is more likely to be an issue when analyzing model data or highly accurate observational data.
Strak, Maciej; Janssen, Nicole; Beelen, Rob; Schmitz, Oliver; Karssenberg, Derek; Houthuijs, Danny; van den Brink, Carolien; Dijst, Martin; Brunekreef, Bert; Hoek, Gerard
2017-07-01
Cohorts based on administrative data have size advantages over individual cohorts in investigating air pollution risks, but often lack in-depth information on individual risk factors related to lifestyle. If there is a correlation between lifestyle and air pollution, omitted lifestyle variables may result in biased air pollution risk estimates. Correlations between lifestyle and air pollution can be induced by socio-economic status affecting both lifestyle and air pollution exposure. Our overall aim was to assess potential confounding by missing lifestyle factors on air pollution mortality risk estimates. The first aim was to assess associations between long-term exposure to several air pollutants and lifestyle factors. The second aim was to assess whether these associations were sensitive to adjustment for individual and area-level socioeconomic status (SES), and whether they differed between subgroups of the population. Using the obtained air pollution-lifestyle associations and indirect adjustment methods, our third aim was to investigate the potential bias due to missing lifestyle information on air pollution mortality risk estimates in administrative cohorts. We used a recent Dutch national health survey of 387,195 adults to investigate the associations of PM10, PM2.5, PM2.5-10, PM2.5 absorbance, OPDTT, OPESR, and NO2 annual average concentrations at the residential address, derived from land use regression models, with individual smoking habits, alcohol consumption, physical activity, and body mass index. We assessed the associations with and without adjustment for neighborhood and individual SES characteristics typically available in administrative data cohorts. We illustrated the effect of including lifestyle information on air pollution mortality risk estimates in administrative cohort studies using a published indirect adjustment method. Current smoking and alcohol consumption were generally positively associated with air pollution. Physical activity
Improved flux calculations for viscous incompressible flow by the variable penalty method
International Nuclear Information System (INIS)
Kheshgi, H.; Luskin, M.
1985-01-01
The Navier-Stokes system for viscous, incompressible flow is considered, in which the continuity equation is replaced by a perturbed continuity equation. The introduction of this approximation allows the pressure variable to be eliminated, yielding a system of equations for the approximate velocity. The penalty approximation is often applied in numerical discretizations since it reduces the size and bandwidth of the system of equations. Attention is given to error estimates, and to two numerical experiments that illustrate the error estimates considered. It is found that the variable penalty method provides an accurate solution over a much wider range of epsilon than the classical penalty method. 8 references
Directory of Open Access Journals (Sweden)
Hongfen Gao
2014-01-01
This paper describes the application of the complex variable meshless manifold method (CVMMM) to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of the complex variable moving least-squares (CVMLS) approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.
A new hydraulic regulation method on district heating system with distributed variable-speed pumps
International Nuclear Information System (INIS)
Wang, Hai; Wang, Haiying; Zhu, Tong
2017-01-01
Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of the power consumption of circulating pumps with frequency inverters. However, hydraulic regulation of the distributed variable-speed-pump configuration is more complicated, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with a distributed variable-speed-pump configuration. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model based on a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations was taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the
Duncan, Niall W.; Northoff, Georg
2013-01-01
Studies of intrinsic brain activity in the resting state have become increasingly common. A productive discussion of what analysis methods are appropriate, of the importance of physiologic correction and of the potential interpretations of results has been ongoing. However, less attention has been paid to factors other than physiologic noise that may confound resting-state experiments. These range from straightforward factors, such as ensuring that participants are all instructed in the same manner, to more obscure participant-related factors, such as body weight. We provide an overview of such potentially confounding factors, along with some suggested approaches for minimizing their impact. A particular theme that emerges from the overview is the range of systematic differences between types of study groups (e.g., between patients and controls) that may influence resting-state study results. PMID:22964258
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
Monitoring daily health condition is necessary for preventing stress syndrome. In this study, a method was proposed for assessing mental and physiological condition, such as work stress or relaxation, using heart rate variability continuously and in real time. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated to assess mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% when performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this method can be considered a real-time assessment method.
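The NEP-based index described above can be sketched as follows: within a 20-beat window of RR intervals, count the local maxima and minima and divide by the number of beats. This is an illustrative reconstruction of the idea, not the authors' implementation:

```python
def nep_ratio(rr_intervals):
    """Ratio of the number of extreme points (local maxima and minima of
    the RR-interval series) to the number of beats in the window.
    Sketch of the index described in the abstract, not the authors' code."""
    n = len(rr_intervals)
    extremes = 0
    for i in range(1, n - 1):
        prev, cur, nxt = rr_intervals[i - 1], rr_intervals[i], rr_intervals[i + 1]
        # an extreme point is a strict local maximum or minimum
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            extremes += 1
    return extremes / n
```

A rapidly oscillating window yields a ratio near 1, while a monotone trend yields 0; recomputing this over the 20 most recent beats at every new beat gives the continuous, per-beat index described.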
Directory of Open Access Journals (Sweden)
Mário Mestria
2013-08-01
The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum-cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking, and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters containing between 4 and 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.
Locating disease genes using Bayesian variable selection with the Haseman-Elston method
Directory of Open Access Journals (Sweden)
He Qimei
2003-12-01
Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it performs a smart search over the entire model space.
Method of nuclear reactor control using a variable temperature load dependent set point
International Nuclear Information System (INIS)
Kelly, J.J.; Rambo, G.E.
1982-01-01
A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon the percent of full-power load demand. A manually actuated "droop mode" of control is provided whereby the reactor coolant temperature is allowed to drop a predetermined amount below the set point temperature, wherein control is switched from the reactor control rods exclusively to feedwater flow.
Directory of Open Access Journals (Sweden)
Mashhood Ahmed Sheikh
2017-08-01
The life course perspective, the risky families model, and stress-and-coping models provide the rationale for assessing the role of smoking as a mediator in the association between childhood adversity and anxious and depressive symptomatology (ADS) in adulthood. However, no previous study has assessed the independent mediating role of smoking in the association between childhood adversity and ADS in adulthood. Moreover, the importance of mediator-response confounding variables has rarely been demonstrated empirically in social and psychiatric epidemiology. The aim of this paper was to (i) assess the mediating role of smoking in adulthood in the association between childhood adversity and ADS in adulthood, and (ii) assess the change in estimates due to different mediator-response confounding factors (education, alcohol intake, and social support). The present analysis used data collected from 1994 to 2008 within the framework of the Tromsø Study (N = 4,530), a representative prospective cohort study of men and women. Seven childhood adversities (low mother's education, low father's education, low financial conditions, exposure to passive smoke, psychological abuse, physical abuse, and substance abuse distress) were used to create a childhood adversity score. Smoking status was measured at a mean age of 54.7 years (Tromsø IV), and ADS in adulthood was measured at a mean age of 61.7 years (Tromsø V). Mediation analysis was used to assess the indirect effect and the proportion of mediated effect (%) of childhood adversity on ADS in adulthood via smoking in adulthood. The test-retest reliability of smoking was good (Kappa: 0.67, 95% CI: 0.63; 0.71) in this sample. Childhood adversity was associated with a 10% increased risk of smoking in adulthood (relative risk: 1.10, 95% CI: 1.03; 1.18), and both childhood adversity and smoking in adulthood were associated with greater levels of ADS in adulthood (p < 0.001). Smoking in adulthood did not significantly
Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis
Directory of Open Access Journals (Sweden)
Ueki Masao
2012-05-01
Background: Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting an overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. The number of SNP-SNP pairs being far larger than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results: We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions: Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
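The screening step of SIS that underlies the approach above can be sketched generically: rank candidate predictors (in the paper these would be dummy-coded SNP-SNP interaction columns) by absolute marginal correlation with the response and keep the top few. This is a minimal sketch of SIS, not the EPISIS implementation:

```python
import numpy as np

def sure_independence_screening(X, y, keep):
    """SIS screening step: rank predictors by absolute marginal correlation
    with the response and keep the `keep` highest-ranked columns.
    Generic sketch; the EPISIS code and its SNP dummy coding are not
    reproduced here."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
    yc = (y - y.mean()) / y.std()
    scores = np.abs(Xc.T @ yc) / len(y)         # |marginal correlation| per column
    return np.argsort(scores)[::-1][:keep]      # indices of retained predictors
```

The retained columns would then enter a lower-dimensional model (e.g. penalized logistic regression) for the final selection step.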
Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.
Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara
2018-02-01
Objective: The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus to test its efficacy in predicting the risk of low-back pain (LBP). Background: A validation study of the VLI method was published in this journal reporting promising results. Although several studies have highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method: Both the VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to the high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM). Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group of 1,028 unexposed subjects. Results: The data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant (p = .6526). Conclusion: When comparing VLI_CMM with the raw VLI, the former failed to prove itself a better predictor of LBP risk. Application: To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, authors of future studies should investigate the potential association between the VLI and other damage variables.
Method of collective variables with reference system for the grand canonical ensemble
International Nuclear Information System (INIS)
Yukhnovskii, I.R.
1989-01-01
A method of collective variables with a special reference system for the grand canonical ensemble is presented. An explicit form is obtained for the basis sixth-degree measure density needed to describe the liquid-gas phase transition. The author presents the fundamentals of the method, which are as follows: (1) the functional form of the partition function in the grand canonical ensemble; (2) derivation of thermodynamic relations for the coefficients of the Jacobian; (3) transition to the problem on an adequate lattice; and (4) derivation of the explicit form of the functional of the partition function.
Application of Muskingum routing method with variable parameters in ungauged basin
Directory of Open Access Journals (Sweden)
Xiao-meng Song
2011-03-01
This paper describes a flood routing method applied in an ungauged basin, utilizing the Muskingum model with variable parameters: the wave travel time K and the discharge weight coefficient x are based on the physical characteristics of the river reach and the flood, including the reach slope, length, width, and flood discharge. Three formulas for estimating the parameters of wide rectangular, triangular, and parabolic cross sections are proposed, taking the influence of the flood on the channel flow routing parameters into account. The HEC-HMS hydrological model and the geospatial hydrologic analysis module HEC-GeoHMS were used to extract channel and watershed characteristics and to divide sub-basins. In addition, the initial and constant-rate method, the user synthetic unit hydrograph method, and the exponential recession method were used to estimate runoff volumes, the direct runoff hydrograph, and the baseflow hydrograph, respectively. The Muskingum model with variable parameters was then applied in the Louzigou Basin in Henan Province, China. Of the results for 24 flood events, the percentages of flood events with a relative error of peak discharge less than 20% and of runoff volume less than 10% are both 100%, and the percentages of flood events with coefficients of determination greater than 0.8 are 83.33%, 91.67%, and 87.5% for rectangular, triangular, and parabolic cross sections, respectively. Therefore, this method is applicable to ungauged basins.
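The Muskingum routing scheme at the core of the method can be sketched as follows. With fixed K and x, the outflow recursion is O2 = C0*I2 + C1*I1 + C2*O1; in the variable-parameter variant, K and x would be recomputed per reach and per flood from the geometric formulas the paper proposes. The values below are illustrative:

```python
def muskingum_route(inflow, K, x, dt):
    """Muskingum channel routing O2 = C0*I2 + C1*I1 + C2*O1 with constant
    K (wave travel time) and x (discharge weight coefficient); the
    variable-parameter variant recomputes K and x from reach geometry.
    Parameter values are illustrative."""
    denom = 2.0 * K * (1.0 - x) + dt
    c0 = (dt - 2.0 * K * x) / denom
    c1 = (dt + 2.0 * K * x) / denom
    c2 = (2.0 * K * (1.0 - x) - dt) / denom   # c0 + c1 + c2 == 1 (mass balance)
    outflow = [inflow[0]]                      # assume initial steady state
    for i in range(1, len(inflow)):
        outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
    return outflow
```

Routing a triangular inflow hydrograph through the reach attenuates and delays the peak, which is the behavior the relative-error statistics above evaluate.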
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations, and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing, and nature of data audits. The absence of a well-defined data quality definition and of a method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in clinical research settings. The scientific databases MEDLINE, Scopus, and Science Direct were searched for English-language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total, 15 publications were included. The nature and extent of the SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%), and collection frequency (6-24 months). Methods for coding, classifying, and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated an approximately 40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods is reported in the published literature, though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Further audit methodology articles are warranted to support the development of a standardised SDV auditing method for monitoring data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.
Modeling the solute transport by particle-tracing method with variable weights
Jiang, J.
2016-12-01
The particle-tracing method is usually used to simulate solute transport in fractured media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit them, which leads to violent oscillation or gives zero concentration values. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies, each simulated independently with weight W/Int(W/C). If the weight W of a tracked particle is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weights scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
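The weight-adjustment rule described above amounts to particle splitting combined with Russian roulette. A minimal sketch, with W and C as the particle weight and local relative concentration:

```python
import random

def adjust_weight(weight, concentration, rng=random):
    """Splitting / Russian-roulette rule from the abstract: a particle with
    weight W > C is split into int(W/C) copies of weight W/int(W/C); one
    with W < C survives with probability W/C and is promoted to weight C.
    Both branches preserve the expected total weight. Illustrative sketch."""
    if weight > concentration:
        n = int(weight / concentration)
        return [weight / n] * n               # split; total weight conserved
    if weight < concentration:
        if rng.random() < weight / concentration:
            return [concentration]            # survives roulette at weight C
        return []                             # terminated
    return [weight]
```

Applying this rule at each visited site keeps the number of simulated particles roughly uniform across high- and low-concentration regions, which is what suppresses the oscillation at rarely visited points.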
Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables
Directory of Open Access Journals (Sweden)
Jéssica Bassi da Silva
2018-03-01
The process of mucoadhesion has been studied using a wide variety of methods, which are influenced by instrumental variables and experimental design, making comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements was investigated. The detachment force method was shown to be reproducible, and it gave different adhesion values depending on whether a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion, suggesting that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of probing adhesive interactions than flow rheology. The TPA method was also shown to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.
Directory of Open Access Journals (Sweden)
Mohammad Hadi Jalali
2018-01-01
An elastic stress analysis of a rotating variable-thickness annular disk made of functionally graded material (FGM) is presented. The elasticity modulus, density, and thickness of the disk are assumed to vary radially according to a power-law function. The radial stress, circumferential stress, and radial deformation of the rotating FG annular disk of variable thickness with clamped-clamped (C-C), clamped-free (C-F), and free-free (F-F) boundary conditions are obtained using the numerical finite difference method, and the effects of the graded index, thickness variation, and rotating speed on the stresses and deformation are evaluated. It is shown that using FG material can decrease the radial stress and increase the radial displacement in a rotating thin disk. It is also demonstrated that increasing the rotating speed strongly increases the stress in the FG annular disk.
Lung lesion doubling times: values and variability based on method of volume determination
International Nuclear Information System (INIS)
Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory
2008-01-01
Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with two or more thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated, including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23 to 2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs obtained using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs.
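The doubling times reported above follow from the standard exponential-growth relation DT = Δt · ln 2 / ln(V2/V1). A minimal sketch, with illustrative variable names:

```python
import math

def doubling_time(v1, v2, dt_days):
    """Volume doubling time (days) from volumes v1, v2 measured dt_days apart,
    assuming exponential growth: V(t) = v1 * 2**(t / DT)."""
    return dt_days * math.log(2) / math.log(v2 / v1)
```

For example, a lesion that grows from 100 to 400 mm³ in 90 days has undergone two doublings, so its DT is 45 days.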
A variable pressure method for characterizing nanoparticle surface charge using pore sensors.
Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff
2012-04-03
A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.
THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON
Directory of Open Access Journals (Sweden)
Gabriel da Silva Oliveira
2015-12-01
This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The values estimated from the samples were compared with the parametric values recorded in the census. In the analysis, the population was taken to be all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. The accuracy of the abundance estimates, however, was satisfactory, supporting application of the method in forest inventories for management plans in the Amazon.
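The density estimator behind the quadrants (point-centred quarter) method can be sketched as follows. The specific estimator used here, the classic Cottam and Curtis form in which mean area per tree equals the squared mean point-to-tree distance, is an assumption, since the abstract does not state which variant was applied:

```python
import statistics

def pcq_density(distances):
    """Trees per unit area from point-centred quarter distances
    (one distance per quarter, pooled over all sampling points),
    using: mean area per tree = (mean distance)**2."""
    dbar = statistics.mean(distances)
    return 1.0 / (dbar * dbar)
```

With distances in metres the result is trees per square metre; multiplying by 10,000 gives trees per hectare.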
Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.
Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F
2015-05-01
Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. 
For the subset of repeatability cases, inter-reconstruction-method
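The repeatability and reproducibility statistics quoted above are summaries of per-nodule proportional differences. A minimal sketch of that computation; the percent-of-reference definition and the sample values are assumptions for illustration:

```python
import statistics

def proportional_differences(ref_vols, test_vols):
    """Per-nodule percent differences of test volumes relative to reference."""
    return [100.0 * (t - r) / r for r, t in zip(ref_vols, test_vols)]

# Hypothetical paired volumes (mm^3) under two conditions
diffs = proportional_differences([100.0, 250.0, 400.0], [104.0, 245.0, 412.0])
mean_diff = statistics.mean(diffs)   # systematic bias
sd_diff = statistics.stdev(diffs)    # measurement variability
```

The mean of the distribution captures bias between conditions, while the SD captures variability, matching how the repeatability and reproducibility results are reported above.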
The Effect of 4-week Difference Training Methods on Some Fitness Variables in Youth Handball Players
Directory of Open Access Journals (Sweden)
Abdolhossein a Parnow
2016-09-01
Handball is a team sport whose main activities include sprinting, throwing, and hitting. This Olympic team sport requires a high standard of preparation in order to complete sixty minutes of competitive play and to achieve success. This study, therefore, was conducted to determine the effect of different 4-week training methods on some physical fitness variables in youth handball players. Thirty high-school students participated in the study and were assigned to Resistance Training (RT) (n = 10: 16.75 ± 0.36 yr; 63.14 ± 4.19 kg; 174.8 ± 5.41 cm), Plyometric Training (PT) (n = 10: 16.57 ± 0.26 yr; 65.52 ± 6.79 kg; 173.5 ± 5.44 cm), and Complex Training (CT) (n = 10: 16.23 ± 0.50 yr; 58.43 ± 10.50 kg; 175.2 ± 8.19 cm) groups. Subjects' anthropometric and physiological characteristics were evaluated 48 hours before and after the 4-week protocol. Statistical analyses consisted of repeated-measures ANOVA and one-way ANOVA. Regarding pre- to post-test changes within the groups, data analysis showed that body fat, strength, speed, agility, and explosive power were affected by the training protocols (P < 0.05). In conclusion, complex training had an advantageous effect on variables such as strength, explosive power, speed and agility in youth handball players compared with resistance and plyometric training, although positive effects of those training methods were also observed. Coaches and players, therefore, could consider complex training as an alternative to other training methods.
Frank, Andrew A.
1984-01-01
A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enable the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
International Nuclear Information System (INIS)
Eaker, C.W.; Schatz, G.C.; De Leon, N.; Heller, E.J.
1984-01-01
Two methods for calculating the good action variables and semiclassical eigenvalues for coupled oscillator systems are presented, both of which relate the actions to the coefficients appearing in the Fourier representation of the normal coordinates and momenta. The two methods differ in that one is based on the exact expression for the actions together with the EBK semiclassical quantization condition, while the other is derived from the Sorbie-Handy (SH) approximation to the actions. However, they are also very similar in that the actions in both methods are related to the same set of Fourier coefficients, and both require determining the perturbed frequencies when calculating the actions. These frequencies are also determined from the Fourier representations, which means that the actions in both methods are determined entirely from information contained in the Fourier expansion of the coordinates and momenta. We show how these expansions can be obtained very conveniently using fast Fourier transform (FFT) methods, and how numerical filtering can be used to remove spurious Fourier components associated with the finite trajectory integration duration. In the case of the SH-based method, we find that the use of filtering enables us to relax the usual periodicity requirement on the calculated trajectory. Application to two standard Hénon-Heiles models is considered, and both methods are shown to give semiclassical eigenvalues in good agreement with previous calculations for nondegenerate and 1:1 resonant systems. In comparing the two methods, we find that although the exact method is quite general in its ability to handle systems exhibiting complex resonant behavior, it converges more slowly with increasing trajectory integration duration and is more sensitive to the algorithm for choosing perturbed frequencies than the SH-based method.
Negative confounding by essential fatty acids in methylmercury neurotoxicity associations
DEFF Research Database (Denmark)
Choi, Anna L; Mogensen, Ulla Brasch; Bjerve, Kristian S
2014-01-01
Concentrations of fatty acids were determined in cord serum phospholipids. Neuropsychological performance in verbal, motor, attention, spatial, and memory functions was assessed at 7 years of age. Multiple regression and structural equation models (SEMs) were carried out to determine the confounder-adjusted associations with methylmercury exposure. RESULTS: A short delay recall (in percent change) in the California Verbal Learning Test (CVLT) was associated with a doubling of cord blood methylmercury (-18.9, 95% confidence interval [CI] = -36.3, -1.51). The association became stronger after the inclusion of fatty acid concentrations in the analysis (-22.0, 95% CI = -39.4, -4.62). In structural equation models, poorer memory function (corresponding to a lower score in the learning trials and short delay recall in CVLT) was associated with a doubling of prenatal exposure to methylmercury after...
phMRI: methodological considerations for mitigating potential confounding factors
Directory of Open Access Journals (Sweden)
Julius H Bourke
2015-05-01
Pharmacological Magnetic Resonance Imaging (phMRI) is a variant of conventional MRI that adds pharmacological manipulations in order to study the effects of drugs, or uses pharmacological probes to investigate basic or applied (e.g. clinical) neuroscience questions. Issues that may confound the interpretation of results from various types of phMRI studies are briefly discussed, and a set of methodological strategies that can mitigate these problems is described. These include strategies that can be employed at every stage of investigation, from study design to interpretation of the resulting data; additional techniques suited for use with clinical populations are also featured. Pharmacological MRI is a challenging area of research with both significant advantages and formidable difficulties; however, with due consideration and use of these strategies, many of the key obstacles can be overcome.
Comorbidities, confounders, and the white matter transcriptome in chronic alcoholism.
Sutherland, Greg T; Sheedy, Donna; Sheahan, Pam J; Kaplan, Warren; Kril, Jillian J
2014-04-01
Alcohol abuse is the world's third leading cause of disease and disability, and one potential sequela of chronic abuse is alcohol-related brain damage (ARBD). This manifests clinically as cognitive dysfunction and pathologically as atrophy, particularly of white matter (WM). The mechanism linking chronic alcohol intoxication with ARBD remains largely unknown, and it is complicated by common comorbidities such as liver damage and nutritional deficiencies. Liver cirrhosis, in particular, often leads to hepatic encephalopathy (HE), a primary glial disease. In a novel transcriptomic study, we targeted the WM only of chronic alcoholics in an attempt to tease apart the pathogenesis of ARBD. Specifically, in alcoholics with and without HE, we explored both the prefrontal and primary motor cortices, two regions that experience differential levels of neuronal loss. Our results suggest that HE, along with two confounders, gray matter contamination and low RNA quality, is a major driver of gene expression in ARBD. All three exceeded the effects of alcohol itself. In particular, low-quality RNA samples were characterized by an up-regulation of translation machinery, while HE was associated with a down-regulation of mitochondrial energy metabolism pathways. The findings in HE alcoholics are consistent with the metabolic acidosis seen in this condition. In contrast, non-HE alcoholics had widespread but only subtle changes in gene expression in their WM. Notwithstanding the latter result, this study demonstrates that significant confounders in transcriptomic studies of human postmortem brain tissue can be identified, quantified, and "removed" to reveal disease-specific signals. Copyright © 2014 by the Research Society on Alcoholism.
Internal Variability and Disequilibrium Confound Estimates of Climate Sensitivity From Observations
Marvel, Kate; Pincus, Robert; Schmidt, Gavin A.; Miller, Ron L.
2018-02-01
An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future. Here we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2 radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long-term sensitivity.
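Observational ECS estimates of the kind discussed above are commonly computed from the global energy-balance relation ECS = F2x · ΔT / (ΔF − ΔN). A minimal sketch with illustrative, not observed, numbers; the formula is the standard energy-balance estimator, not code from this study:

```python
def ecs_energy_balance(dT, dF, dN, F2x=3.7):
    """Equilibrium climate sensitivity (K) inferred from historical changes.

    dT:  change in global-mean surface temperature (K)
    dF:  change in effective radiative forcing (W/m^2)
    dN:  change in top-of-atmosphere net heat uptake (W/m^2)
    F2x: forcing from doubled CO2, ~3.7 W/m^2 (an assumed constant)
    """
    return F2x * dT / (dF - dN)

# Illustrative values only: dT = 1.0 K, dF = 2.3 W/m^2, dN = 0.6 W/m^2
ecs = ecs_energy_balance(1.0, 2.3, 0.6)
```

The abstract's point is that feedbacks inferred over the recent historical period bias this estimator low relative to the true long-term ECS of the same models.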
Games, Paul A.
1975-01-01
A brief introduction is presented on how multiple regression and linear model techniques can handle data analysis situations that most educators and psychologists think of as appropriate for analysis of variance. (Author/BJG)
Typing Speed as a Confounding Variable and the Measurement of Quality in Divergent Thinking
Forthmann, Boris; Holling, Heinz; Çelik, Pinar; Storme, Martin; Lubart, Todd
2017-01-01
The need to control for writing or typing speed when assessing divergent-thinking performance has been recognized since the early '90s. An even longer tradition in divergent-thinking research has the issue of scoring the responses for quality. This research addressed both issues within structural equation modeling. Three dimensions of…
A variable capacitance based modeling and power capability predicting method for ultracapacitor
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. A multi-constraint power capability prediction is then developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is demonstrated under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
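With capacitance varying linearly in voltage on each piece, the stored charge is the integral of C(v), which is what a state-of-charge calculation builds on. A sketch for a single linear piece; the coefficients and the single-piece simplification are hypothetical, not the paper's identified parameters:

```python
def charge_stored(v, c0=270.0, k=60.0):
    """Charge (C) on a capacitor with C(v) = c0 + k*v (one illustrative
    linear piece): Q(v) = integral of C from 0 to v = c0*v + k*v**2/2."""
    return c0 * v + 0.5 * k * v * v

def state_of_charge(v, v_max=2.7):
    """SOC as the fraction of stored charge relative to full charge at v_max."""
    return charge_stored(v) / charge_stored(v_max)
```

Note that with k > 0, SOC at half the rated voltage is below one half, which is exactly the behavior a constant-capacitance model cannot reproduce.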
Gas permeation measurement under defined humidity via constant volume/variable pressure method
Jan Roman, Pauls
2012-02-01
Many industrial gas separations in which membrane processes are feasible entail high water vapour contents, as in CO2 separation from flue gas in carbon capture and storage (CCS), or in biogas/natural gas processing. Studying the effect of water vapour on gas permeability through polymeric membranes is essential for materials design and optimization of these membrane applications. In particular, for amine-based CO2-selective facilitated transport membranes, water vapour is necessary for carrier-complex formation (Matsuyama et al., 1996; Deng and Hägg, 2010; Liu et al., 2008; Shishatskiy et al., 2010) [1-4]. But conventional polymeric membrane materials can also vary in their permeation behaviour due to water-induced swelling (Potreck, 2009) [5]. Here we describe a simple approach to gas permeability measurement in the presence of water vapour, in the form of a modified constant volume/variable pressure method (pressure increase method). © 2011 Elsevier B.V.
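In the constant volume/variable pressure method, permeability follows from the steady-state pressure-rise slope in the fixed downstream volume. A sketch in SI units; the function and parameter names are illustrative, and unit conversions to Barrer are omitted:

```python
def gas_permeability(dp_dt, v_down, thickness, area, p_feed, T=298.15):
    """Permeability (mol*m / (m^2*s*Pa)) from the steady-state downstream
    pressure rise dp_dt (Pa/s) in a constant-volume cell.

    v_down:    downstream volume (m^3)
    thickness: membrane thickness (m)
    area:      membrane area (m^2)
    p_feed:    feed-side partial pressure driving permeation (Pa)
    """
    R = 8.314  # gas constant, J/(mol*K)
    return dp_dt * v_down * thickness / (R * T * area * p_feed)
```

The ideal-gas law converts the pressure rise to a molar flux; normalizing by area, driving pressure, and thickness gives the permeability.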
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection between duality theory, Newton's method, and some known algorithms for projecting onto a standard simplex is shown. Using the example of the constraint structure of the transport linear programming problem, it is demonstrated that the efficiency of calculating the generalized Hessian matrix can be increased. Some examples of numerical calculations using MATLAB are presented.
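The simplex projection mentioned above is usually computed with the well-known sort-and-threshold algorithm for Euclidean projection onto the standard simplex. A sketch of that classic algorithm, not necessarily the exact variant analyzed in the paper:

```python
def project_simplex(y):
    """Euclidean projection of y onto the standard simplex
    {x : x_i >= 0, sum_i x_i = 1}, via sort-and-threshold."""
    u = sorted(y, reverse=True)
    css = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0.0:
            theta = t          # threshold from the last active index
    return [max(yi - theta, 0.0) for yi in y]
```

The threshold theta is the Lagrange multiplier of the sum constraint, which is precisely where the duality-theory connection enters.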
Zhang, Henry T; McGrath, Leah J; Wyss, Richard; Ellis, Alan R; Stürmer, Til
2017-12-01
To improve control of confounding by frailty when estimating the effect of influenza vaccination on all-cause mortality by controlling for a published set of claims-based predictors of dependency in activities of daily living (ADL). Using Medicare claims data, a cohort of beneficiaries >65 years of age was followed from September 1, 2007, to April 12, 2008, with covariates assessed in the 6 months before follow-up. We estimated Cox proportional hazards models of all-cause mortality, with influenza vaccination as a time-varying exposure. We controlled for common demographics, comorbidities, and health care utilization variables and then added 20 ADL dependency predictors. To gauge residual confounding, we estimated pre-influenza season hazard ratios (HRs) between September 1, 2007 and January 5, 2008, which should be 1.0 in the absence of bias. A cohort of 2 235 140 beneficiaries was created, with a median follow-up of 224 days. Overall, 52% were vaccinated and 4% died during follow-up. During the pre-influenza season period, controlling for demographics, comorbidities, and health care use resulted in a HR of 0.66 (0.64, 0.67). Adding the ADL dependency predictors moved the HR to 0.68 (0.67, 0.70). Controlling for demographics and ADL dependency predictors alone resulted in a HR of 0.68 (0.66, 0.70). Results were consistent with those in the literature, with significant uncontrolled confounding after adjustment for demographics, comorbidities, and health care use. Adding ADL dependency predictors moved HRs slightly closer to the null. Of the comorbidities, health care use variables, and ADL dependency predictors, the last set reduced confounding most. However, substantial uncontrolled confounding remained. Copyright © 2017 John Wiley & Sons, Ltd.
Study of input variables in group method of data handling methodology
International Nuclear Information System (INIS)
Pereira, Iraci Martinez; Bueno, Elaine Inacio
2013-01-01
The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A monitoring and diagnosis system was developed based on GMDH and ANN methodologies and applied to the IPEN research reactor IEA-R1. The system performs monitoring by comparing the GMDH- and ANN-calculated values with measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically. On the other hand, the results of the ANN methodology depend strongly on which variables are used as neural network inputs. (author)
Tam, Vincent H; Kabbara, Samer
2006-10-01
Monte Carlo simulations (MCSs) are increasingly being used to predict the pharmacokinetic variability of antimicrobials in a population. However, various MCS approaches may differ in the accuracy of the predictions. We compared the performance of 3 different MCS approaches using a data set with known parameter values and dispersion. Ten concentration-time profiles were randomly generated and used to determine the best-fit parameter estimates. Three MCS methods were subsequently used to simulate the AUC(0-infinity) of the population, using the central tendency and dispersion of the following in the subject sample: 1) K and V; 2) clearance and V; 3) AUC(0-infinity). In each scenario, 10000 subject simulations were performed. Compared to true AUC(0-infinity) of the population, mean biases by various methods were 1) 58.4, 2) 380.7, and 3) 12.5 mg h L(-1), respectively. Our results suggest that the most realistic MCS approach appeared to be based on the variability of AUC(0-infinity) in the subject sample.
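The third approach above, resampling AUC directly from its own central tendency and dispersion in the subject sample, can be sketched as follows. The dose, the clearance values, and the normal distributional choice are hypothetical, for illustration only:

```python
import random
import statistics

random.seed(7)
dose = 500.0                                  # mg, assumed single dose
cl_sample = [3.2, 4.1, 2.8, 3.9, 5.0,
             3.5, 4.4, 2.9, 3.7, 4.2]         # hypothetical clearances, L/h
auc_sample = [dose / cl for cl in cl_sample]  # AUC(0-inf) = dose / CL

mu = statistics.mean(auc_sample)
sd = statistics.stdev(auc_sample)
# Method 3: simulate AUC from its own sample variability
sim_auc = [random.gauss(mu, sd) for _ in range(10000)]
```

Simulating AUC directly avoids compounding the (correlated) variabilities of K and V, which is consistent with the abstract's finding that this approach showed the least bias.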
A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy
Directory of Open Access Journals (Sweden)
Yongxin Chou
2017-01-01
Base scale entropy analysis (BSEA) is a nonlinear method for analyzing heart rate variability (HRV) signals. However, the time consumption of BSEA is too long, and it was unknown whether BSEA is suitable for analyzing pulse rate variability (PRV) signals. We therefore propose a method named sliding window iterative base scale entropy analysis (SWIBSEA), combining BSEA with sliding-window iterative theory. Blood pressure signals of healthy young and old subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. BSEA and SWIBSEA were then used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and buffer cache space while yielding the same entropy as BSEA. Meanwhile, the changes in base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. SWIBSEA can therefore be used to derive information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in portable and wearable medical devices.
Variability of bronchial measurements obtained by sequential CT using two computer-based methods
International Nuclear Information System (INIS)
Brillet, Pierre-Yves; Fetita, Catalin I.; Mitrea, Mihai; Preteux, Francoise; Capderou, Andre; Dreuil, Serge; Simon, Jean-Marc; Grenier, Philippe A.
2009-01-01
This study aimed to evaluate the variability of lumen (LA) and wall area (WA) measurements obtained on two successive MDCT acquisitions using energy-driven contour estimation (EDCE) and full width at half maximum (FWHM) approaches. Both methods were applied to a database of segmental and subsegmental bronchi with LA > 4 mm² containing 42 bronchial segments of 10 successive slices that best matched on each acquisition. For both methods, the 95% confidence interval between repeated MDCT was between -1.59 and 1.5 mm² for LA, and -3.31 and 2.96 mm² for WA. The values of the coefficient of measurement variation (CV10, i.e., the percentage ratio of the standard deviation obtained from the 10 successive slices to their mean value) were strongly correlated between repeated MDCT data acquisitions (r > 0.72; p 2, whereas WA values were lower for bronchi with WA 2; no systematic EDCE underestimation or overestimation was observed for thicker-walled bronchi. In conclusion, variability between CT examinations and assessment techniques may impair measurements. Therefore, new parameters such as CV10 need to be investigated to study bronchial remodeling. Finally, EDCE and FWHM are not interchangeable in longitudinal studies. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]
2012-05-15
Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds.The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
Salonen, K; Leisola, M; Eerikäinen, T
2009-01-01
Determination of metabolites from an anaerobic digester with an acid-base titration is considered a superior method for many reasons. This paper describes a practical, at-line compatible multipoint titration method. The titration procedure was improved in speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information about the sample. The performance of this controller is superior to that of traditional linear PI-controllers. In addition, a simplification for presenting polyprotic acids as a sum of multiple monoprotic acids is introduced, along with a mathematical error examination. A method for inclusion of the ionic strength effect with stepwise iteration is shown. The titration model is presented in matrix notation, enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as that of three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
The Bayesian group lasso for confounded spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.
2017-01-01
Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.
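The group lasso's joint regularization rests on group-wise shrinkage. The paper implements this as a Bayesian prior sampled by MCMC; as a hedged illustration only, the proximal (group soft-thresholding) operator at the heart of the penalized counterpart, with the function name an assumption:

```python
import numpy as np

def group_soft_threshold(beta_g, lam):
    """Group-lasso shrinkage: scale the whole coefficient group toward
    zero, setting it exactly to zero when its norm falls below lam.
    This is what zeroes out entire groups rather than single entries."""
    norm = np.linalg.norm(beta_g)
    if norm <= lam:
        return np.zeros_like(beta_g)
    return (1.0 - lam / norm) * beta_g
```

A group with norm 5 and penalty 1 is scaled by 0.8; a group with norm below the penalty is removed entirely, which is the mechanism that regularizes coefficients and spatial random effects jointly.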
DEFF Research Database (Denmark)
Thygesen, Lau Caspar; Pottegård, Anton; Ersbøll, Annette Kjaer
2017-01-01
AIMS: Previous studies have reported diverging results on the association between benzodiazepine use and cancer risk. METHODS: We investigated this association in a matched case-control study including incident cancer cases during 2002-2009 in the Danish Cancer Registry (n = 94 923) and age… … PSs were used: the error-prone PS using register-based confounders and the calibrated PS based on both register- and survey-based confounders, retrieved from the Health Interview Survey. RESULTS: Register-based data showed that cancer cases had more diagnoses, higher comorbidity score and more co… …% confidence interval 1.00-1.19) and for smoking-related cancers from 1.20 to 1.10 (95% confidence interval 1.00-1.21). CONCLUSION: We conclude that the increased risk observed in the solely register-based study could partly be attributed to unmeasured confounding.
Toward Capturing Momentary Changes of Heart Rate Variability by a Dynamic Analysis Method.
Directory of Open Access Journals (Sweden)
Haoshi Zhang
Full Text Available The analysis of heart rate variability (HRV) has been performed on long-term electrocardiography (ECG) recordings (12~24 hours) and short-term recordings (2~5 minutes), which may not capture momentary changes of HRV. In this study, we present a new method to analyze momentary HRV (mHRV). The ECG recordings were segmented into a series of overlapped HRV analysis windows with a window length of 5 minutes and different time increments. The performance of the proposed method in delineating the dynamics of momentary HRV measurement was evaluated with four commonly used time courses of HRV measures on both synthetic time series and real ECG recordings from human subjects and dogs. Our results showed that a smaller time increment could capture more dynamical information on transient changes. Since a too-short increment such as 10 s would cause indented time courses of the four measures, a 1-min time increment (4-min overlapping) was suggested for the analysis of mHRV in the study. ECG recordings from human subjects and dogs were used to further assess the effectiveness of the proposed method. The pilot study demonstrated that the proposed analysis of mHRV could provide a more accurate assessment of the dynamical changes in cardiac activity than the conventional measures of HRV (without time overlapping). The proposed method may provide an efficient means of delineating the dynamics of momentary HRV, and it would be worthwhile to perform more investigations.
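The windowing scheme described above (5-min windows advanced in 1-min increments, i.e. 4-min overlap) can be sketched as follows; SDNN (the standard deviation of the intervals in a window) is used as the per-window measure for illustration, and the function name and interface are assumptions:

```python
import numpy as np

def momentary_hrv(rr_ms, window_s=300, step_s=60):
    """Slide a 5-min window over an RR-interval series (in ms) in 1-min
    steps and return SDNN per window, giving a time course of mHRV."""
    t = np.cumsum(rr_ms) / 1000.0          # beat times in seconds
    out = []
    start = 0.0
    while start + window_s <= t[-1]:
        m = (t >= start) & (t < start + window_s)
        out.append(rr_ms[m].std())         # SDNN of this window
        start += step_s
    return np.array(out)
```

With a 1-min step each 5-min window shares 4 minutes with its predecessor, so transient changes appear in several consecutive windows instead of being averaged away.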
Determining Confounding Sensitivities In Eddy Current Thin Film Measurements
Energy Technology Data Exchange (ETDEWEB)
Gros, Ethan; Udpa, Lalita; Smith, James A.; Wachs, Katelyn
2016-07-01
Eddy current (EC) techniques are widely used in industry to measure the thickness of non-conductive films on a metal substrate. This is done using a system whereby a coil carrying a high-frequency alternating current is used to create an alternating magnetic field at the surface of the instrument's probe. When the probe is brought near a conductive surface, the alternating magnetic field will induce ECs in the conductor. The substrate characteristics and the distance of the probe from the substrate (the coating thickness) affect the magnitude of the ECs. The induced currents load the probe coil, affecting the terminal impedance of the coil. The measured probe impedance is related to the lift-off between coil and conductor as well as to the conductivity of the test sample. For a sample of known conductivity, the probe impedance can be converted into an equivalent film thickness value. The EC measurement can be confounded by a number of measurement parameters. It is the goal of this research to determine which physical properties of the measurement set-up and sample can adversely affect the thickness measurement. The eddy current testing is performed using a commercially available, hand-held eddy current probe (ETA3.3H spring-loaded eddy probe running at 8 MHz) that comes with a stand. The stand holds the probe and adjusts it on the z-axis to help position the probe in the correct area as well as make precise measurements. The signal from the probe is sent to a hand-held readout, where the results are recorded directly in terms of lift-off or film thickness. Understanding the effect of certain factors on the measurements of film thickness will help to evaluate how accurate the ETA3.3H spring…
Modelling Cardiac Signal as a Confound in EEG-fMRI and its Application in Focal Epilepsy
DEFF Research Database (Denmark)
Liston, Adam David; Salek-Haddadi, Afraim; Hamandi, Khalid
2005-01-01
Cardiac noise has been shown to reduce the sensitivity of functional Magnetic Resonance Imaging (fMRI) to an experimental effect due to its confounding presence in the blood oxygenation level-dependent (BOLD) signal. Its effect is most severe in particular regions of the brain and a method is yet...
Directory of Open Access Journals (Sweden)
Tomaž Vrtovec
2015-06-01
Full Text Available Objective measurement of coronal vertebral inclination (CVI) is of significant importance for evaluating spinal deformities in the coronal plane. The purpose of this study is to systematically analyze and compare manual and computerized measurements of CVI in cross-sectional and volumetric computed tomography (CT) images. Three observers independently measured CVI in 14 CT images of normal and 14 CT images of scoliotic vertebrae by using six manual and two computerized measurements. Manual measurements were obtained in coronal cross-sections by manually identifying the vertebral body corners, which served to measure CVI according to the superior and inferior tangents, left and right tangents, and mid-endplate and mid-wall lines. Computerized measurements were obtained in two dimensions (2D) and in three dimensions (3D) by manually initializing an automated method in vertebral centroids and then searching for the planes of maximal symmetry of vertebral anatomical structures. The mid-endplate lines were the most reproducible and reliable manual measurements (intra- and inter-observer variability of 0.7° and 1.2° standard deviation (SD), respectively). The computerized measurements in 3D were more reproducible and reliable (intra- and inter-observer variability of 0.5° and 0.7° SD, respectively), but were most consistent with the mid-wall lines (2.0° SD and 1.4° mean absolute difference). The manual CVI measurements based on mid-endplate lines and the computerized CVI measurements in 3D resulted in the lowest intra-observer and inter-observer variability; the computerized CVI measurements, however, reduce observer interaction.
International Nuclear Information System (INIS)
Balabin, Roman M.; Smirnov, Sergey V.
2011-01-01
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic…
Methods for assessment of climate variability and climate changes in different time-space scales
International Nuclear Information System (INIS)
Lobanov, V.; Lobanova, H.
2004-01-01
The main problem of hydrology and of design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for assessment of water resources, maxima, minima runoff, etc.) or a new one characterizing an intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed that has two coefficients, connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter, which characterizes the intensity of synoptic and macro-synoptic fluctuations within a year. Effective statistical methods have been developed for separating climate variability from climate change and for extracting homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and with spatial modeling. For determination of a homogeneous region with the same…
International Nuclear Information System (INIS)
Zhang Jiefang; Dai Chaoqing; Zong Fengde
2007-01-01
In this paper, with the variable separation approach and based on the general reduction theory, we generalize the extended tanh-function method to obtain new types of variable separation solutions for the Nizhnik-Novikov-Veselov (NNV) equation. Among the solutions, two are new types of variable separation solutions, while the last is similar to the solution given by Darboux transformation in Hu et al 2003 Chin. Phys. Lett. 20 1413.
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
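SIVA/DIVA itself is a variable-order, variable-step FORTRAN 77 package; as a hedged sketch of the underlying Adams predictor-corrector (PECE) idea only, a fixed-order, fixed-step two-step version, where the function name and the Heun bootstrap are assumptions:

```python
def adams_pece(f, t0, y0, h, n):
    """Two-step Adams-Bashforth predictor + Adams-Moulton (trapezoidal)
    corrector in PECE form: Predict, Evaluate, Correct, Evaluate.
    Fixed order and step size, unlike SIVA/DIVA's adaptive scheme."""
    t, y = t0, y0
    ys = [y0]
    f_prev = f(t, y)
    # bootstrap the second point with one Heun (improved Euler) step
    y_pred = y + h * f_prev
    y = y + h / 2 * (f_prev + f(t + h, y_pred))
    ys.append(y)
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y_pred = y + h / 2 * (3 * f_curr - f_prev)    # AB2 predict
        y = y + h / 2 * (f(t + h, y_pred) + f_curr)   # AM2 correct
        f_prev = f_curr
        t += h
        ys.append(y)
    return ys
```

The predictor reuses the stored derivative from the previous step, which is why Adams methods need fewer derivative evaluations per step than one-step methods of the same order.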
International Nuclear Information System (INIS)
Do, Chuong; Hussey, Dennis; Wells, Daniel M.; Epperson, Kenny
2016-01-01
A numerical optimization method was implemented to determine several mass transfer coefficients in a crud-induced power shift risk assessment code. The approach was to utilize a multilevel strategy that targets different model parameters: it first changes the major-order variables, the mass transfer inputs, then calibrates the minor-order variables, the crud source terms, according to available plant data. In this manner, the mass transfer inputs are effectively treated as dependent on the crud source terms. Two optimization studies were performed using DAKOTA, a design and analysis toolkit, with the difference between the runs being the number of model runs using BOA allowed for adjusting the crud source terms, thereby reducing the uncertainty in calibration. The results of the first case showed that the current best estimated values for the mass transfer coefficients, which were derived from first-principles analysis, can be considered an optimized set. When the run limit of BOA was increased for the second case, an improvement in the prediction was obtained, with the results deviating slightly from the best estimated values. (author)
Energy Technology Data Exchange (ETDEWEB)
Baeza, A.; Corbacho, J.A. [LARUEX, Caceres (Spain). Environmental Radioactivity Lab.
2013-07-01
Determining the gross alpha activity concentration of water samples is one way to screen for waters whose radionuclide content is so high that its consumption could imply surpassing the Total Indicative Dose as defined in European Directive 98/83/EC. One of the most commonly used methods to prepare the sources to measure gross alpha activity in water samples is desiccation. Its main advantages are the simplicity of the procedure, the low cost of source preparation, and the possibility of simultaneously determining the gross beta activity. The preparation of the source, the construction of the calibration curves, and the measurement procedure itself involve, however, various factors that may introduce sufficient variability into the results to significantly affect the screening process. We here identify the main sources of this variability, and propose specific procedures to follow in the desiccation process that will reduce the uncertainties, and ensure that the result is indeed representative of the sum of the activities of the alpha emitters present in the sample. (orig.)
Liu, Tianyi; Nie, Xiaolu; Wu, Zehao; Zhang, Ying; Feng, Guoshuang; Cai, Siyu; Lv, Yaqi; Peng, Xiaoxia
2017-12-29
Different confounder adjustment strategies are used to estimate odds ratios (ORs) in case-control studies, i.e. original studies differ in how many confounders they adjusted for and in which variables these were. This secondary data analysis aimed to detect whether differences in confounder adjustment strategies introduce potential biases in case-control studies, and whether such bias would impact the summary effect size of a meta-analysis. We included all meta-analyses that focused on the association between breast cancer and passive smoking among non-smoking women, as well as each original case-control study included in these meta-analyses. The relative deviations (RDs) of each original study were calculated to detect how strongly the adjustment would impact the estimation of ORs, compared with crude ORs. At the same time, a scatter diagram was sketched to describe the distribution of adjusted ORs with different numbers of adjusted confounders. Substantial inconsistency existed in meta-analyses of case-control studies, which would influence the precision of the summary effect size. First, mixed unadjusted and adjusted ORs were combined in the majority of meta-analyses. Second, original studies with different confounder adjustment strategies were combined, i.e. they differed in the number of adjusted confounders and in the factors being adjusted for. Third, adjustment did not make the effect sizes of the original studies converge, which suggests that model fitting might have failed to correct the systematic error caused by confounding. The heterogeneity of confounder adjustment strategies in case-control studies may lead to further bias in the summary effect size of meta-analyses, especially for weak or medium associations, so that the direction of causal inference could even be reversed. Therefore, further methodological research is needed on the assessment of confounder adjustment strategies, as well as on how to take this kind…
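The abstract does not reproduce its RD formula; assuming the usual change-in-estimate form (adjusted minus crude, relative to crude), a minimal sketch with entirely hypothetical study values:

```python
def relative_deviation(or_adjusted, or_crude):
    """RD quantifies how far confounder adjustment moved the effect
    estimate away from the crude odds ratio (assumed formula)."""
    return (or_adjusted - or_crude) / or_crude

# hypothetical (crude OR, adjusted OR, number of adjusted confounders)
studies = [
    (1.45, 1.32, 5),
    (1.10, 1.08, 2),
    (1.60, 1.21, 9),
]
rds = [relative_deviation(adj, crude) for crude, adj, _ in studies]
```

Plotting RD against the number of adjusted confounders is one way to build the scatter diagram described above; a cloud that does not tighten with more adjustment is the non-convergence the authors report.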
DEFF Research Database (Denmark)
Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie
2016-01-01
…and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer… variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method…
Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control
Nguyen, Nhan T. (Inventor)
2016-01-01
An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segment. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of desired lift distribution to determine the improved aerodynamic deflection of the wings. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.
de Sá, Joceline Cássia Ferezini; Costa, Eduardo Caldas; da Silva, Ester; Azevedo, George Dantas
2013-09-01
Polycystic ovary syndrome (PCOS) is an endocrine disorder associated with several cardiometabolic risk factors, such as central obesity, insulin resistance, type 2 diabetes, metabolic syndrome, and hypertension. These factors are associated with adrenergic overactivity, which is an important prognostic factor for the development of cardiovascular disorders. Given the cardiometabolic disturbances common in women with PCOS, studies in recent years have investigated the cardiac autonomic control of these patients, mainly based on heart rate variability (HRV). In this review, we discuss the recent findings of studies that investigated the HRV of women with PCOS, as well as noninvasive methods of analysis of autonomic control, starting from basic indexes related to this methodology.
Purposeful selection of variables in logistic regression
Directory of Open Access Journals (Sweden)
Williams David Keith
2008-12-01
Full Text Available Abstract Background The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well-documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results We show that the advantage of this approach arises when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
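The distinguishing step of purposeful selection is the change-in-estimate check: a covariate is retained as a confounder when removing it materially shifts the exposure coefficient, regardless of its own p-value. A minimal numpy sketch of that check, where the 15% threshold, the Newton-Raphson fitter, and both function names are assumptions rather than the SAS macro's actual internals:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Logistic regression by Newton-Raphson; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])          # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def keeps_confounder(X_full, X_reduced, y, exposure_col=1, delta=0.15):
    """Change-in-estimate rule: keep the dropped covariate if its removal
    moves the exposure coefficient by more than `delta` (here 15%)."""
    b_full = logit_fit(X_full, y)[exposure_col]
    b_red = logit_fit(X_reduced, y)[exposure_col]
    return abs(b_red - b_full) / abs(b_full) > delta
```

On simulated data where a covariate drives both exposure and outcome, dropping it inflates the exposure coefficient well past the threshold, so the rule retains it even if its p-value were marginal.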
Mustapha, K.
2017-06-03
Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes substantially more difficult the mathematical analysis of these models and the establishment of suitable numerical schemes. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative when the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. Our finite difference scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order $h$ are demonstrated, where $h$ denotes the maximum space step size. The numerical tests illustrate the global $O(h)$ accuracy of our scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
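The paper's two-sided variable-coefficient scheme is not reproduced in the abstract; as a hedged illustration of first-order finite differencing of a one-sided fractional derivative, the classic Grünwald-Letnikov approximation (function names and the test function are assumptions):

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients g_k = (-1)^k C(alpha, k),
    via the standard recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

def gl_derivative(f, alpha, x, h):
    """First-order approximation of the left Riemann-Liouville
    derivative of order alpha at x, using grid points x, x-h, ..., 0.
    Every grid point contributes: the operator is nonlocal."""
    n = int(round(x / h))
    g = gl_weights(alpha, n)
    return sum(g[k] * f(x - k * h) for k in range(n + 1)) / h**alpha
```

The sum over all grid points back to the boundary is exactly the nonlocality that makes fractional schemes more expensive than their second-order counterparts; for f(x) = x the approximation converges at the O(h) rate the paper proves for its scheme.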
Directory of Open Access Journals (Sweden)
Bai Shiye
2016-05-01
Full Text Available An objective function defined by minimum compliance of topology optimization for 3D continuum structure was established to search optimal material distribution constrained by the predetermined volume restriction. Based on the improved SIMP (solid isotropic microstructures with penalization model and the new sensitivity filtering technique, basic iteration equations of 3D finite element analysis were deduced and solved by optimization criterion method. All the above procedures were written in MATLAB programming language, and the topology optimization design examples of 3D continuum structure with reserved hole were examined repeatedly by observing various indexes, including compliance, maximum displacement, and density index. The influence of mesh, penalty factors, and filter radius on the topology results was analyzed. Computational results showed that the finer or coarser the mesh number was, the larger the compliance, maximum displacement, and density index would be. When the filtering radius was larger than 1.0, the topology shape no longer appeared as a chessboard problem, thus suggesting that the presented sensitivity filtering method was valid. The penalty factor should be an integer because iteration steps increased greatly when it is a noninteger. The above modified variable density method could provide technical routes for topology optimization design of more complex 3D continuum structures in the future.
Development and validation of a new fallout transport method using variable spectral winds
International Nuclear Information System (INIS)
Hopkins, A.T.
1984-01-01
A new method was developed to incorporate variable winds into fallout transport calculations. The method uses spectral coefficients derived by the National Meteorological Center. Wind vector components are computed with the coefficients along the trajectories of falling particles. Spectral winds are used in the two-step method to compute dose rate on the ground, downwind of a nuclear cloud. First, the hotline is located by computing trajectories of particles from an initial, stabilized cloud, through spectral winds to the ground. The connection of particle landing points is the hotline. Second, dose rate on and around the hotline is computed by analytically smearing the falling cloud's activity along the ground. The feasibility of using spectral winds for fallout particle transport was validated by computing Mount St. Helens ashfall locations and comparing calculations to fallout data. In addition, an ashfall equation was derived for computing volcanic ash mass/area on the ground. Ashfall data and the ashfall equation were used to back-calculate an aggregated particle size distribution for the Mount St. Helens eruption cloud
Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients
Energy Technology Data Exchange (ETDEWEB)
Bhaskar, Roy, E-mail: imbhaskarall@gmail.com (Indian Institute of Technology, India; University of Connecticut, Farmington, CT, United States); Ghatak, Sobhendu (Indian Institute of Technology, India)
2013-10-15
Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincare plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.
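The lagged Poincaré analysis used in the study can be sketched with the standard SD1/SD2 descriptors of the plot of RR(i) against RR(i+lag); the interface below is an assumption:

```python
import numpy as np

def lagged_poincare(rr, lag=1):
    """SD1/SD2 of the lagged Poincare plot (RR_i vs RR_{i+lag}).
    SD1 captures short-term (largely parasympathetic) variability,
    SD2 longer-term variability; lag > 1 probes longer-range structure."""
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2))   # spread across the identity line
    sd2 = np.std((y + x) / np.sqrt(2))   # spread along the identity line
    return sd1, sd2
```

For an uncorrelated RR series both descriptors equal the series' standard deviation; the reduced SD1 reported in diabetic patients corresponds to flattening of the plot across the identity line.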
Nonlinear Methods to Assess Changes in Heart Rate Variability in Type 2 Diabetic Patients
International Nuclear Information System (INIS)
Bhaskar, Roy; Ghatak, Sobhendu
2013-01-01
Heart rate variability (HRV) is an important indicator of autonomic modulation of cardiovascular function. Diabetes can alter cardiac autonomic modulation by damaging afferent inputs, thereby increasing the risk of cardiovascular disease. We applied nonlinear analytical methods to identify parameters associated with HRV that are indicative of changes in autonomic modulation of heart function in diabetic patients. We analyzed differences in HRV patterns between diabetic and age-matched healthy control subjects using nonlinear methods. Lagged Poincaré plot, autocorrelation, and detrended fluctuation analysis were applied to analyze HRV in electrocardiography (ECG) recordings. Lagged Poincaré plot analysis revealed significant changes in some parameters, suggestive of decreased parasympathetic modulation. The detrended fluctuation exponent derived from long-term fitting was higher than the short-term one in the diabetic population, which was also consistent with decreased parasympathetic input. The autocorrelation function of the deviation of inter-beat intervals exhibited a highly correlated pattern in the diabetic group compared with the control group. The HRV pattern significantly differs between diabetic patients and healthy subjects. All three statistical methods employed in the study may prove useful to detect the onset and extent of autonomic neuropathy in diabetic patients.
Mustapha, K.; Furati, K.; Knio, Omar; Maitre, O. Le
2017-01-01
Anomalous diffusion is a phenomenon that cannot be modeled accurately by second-order diffusion equations, but is better described by fractional diffusion models. The nonlocal nature of the fractional diffusion operators makes the mathematical analysis of these models, and the establishment of suitable numerical schemes, substantially more difficult. This paper proposes and analyzes the first finite difference method for solving variable-coefficient fractional differential equations, with two-sided fractional derivatives, in one-dimensional space. The proposed scheme combines first-order forward and backward Euler methods for approximating the left-sided fractional derivative, while the right-sided fractional derivative is approximated by two consecutive applications of the first-order backward Euler method. The scheme reduces to the standard second-order central difference scheme in the absence of fractional derivatives. The existence and uniqueness of the solution for the proposed scheme are proved, and truncation errors of order $h$ are demonstrated, where $h$ denotes the maximum space step size. Numerical tests illustrate the global $O(h)$ accuracy of the scheme, except for nonsmooth cases which, as expected, have deteriorated convergence rates.
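The first-order Euler approximations the paper combines are of Grünwald–Letnikov type. As a hedged illustration of that building block only (not the paper's full two-sided, variable-coefficient scheme), the standard first-order Grünwald–Letnikov approximation of the left-sided derivative on a uniform grid can be sketched as:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k),
    computed by the stable recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def left_gl_derivative(u, alpha, h):
    """First-order approximation of the left-sided fractional derivative of
    order alpha of grid values u (assuming u vanishes left of the domain)."""
    n = len(u)
    w = gl_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        # Convolution of GL weights with the history of u up to node i
        d[i] = np.dot(w[:i + 1], u[i::-1]) / h**alpha
    return d
```

For `alpha = 1` the weights reduce to `[1, -1, 0, 0, ...]`, so the formula collapses to the ordinary backward difference, mirroring the paper's observation that the scheme degenerates to classical differences when the derivatives are integer-order.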
International Nuclear Information System (INIS)
Bakosi, Jozsef; Ristorcelli, Raymond J.
2010-01-01
Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
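The beta-PDF material mixing model described above is typically parameterized by matching the first two moments of the mixture variable. A sketch of the standard mean-variance inversion (illustrative helper, not the paper's full closure):

```python
def beta_from_moments(mean, var):
    """Moment-matched beta distribution parameters (a, b) for a mixture
    fraction with the given mean and variance.

    Requires 0 < mean < 1 and 0 < var < mean * (1 - mean); as var approaches
    its upper bound the PDF approaches the fully-segregated (bimodal) state,
    and as var -> 0 it collapses toward the fully-mixed delta at the mean.
    """
    c = mean * (1.0 - mean) / var - 1.0
    return mean * c, (1.0 - mean) * c
```

Because the beta family spans skewed, bimodal, and near-Gaussian shapes, this two-parameter fit can track the density PDF through transition, fully developed turbulence, and decay, which is the capability claimed in the abstract.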
Directory of Open Access Journals (Sweden)
Mike D.R. Zhang
2001-01-01
In this paper, a method for analyzing the dynamic response of a structural system with variable mass, damping, and stiffness is presented. The dynamic equations of the structural system with variable mass and stiffness are derived according to the whole working process of a bridge bucket unloader. An engineering numerical example is given at the end of the paper.
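Equations of motion with time-varying coefficients, m(t)x'' + c x' + k(t)x = f(t), can be stepped explicitly once derived. A single-degree-of-freedom central-difference sketch (this is a generic integrator under illustrative coefficients, not the paper's bucket-unloader derivation):

```python
import numpy as np

def integrate_vm(m, c, k, f, x0, v0, dt, n_steps):
    """Central-difference time stepping of m(t) x'' + c x' + k(t) x = f(t).

    m, k, f are callables of time t; c is a constant damping coefficient.
    Returns displacements at t = 0, dt, ..., n_steps*dt.
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    a0 = (f(0.0) - c * v0 - k(0.0) * x0) / m(0.0)   # initial acceleration
    x[1] = x0 + dt * v0 + 0.5 * dt**2 * a0          # Taylor startup step
    for i in range(1, n_steps):
        t = i * dt
        mi, ki = m(t), k(t)
        # Central-difference update of the equation of motion evaluated at t
        lhs = mi / dt**2 + c / (2 * dt)
        rhs = (f(t) - (ki - 2 * mi / dt**2) * x[i]
               - (mi / dt**2 - c / (2 * dt)) * x[i - 1])
        x[i + 1] = rhs / lhs
    return x
```

With constant coefficients this reduces to the textbook central-difference scheme; the time-varying callables let the mass and stiffness follow a prescribed working cycle.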
New Methods for Prosodic Transcription: Capturing Variability as a Source of Information
Directory of Open Access Journals (Sweden)
Jennifer Cole
2016-06-01
Understanding the role of prosody in encoding linguistic meaning and in shaping phonetic form requires the analysis of prosodically annotated speech drawn from a wide variety of speech materials. Yet obtaining accurate and reliable prosodic annotations for even small datasets is challenging due to the time and expertise required. We discuss several factors that make prosodic annotation difficult and impact its reliability, all of which relate to 'variability': in the patterning of prosodic elements (features and structures) as they relate to the linguistic and discourse context, in the acoustic cues for those prosodic elements, and in the parameter values of the cues. We propose two novel methods for prosodic transcription that capture variability as a source of information relevant to the linguistic analysis of prosody. The first is 'Rapid Prosody Transcription' (RPT), which can be performed by non-experts using a simple set of unary labels to mark prominence and boundaries based on immediate auditory impression. Inter-transcriber variability is used to calculate continuous-valued prosody 'scores' that are assigned to each word and represent the perceptual salience of its prosodic features or structure. RPT can be used to model the relative influence of top-down factors and acoustic cues in prosody perception, and to model prosodic variation across many dimensions, including language variety, speech style, or speaker's affect. The second proposed method is the identification of individual cues to the contrastive prosodic elements of an utterance. Cue specification provides a link between the contrastive symbolic categories of prosodic structures and the continuous-valued parameters in the acoustic signal, and offers a framework for investigating how factors related to the grammatical and situational context influence the phonetic form of spoken words and phrases. While cue specification as a transcription tool has not yet been explored as
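The continuous-valued RPT scores described above reduce to a per-word proportion across transcribers. A minimal sketch with binary prominence marks (the data layout is an assumption; the paper does not prescribe a file format):

```python
def rpt_scores(marks):
    """Rapid Prosody Transcription scores: for each word, the proportion of
    transcribers who marked it (e.g. as prominent).

    marks: list of rows, one per transcriber; each row is a list of 0/1
    labels, one per word, in utterance order.
    """
    n = len(marks)
    # zip(*marks) transposes to per-word columns across transcribers
    return [sum(col) / n for col in zip(*marks)]
```

A score near 1.0 indicates a prosodic event that is perceptually salient to nearly all listeners, while intermediate scores expose exactly the inter-transcriber variability the method treats as information rather than noise.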
Schiekirka, Sarah; Feufel, Markus A.; Herrmann-Lingen, Christoph; Raupach, Tobias
2015-01-01
Background and objective: Evaluation is an integral part of education in German medical schools. According to the quality standards set by the German Society for Evaluation, evaluation tools must provide an accurate and fair appraisal of teaching quality. Thus, data collection tools must be highly reliable and valid. This review summarises the current literature on evaluation of medical education with regard to the possible dimensions of teaching quality, the psychometric properties of survey instruments and potential confounding factors. Methods: We searched Pubmed, PsycINFO and PSYNDEX for literature on evaluation in medical education and included studies published up until June 30, 2011 as well as articles identified in the “grey literature”. Results are presented as a narrative review. Results: We identified four dimensions of teaching quality: structure, process, teacher characteristics, and outcome. Student ratings are predominantly used to address the first three dimensions, and a number of reliable tools are available for this purpose. However, potential confounders of student ratings pose a threat to the validity of these instruments. Outcome is usually operationalised in terms of student performance on examinations, but methodological problems may limit the usability of these data for evaluation purposes. In addition, not all examinations at German medical schools meet current quality standards. Conclusion: The choice of tools for evaluating medical education should be guided by the dimension that is targeted by the evaluation. Likewise, evaluation results can only be interpreted within the context of the construct addressed by the data collection tool that was used as well as its specific confounding factors. PMID:26421003
A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.
McCandless, Lawrence C; Gustafson, Paul
2017-08-15
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
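MCSA as characterized above draws bias parameters from their priors and applies a bias correction per draw, never updating the priors against the data. A minimal sketch using the classical external-adjustment bias factor for a binary unmeasured confounder; the prior distributions and their parameters below are purely illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcsa_corrected_rr(rr_obs, n_draws=10_000):
    """Monte Carlo sensitivity analysis for an observed risk ratio rr_obs.

    Bias parameters (all sampled from illustrative priors):
      rr_cu : confounder-outcome risk ratio
      p1    : prevalence of the confounder among the exposed
      p0    : prevalence of the confounder among the unexposed
    Returns the (2.5, 50, 97.5) percentiles of the bias-corrected RR.
    """
    rr_cu = rng.lognormal(np.log(2.0), 0.2, n_draws)
    p1 = rng.beta(4, 6, n_draws)
    p0 = rng.beta(2, 8, n_draws)
    # Classical external-adjustment bias factor for a binary confounder
    bias = (rr_cu * p1 + 1 - p1) / (rr_cu * p0 + 1 - p0)
    corrected = rr_obs / bias
    return np.percentile(corrected, [2.5, 50, 97.5])
```

BSA, by contrast, would place these same priors inside a full likelihood so the data can shift them; the paper's point is that the two intervals can then differ substantially.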
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose: To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods: The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results: In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion: Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantification in phantom and ex vivo acquisitions. PMID:24123362
Methods to quantify variable importance: implications for the analysis of noisy ecological data
Murray, Kim; Conner, Mary M.
2009-01-01
Determining the importance of independent variables is of practical relevance to ecologists and managers concerned with allocating limited resources to the management of natural systems. Although techniques that identify explanatory variables having the largest influence on the response variable are needed to design management actions effectively, the use of various indices to evaluate variable importance is poorly understood. Using Monte Carlo simulations, we compared six different indices c...
Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony
2005-12-01
Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincaré (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincaré plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the
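The Low/High BG Indices mentioned above are computed from a symmetrizing transform of the BG scale. A sketch following the published Kovatchev risk formula (the constants come from that literature, not from this abstract, so treat them as assumptions to verify against the original paper):

```python
import numpy as np

def bg_risk_indices(bg):
    """Low/High BG Indices from BG readings in mg/dL.

    The transform f maps the clinically relevant BG range onto a symmetric
    scale so that hypo- and hyperglycemia carry comparable risk weight;
    risk = 10 * f^2, split by sign into the low and high components.
    """
    bg = np.asarray(bg, dtype=float)
    f = 1.509 * (np.log(bg) ** 1.084 - 5.381)
    risk = 10.0 * f ** 2
    lbgi = np.mean(np.where(f < 0, risk, 0.0))  # Low BG Index
    hbgi = np.mean(np.where(f > 0, risk, 0.0))  # High BG Index
    return lbgi, hbgi
```

Applied over a sliding window of CGM readings, these indices give the temporal risk traces that the abstract reports differentiating stages of transplantation.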
An Analysis of Variable-Speed Wind Turbine Power-Control Methods with Fluctuating Wind Speed
Directory of Open Access Journals (Sweden)
Seung-Il Moon
2013-07-01
Variable-speed wind turbines (VSWTs) typically use a maximum power-point tracking (MPPT) method to optimize wind-energy acquisition. MPPT can be implemented by regulating the rotor speed or by adjusting the active power. The former, termed speed-control mode (SCM), employs a speed controller to regulate the rotor, while the latter, termed power-control mode (PCM), uses an active power controller to optimize the power. They are fundamentally equivalent; however, since they use a different controller at the outer control loop of the machine-side converter (MSC), the time dependence of the control system differs depending on whether SCM or PCM is used. We have compared and analyzed the power quality and the power coefficient when these two different control modes were used in fluctuating wind speeds through computer simulations. The contrast between the two methods was larger when the wind-speed fluctuations were greater. Furthermore, we found that SCM was preferable to PCM in terms of the power coefficient, but PCM was superior in terms of power quality and system stability.
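The steady-state equivalence of the two MPPT modes can be seen in a small sketch: SCM commands the rotor speed that holds the optimal tip-speed ratio, while PCM commands P = k_opt·ω³ without needing a wind-speed measurement. The turbine parameters below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Illustrative turbine data: air density, rotor radius, peak power
# coefficient, and optimal tip-speed ratio.
RHO, R, CP_MAX, LAM_OPT = 1.225, 40.0, 0.48, 8.1

def scm_speed_ref(v_wind):
    """Speed-control mode: rotor-speed reference (rad/s) that holds the
    optimal tip-speed ratio lambda_opt = omega * R / v."""
    return LAM_OPT * v_wind / R

def pcm_power_ref(omega):
    """Power-control mode: active-power reference P = k_opt * omega^3 (W),
    with k_opt chosen so the curve passes through the maximum power points."""
    area = np.pi * R**2
    k_opt = 0.5 * RHO * area * CP_MAX * (R / LAM_OPT)**3
    return k_opt * omega**3
```

At steady state the two references coincide (k_opt·ω_ref³ equals the available optimal power 0.5·ρ·A·Cp_max·v³); the paper's contrast between SCM and PCM arises from their different outer-loop dynamics under fluctuating wind, not from these set-point curves.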
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2001-07-01
Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
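The precision advantage of permanent plots reported above comes from the positive correlation between revisits of the same plot, which cancels between-plot variation in the paired change estimate. A minimal simulation sketch (population means, correlation, and sample sizes are illustrative assumptions, far simpler than the paper's factorial design):

```python
import numpy as np

rng = np.random.default_rng(1)

def change_se(n_plots=50, rho=0.7, sd=1.0, n_sims=2000):
    """Empirical SE of the estimated population change (time 2 - time 1)
    using permanent (revisited, correlated) vs temporary (new) plots."""
    cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
    perm, temp = [], []
    for _ in range(n_sims):
        # Permanent plots: same plots at both times, correlated counts
        t1, t2 = rng.multivariate_normal([10.0, 12.0], cov, n_plots).T
        perm.append(np.mean(t2 - t1))
        # Temporary plots: an independent set of plots at time 2
        t2_new = rng.normal(12.0, sd, n_plots)
        temp.append(np.mean(t2_new) - np.mean(t1))
    return np.std(perm), np.std(temp)
```

Analytically the paired variance is 2σ²(1−ρ)/n versus 2σ²/n for independent plots, so the permanent-plot advantage grows with the revisit correlation ρ and shrinks when populations shift spatially (which erodes ρ), matching the abstract's findings.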
Directory of Open Access Journals (Sweden)
Qian Wang
2017-01-01
Different configurations of coupling strategies greatly influence the accuracy and convergence of the simulation results in the hybrid atomistic-continuum method. This study aims to quantitatively investigate this effect and offer guidance on how to choose the proper configuration of coupling strategies in the hybrid atomistic-continuum method. We first propose a hybrid molecular dynamics (MD)-continuum solver in LAMMPS and OpenFOAM that exchanges state variables between the atomistic region and the continuum region, and evaluate different configurations of coupling strategies using the sudden-start Couette flow, aiming to find the preferable configuration that delivers better accuracy and efficiency. The major findings are as follows: (1) the C→A region plays the most important role in the overlap region, and the "4-layer-1" combination achieves the best precision with a fixed width of the overlap region; (2) the data exchanging operation only needs a few sampling points close to the occasions of interactions, and decreasing the coupling exchange operations can reduce the computational load with acceptable errors; (3) the nonperiodic boundary force model with a smoothing parameter of 0.1 and a finer parameter of 20 can not only achieve the minimum disturbance near the MD-continuum interface but also keep the simulation precision.
Non-Chemical Distant Cellular Interactions as a Potential Confounder of Cell Biology Experiments
Directory of Open Access Journals (Sweden)
Ashkan Farhadi
2014-10-01
Distant cells can communicate with each other through a variety of methods. Two such methods involve electrical and/or chemical mechanisms. Non-chemical, distant cellular interactions may be another method of communication that cells can use to modify the behavior of other cells that are mechanically separated. Moreover, non-chemical, distant cellular interactions may explain some cases of confounding effects in cell biology experiments. In this article, we review studies of non-chemical, distant cellular interactions to try to shed light on the mechanisms in this highly unconventional field of cell biology. Despite the existence of several theories that try to explain the mechanism of non-chemical, distant cellular interactions, this phenomenon is still speculative. Among candidate mechanisms, electromagnetic waves appear to have the most experimental support. In this brief article, we try to answer a few key questions that may further clarify this mechanism.
Sensitivity analysis and power for instrumental variable studies.
Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S
2018-03-31
In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
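The Anderson-Rubin idea underlying the paper's sensitivity analysis tests a hypothesized treatment effect by checking whether the implied residual is unrelated to the instrument. A single-instrument sketch using a normal approximation (a simplification for illustration: the paper's method additionally bounds the IV-invalidity parameters, which is omitted here):

```python
import math
import numpy as np

def ar_test(y, d, z, beta0):
    """Anderson-Rubin-style test of H0: beta = beta0 in y = beta*d + u.

    Under H0 and IV validity, the residual y - beta0*d is unrelated to the
    instrument z, so we test the slope of that residual on z. Returns a
    two-sided p-value (normal approximation; valid for large n)."""
    e = y - beta0 * d
    zc = z - z.mean()
    n = len(y)
    bhat = zc @ e / (zc @ zc)              # slope of residual on instrument
    resid = e - e.mean() - bhat * zc
    se = math.sqrt(resid @ resid / (n - 2) / (zc @ zc))
    return math.erfc(abs(bhat / se) / math.sqrt(2.0))
```

Because the test statistic never divides by the first-stage coefficient, its size is controlled even when the instrument is weak, which is the property the abstract highlights.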
Energy Technology Data Exchange (ETDEWEB)
Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cole, Wesley J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Richards, James [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-01
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes the (1) contribution of VG to system capacity during high load and net load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data is available, greatly improving the representation of challenges
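The all-hours chronological idea described above can be sketched as a single pass over hourly load and VG profiles: VG above the hourly absorption headroom is counted as curtailment, and capacity value is scored as average VG output during the highest net-load hours. Representing system inflexibility by a single must-run thermal number is a strong simplification of the ReEDS implementation:

```python
import numpy as np

def vg_curtailment_and_cv(load, vg_gen, thermal_min, top_hours=100):
    """Chronological (all-8760-hours) screen of VG curtailment and capacity value.

    load, vg_gen : hourly MW profiles of the same length (e.g. one year).
    thermal_min  : must-run/inflexible thermal MW (simplified to one number).
    Returns (total curtailed MWh, capacity value in MW).
    """
    # Hourly headroom the system can absorb after must-run thermal output
    headroom = np.maximum(load - thermal_min, 0.0)
    usable = np.minimum(vg_gen, headroom)
    curtailed = vg_gen - usable
    # Capacity value: mean VG output over the top net-load hours
    net_load = load - vg_gen
    peak_hours = np.argsort(net_load)[-top_hours:]
    cap_value = vg_gen[peak_hours].mean()
    return curtailed.sum(), cap_value
```

Because every hour is visited, tail events (the coincident high-load/low-VG hours that hour-sampling CEMs can miss) are captured directly, which is the accuracy gain the abstract claims for the alternate approach.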
International Nuclear Information System (INIS)
Skuladottir, Margret; Ramel, Alfons; Rytter, Dorte; Haug, Line Småstuen; Sabaredzovic, Azemira; Bech, Bodil Hammer; Henriksen, Tine Brink; Olsen, Sjurdur F.; Halldorsson, Thorhallur I.
2015-01-01
Background: Perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) have consistently been associated with higher cholesterol levels in cross sectional studies. Concerns have, however, been raised about potential confounding by diet and clinical relevance. Objective: To examine the association between concentrations of PFOS and PFOA and total cholesterol in serum during pregnancy taking into considerations confounding by diet. Methods: 854 Danish women who gave birth in 1988–89 and provided a blood sample and reported their diet in week 30 of gestation. Results: Mean serum PFOS, PFOA and total cholesterol concentrations were 22.3 ng/mL, 4.1 ng/mL and 7.3 mmol/L, respectively. Maternal diet was a significant predictor of serum PFOS and PFOA concentrations. In particular intake of meat and meat products was positively associated while intake of vegetables was inversely associated (P for trend <0.01) with relative difference between the highest and lowest quartile in PFOS and PFOA concentrations ranging between 6% and 25% of mean values. After adjustment for dietary factors both PFOA and PFOS were positively and similarly associated with serum cholesterol (P for trend ≤0.01). For example, the mean increase in serum cholesterol was 0.39 mmol/L (95%CI: 0.09, 0.68) when comparing women in the highest to lowest quintile of PFOA concentrations. In comparison the mean increase in serum cholesterol was 0.61 mmol/L (95%CI: 0.17, 1.05) when comparing women in the highest to lowest quintile of saturated fat intake. Conclusion: In this study associations between PFOS and PFOA with serum cholesterol appeared unrelated to dietary intake and were similar in magnitude as the associations between saturated fat intake and serum cholesterol. - Highlights: • PFOS and PFOA have consistently been linked with raised serum cholesterol • Clinical relevance remains uncertain and confounding by diet has been suggested • The aim of this study was to address these issues in
Energy Technology Data Exchange (ETDEWEB)
Skuladottir, Margret; Ramel, Alfons [Faculty of Food Science and Nutrition, University of Iceland, Reykjavik (Iceland); Unit for Nutrition Research, Landspitali National University Hospital, Reykjavik (Iceland); Rytter, Dorte [Department of Public Health, Section for Epidemiology, Aarhus University, Aarhus (Denmark); Haug, Line Småstuen; Sabaredzovic, Azemira [Division of Environmental Medicine, Norwegian Institute of Public Health, Oslo (Norway); Bech, Bodil Hammer [Department of Public Health, Section for Epidemiology, Aarhus University, Aarhus (Denmark); Henriksen, Tine Brink [Pediatric Department, Aarhus University Hospital, Aarhus (Denmark); Olsen, Sjurdur F. [Center for Fetal Programming, Department of Epidemiology Research, Statens Serum Institut, Copenhagen (Denmark); Department of Nutrition, Harvard School of Public Health, Boston, MA (United States); Halldorsson, Thorhallur I., E-mail: tih@hi.is [Faculty of Food Science and Nutrition, University of Iceland, Reykjavik (Iceland); Unit for Nutrition Research, Landspitali National University Hospital, Reykjavik (Iceland); Center for Fetal Programming, Department of Epidemiology Research, Statens Serum Institut, Copenhagen (Denmark)
2015-11-15
Background: Perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) have consistently been associated with higher cholesterol levels in cross sectional studies. Concerns have, however, been raised about potential confounding by diet and clinical relevance. Objective: To examine the association between concentrations of PFOS and PFOA and total cholesterol in serum during pregnancy taking into considerations confounding by diet. Methods: 854 Danish women who gave birth in 1988–89 and provided a blood sample and reported their diet in week 30 of gestation. Results: Mean serum PFOS, PFOA and total cholesterol concentrations were 22.3 ng/mL, 4.1 ng/mL and 7.3 mmol/L, respectively. Maternal diet was a significant predictor of serum PFOS and PFOA concentrations. In particular intake of meat and meat products was positively associated while intake of vegetables was inversely associated (P for trend <0.01) with relative difference between the highest and lowest quartile in PFOS and PFOA concentrations ranging between 6% and 25% of mean values. After adjustment for dietary factors both PFOA and PFOS were positively and similarly associated with serum cholesterol (P for trend ≤0.01). For example, the mean increase in serum cholesterol was 0.39 mmol/L (95%CI: 0.09, 0.68) when comparing women in the highest to lowest quintile of PFOA concentrations. In comparison the mean increase in serum cholesterol was 0.61 mmol/L (95%CI: 0.17, 1.05) when comparing women in the highest to lowest quintile of saturated fat intake. Conclusion: In this study associations between PFOS and PFOA with serum cholesterol appeared unrelated to dietary intake and were similar in magnitude as the associations between saturated fat intake and serum cholesterol. - Highlights: • PFOS and PFOA have consistently been linked with raised serum cholesterol • Clinical relevance remains uncertain and confounding by diet has been suggested • The aim of this study was to address these issues in
Platelet-rich plasma differs according to preparation method and human variability.
Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut
2012-02-15
Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the levels of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods were applied to blood samples from eight subjects (mean age and standard deviation, 31.6 ± 10.9 years): two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors differed significantly among the separation methods. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than those produced by the single-step PRPLP and the two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. When the results of the three blood draws were compared with regard to the reliability of platelet and cell counts, wide variations in intra-individual numbers were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in
Shinn, Cândida; Blanchet, Simon; Loot, Géraldine; Lek, Sovan; Grenouillet, Gaël
2015-12-15
The response of organisms to environmental stress is currently used in the assessment of ecosystem health. Morphological changes integrate the multiple effects of one or several stress factors upon the development of the exposed organisms. In a natural environment, many factors determine the patterns of morphological differentiation between individuals. However, few studies have sought to distinguish and measure the independent effect of these factors (genetic diversity and structure, spatial structuring of populations, physical-chemical conditions, etc.). Here we investigated the relationship between pesticide levels measured at 11 sites sampled in rivers of the Garonne river basin (SW France) and morphological changes of a freshwater fish species, the gudgeon (Gobio gobio). Each individual sampled was genotyped using 8 microsatellite markers and their phenotype characterized via 17 morphological traits. Our analysis detected a link between population genetic structure (revealed by a Bayesian method) and morphometry (linear discriminant analysis) of the studied populations. We then developed an original method based on general linear models using distance matrices, an extension of the partial Mantel test beyond 3 matrices. This method was used to test the relationship between contamination (toxicity index) and morphometry (PST of morphometric traits), taking into account (1) genetic differentiation between populations (FST), (2) geographical distances between sites, (3) site catchment area, and (4) various physical-chemical parameters for each sampling site. Upon removal of confounding effects, 3 of the 17 morphological traits studied were significantly correlated with pesticide toxicity, suggesting a response of these traits to the anthropogenic stress. These results underline the importance of taking into account the different sources of phenotypic variability between organisms when identifying the stress factors involved. The separation and quantification of
Hauck, Yolande; Soler, Charles; Gérôme, Patrick; Vong, Rithy; Macnab, Christine; Appere, Géraldine; Vergnaud, Gilles; Pourcel, Christine
2015-07-01
Propionibacterium acnes plays a central role in the pathogenesis of acne and is responsible for severe opportunistic infections. Numerous typing schemes have been developed that allow the identification of phylotypes, but they are often insufficient to differentiate subtypes. To better understand the genetic diversity of this species and to perform epidemiological analyses, high-throughput discriminant genotyping techniques are needed. Here we describe the development of a multiple locus variable number of tandem repeats (VNTR) analysis (MLVA) method. Thirteen VNTRs were identified in the genome of P. acnes and were used to genotype a collection of clinical isolates. In addition, publicly available sequencing data for 102 genomes were analyzed in silico, providing an MLVA genotype. The clustering of MLVA data was in perfect congruence with whole genome based clustering. Analysis of the clustered regularly interspaced short palindromic repeat (CRISPR) element uncovered new spacers, a supplementary source of genotypic information. The present MLVA13 scheme and associated internet database represent a first-line genotyping assay to investigate large numbers of isolates. Particular strains may then be submitted to full genome sequencing in order to better analyze their pathogenic potential. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
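The core computation of this correlation-coefficient method can be sketched for a simulated AR(1) series; the coefficient, thresholds, and class labels below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series x_t = phi * x_{t-1} + e_t (hypothetical phi).
n, phi = 2000, 0.6
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Fit the AR(1) coefficient by least squares and form the dependence
# component phi_hat * x_{t-1}.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
dep = phi_hat * x[:-1]

# Correlation between the original series and its dependence component;
# for AR(1) this is close to the lag-1 autocorrelation phi.
r = np.corrcoef(x[1:], dep)[0, 1]

# Hypothetical thresholds dividing the correlation into variability classes.
classes = [(0.2, "no"), (0.4, "weak"), (0.6, "mid"),
           (0.8, "strong"), (1.01, "drastic")]
label = next(name for thr, name in classes if r < thr)
print(round(r, 2), label)
```

The threshold values here are placeholders; the paper selects them from the Monte-Carlo relationship between the correlation and auto-correlation coefficients.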
Methods for the Quasi-Periodic Variability Analysis in Blazars Y. Liu ...
Indian Academy of Sciences (India)
the variability analysis in blazars in optical and radio bands, to search for possible quasi-periodic signals. 2. Power spectral density (PSD). In statistical signal processing and physics, the power spectral density (PSD) is a positive real function of a frequency variable associated with a stationary stochas- tic process. Intuitively ...
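A minimal sketch of the periodogram estimate of the PSD described here, assuming evenly sampled data and a hypothetical signal frequency; a quasi-periodic signal shows up as a peak near its frequency:

```python
import numpy as np

rng = np.random.default_rng(1)

# Evenly sampled light curve: a periodic signal (hypothetical period) plus noise.
n, dt, f0 = 1024, 1.0, 0.05            # samples, sampling step, true frequency
t = np.arange(n) * dt
flux = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

# Periodogram estimate of the power spectral density (PSD).
freqs = np.fft.rfftfreq(n, d=dt)
psd = np.abs(np.fft.rfft(flux)) ** 2 / n

# The peak of the PSD recovers the injected frequency.
peak = freqs[np.argmax(psd[1:]) + 1]   # skip the zero-frequency bin
print(peak)
```

Real blazar light curves are unevenly sampled, so in practice methods such as the Lomb-Scargle periodogram replace the plain FFT periodogram used in this sketch.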
Price variability and marketing method in non-ferrous metals: Slade's analysis revisited
Gilbert, C.L.; Ferretti, F.
2002-01-01
We examine the impact of the pricing regime on price variability with reference to the non-ferrous metals industry. Theoretical arguments are ambiguous, but suggest that the extent of monopoly power is more important than the pricing regime as a determinant of variability. Slade (Quart. J. Econ. 106
van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée
1988-01-01
Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple
Directory of Open Access Journals (Sweden)
Zedong Bi
2016-08-01
Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we call the variability of synaptic weight changes during plasticity the efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods onto spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to evidently change a specific pattern statistic, and then investigate the change of the efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
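One member of this family of shuffling methods can be sketched as an inter-spike-interval (ISI) permutation, which preserves a train's firing rate while destroying its burst structure; the spike times below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

def shuffle_isis(spike_times, rng):
    """Permute inter-spike intervals: the rate is kept, burstiness is destroyed."""
    isis = np.diff(spike_times, prepend=0.0)
    return np.cumsum(rng.permutation(isis))

# A bursty train: tight doublets separated by long gaps (arbitrary time units).
train = np.array([10.0, 11.0, 50.0, 51.0, 90.0, 91.0])
shuffled = shuffle_isis(train, rng)

# Spike count and total duration are unchanged, so the mean rate is too;
# only the ordering of short and long gaps changes.
print(len(shuffled), float(shuffled[-1]))
```

Each shuffling method in the study targets a different pattern statistic; this ISI permutation is a minimal example of a rate-preserving, burstiness-destroying shuffle.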
International Nuclear Information System (INIS)
Pyun, J.J.
1981-01-01
As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results showed that the maximum mesh size increment/decrement is limited to α(Δr_i)², where Δr_i is the non-dimensional mesh size of the i-th cell, and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy.
Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.
Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H
2016-01-01
Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score methods, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods than logistic regression and propensity score models in small events-per-coefficient settings, bias and coverage still deviated from nominal.
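The stabilized inverse probability weighting idea can be sketched on simulated data; the data-generating coefficients are hypothetical, and the small Newton solver is a stand-in for a packaged logistic regression routine, not the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson (intercept column included in X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])   # Hessian
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Simulate a confounder C that drives both treatment A and outcome Y.
n = 20000
C = rng.standard_normal(n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.8 * C)))
Y = 1.0 * A + 1.5 * C + rng.standard_normal(n)   # true effect of A is 1.0

# Propensity score from a logistic model of A on C, then stabilized weights.
Xps = np.column_stack([np.ones(n), C])
ps = 1 / (1 + np.exp(-Xps @ fit_logistic(Xps, A)))
w = np.where(A == 1, A.mean() / ps, (1 - A.mean()) / (1 - ps))

# Crude contrast is confounded; the weighted contrast recovers the true effect.
crude = Y[A == 1].mean() - Y[A == 0].mean()
ipw = (np.average(Y[A == 1], weights=w[A == 1])
       - np.average(Y[A == 0], weights=w[A == 0]))
print(round(crude, 2), round(ipw, 2))
```

With ample data this weighting removes the confounding; the study's point is that at very low events per coefficient the propensity model itself becomes unstable.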
Assessing mediation using marginal structural models in the presence of confounding and moderation
Coffman, Donna L.; Zhong, Wei
2012-01-01
This paper presents marginal structural models (MSMs) with inverse propensity weighting (IPW) for assessing mediation. Generally, individuals are not randomly assigned to levels of the mediator. Therefore, confounders of the mediator and outcome may exist that limit causal inferences, a goal of mediation analysis. Either regression adjustment or IPW can be used to take confounding into account, but IPW has several advantages. Regression adjustment of even one confounder of the mediator and ou...
Feng, Yong; Chen, Aiqing
2017-01-01
This study aimed to quantify blood pressure (BP) measurement accuracy and variability with different techniques. Thirty video clips of BP recordings from the BHS training database were converted to Korotkoff sound waveforms. Ten observers without medical training were asked to determine BPs using (a) the traditional manual auscultatory method and (b) a visual auscultation method, by visualizing the Korotkoff sound waveform; this was repeated three times on different days. The measurement error was calculated against the reference answers, and the measurement variability was calculated from the SD of the three repeats. Statistical analysis showed that, in comparison with the auscultatory method, the visual method significantly reduced overall variability from 2.2 to 1.1 mmHg for SBP and from 1.9 to 0.9 mmHg for DBP (both p values significant between the two auscultation methods). In conclusion, the visual auscultation method had the ability to achieve an acceptable degree of BP measurement accuracy, with smaller variability in comparison with the traditional auscultatory method. PMID:29423405
International Nuclear Information System (INIS)
Park, Jessica J.; Chen, Ming-Hui; Loffredo, Marian; D’Amico, Anthony V.
2012-01-01
Purpose: Prostate-specific antigen (PSA) velocity, like PSA level, can be confounded. In this study, we estimated the impact that confounding factors could have on correctly identifying a patient with a PSA velocity >2 ng/ml/y. Methods and Materials: Between 2006 and 2010, a total of 50 men with newly diagnosed PC comprised the study cohort. We calculated and compared the false-positive and false-negative PSA velocity >2 ng/ml/y rates for all men and those with low-risk disease using two approaches to calculate PSA velocity. First, we used PSA values obtained within 18 months of diagnosis; second, we used values within 18 months of diagnosis, substituting the prebiopsy PSA for a repeat, nonconfounded PSA that was obtained using the same assay and without confounders. Results: Using PSA levels pre-biopsy, 46% of all men had a PSA velocity >2 ng/ml/y; whereas this value declined to 32% when substituting the last prebiopsy PSA for a repeat, nonconfounded PSA using the same assay and without confounders. The false-positive rate for PSA velocity >2 ng/ml/y was 43% as compared with a false-negative rate of PSA velocity >2 ng/ml/y of 11% (p = 0.0008) in the overall cohort. These respective values in the low-risk subgroup were 60% and 16.7% (p = 0.09). Conclusion: This study provides evidence to explain the discordance in cancer-specific outcomes among groups investigating the prognostic significance of PSA velocity >2 ng/ml/y, and highlights the importance of patient education on potential confounders of the PSA test before obtaining PSA levels.
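PSA velocity is commonly computed as the least-squares slope of PSA against time, so a single confounded final value directly inflates it; a sketch with made-up numbers (not the study's data):

```python
import numpy as np

# PSA values (ng/ml) at times relative to diagnosis (years); hypothetical data.
t = np.array([-1.5, -1.0, -0.5, 0.0])
psa = np.array([3.1, 4.0, 5.2, 6.5])          # last value may be confounded

# PSA velocity as the least-squares slope, in ng/ml/y.
slope_with_last = np.polyfit(t, psa, 1)[0]

# Substituting a repeat, nonconfounded PSA for the prebiopsy value
# (e.g. 5.6 instead of 6.5, a hypothetical remeasurement) lowers the velocity
# below the 2 ng/ml/y threshold, illustrating a false positive.
psa_repeat = np.array([3.1, 4.0, 5.2, 5.6])
slope_repeat = np.polyfit(t, psa_repeat, 1)[0]

print(slope_with_last > 2, slope_repeat > 2)
```

This mirrors the study's design: the classification around the >2 ng/ml/y cutoff can flip when the final PSA is re-measured without confounders.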
Energy Technology Data Exchange (ETDEWEB)
Romberger, Jeff [SBW Consulting, Inc., Bellevue, WA (United States)
2017-06-21
An adjustable-speed drive (ASD) includes all devices that vary the speed of a rotating load, including those that vary the motor speed and linkage devices that allow constant motor speed while varying the load speed. The Variable Frequency Drive Evaluation Protocol presented here addresses evaluation issues for variable-frequency drives (VFDs) installed on commercial and industrial motor-driven centrifugal fans and pumps for which torque varies with speed. Constant torque load applications, such as those for positive displacement pumps, are not covered by this protocol.
Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E
2016-02-01
Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.
Hoffmann, Sebastian
2015-01-01
The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation - to account for the variability of reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates for EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies and provide estimates thereof.
International Nuclear Information System (INIS)
Hajnal, M.A.; Toth, E.; Hamori, K.; Minda, M.; Koteles, Gy.J.
2007-01-01
Complete text of publication follows. Objective. The aim of this study was to examine and further clarify the extent of radon and progeny induced carcinogenesis, both separated from and combined with other confounders and health risk factors. This work was financed by the National Development Agency, Hungary, with GVOP-3.1.1.-2004-05-0384/3.0. Methods. A case-control study was conducted in a Hungarian countryside region where the proportion of houses with a yearly average radon level above 200 Bq·m⁻³ was estimated to be higher than 20% by our preceding regional surveys. Radon levels were measured with CR39 closed etched detectors for three seasons separately, yielding a yearly average by estimating the low summer level. The detectors were placed in the bedrooms, where people were expected to spend one third of a day. 520 patients with diagnosed cancers were included in these measurements, amongst which 77 developed lung or respiratory cancers. The control group consisted of 6333 individuals above 30 years of age. Lifestyle risk factors of cancers were collected by surveys including social status, pollution from indoor heating, smoking and alcohol history, nutrition, exercise and mental health index 5. Except for smoking and alcohol habits, these cofactors were only available for the control group. Comparing disease occurrences, the authors selected multivariate generalised linear models. The case and control proportions along a given factor are binomially distributed, thus the logit link function was used. For radon, both log and linear terms were probed. Results. Many known health confounders of cancers correlated with radon levels, with an estimated total net increase of 50-150 Bq·m⁻³ with increased risks. For lung cancers, the model with the terms radon, age, gender and smoking was found to have the lowest Akaike Information Criterion (AIC). Heavy dependency on age, gender and smoking contributes largely to the observed lung cancer incidence. However, log linear relationship
Directory of Open Access Journals (Sweden)
Wen-ku Shi
2016-01-01
The composite stiffness of parabolic leaf springs with variable stiffness is difficult to calculate using traditional integral equations. Numerical integration or FEA may be used but will require computer-aided software and long calculation times. An efficient method for calculating the composite stiffness of parabolic leaf springs with variable stiffness is developed and evaluated to reduce the complexity of calculation and shorten the calculation time. A simplified model for double-leaf springs with variable stiffness is built, and a composite stiffness calculation method for the model is derived using displacement superposition and material deformation continuity. The proposed method can be applied on triple-leaf and multileaf springs. The accuracy of the calculation method is verified by the rig test and FEA analysis. Finally, several parameters that should be considered during the design process of springs are discussed. The rig test and FEA analytical results indicate that the calculated results are acceptable. The proposed method can provide guidance for the design and production of parabolic leaf springs with variable stiffness. The composite stiffness of the leaf spring can be calculated quickly and accurately when the basic parameters of the leaf spring are known.
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2007-01-01
PURPOSE: The purpose of the study is to compare different approaches to the identification of confounders needed for analyzing observational data. Whereas standard analysis usually is conducted as if the confounders were known a priori, selection uncertainty also must be taken into account. METHO...
Epidemiology of dietary components and disease risk limits interpretability due to potential residual confounding by correlated dietary components. Dietary pattern analyses by factor analysis or partial least squares may overcome this limitation. To examine confounding by dietary pattern as well as ...
Modified Pressure-Correction Projection Methods: Open Boundary and Variable Time Stepping
Bonito, Andrea; Guermond, Jean-Luc; Lee, Sanghyun
2014-10-31
© Springer International Publishing Switzerland 2015. In this paper, we design and study two modifications of the first order standard pressure increment projection scheme for the Stokes system. The first scheme improves the existing schemes in the case of open boundary condition by modifying the pressure increment boundary condition, thereby minimizing the pressure boundary layer and recovering the optimal first order decay. The second scheme allows for variable time stepping. It turns out that the straightforward modification to variable time stepping leads to unstable schemes. The proposed scheme is not only stable but also exhibits the optimal first order decay. Numerical computations illustrating the theoretical estimates are provided for both new schemes.
Energy Technology Data Exchange (ETDEWEB)
Schafer, Alexandro G. [Universidade Federal do Pampa (UNIPAMPA), Bage, RS (Brazil)
2009-07-01
There are several methods for risk assessment and risk management applied to pipelines, among them Muhlbauer's method. Muhlbauer is an internationally recognized authority on pipeline risk management. The purpose of this model is to evaluate the public exposure to risk and to identify ways to manage that risk. The assessment is made by the attribution of quantitative values to the several items that influence the pipeline risk. Because the ultimate goal of the risk assessment is to provide a means of risk management, it is sometimes useful to make a distinction between two types of risk variables. The risk evaluator can categorize each index risk variable as either an attribute or a prevention. This paper approaches the subject of the definition of attributes and preventions in Muhlbauer's basic model of risk assessment and also presents a classification of the variables that influence the risk according to those two categories. (author)
International Nuclear Information System (INIS)
Ka-Lin, Su; Yuan-Xi, Xie
2010-01-01
By introducing a more general auxiliary ordinary differential equation (ODE), a modified variable separated ordinary differential equation method is presented for solving the (2 + 1)-dimensional sine-Poisson equation. As a result, many explicit and exact solutions of the (2 + 1)-dimensional sine-Poisson equation are derived in a simple manner by this technique. (general)
Variable order one-step methods for initial value problems I ...
African Journals Online (AJOL)
A class of variable order one-step integrators is proposed for Initial Value Problems (IVPs) in Ordinary Differential Equations (ODEs). It is based on a rational interpolant. Journal of the Nigerian Association of Mathematical Physics Vol. 10 2006: pp. 91-96 ...
Directory of Open Access Journals (Sweden)
Hukharnsusatrue, A.
2005-11-01
The objective of this research is to compare methods for estimating multiple regression coefficients in the presence of multicollinearity among independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR), and the Restricted Liu method (RL), evaluated when the restrictions are true and when they are not. The study used Monte Carlo simulation, with the experiment repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. The RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of standard deviation equal to 3 with small sample sizes, where the RRR method provides the smallest AMSE. The AMSE increases, from most to least, with the level of correlation, the standard deviation, and the number of independent variables, and decreases with sample size. CASE 2: The restrictions are not true. In all cases, the RRR method provides the smallest AMSE, except in the case of standard deviation equal to 1 and error of restrictions equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or middle and the sample size is large, while for small sample sizes the RL method provides the smallest AMSE. In addition, when the error of restrictions is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and sample sizes are small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
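A simplified Monte Carlo comparison in the same spirit (unrestricted OLS versus plain ridge rather than the restricted estimators; the sample size, correlation level, and penalty are hypothetical) shows why shrinkage wins under strong multicollinearity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Settings: n observations, k correlated regressors, equicorrelation rho,
# ridge penalty lam, number of Monte Carlo replications (all hypothetical).
n, k, rho, lam, reps = 30, 4, 0.95, 1.0, 500
beta = np.ones(k)
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
L = np.linalg.cholesky(cov)

mse_ols = mse_ridge = 0.0
for _ in range(reps):
    X = rng.standard_normal((n, k)) @ L.T          # correlated regressors
    y = X @ beta + rng.standard_normal(n)
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    b_ridge = np.linalg.solve(XtX + lam * np.eye(k), X.T @ y)
    mse_ols += np.sum((b_ols - beta) ** 2)
    mse_ridge += np.sum((b_ridge - beta) ** 2)

# Average mean square error over replications (the study's AMSE criterion).
amse_ols, amse_ridge = mse_ols / reps, mse_ridge / reps
print(amse_ridge < amse_ols)
```

The restricted variants (RLS, RRR, RL) additionally impose linear restrictions on beta; this sketch only isolates the shrinkage-versus-OLS effect the abstract reports.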
Influence of potentially confounding factors on sea urchin porewater toxicity tests
Carr, R.S.; Biedenbach, J.M.; Nipper, M.
2006-01-01
The influence of potentially confounding factors has been identified as a concern for interpreting sea urchin porewater toxicity test data. The results from >40 sediment-quality assessment surveys using early-life stages of the sea urchin Arbacia punctulata were compiled and examined to determine acceptable ranges of natural variables such as pH, ammonia, and dissolved organic carbon on the fertilization and embryological development endpoints. In addition, laboratory experiments were also conducted with A. punctulata and compared with information from the literature. Pore water with pH as low as 6.9 is an unlikely contributor to toxicity for the fertilization and embryological development tests with A. punctulata. Other species of sea urchin have narrower pH tolerance ranges. Ammonia is rarely a contributing factor in pore water toxicity tests using the fertilization endpoint, but the embryological development endpoint may be influenced by ammonia concentrations commonly found in porewater samples. Therefore, ammonia needs to be considered when interpreting results for the embryological development test. Humic acid does not affect sea urchin fertilization at saturation concentrations, but it could have an effect on the embryological development endpoint at near-saturation concentrations. There was no correlation between sediment total organic carbon concentrations and porewater dissolved organic carbon concentrations. Because of the potential for many varying substances to activate parthenogenesis in sea urchin eggs, it is recommended that a no-sperm control be included with every fertilization test treatment. © 2006 Springer Science+Business Media, Inc.
Effect of water quality and confounding factors on digestive enzyme activities in Gammarus fossarum.
Charron, L; Geffard, O; Chaumot, A; Coulaud, R; Queau, H; Geffard, A; Dedourge-Geffard, O
2013-12-01
The feeding activity and subsequent assimilation of the products resulting from food digestion allow organisms to obtain energy for growth, maintenance and reproduction. Among these biological parameters, we studied digestive enzymes (amylase, cellulase and trypsin) in Gammarus fossarum to assess the impact of contaminants on their access to energy resources. However, to enable objective assessment of a toxic effect of decreased water quality on an organism's digestive capacity, it is necessary to establish reference values based on its natural variability as a function of changing biotic and abiotic factors. To limit the confounding influence of biotic factors, a caging approach with calibrated male organisms from the same population was used. This study applied an in situ deployment at 23 sites of the Rhone basin rivers, complemented by a laboratory experiment assessing the influence of two abiotic factors (temperature and conductivity). The results showed a small effect of conductivity on cellulase activity and a significant effect of temperature on digestive enzyme activity but only at the lowest temperature (7 °C). The experimental conditions allowed us to define an environmental reference value for digestive enzyme activities to select sites where the quality of the water impacted the digestive capacity of the organisms. In addition to the feeding rate, this study showed the relevance of digestive enzymes as biomarkers to be used as an early warning tool to reflect organisms' health and the chemical quality of aquatic ecosystems.
Quilty, J.; Adamowski, J. F.
2015-12-01
Urban water supply systems are often stressed during seasonal outdoor water use because climate-related water demands are variable, making it difficult to optimize operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs may perform poorly because they cannot account for increases or decreases in demand related to those conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally improving forecast accuracy. This study produces data-driven UWD forecasts for two Canadian water utilities (Montreal and Victoria) using machine learning methods, first selecting among historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevance, conditional relevance, and redundancy within a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
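The abstract does not give the estimator it uses, but the idea of relevance versus conditional relevance can be sketched with a simple histogram-based estimate of (conditional) mutual information. The variable names (`x1`, `x2`, `y`) and the binned estimator are illustrative assumptions, not the paper's method:

```python
import numpy as np

def joint_entropy(*cols, bins=8):
    """Shannon entropy (nats) of jointly discretized columns."""
    hist, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mi(x, y, bins=8):
    """Mutual information I(X; Y): relevance of a candidate input for the target."""
    return joint_entropy(x, bins=bins) + joint_entropy(y, bins=bins) - joint_entropy(x, y, bins=bins)

def cmi(x, y, z, bins=8):
    """Conditional mutual information I(X; Y | Z): what x adds about y
    once an already-selected input z is accounted for (redundancy-aware)."""
    return (joint_entropy(x, z, bins=bins) + joint_entropy(y, z, bins=bins)
            - joint_entropy(z, bins=bins) - joint_entropy(x, y, z, bins=bins))

rng = np.random.default_rng(0)
n = 20000
x1 = rng.normal(size=n)              # e.g., one meteorological driver
x2 = x1 + 0.1 * rng.normal(size=n)   # a nearly redundant second input
y = x1 + 0.5 * rng.normal(size=n)    # demand driven by x1

relevance = mi(x2, y)                    # x2 looks informative on its own
conditional_relevance = cmi(x2, y, x1)   # ...but adds little once x1 is selected
```

A greedy forward-selection loop would repeatedly pick the candidate maximizing this conditional relevance given the inputs already chosen.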
Aubert, A. H.; Tavenard, R.; Emonet, R.; De Lavenne, A.; Malinowski, S.; Guyet, T.; Quiniou, R.; Odobez, J.; Merot, P.; Gascuel-odoux, C.
2013-12-01
Studying floods has been a major issue in hydrological research for years, in both quantitative and qualitative hydrology. Stream chemistry is a mix of solutes, often used as tracers, as they originate from various sources in the catchment and reach the stream by various flow pathways. Previous studies (for instance (1)) hypothesized that the stream chemistry reaction to a rainfall event is not unique but varies seasonally and according to the yearly meteorological conditions. Identifying a typology of flood temporal chemical patterns is a way to better understand catchment processes at the flood and seasonal time scales. We applied a probabilistic model (Latent Dirichlet Allocation, or LDA (2)) to mine recurrent sequential patterns from a dataset of floods. A set of 472 floods was automatically extracted from a daily 12-year record of nitrate, dissolved organic carbon, sulfate and chloride concentrations. Rainfall, discharge, water table depth and temperature are also considered. Data come from a long-term hydrological observatory (AgrHys, western France) located at Kervidy-Naizin. From each flood, a document has been generated that is made of a set of "hydrological words". Each hydrological word corresponds to a measurement: it is a triplet made of the variable considered, the time at which the measurement is made (relative to the beginning of the flood), and its magnitude (low, medium or high). The documents and the number of patterns to be mined are the input data to the LDA algorithm. LDA relies on spotting co-occurrences (as an alternative to the more traditional study of correlation) between words that appear within the flood documents. It has two attractive properties: it deals easily with missing data, and its additive property allows a document to be seen as a mixture of several flood patterns. The output of LDA is a set of patterns easily represented in graphics. These patterns correspond to typical reactions to rainfall
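The document-of-"hydrological-words" encoding can be sketched with scikit-learn's LDA implementation. The word strings below (e.g. `rain_t0_high`, a variable_time_magnitude triplet) are invented toy data, not the AgrHys vocabulary, and the real study mined 472 flood documents:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each flood becomes a "document" whose words are variable_time_magnitude triplets.
floods = [
    "rain_t0_high discharge_t1_high nitrate_t1_low doc_t1_high",
    "rain_t0_high discharge_t1_high nitrate_t1_low doc_t1_high",
    "rain_t0_low discharge_t2_medium nitrate_t2_high doc_t2_low",
    "rain_t0_low discharge_t2_medium nitrate_t2_high doc_t2_low",
]
X = CountVectorizer(token_pattern=r"\S+").fit_transform(floods)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
mixture = lda.transform(X)  # each flood expressed as a mixture of flood patterns
```

The additive property mentioned in the abstract appears here as `mixture`: each row is a probability vector over the mined patterns, so a single flood can be part one pattern, part another.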
Binder, Gerhard; Weber, Karin; Apel, Anja; Roeben, Benjamin; Deuschle, Christian; Maechtel, Mirjam; Heger, Tanja; Nussbaum, Susanne; Gasser, Thomas; Maetzler, Walter; Berg, Daniela
2016-01-01
Introduction Biomarkers indicating trait, progression and prediction of pathology and symptoms in Parkinson's disease (PD) often lack specificity or reliability. Investigating biomarker variance between individuals and over time and the effect of confounding factors is essential for the evaluation of biomarkers in PD, such as insulin-like growth factor 1 (IGF-1). Materials and Methods IGF-1 serum levels were investigated in up to 8 biannual visits in 37 PD patients and 22 healthy controls (HC) in the longitudinal MODEP study. IGF-1 baseline levels and annual changes in IGF-1 were compared between PD patients and HC while accounting for baseline disease duration (19 early stage: ≤3.5 years; 18 moderate stage: >4 years), age, sex, body mass index (BMI) and common medical factors putatively modulating IGF-1. In addition, associations of baseline IGF-1 with annual changes of motor, cognitive and depressive symptoms and medication dose were investigated. Results PD patients in moderate (130±26 ng/mL; p = .004), but not early stages (115±19, p>.1), showed significantly increased baseline IGF-1 levels compared with HC (106±24 ng/mL; p = .017). Age had a significant negative correlation with IGF-1 levels in HC (r = -.47, p = .028) and no correlation in PD patients (r = -.06, p>.1). BMI was negatively correlated in the overall group (r = -.28, p = .034). The annual changes in IGF-1 did not differ significantly between groups and were not correlated with disease duration. Baseline IGF-1 levels were not associated with annual changes of clinical parameters. Discussion Elevated IGF-1 in serum might differentiate between patients in moderate PD stages and HC. However, the value of serum IGF-1 as a trait-, progression- and prediction marker in PD is limited as IGF-1 showed large inter- and intraindividual variability and may be modulated by several confounders. PMID:26967642
A Method to Derive Monitoring Variables for a Cyber Security Test-bed of I and C System
International Nuclear Information System (INIS)
Han, Kyung Soo; Song, Jae Gu; Lee, Joung Woon; Lee, Cheol Kwon
2013-01-01
In the IT field, monitoring techniques have been developed to protect systems connected by networks from cyber attacks and incidents. For the development of monitoring systems for I and C cyber security, it is necessary to review the monitoring systems in the IT field and derive cyber security-related monitoring variables from the proprietary operating information about the I and C systems. Tests for the development and application of these monitoring systems may cause adverse effects on the I and C systems. To analyze influences on the system and safely identify the intended variables, an I and C system test-bed should be constructed first. This article proposes a method of deriving variables that should be monitored through a monitoring system for cyber security as a part of an I and C test-bed. The surveillance features and the monitored variables of NMS (Network Management System), a monitoring technique in the IT field, are reviewed in Section 2. In Section 3, the monitoring variables for I and C cyber security are derived from the review of NMS and an investigation of the information used by hacking techniques that can be practiced against I and C systems. The monitoring variables of NMS in the IT field and the information about the malicious behaviors used for hacking were derived as expected variables to be monitored for I and C cyber security research. The derived monitoring variables were classified into the five functions of NMS for efficient management. For the cyber security of I and C systems, the vulnerabilities should be understood through penetration testing, and an assessment of influences on the actual system should be carried out. Thus, constructing a test-bed of I and C systems is necessary for the safety system in operation. In the future, it will be necessary to develop a logging and monitoring system for studies on the vulnerabilities of I and C systems with test-beds.
Gong, Jing-Bo; Wang, Ya; Lui, Simon S Y; Cheung, Eric F C; Chan, Raymond C K
2017-11-01
Childhood trauma has been shown to be a robust risk factor for mental disorders, and may exacerbate schizotypal traits or contribute to autistic trait severity. However, little is known about whether childhood trauma confounds the overlap between schizotypal traits and autistic traits. This study examined whether childhood trauma acts as a confounding variable in the overlap between autistic and schizotypal traits in a large non-clinical adult sample. A total of 2469 participants completed the Autism Spectrum Quotient (AQ), the Schizotypal Personality Questionnaire (SPQ), and the Childhood Trauma Questionnaire-Short Form. Correlation analysis showed that the majority of associations between AQ variables and SPQ variables were significant. However, the overlap between autistic and schizotypal traits could not be explained by shared variance in terms of exposure to childhood trauma. The findings point to important overlaps in the conceptualization of ASD and SSD, independent of childhood trauma. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
D. Olvera
2015-01-01
We expand the application of the enhanced multistage homotopy perturbation method (EMHPM) to solve delay differential equations (DDEs) with constant and variable coefficients. The EMHPM is based on a sequence of subintervals that provide approximate solutions requiring less CPU time than solutions computed with the dde23 MATLAB numerical integration algorithm. To address the accuracy of our proposed approach, we examine the solutions of several DDEs having constant and variable coefficients, finding predictions in good agreement with the corresponding numerical integration solutions.
r2VIM: A new variable selection method for random forests in genome-wide association studies.
Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E
2016-01-01
Machine learning methods, and in particular random forests (RFs), are a promising alternative to standard single-SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, the recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score across several runs of RF, and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects, it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified, while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
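The select-only-if-important-in-all-runs idea can be sketched as follows. This is a loose approximation: scikit-learn's Gini importance stands in for the permutation VIM used by r2VIM, the "minimal observed importance" reference is simplified to the smallest positive score, and the simulated genotypes and threshold of 3 are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, p = 300, 20
X = rng.integers(0, 3, size=(n, p)).astype(float)  # SNP-like genotypes 0/1/2
# phenotype driven by the first two SNPs only
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 3).astype(int)

runs, rel = 5, []
for seed in range(runs):
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    imp = rf.feature_importances_
    ref = imp[imp > 0].min()   # stand-in for the minimal observed importance
    rel.append(imp / ref)      # importance relative to the background score

rel = np.vstack(rel)
# keep SNPs whose relative importance exceeds the threshold in *all* runs
selected = np.flatnonzero((rel > 3).all(axis=0))
```

Requiring the threshold to be met in every run is what gives the method its control over run-to-run noise in the importance scores.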
Methods for Minimization and Management of Variability in Long-Term Groundwater Monitoring Results
2015-12-01
December 2015. Poonam Kulkarni, Charles Newell, Claire Krebs, Thomas McHugh (GSI Environmental, Inc.); Britt Sanford (ProHydro). Fragmentary excerpts: "...based on an understanding of the short-term variability and long-term attenuation rate at a particular site (McHugh et al., 2015a)." "...time is independent of these parameters (McHugh et al., 2015c). The relative trade-off between monitoring frequency and time required to..."
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then applies variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated, date-ordered dataset as the research dataset. The proposed time-series forecasting m...
Frisell, Thomas; Pawitan, Yudi; Långström, Niklas
2012-01-01
Research has consistently found lower cognitive ability to be related to increased risk for violent and other antisocial behaviour. Since this association has remained when adjusting for childhood socioeconomic position, ethnicity, and parental characteristics, it is often assumed to be causal, potentially mediated through school adjustment problems and conduct disorder. Socioeconomic differences are notoriously difficult to quantify, however, and it is possible that the association between intelligence and delinquency suffers substantial residual confounding. We linked longitudinal Swedish total population registers to study the association of general cognitive ability (intelligence) at age 18 (the Conscript Register, 1980-1993) with the incidence proportion of violent criminal convictions (the Crime Register, 1973-2009), among all men born in Sweden 1961-1975 (N = 700,514). Using probit regression, we controlled for measured childhood socioeconomic variables, and further employed sibling comparisons (family pedigree data from the Multi-Generation Register) to adjust for shared familial characteristics. Cognitive ability in early adulthood was inversely associated with having been convicted of a violent crime (β = -0.19, 95% CI: -0.19; -0.18), and the association remained when adjusting for childhood socioeconomic factors (β = -0.18, 95% CI: -0.18; -0.17). The association was somewhat lower within half-brothers raised apart (β = -0.16, 95% CI: -0.18; -0.14), within half-brothers raised together (β = -0.13, 95% CI: -0.15; -0.11), and lower still in full-brother pairs (β = -0.10, 95% CI: -0.11; -0.09). The attenuation among half-brothers raised together and full brothers was too strong to be attributed solely to measurement error. Our results suggest that the association between general cognitive ability and violent criminality is confounded partly by factors shared by brothers. However, most of the association remains even
Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size
Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Németh, Adrián
2016-01-01
Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order
Farrington, Stephen P.
2018-05-15
Systems, methods, and software for measuring the spatially variable relative dielectric permittivity of materials along a linear or otherwise configured sensor element, and more specifically the spatial variability of soil moisture in one dimension as inferred from the dielectric profile of the soil matrix surrounding a linear sensor element. Various methods provided herein combine advances in the processing of time domain reflectometry data with innovations in physical sensing apparatuses. These advancements enable high temporal (and thus spatial) resolution of electrical reflectance continuously along an insulated waveguide that is permanently emplaced in contact with adjacent soils. The spatially resolved reflectance is directly related to impedance changes along the waveguide that are dominated by electrical permittivity contrast due to variations in soil moisture. Various methods described herein are thus able to monitor soil moisture in profile with high spatial resolution.
Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method ...
Indian Academy of Sciences (India)
The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found an interesting period of 370 days, which appears in both the low/hard and high/soft states.
Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method
Indian Academy of Sciences (India)
The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found an interesting period of 370 days, which appears in both the low/hard and high/soft states. That period may be ...
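The Jurkevich statistic itself is simple to sketch: fold the light curve at a trial period, split it into phase bins, and sum the within-bin scatter; a deep minimum versus trial period marks a candidate periodicity. The synthetic 370-day sinusoid below is an illustrative stand-in for the Cyg X-1 light curve, and the bin count and trial grid are arbitrary choices:

```python
import numpy as np

def jurkevich_vm2(t, x, period, m=10):
    """Normalized Jurkevich statistic: pooled within-bin sum of squares of the
    phase-folded data, divided by the total sum of squares (~1 if no period)."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * m).astype(int), m - 1)
    v2 = 0.0
    for b in range(m):
        xb = x[bins == b]
        if xb.size > 1:
            v2 += np.sum((xb - xb.mean()) ** 2)
    return v2 / np.sum((x - x.mean()) ** 2)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3000, 1500))   # unevenly sampled epochs (days)
x = np.sin(2 * np.pi * t / 370.0) + 0.3 * rng.normal(size=t.size)

trial_periods = np.arange(100.0, 600.0, 2.0)
stats = np.array([jurkevich_vm2(t, x, p) for p in trial_periods])
best = trial_periods[np.argmin(stats)]    # deep minimum near the true 370 days
```

Note that integer multiples of the true period also produce minima, so in practice the trial grid (or an inspection of the full statistic curve) has to disambiguate harmonics.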
Knowlden, Adam P; Burns, Maranda; Harcrow, Andy; Shewmake, Meghan E
2016-03-16
Poor sleep quality is a significant public health problem. The role of nutrition in predicting sleep quality is a relatively unexplored area of inquiry. The purpose of this study was to evaluate the capacity of 10 food choice categories, sleep-confounding beverages, and psychological distress to predict the sleep quality of college students. A logistic regression model comprising 10 food choice variables (healthy proteins, unhealthy proteins, healthy dairy, unhealthy dairy, healthy grains, unhealthy grains, healthy fruits and vegetables, unhealthy empty calories, healthy beverages, unhealthy beverages), sleep-confounding beverages (caffeinated/alcoholic beverages), as well as psychological distress (low, moderate, serious distress) was computed to determine the capacity of the variables to predict sleep quality (good/poor). The odds of poor sleep quality were 32.4% lower for each unit of increased frequency of healthy proteins consumed, 13.1% higher for each unit of increased frequency of unhealthy empty calorie food choices consumed (p=0.003; OR=1.131), and 107.3% higher for those classified in the moderate psychological distress group (p=0.016; OR=2.073). Collectively, healthy proteins, healthy dairy, unhealthy empty calories, and moderate psychological distress were moderately predictive of sleep quality in the sample (Nagelkerke R2=23.8%). Results of the study suggested that a higher frequency of consumption of healthy protein and healthy dairy food choices reduced the odds of poor sleep quality, while higher consumption of empty calories and moderate psychological distress increased the odds of poor sleep quality.
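The percentage figures in the abstract are just odds ratios re-expressed: for a logistic regression coefficient β, the odds ratio is exp(β), and the percent change in odds per unit increase is (OR − 1) × 100. A minimal check of the arithmetic (the OR of 0.676 implied by "32.4% lower" is inferred here, not stated in the text):

```python
import math

def odds_ratio_from_coef(beta):
    """Logistic-regression coefficient -> odds ratio."""
    return math.exp(beta)

def pct_change_in_odds(odds_ratio):
    """Percent change in odds per one-unit increase in the predictor."""
    return (odds_ratio - 1.0) * 100.0

print(pct_change_in_odds(1.131))  # +13.1%: unhealthy empty calories
print(pct_change_in_odds(2.073))  # +107.3%: moderate psychological distress
# "32.4% lower" corresponds to an OR of about 0.676 (implied, not stated):
print(pct_change_in_odds(0.676))  # -32.4%
```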
Personality may confound common measures of mate-choice.
Directory of Open Access Journals (Sweden)
Morgan David
The measurement of female mating preferences is central to the study of the evolution of male ornaments. Although several different methods have been developed to assess sexual preference in a standardized way, the most commonly used procedure consists of recording female spatial association with different males presented simultaneously. Sexual preference is then inferred from the time spent in front of each male. However, the extent to which the measurement of female mate-choice is related to exploration tendencies has not been addressed so far. In the present study we assessed the influence of variation in exploration tendencies, a trait closely associated with global personality, on the measurement of female mating preference in the zebra finch (Taeniopygia guttata) using the widely used four-chamber choice apparatus. The number of movements performed within both the exploration and mate-choice apparatus was consistent within and across the two contexts. In addition, personality explained variation in selectivity, preference strength and consistency. High-exploratory females showed lower selectivity and lower preference scores, and displayed more consistent preference scores. Our results suggest that variation in personality may affect the measurement of female mating preference and may contribute to explaining existing inconsistencies across studies.
Directory of Open Access Journals (Sweden)
Zamorska Izabela
2018-01-01
The subject of the paper is an application of the non-destructive vibration method for identifying the locations of two cracks occurring in a beam. The vibration method is based on knowledge of a certain number of vibration frequencies of an undamaged element and of the same number of vibration frequencies of an element with a defect. The analyzed beam, with a variable cross-sectional area, has been described according to the Bernoulli-Euler theory. To determine the free vibration frequencies, an analytical solution based on the Green's function method has been used.
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with analytical methods other than the DTM, like the homotopy perturbation method.
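The core idea of the method of steps, reducing a DDE to a sequence of ODEs, can be illustrated numerically on the classic constant-delay equation x′(t) = −x(t − 1) with history x(t) = 1 for t ≤ 0. This example uses a standard ODE integrator rather than the DTM of the paper, and the test equation is an illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of steps: on each interval [k, k+1] the delayed term x(t - 1) is
# already known (from the history or the previous segment), so the DDE
# x'(t) = -x(t - 1) reduces to an ordinary differential equation.
tau = 1.0
prev = lambda t: 1.0          # history function, valid for t <= 0
x0, endpoints = 1.0, [1.0]

for k in range(4):
    t0 = k * tau
    sol = solve_ivp(lambda t, x, f=prev: [-f(t - tau)], (t0, t0 + tau), [x0],
                    dense_output=True, rtol=1e-9, atol=1e-9)
    prev = lambda t, s=sol: float(s.sol(t)[0])   # this segment feeds the next
    x0 = sol.y[0, -1]
    endpoints.append(x0)

# For this equation the exact values x(1) = 0 and x(2) = -1/2 follow by
# integrating segment by segment: on [0,1], x' = -1, so x(t) = 1 - t; etc.
```

The DTM-based method of steps in the paper works the same way structurally, but solves each segment analytically (as a series) instead of numerically.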
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, the fractal component is extracted from the HRV signal using the wavelet transform; next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model, and the parameter γ is estimated using the least squares method; finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 - (γ - 1)/2. To validate the stability and reliability of the proposed method, fractional Brownian motion was used to simulate 24 fractal signals with known fractal dimension (1.6); the results show that the method is stable and reliable.
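The spectral-exponent route to D can be sketched directly: for a 1/f^γ process, D = 2 − (γ − 1)/2, so estimating the log-log slope of the power spectrum recovers the fractal dimension. The sketch below substitutes a plain periodogram fit for the paper's wavelet/AR pipeline, and the synthetic 1/f^γ signal stands in for an HRV fractal component:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
gamma_true = 1.8   # spectral exponent; D = 2 - (gamma - 1)/2 = 1.6

# Spectral synthesis of a 1/f^gamma signal with random phases
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-gamma_true / 2.0)
spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
x = np.fft.irfft(spectrum, n)

# Estimate gamma as the negative slope of the log-log periodogram
# (the paper instead uses an AR spectral estimate; least squares either way)
psd = np.abs(np.fft.rfft(x)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
gamma_hat = -slope
D = 2.0 - (gamma_hat - 1.0) / 2.0   # close to 1.6 for this synthetic signal
```

For fractional Brownian motion with Hurst exponent H, γ = 2H + 1 and D = 2 − H, which is consistent with the formula above.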
Energy Technology Data Exchange (ETDEWEB)
McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)
2013-04-15
Purpose: Many approaches have been proposed to segment high-uptake objects in 18F-fluoro-deoxy-glucose positron emission tomography images, but none provides consistent performance across the large variety of imaging situations. This study investigates the use of two methods of combining individual segmentation methods to reduce the impact of inconsistent performance of the individual methods: simple majority voting and probabilistic estimation. Methods: The National Electrical Manufacturers Association image quality phantom, containing five glass spheres with diameters of 13-37 mm and two irregularly shaped volumes (16 and 32 cc) formed by deforming high-density polyethylene bottles in a hot water bath, was filled with 18F-fluoro-deoxy-glucose and iodine contrast agent. Repeated 5-min positron emission tomography (PET) images were acquired at 4:1 and 8:1 object-to-background contrasts for spherical objects and 4.5:1 and 9:1 for irregular objects. Five individual methods were used to segment each object: 40% thresholding, adaptive thresholding, k-means clustering, seeded region-growing, and a gradient-based method. Volumes were combined using a majority vote (MJV) or Simultaneous Truth And Performance Level Estimate (STAPLE) method. The accuracy of segmentations relative to CT ground truth volumes was assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distance (SMASD). Results: MJV had median DSC values of 0.886 and 0.875, and SMASD of 0.52 and 0.71 mm, for spheres and irregular shapes, respectively. STAPLE provided similar results, with median DSC of 0.886 and 0.871 and median SMASD of 0.50 and 0.72 mm for spheres and irregular shapes, respectively. STAPLE had significantly higher DSC and lower SMASD values than MJV for spheres (DSC, p < 0.0001; SMASD, p = 0.0101), but MJV had significantly higher DSC and lower SMASD values compared to STAPLE for irregular shapes (DSC, p < 0.0001; SMASD, p = 0.0027). DSC was not significantly
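Majority voting over binary masks and the Dice similarity coefficient are both simple to state in code. The toy 2D masks below are invented stand-ins for the five PET segmentations (STAPLE, being an EM-based probabilistic estimator, is not sketched here):

```python
import numpy as np

def majority_vote(masks):
    """Combine binary segmentations: a pixel is foreground if most methods agree."""
    stack = np.stack(masks).astype(int)
    return (stack.sum(axis=0) * 2 > stack.shape[0]).astype(int)

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((32, 32), int); truth[8:24, 8:24] = 1
m1 = np.zeros_like(truth); m1[8:24, 8:24] = 1    # perfect segmentation
m2 = np.zeros_like(truth); m2[9:25, 8:24] = 1    # shifted down by one pixel
m3 = np.zeros_like(truth); m3[8:24, 10:26] = 1   # shifted right by two pixels

mjv = majority_vote([m1, m2, m3])
# the vote suppresses each method's individual error, so dice(mjv, truth)
# exceeds the worst individual method's score
```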
Abrams, Keith R.; Amonkar, Mayur M.; Stapelkamp, Ceilidh; Swann, R. Suzanne
2015-01-01
Background. Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48–1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Materials and Methods. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, “treatment group” (assumes treatment effect could continue until death) and “on-treatment observed” (assumes treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. Results. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE “treatment group” and “on-treatment observed” analyses performed similarly well. Conclusion. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching—a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Implications for Practice: Treatment switching is common in oncology trials, and the implications of this for the interpretation of the
An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension
Directory of Open Access Journals (Sweden)
Yosra Marnissi
2018-02-01
In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or driven from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems—namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of signal corrupted by a mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous
Uncertainty in T1 mapping using the variable flip angle method with two flip angles
International Nuclear Information System (INIS)
Schabel, Matthias C; Morrell, Glen R
2009-01-01
Propagation of errors, in conjunction with the theoretical signal equation for spoiled gradient echo pulse sequences, is used to derive a theoretical expression for the uncertainty in quantitative variable flip angle T1 mapping using two flip angles. This expression is then minimized to derive a rigorous expression for the optimal flip angles that elucidates a commonly used empirical result. The theoretical expressions for uncertainty and optimal flip angles are combined to derive a lower bound on the achievable uncertainty for a given set of pulse sequence parameters and signal-to-noise ratio (SNR). These results provide a means of quantitatively determining the effect of changing acquisition parameters on T1 uncertainty.
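The underlying two-angle estimation is the standard DESPOT1 linearization of the spoiled gradient echo signal equation, S(α) = M0·sin(α)·(1 − E1)/(1 − E1·cos(α)) with E1 = exp(−TR/T1): plotting S/sin(α) against S/tan(α) gives a line with slope E1. The TR, flip angles, and tissue values below are illustrative, and the paper's contribution (the uncertainty bound) is not reproduced here:

```python
import numpy as np

TR = 0.005                 # repetition time (s); illustrative
T1_true, M0 = 1.0, 100.0   # illustrative tissue T1 (s) and equilibrium signal

def spgr(M0, T1, TR, alpha):
    """Spoiled gradient echo signal equation."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

def t1_two_angles(S1, S2, a1, a2, TR):
    """DESPOT1-style linearization: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    so the slope through the two measured points gives E1, hence T1."""
    x1, y1 = S1 / np.tan(a1), S1 / np.sin(a1)
    x2, y2 = S2 / np.tan(a2), S2 / np.sin(a2)
    E1 = (y2 - y1) / (x2 - x1)
    return -TR / np.log(E1)

a1, a2 = np.deg2rad(3.0), np.deg2rad(15.0)
S1, S2 = spgr(M0, T1_true, TR, a1), spgr(M0, T1_true, TR, a2)
T1_est = t1_two_angles(S1, S2, a1, a2, TR)   # exact in the noiseless case
```

With noisy signals, the precision of `T1_est` depends strongly on the flip angle pair, which is exactly the dependence the paper's propagation-of-errors analysis quantifies and optimizes.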
Analysis of electrical circuits with variable load regime parameters projective geometry method
Penin, A
2015-01-01
This book introduces electric circuits with variable loads and voltage regulators. It allows one to define invariant relationships for various parameters of regime and circuit sections and to prove the concepts characterizing these circuits. Generalized equivalent circuits are introduced. Projective geometry is used for the interpretation of changes of operating regime parameters. Expressions of normalized regime parameters and their changes are presented. Convenient formulas for the calculation of currents are given. Parallel voltage sources and the cascade connection of multi-port networks are d
A Method of Approximating Expectations of Functions of Sums of Independent Random Variables
Klass, Michael J.
1981-01-01
Let $X_1, X_2, \cdots$ be a sequence of independent random variables with $S_n = \sum^n_{i = 1} X_i$. Fix $\alpha > 0$. Let $\Phi(\cdot)$ be a continuous, strictly increasing function on $\lbrack 0, \infty)$ such that $\Phi(0) = 0$ and $\Phi(cx) \leq c^\alpha\Phi(x)$ for all $x > 0$ and all $c \geq 2$. Suppose $a$ is a real number and $J$ is a finite nonempty subset of the positive integers. In this paper we are interested in approximating $E \max_{j \in J} \Phi(|a + S_j|)$. We construct a nu...
Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites
Yong, A.; Boatwright, J.; Martin, A. J.
2015-12-01
The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
Energy Technology Data Exchange (ETDEWEB)
Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
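The sum-of-Lorentzians property follows from the CARMA power spectrum being a rational function of frequency. The sketch below evaluates that spectrum directly from the autoregressive and moving-average polynomials; the CARMA(2,1) coefficients are illustrative assumptions, and this is not the authors' inference code.

```python
import numpy as np

def carma_psd(freqs_hz, alpha, beta, sigma=1.0):
    """Power spectrum of a CARMA(p, q) process at s = 2*pi*i*f:
    S(f) = sigma^2 * |beta(s)|^2 / |alpha(s)|^2.
    alpha, beta: polynomial coefficients, highest order first (numpy convention)."""
    s = 2j * np.pi * np.asarray(freqs_hz, dtype=float)
    num = np.abs(np.polyval(beta, s)) ** 2
    den = np.abs(np.polyval(alpha, s)) ** 2
    return sigma ** 2 * num / den

# Illustrative CARMA(2,1): AR roots at -0.1 +/- 0.5i (a damped oscillation),
# so alpha(s) = s^2 + 0.2 s + 0.26; MA polynomial beta(s) = s + 0.5.
f = np.logspace(-3, 2, 500)
psd = carma_psd(f, alpha=[1.0, 0.2, 0.26], beta=[1.0, 0.5])
```

The complex AR roots put a Lorentzian bump near f = 0.5/(2π) ≈ 0.08 Hz, and the spectrum falls off at high frequency, consistent with the flexible sum-of-Lorentzians form described in the abstract.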
Inverse kinematics for the variable geometry truss manipulator via a Lagrangian dual method
Directory of Open Access Journals (Sweden)
Yanchun Zhao
2016-11-01
This article studies the inverse kinematics problem of the variable geometry truss manipulator. The problem is cast as an optimization process which can be divided into two steps. Firstly, according to the information about the location of the end effector and fixed base, an optimal center curve and the corresponding distribution of the intermediate platforms along this center line are generated. This procedure is implemented by solving a non-convex optimization problem that has a quadratic objective function subject to quadratic constraints. Then, in accordance with the distribution of the intermediate platforms along the optimal center curve, all lengths of the actuators are calculated via the inverse kinematics of each variable geometry truss module. Hence, the approach that we present is an optimization procedure that attempts to generate the optimal intermediate platform distribution along the optimal central curve, while the performance index and kinematic constraints are satisfied. By using the Lagrangian duality theory, a closed-form optimal solution of the original optimization is given. The numerical simulation substantiates the effectiveness of the introduced approach.
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
DEFF Research Database (Denmark)
Garcia-Aymerich, Judith; Lange, Peter; Serra, Ignasi
2008-01-01
PURPOSE: Results from longitudinal studies about the association between physical activity and chronic obstructive pulmonary disease (COPD) may have been biased because they did not properly adjust for time-dependent confounders. Marginal structural models (MSMs) have been proposed to address this type of confounding. We sought to assess the presence of time-dependent confounding in the association between physical activity and COPD development and course by comparing risk estimates between standard statistical methods and MSMs. METHODS: By using the population-based cohort Copenhagen City Heart Study, 6,568 subjects selected from the general population in 1976 were followed up until 2004 with three repeated examinations. RESULTS: Moderate to high compared with low physical activity was associated with a reduced risk of developing COPD both in the standard analysis (odds ratio [OR] 0.76, p = 0...
Directory of Open Access Journals (Sweden)
Ni An
2017-04-01
When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose. Method 1 relies on use of air temperature, while Method 2 relies on use of both air and soil temperatures. Nowadays, there is no consensus on the application of these two methods. In this study, the half-hourly data of solar radiation recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is more feasible for its simplicity and accuracy at shorter time-scales. Moreover, in the case of longer time-scales, daily for instance, less variation of net radiation and long-wave radiation is obtained, suggesting that no detailed soil temperature variations can be captured. In other words, shorter time-scales are preferred in determining the net radiation flux.
Latimer, Nicholas R; Abrams, Keith R; Amonkar, Mayur M; Stapelkamp, Ceilidh; Swann, R Suzanne
2015-07-01
Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48-1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, "treatment group" (assumes treatment effect could continue until death) and "on-treatment observed" (assumes treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE "treatment group" and "on-treatment observed" analyses performed similarly well. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching; a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Treatment switching is common in oncology trials, and the implications of this for the interpretation of the clinical effectiveness and cost-effectiveness of the novel treatment are important to consider. If
Osetrin, Evgeny; Osetrin, Konstantin
2017-11-01
We consider space-time models with pure radiation, which admit integration of the eikonal equation by the method of separation of variables. For all types of these models, the equations of the energy-momentum conservation law are integrated. The resulting form of metric, energy density, and wave vectors of radiation as functions of metric for all types of spaces under consideration is presented. The solutions obtained can be used for any metric theories of gravitation.
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
Quantifying the potential role of unmeasured confounders : the example of influenza vaccination
Groenwold, R H H; Hoes, A W; Nichol, K L; Hak, E
2008-01-01
BACKGROUND: The validity of non-randomized studies using healthcare databases is often challenged because they lack information on potentially important confounders, such as functional health status and socioeconomic status. In a study quantifying the effects of influenza vaccination among
An Inclusive Design Method for Addressing Human Variability and Work Performance Issues
Directory of Open Access Journals (Sweden)
Amjad Hussain
2013-07-01
Humans play vital roles in manufacturing systems, but work performance is strongly influenced by factors such as experience, age, level of skill, physical and cognitive abilities and attitude towards work. Current manufacturing system design processes need to consider these human variability issues and their impact on work performance. An ‘inclusive design’ approach is proposed to consider the increasing diversity of the global workforce in terms of age, gender, cultural background, skill and experience. The decline in physical capabilities of older workers creates a mismatch between job demands and working capabilities which can be seen in manufacturing assembly that typically requires high physical demands for repetitive and accurate motions. The inclusive design approach leads to a reduction of this mismatch that results in a more productive, safe and healthy working environment giving benefits to the organization and individuals in terms of workforce satisfaction, reduced turnover, higher productivity and improved product quality.
Moody, John A.; Ebel, Brian A.
2012-01-01
We developed a difference infiltrometer to measure time series of non-steady infiltration rates during rainstorms at the point scale. The infiltrometer uses two tipping-bucket rain gages. One gage measures rainfall onto, and the other measures runoff from, a small circular plot about 0.5 m in diameter. The small size allows the infiltration rate to be computed as the difference of the cumulative rainfall and cumulative runoff without having to route water through a large plot. Difference infiltrometers were deployed in an area burned by the 2010 Fourmile Canyon Fire near Boulder, Colorado, USA, and data were collected during the summer of 2011. The difference infiltrometer demonstrated the capability to capture different magnitudes of infiltration rates and temporal variability associated with convective (high intensity, short duration) and cyclonic (low intensity, long duration) rainstorms. Data from the difference infiltrometer were used to estimate saturated hydraulic conductivity of soil affected by the heat from a wildfire. The difference infiltrometer is portable and can be deployed in rugged, steep terrain and does not require the transport of water, as many rainfall simulators require, because it uses natural rainfall. It can be used to assess infiltration models, determine runoff coefficients, identify rainfall depth or rainfall intensity thresholds to initiate runoff, estimate parameters for infiltration models, and compare remediation treatments on disturbed landscapes. The difference infiltrometer can be linked with other types of soil monitoring equipment in long-term studies for detecting temporal and spatial variability at multiple time scales and in nested designs where it can be linked to hillslope and basin-scale runoff responses.
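The instrument's core computation, infiltration as the rate of change of cumulative rainfall minus cumulative runoff, can be sketched in a few lines. The storm depths and timing below are synthetic assumptions for illustration, not field data from the study.

```python
import numpy as np

def infiltration_rate(times_h, cum_rain_mm, cum_runoff_mm):
    """Non-steady infiltration rate (mm/h) from a difference infiltrometer:
    rate = d(cumulative rainfall - cumulative runoff) / dt over each interval."""
    infl = np.asarray(cum_rain_mm, dtype=float) - np.asarray(cum_runoff_mm, dtype=float)
    return np.diff(infl) / np.diff(np.asarray(times_h, dtype=float))

# Synthetic storm: 10 mm/h rain for 2 h; runoff begins after 0.5 h at 4 mm/h.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rain = 10.0 * t                                # cumulative rainfall, mm
runoff = np.array([0.0, 0.0, 2.0, 4.0, 6.0])   # cumulative runoff, mm
rates = infiltration_rate(t, rain, runoff)     # mm/h over each interval
```

In this toy storm all rain infiltrates during the first interval (10 mm/h), after which infiltration settles at a steady 6 mm/h, the kind of non-steady signal the instrument is designed to resolve.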
Concerning an application of the method of least squares with a variable weight matrix
Sukhanov, A. A.
1979-01-01
An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
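The iterative procedure the abstract describes can be sketched as a fixed-point iteration: solve the weighted normal equations, re-evaluate the weight matrix at the new estimate, and repeat. This is a generic illustration under an assumed residual-dependent (IRLS-style) weight rule and assumed data, not Sukhanov's specific scheme or convergence conditions.

```python
import numpy as np

def weighted_ls_fixed_point(A, b, weight_fn, x0, max_iter=50, tol=1e-10):
    """Iterate x_{k+1} = argmin_x ||W(x_k)^{1/2} (A x - b)||^2, i.e. solve
    A^T W A x = A^T W b with the weight matrix re-evaluated at each estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        w = weight_fn(A, b, x)                  # diagonal of W(x_k)
        Aw = A * w[:, None]                     # W A
        x_next = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        if np.linalg.norm(x_next - x) < tol:    # fixed point reached
            return x_next
        x = x_next
    return x

# Assumed example: weights that down-weight rows with large residuals.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 2))
b = A @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=40)
w_fn = lambda A, b, x: 1.0 / (1.0 + (A @ x - b) ** 2)
x_hat = weighted_ls_fixed_point(A, b, w_fn, x0=np.zeros(2))
```

When the weight function varies slowly near the solution, as here, the iteration contracts to a fixed point; the paper's contribution is precisely the conditions under which such a limit exists and is unique.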
Directory of Open Access Journals (Sweden)
Zizhou Lao
2018-05-01
For model-based state of charge (SOC) estimation methods, the battery model parameters change with temperature, SOC, and so forth, causing the estimation error to increase. Constantly updating model parameters during battery operation, also known as online parameter identification, can effectively solve this problem. In this paper, a lithium-ion battery is modeled using the Thevenin model. A variable forgetting factor (VFF) strategy is introduced to improve forgetting factor recursive least squares (FFRLS) to variable forgetting factor recursive least squares (VFF-RLS). A novel method based on VFF-RLS for the online identification of the Thevenin model is proposed. Experiments verified that VFF-RLS gives more stable online parameter identification results than FFRLS. Combined with an unscented Kalman filter (UKF) algorithm, a joint algorithm named VFF-RLS-UKF is proposed for SOC estimation. In a variable-temperature environment, a battery SOC estimation experiment was performed using the joint algorithm. The average error of the SOC estimation was as low as 0.595% in some experiments. Experiments showed that VFF-RLS can effectively track the changes in model parameters. The joint algorithm improved the SOC estimation accuracy compared to the method with the fixed forgetting factor.
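A variable-forgetting-factor RLS loop can be sketched as below. The error-driven forgetting rule, the first-order plant, and all gains are assumptions for illustration; the paper's VFF strategy and Thevenin-model regressor may differ in detail.

```python
import numpy as np

def vff_rls(phi, y, lam_min=0.95, lam_max=0.999, rho=50.0):
    """Recursive least squares with a variable forgetting factor: the factor
    drops toward lam_min when the prediction error is large (fast tracking)
    and relaxes toward lam_max when it is small (low estimate variance)."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)
    for k in range(len(y)):
        f = phi[k]
        err = y[k] - f @ theta
        lam = lam_min + (lam_max - lam_min) * np.exp(-rho * err ** 2)
        K = P @ f / (lam + f @ P @ f)
        theta = theta + K * err
        P = (P - np.outer(K, f @ P)) / lam
    return theta

# Assumed first-order plant y_k = 0.9 y_{k-1} + 0.5 u_{k-1} (noise-free),
# standing in for the discretized Thevenin-model parameters.
rng = np.random.default_rng(1)
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
phi = np.column_stack([np.r_[0.0, y[:-1]], np.r_[0.0, u[:-1]]])
theta_hat = vff_rls(phi, y)
```

With persistent excitation the estimate converges to the true parameters; the value of the variable factor shows up when the true parameters drift, e.g. with temperature, as in the paper's experiments.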
Villaverde-Morcillo, S; Esteso, M C; Castaño, C; Santiago-Moreno, J
2016-02-01
Many post-mortem sperm collection techniques have been described for mammalian species, but their use in birds is scarce. This paper compares the efficacy of two post-mortem sperm retrieval techniques - the flushing and float-out methods - in the collection of rooster sperm, in conjunction with the use of two extenders, i.e., L&R-84 medium and Lake 7.1 medium. To determine whether the protective effects of these extenders against refrigeration are different for post-mortem and ejaculated sperm, pooled ejaculated samples (procured via the massage technique) were also diluted in the above extenders. Post-mortem and ejaculated sperm variables were assessed immediately at room temperature (0 h), and after refrigeration at 5°C for 24 and 48 h. The flushing method retrieved more sperm than the float-out method (596.5 ± 75.4 million sperm vs 341.0 ± 87.6 million sperm; p < 0.05); indeed, the number retrieved by the former method was similar to that obtained by massage-induced ejaculation (630.3 ± 78.2 million sperm). For sperm collected by all methods, the L&R-84 medium provided an advantage in terms of sperm motility variables at 0 h. In the refrigerated sperm samples, however, the Lake 7.1 medium was associated with higher percentages of viable sperm, and had a greater protective effect (p < 0.05) with respect to most motility variables. In conclusion, the flushing method is recommended for collecting sperm from dead birds. If this sperm needs to be refrigerated at 5°C until analysis, Lake 7.1 medium is recommended as an extender. © 2015 Blackwell Verlag GmbH.
Staley, James R.
2017-01-01
ABSTRACT Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167
Bruce, N; Neufeld, L; Boy, E; West, C
1998-06-01
A number of studies have reported associations between indoor biofuel air pollution in developing countries and chronic obstructive lung disease (COLD) in adults and acute lower respiratory infection (ALRI) in children. Most of these studies have used indirect measures of exposure and generally dealt inadequately with confounding. More reliable, quantified information about this presumed effect is an important pre-requisite for prevention, not least because of the technical, economic and cultural barriers to achieving substantial exposure reductions in the world's poorest households, where ambient pollution levels are typically between ten and a hundred times higher than recommended standards. This study was carried out as part of a programme of research designed to inform the development of intervention studies capable of providing quantified estimates of health benefits. The association between respiratory symptoms and the use of open fires and chimney woodstoves ('planchas'), and the distribution of confounding factors, were examined in a cross-sectional study of 340 women aged 15-45 years, living in a poor rural area in the western highlands of Guatemala. The prevalence of reported cough and phlegm was significantly higher for three of six symptom measures among women using open fires. Although this finding is consistent with a number of other studies, none has systematically examined the extent to which strong associations with confounding variables in these settings limit the ability of observational studies to define the effect of indoor air pollution adequately. Very strong associations (P air pollution and health, although there is a reasonable case for believing that the observed association is causal. Intervention studies are required for stronger evidence of this association, and more importantly, to determine the size of health benefit achievable through feasible exposure reductions.
A method to screen obstructive sleep apnea using multi-variable non-intrusive measurements
International Nuclear Information System (INIS)
De Silva, S; Abeyratne, U R; Hukins, C
2011-01-01
Obstructive sleep apnea (OSA) is a serious sleep disorder. The current standard OSA diagnosis method is polysomnography (PSG) testing. PSG requires an overnight hospital stay while physically connected to 10–15 channels of measurement. PSG is expensive, inconvenient and requires the extensive involvement of a sleep technologist. As such, it is not suitable for community screening. OSA is a widespread disease and more than 80% of sufferers remain undiagnosed. Simplified, unattended and cheap OSA screening methods are urgently needed. Snoring is commonly associated with OSA but is not fully utilized in clinical diagnosis. Snoring contains pseudo-periodic packets of energy that produce characteristic vibrating sounds familiar to humans. In this paper, we propose a multi-feature vector that represents pitch information, formant information, a measure of periodic structure existence in snore episodes and the neck circumference of the subject to characterize OSA condition. Snore features were estimated from snore signals recorded in a sleep laboratory. The multi-feature vector was applied to a neural network for OSA/non-OSA classification and K-fold cross-validated using a random sub-sampling technique. We also propose a simple method to remove a specific class of background interference. Our method resulted in a sensitivity of 91 ± 6% and a specificity of 89 ± 5% for test data for an AHI threshold of 15 for a database consisting of 51 subjects. This method has the potential as a non-intrusive, unattended technique to screen OSA using snore sound as the primary signal.
Application of odex drilling method in a variably fractured volcanic/igneous environment
International Nuclear Information System (INIS)
Murphy, J.
1992-01-01
A case history of a subsurface investigation at a geothermal waste disposal facility within a volcanic flow regime illustrates a classic example of critical drilling problems arising from severe air and mud circulation loss. Extremely dense dacite and rhyolite rock alternating with severely fractured flow margins (interconnected with numerous voids and caverns) has provided the scenario for "gravel piles" of substantial size between competent dacite flows. The interconnected void space at numerous depths beneath the site is great enough to create complete loss of circulation while using more than 3000 cubic feet per minute (cfm) of air at 350 pounds per square inch (psi). This initial failed effort also included the use of a foam additive. The technologies employed at this site to address the problem of circulation loss included air rotary casing hammer methods, mud rotary (with beat pulp additives and linen additives), boring wall stabilization, telescoped casing and ultimately the ODEX casing advancement system. The relative success of this seldom used method invites a discussion of the principles of the under-reamer drilling method (ODEX) and the physical limitations of the system. A practical knowledge of the advantages and disadvantages of each drilling method is necessary when designing an investigation addressing problems of soil and water contamination. Additionally, by addressing the methods that were unsuccessful, geologists, contractors and engineers can gain insight into the value and application of the various technologies available for similar drilling problems
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena such as transformation of pollutants, energy, particles and many others can be described by the well-known convection-diffusion equation which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and the finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely based on a set of scattered nodes. The proposed method is validated in solving three different examples including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property so-called positive scheme for the proposed method is observed in solving concentration transport phenomena.
Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods
Directory of Open Access Journals (Sweden)
L. Brancik
2011-04-01
The paper deals with techniques for a computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into a class of finite-difference time-domain (FDTD) methods useful to solve various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTLs’ wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable whose analytical solutions are known, and some examples of simulation of both uniform and nonuniform MTLs are presented. Using programs written in the Matlab language, CPU times are analyzed to compare efficiency of the methods. Some results for nonlinear MTLs simulation are presented as well.
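The implicit Wendroff (box) scheme at the heart of such FDTD solvers is easiest to see on the scalar 1-D advection equation u_t + c u_x = 0, the building block of the coupled voltage/current telegrapher equations. The grid, Courant number, and initial profile below are assumed for illustration; the full MTL solver marches vector-valued voltages and currents instead of one field.

```python
import math

def wendroff_step(u_old, inflow, nu):
    """One time step of the implicit Wendroff box scheme for u_t + c u_x = 0:
    u[j+1]^{n+1} = u[j]^n + theta * (u[j+1]^n - u[j]^{n+1}),
    with theta = (1 - nu) / (1 + nu) and nu = c*dt/dx. Given the inflow
    boundary value u[0]^{n+1}, the implicit update marches left to right."""
    theta = (1.0 - nu) / (1.0 + nu)
    u_new = [inflow]
    for j in range(len(u_old) - 1):
        u_new.append(u_old[j] + theta * (u_old[j + 1] - u_new[j]))
    return u_new

# Advect f(x) = sin(2 pi x) at c = 1 with nu = 1 (dt = dx); at this Courant
# number the box scheme reproduces the exact shift u(x, t) = f(x - c t).
c, dx = 1.0, 0.02
dt = dx / c
f = lambda x: math.sin(2 * math.pi * x)
u = [f(j * dx) for j in range(51)]
for n in range(1, 26):
    u = wendroff_step(u, f(-c * n * dt), nu=c * dt / dx)
exact = [f(j * dx - c * 25 * dt) for j in range(51)]
```

At nu = 1 the scheme degenerates to an exact shift (theta = 0); for other Courant numbers it remains unconditionally stable and second-order accurate, which is why it suits nonuniform lines where c varies along the wire.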
Polymeric nanoparticles: A study on the preparation variables and characterization methods.
Crucho, Carina I C; Barros, Maria Teresa
2017-11-01
Since the emergence of nanotechnology in the past decades, the development and design of nanomaterials has become an important field of research. An emerging component in this field is nanomedicine, wherein nanoscale materials are being developed for use as imaging agents or for drug delivery applications. Much work is currently focused on the preparation of well-defined nanomaterials in terms of size and shape, since these factors play a significant role in nanomaterial behavior in vivo. In this context, this review focuses on the toolbox of available methods for the preparation of polymeric nanoparticles. We highlight some recent examples from the literature that demonstrate the influence of the preparation method on the physicochemical characteristics of the nanoparticles. Additionally, in the second part, the characterization methods for this type of nanoparticles are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Method for the generation of variable density metal vapors which bypasses the liquidus phase
Kunnmann, Walter; Larese, John Z.
2001-01-01
The present invention provides a method for producing a metal vapor that includes the steps of combining a metal and graphite in a vessel to form a mixture; heating the mixture to a first temperature in an argon gas atmosphere to form a metal carbide; maintaining the first temperature for a period of time; heating the metal carbide to a second temperature to form a metal vapor; withdrawing the metal vapor and the argon gas from the vessel; and separating the metal vapor from the argon gas. Metal vapors made using this method can be used to produce uniform powders of the metal oxide that have narrow size distribution and high purity.
Energy Technology Data Exchange (ETDEWEB)
Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)
2014-08-10
Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
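The core of the approach in this record is inferring transition probabilities of an unobserved state from a time series. The abstract's full model is a hidden-state one; the toy below shows only the simpler, fully observed version of the same estimation step — counting transitions in a two-state sequence and row-normalising. The chain, its "quiescent"/"flaring" labels, and the numbers are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-state Markov chain (e.g. "quiescent" vs "flaring"),
# then estimate its transition matrix from the observed state sequence.
P_true = np.array([[0.95, 0.05],
                   [0.20, 0.80]])
states = [0]
for _ in range(20000):
    states.append(rng.choice(2, p=P_true[states[-1]]))
states = np.array(states)

# Empirical transition counts -> row-normalised transition matrix
counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

In the hidden-state setting of the abstract, the same counts cannot be read off directly and are instead inferred (e.g. via expectation-maximisation) from the conditional flux-density distributions.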
Directory of Open Access Journals (Sweden)
Kedong Yin
2017-12-01
Full Text Available With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weights) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method, and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.
Yin, Kedong; Wang, Pengyu; Li, Xuemei
2017-12-13
With respect to multi-attribute group decision-making (MAGDM) problems, where attribute values take the form of interval grey trapezoid fuzzy linguistic variables (IGTFLVs) and the weights (including expert and attribute weight) are unknown, improved grey relational MAGDM methods are proposed. First, the concept of IGTFLV, the operational rules, the distance between IGTFLVs, and the projection formula between the two IGTFLV vectors are defined. Second, the expert weights are determined by using the maximum proximity method based on the projection values between the IGTFLV vectors. The attribute weights are determined by the maximum deviation method and the priorities of alternatives are determined by improved grey relational analysis. Finally, an example is given to prove the effectiveness of the proposed method and the flexibility of IGTFLV.
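The maximum deviation method for attribute weights can be sketched in its crisp-number form: an attribute whose values differ more across alternatives is more useful for discriminating between them and so receives a larger weight. This is not the interval grey trapezoid fuzzy linguistic version of the paper — the decision matrix and its values below are hypothetical crisp numbers.

```python
import numpy as np

def max_deviation_weights(X):
    """Maximum deviation method (crisp sketch): weight each attribute in
    proportion to the total pairwise deviation of its values across
    the alternatives (rows)."""
    m, n = X.shape
    d = np.array([np.abs(X[:, j][:, None] - X[:, j][None, :]).sum()
                  for j in range(n)])
    return d / d.sum()

# Three alternatives x three attributes; attribute 1 is constant,
# so it cannot discriminate and should get zero weight.
X = np.array([[0.7, 0.5, 0.5],
              [0.3, 0.5, 0.9],
              [0.5, 0.5, 0.1]])
w = max_deviation_weights(X)
```

In the paper's setting the absolute differences are replaced by the defined distance between IGTFLVs, but the normalisation step is the same.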
International Nuclear Information System (INIS)
Chen, Yong; Shanghai Jiao-Tong Univ., Shangai; Chinese Academy of sciences, Beijing
2005-01-01
A general method to uniformly construct exact solutions in terms of special functions of nonlinear partial differential equations is presented by means of a more general ansatz and symbolic computation. Making use of the general method, we can successfully obtain the solutions found by the method proposed by Fan (J. Phys. A., 36 (2003) 7009) and find other new and more general solutions, which include polynomial solutions, exponential solutions, rational solutions, triangular periodic wave solutions, soliton solutions, soliton-like solutions and Jacobi and Weierstrass doubly periodic wave solutions. A general variable-coefficient two-dimensional KdV equation is chosen to illustrate the method. As a result, some new exact soliton-like solutions are obtained.
Ehret, Totta; Torelli, Francesca; Klotz, Christian; Pedersen, Amy B; Seeber, Frank
2017-01-01
Rodents, in particular Mus musculus , have a long and invaluable history as models for human diseases in biomedical research, although their translational value has been challenged in a number of cases. We provide some examples in which rodents have been suboptimal as models for human biology and discuss confounders which influence experiments and may explain some of the misleading results. Infections of rodents with protozoan parasites are no exception in requiring close consideration upon model choice. We focus on the significant differences between inbred, outbred and wild animals, and the importance of factors such as microbiota, which are gaining attention as crucial variables in infection experiments. Frequently, mouse or rat models are chosen for convenience, e.g., availability in the institution rather than on an unbiased evaluation of whether they provide the answer to a given question. Apart from a general discussion on translational success or failure, we provide examples where infections with single-celled parasites in a chosen lab rodent gave contradictory or misleading results, and when possible discuss the reason for this. We present emerging alternatives to traditional rodent models, such as humanized mice and organoid primary cell cultures. So-called recombinant inbred strains such as the Collaborative Cross collection are also a potential solution for certain challenges. In addition, we emphasize the advantages of using wild rodents for certain immunological, ecological, and/or behavioral questions. The experimental challenges (e.g., availability of species-specific reagents) that come with the use of such non-model systems are also discussed. Our intention is to foster critical judgment of both traditional and newly available translational rodent models for research on parasitic protozoa that can complement the existing mouse and rat models.
Directory of Open Access Journals (Sweden)
Totta Ehret
2017-06-01
Full Text Available Rodents, in particular Mus musculus, have a long and invaluable history as models for human diseases in biomedical research, although their translational value has been challenged in a number of cases. We provide some examples in which rodents have been suboptimal as models for human biology and discuss confounders which influence experiments and may explain some of the misleading results. Infections of rodents with protozoan parasites are no exception in requiring close consideration upon model choice. We focus on the significant differences between inbred, outbred and wild animals, and the importance of factors such as microbiota, which are gaining attention as crucial variables in infection experiments. Frequently, mouse or rat models are chosen for convenience, e.g., availability in the institution, rather than on an unbiased evaluation of whether they provide the answer to a given question. Apart from a general discussion on translational success or failure, we provide examples where infections with single-celled parasites in a chosen lab rodent gave contradictory or misleading results, and when possible discuss the reason for this. We present emerging alternatives to traditional rodent models, such as humanized mice and organoid primary cell cultures. So-called recombinant inbred strains such as the Collaborative Cross collection are also a potential solution for certain challenges. In addition, we emphasize the advantages of using wild rodents for certain immunological, ecological, and/or behavioral questions. The experimental challenges (e.g., availability of species-specific reagents) that come with the use of such non-model systems are also discussed. Our intention is to foster critical judgment of both traditional and newly available translational rodent models for research on parasitic protozoa that can complement the existing mouse and rat models.
A METHOD OF MINIMIZING THE TOTAL ACQUISITION COST UNDER INCREASING VARIABLE DEMAND
Directory of Open Access Journals (Sweden)
ELEONORA IONELA FOCȘAN
2015-12-01
Full Text Available Over time, mankind has tried to find different ways of reducing costs. This subject, which we face ever more often nowadays, has been studied in detail without reaching a general and efficient model for cost reduction. Cost reduction brings a number of benefits to an entity, the most important being increased revenue and hence profit, increased productivity, a higher level of services/products offered to clients, and, last but not least, mitigation of the risk of economic deficit. Therefore, each entity searches for different ways to obtain the most benefits, so that the company can succeed in a competitive market. This article supports companies by presenting a new way of minimizing the total acquisition cost: it states some hypotheses about increasing variable demand, proves them, and develops formulas for reducing the costs. The hypotheses presented in the model described below can be fully exploited to obtain new models for reducing the total cost, according to the purchasing modes of the entities that adopt it.
On mass and momentum conservation in the variable-parameter Muskingum method
Reggiani, Paolo; Todini, Ezio; Meißner, Dennis
2016-12-01
In this paper we investigate mass and momentum conservation in one-dimensional routing models. To this end we formulate the conservation equations for a finite-dimensional reach and compute individual terms using three standard Saint-Venant (SV) solvers: SOBEK, HEC-RAS and MIKE11. We also employ two different variable-parameter Muskingum (VPM) formulations: the classical Muskingum-Cunge (MC) and the revised, mass-conservative Muskingum-Cunge-Todini (MCT) approach, whereby geometrical cross sections are treated analytically in both cases. We initially compare the three SV solvers for a straight mild-sloping prismatic channel with geometric cross sections and a synthetic hydrograph as boundary conditions against the analytical MC and MCT solutions. The comparison is substantiated by the fact that in this flow regime the conditions for the parabolic equation model solved by MC and MCT are met. Through this intercomparison we show that all approaches have comparable mass and momentum conservation properties, except the MC. Then we extend the MCT to use natural cross sections for a real irregular river channel forced by an observed triple-peak event and compare the results with SOBEK. The model intercomparison demonstrates that the VPM in the form of MCT can be a computationally efficient, fully mass and momentum conservative approach and therefore constitutes a valid alternative to Saint-Venant based flood wave routing for a wide variety of rivers and channels in the world when downstream boundary conditions or hydraulic structures are non-influential.
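The variable-parameter schemes compared in this record build on the classical constant-parameter Muskingum recursion, which can be sketched compactly. This is not the MC or MCT variable-parameter formulation of the paper — only the underlying constant-K, constant-x routing step, with a synthetic hydrograph and hypothetical parameter values.

```python
def muskingum_route(inflow, K, x, dt):
    """Classical constant-parameter Muskingum routing.
    K: storage constant (same units as dt), x: weighting factor in [0, 0.5].
    The three coefficients sum to one, so steady states are preserved."""
    denom = K * (1 - x) + 0.5 * dt
    c0 = (0.5 * dt - K * x) / denom
    c1 = (0.5 * dt + K * x) / denom
    c2 = (K * (1 - x) - 0.5 * dt) / denom
    out = [inflow[0]]                       # start from steady state
    for t in range(1, len(inflow)):
        out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
    return out

# Synthetic triangular hydrograph (m^3/s), hourly steps
inflow = ([10.0] * 3
          + [30.0, 50.0, 70.0, 90.0, 110.0, 90.0, 70.0, 50.0, 30.0]
          + [10.0] * 9)
outflow = muskingum_route(inflow, K=2.0, x=0.2, dt=1.0)
```

The routed peak is attenuated and delayed relative to the inflow peak, and total volume is conserved up to the storage still in the reach — the conservation property the paper examines for the variable-parameter case.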
Application of quantitative variables in the sampling method to evaluate sugarcane brown rust
Directory of Open Access Journals (Sweden)
Joaquín Montalván Delgado
2017-01-01
Full Text Available To develop a system that increases the precision of evaluations of resistance to sugarcane brown rust through quantitative sampling, six cultivars with differential behavior against the disease (PR980, My5514, Ja60-5, C334-64, C323-68 and B4362) were studied. A randomized block design with three replications was used under the heavy infection conditions provided by the highly susceptible cultivar B4362. Evaluations were done at three and five months of age in the bottom, middle and top thirds of the +1, +3 and +5 leaves of 10 plants per replicate. The total affected area of each leaf and of each third was analyzed. Within 2 cm2, the length and width of the largest and most frequent pustules were measured, and the total number of pustules, the area of the largest and most frequent pustule, and the percentage of area occupied by the most frequent pustule per cm2 were determined. Variance analysis and Tukey tests, as well as confidence interval analysis, were carried out to determine the coefficient to use as a constant for pustule width, given the small variation of this parameter. The +3 leaf and the middle third represented the mean incidence of brown rust infection, making them the most appropriate locations for observations. An equation was also obtained to calculate the area occupied by pustules with a high level of confidence.
Directory of Open Access Journals (Sweden)
Renata Bujak
2016-07-01
Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) without and with multiple testing correction as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundaments of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, are demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings
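The bootstrap-LASSO reproducibility check described above can be sketched on synthetic data: resample the samples with replacement, refit the LASSO each time, and count how often each variable is selected. This is not the RH/PH analysis itself — the data matrix, sample sizes, and regularisation strength below are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy "metabolomics" matrix: 60 samples x 30 variables, with only the
# first two variables truly separating the two groups.
n, p = 60, 30
X = rng.normal(size=(n, p))
y = np.array([0] * 30 + [1] * 30, dtype=float)
X[:, 0] += 2 * y
X[:, 1] -= 2 * y

# Bootstrap the LASSO and record selection frequency per variable
freq = np.zeros(p)
for _ in range(100):
    idx = rng.integers(0, n, n)
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    freq += coef != 0
freq /= 100
```

Variables selected in nearly every bootstrap replicate are the reproducible candidates; variables selected only sporadically are likely noise, which is exactly the distinction the paper's reproducibility percentages quantify.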
8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models
Energy Technology Data Exchange (ETDEWEB)
Frew, Bethany A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-03
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
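The 8760-based capacity value estimate described above reduces to a simple computation once hourly load and VG profiles are available: average the generator's per-unit output over the highest-load hours of the year. The sketch below uses a synthetic evening-peaking load and a stylised solar shape — not ReEDS data — and the `top_hours` count is an arbitrary choice.

```python
import numpy as np

def capacity_value(load, vg, top_hours=100):
    """Approximate capacity value: mean per-unit VG output during the
    highest-load hours of an 8760-hour year."""
    top = np.argsort(load)[-top_hours:]
    return float(vg[top].mean())

# Synthetic year: load peaks at hour-of-day 18, solar peaks at noon
hod = np.arange(8760) % 24
load = 1000.0 + 300.0 * np.exp(-((hod - 18) ** 2) / 8.0)
solar = np.clip(np.cos((hod - 12) / 24.0 * 2 * np.pi), 0.0, None)
flat = np.ones(8760)

cv_solar = capacity_value(load, solar)   # ~0: solar absent at evening peak
cv_flat = capacity_value(load, flat)     # 1: always available at nameplate
```

The example shows why hourly data matters: a statistical method that ignores the coincidence of VG output with peak hours can badly over- or under-state the contribution of an evening-peak system's solar fleet.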
Tuning method for multi-variable control system with PID controllers
International Nuclear Information System (INIS)
Fujiwara, Toshitaka
1983-01-01
Control systems, including those of thermal and nuclear power plants, mainly use PID controllers consisting of proportional, integral and differential actions. These systems consist of multiple control loops which interfere with each other, so the adjusting procedure for a single control loop cannot be applied to a multi-loop system in most cases; at present, fine tuning of such systems is therefore carried out by trial and error. In this report, a method is presented to effectively adjust, in a short time, the PID controller parameters of a control system consisting of multiple mutually interfering loops. This method makes the adjustment by using the control area as the evaluation function, i.e., the time integral of the control deviation that is input to the PID controllers. In other words, an evaluation function is provided for the control result for every parameter (gain constant, reset rate, and differentiation time), and all parameters are simultaneously changed in the direction that minimizes the values of these evaluation functions. The report describes the principle of the tuning method, the evaluation functions for the three parameters, and the configurations of the adjusting system for use in actual plant tuning and in control system design. It also shows examples of application to the tuning of the control system of a thermal power plant and to a control system design. (Wakatsuki, Y.)
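The idea of tuning PID gains by minimising a control-area evaluation function can be sketched on a toy loop. This is a single first-order plant with a simple coordinate-descent search, not the report's simultaneous multi-loop procedure; the plant, step sizes, and gains are all hypothetical.

```python
def control_area(kp, ki, kd, dt=0.05, T=10.0):
    """Integral of |error| for a unit step on the first-order plant
    y' = (-y + u)/tau under discrete PID control (a toy stand-in for
    the interacting multi-loop plant of the report)."""
    tau, y, integ, prev_e, area = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u) / tau
        area += abs(e) * dt
    return area

def tune(params, deltas, rounds=20):
    """Coordinate descent: nudge each PID gain in whichever direction
    reduces the control-area evaluation function."""
    params = list(params)
    for _ in range(rounds):
        for i, d in enumerate(deltas):
            for step in (+d, -d):
                trial = params.copy()
                trial[i] = max(0.0, trial[i] + step)
                if control_area(*trial) < control_area(*params):
                    params = trial
                    break
    return params

tuned = tune([1.0, 0.1, 0.0], deltas=[0.5, 0.1, 0.01])
```

The report's method evaluates one such function per parameter and moves all parameters at once; the sketch keeps the same objective but searches one gain at a time for clarity.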
Examination of Stress-Coping Methods of Primary School Teachers in Terms of Different Variables
Bayraktar, Hatice Vatansever; Yilmaz, Kamile Özge
2016-01-01
This research is a study that aims to reveal whether there is a significant difference between primary school teachers' stress-coping methods and their demographic features, and if any, whether it is negative or positive. The study consists of 191 primary school teachers working in 14 primary schools in seven geographical regions. The…
Al-Omran, Abdulrasoul M.; Aly, Anwar A.; Al-Wabel, Mohammad I.; Al-Shayaa, Mohammad S.; Sallam, Abdulazeam S.; Nadeem, Mahmoud E.
2017-11-01
The analyses of 180 groundwater samples from Al-Kharj, Saudi Arabia, showed that most groundwater is unsuitable for drinking uses due to high salinity; however, it can be used for irrigation with some restriction. The electric conductivity of the studied groundwater ranged between 1.05 and 10.15 dS m-1 with an average of 3.0 dS m-1. Nitrate was also found in high concentration in some groundwater. Piper diagrams revealed that the majority of water samples are of the magnesium-calcium/sulfate-chloride water type. The Gibbs diagram revealed that the chemical weathering of rock-forming minerals and evaporation influence the groundwater chemistry. A kriging method was used for predicting the spatial distribution of salinity (EC, dS m-1) and NO3- (mg L-1) in Al-Kharj's groundwater using data from 180 different locations. After normalization of the data, a variogram was drawn, and the model with the lowest residual sum of squares was selected as the best fit to the experimental variogram. Then cross-validation and the root mean square error were used to select the best interpolation method. Kriging was found to be a suitable method for groundwater interpolation and management using either GS+ or ArcGIS.
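The experimental variogram that the model-fitting step above starts from is straightforward to compute: half the mean squared difference of point pairs, binned by separation distance. The sketch below uses a synthetic smooth "salinity" field, not the Al-Kharj data, and the coordinates, bins, and noise level are arbitrary.

```python
import numpy as np

def experimental_variogram(coords, values, bins):
    """Empirical semivariance gamma(h): for each lag bin, half the mean
    squared difference of all point pairs whose separation falls in the
    bin. A variogram model is then fitted to these points before kriging."""
    n = len(values)
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, 1)           # each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))
# Smooth spatial field plus measurement noise: nearby points are similar
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)
gamma = experimental_variogram(coords, values, bins=np.array([0, 1, 2, 3, 4]))
```

A spatially correlated field shows low semivariance at short lags rising toward the sill at long lags; the record's residual-sum-of-squares criterion then picks the parametric model (spherical, exponential, etc.) that best fits these binned points.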
Variability in malaria prophylaxis prescribing across Europe: a Delphi method analysis
Calleri, Guido; Behrens, Ron H.; Bisoffi, Zeno; Bjorkman, Anders; Castelli, Francesco; Gascon, Joaquim; Gobbi, Federico; Grobusch, Martin P.; Jelinek, Tomas; Schmid, Matthias L.; Niero, Mauro; Caramello, Pietro
2008-01-01
BACKGROUND: The indications for prescribing malaria chemoprophylaxis lack a solid evidence base that results in subjectivity and wide variation of practice across countries and among professionals. METHODS: European experts in travel medicine, who are members of TropNetEurop, participated in a
2013-01-01
Background Research suggests that reports of interpersonal discrimination result in poor mental health. Because personality characteristics may either confound or mediate the link between these reports and mental health, there is a need to disentangle its role in order to better understand the nature of discrimination-mental health association. We examined whether hostility, anger repression and expression, pessimism, optimism, and self-esteem served as confounders in the association between perceived interpersonal discrimination and CESD-based depressive symptoms in a race/ethnic heterogeneous probability-based sample of community-dwelling adults. Methods We employed a series of ordinary least squares regression analyses to examine the potential confounding effect of hostility, anger repression and expression, pessimism, optimism, and self-esteem between interpersonal discrimination and depressive symptoms. Results Hostility, anger repression, pessimism and self-esteem were significant as possible confounders of the relationship between interpersonal discrimination and depressive symptoms, together accounting for approximately 38% of the total association (beta: 0.1892, p interpersonal discrimination remained a positive predictor of depressive symptoms (beta: 0.1176, p personality characteristics in the association between reports of interpersonal discrimination and mental health, our results suggest that personality-related characteristics may serve as potential confounders. Nevertheless, our results also suggest that, net of these characteristics, reports of interpersonal discrimination are associated with poor mental health. PMID:24256578
Directory of Open Access Journals (Sweden)
Uttam Barick
2016-07-01
Full Text Available The increasing usage of smart phones has compelled mobile technology to become a universal part of everyday life. From wearable gadgets to sophisticated implantable medical devices, the advent of mobile technology has completely transformed the healthcare delivery scenario. Self-report measures enabled by mobile technology are increasingly becoming a more time- and cost-efficient method of assessing real world health outcomes. But, amidst all the optimism, there are also concerns about adopting this technology, as regulations and ethical considerations on privacy legislation for end users are unclear. In general, the healthcare industry functions under stringent regulations and compliance requirements to ensure the safety and protection of patient information. Two of the most common regulations are the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. To harness the true potential of mobile technology to empower stakeholders and provide them a common platform which seamlessly integrates healthcare delivery and research, it is imperative that challenges and drawbacks in the sphere are identified and addressed. In this age of information and technology, no stones should be left unturned to ensure that the human race has access to the best healthcare services without an intrusion into his/her confidentiality. This article is an overview of the role of tracking and self-monitoring devices in data collection for real world evidence/observational studies in the context of feasibility, confounders and ethical considerations.
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and a combination of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
Variability of assay methods for total and free PSA after WHO standardization.
Foj, L; Filella, X; Alcover, J; Augé, J M; Escudero, J M; Molina, R
2014-03-01
The variability of total PSA (tPSA) and free PSA (fPSA) results among commercial assays has been suggested to be decreased by calibration to World Health Organization (WHO) reference materials. To characterize the current situation, it is necessary to know the impact of this variability on the critical cutoffs used in clinical practice. In the present study, we tested 167 samples with tPSA concentrations of 0 to 20 μg/L using seven tPSA and six fPSA commercial assays, including Access, ARCHITECT i2000, ADVIA Centaur XP, IMMULITE 2000, Elecsys, and Lumipulse G1200, on which we measured only tPSA. tPSA and fPSA were measured on Access using the Hybritech and WHO calibrators. Passing-Bablok analysis was performed for tPSA and percentage of fPSA against the Hybritech-calibrated Access comparison assay. For tPSA, relative differences were more than 10% at 0.2 μg/L for ARCHITECT i2000, and at critical concentrations of 3, 4, and 10 μg/L the 10% relative difference was exceeded by ADVIA Centaur XP and the WHO-calibrated Access. For percent fPSA, at a critical concentration of 10%, the 10% relative difference limit was exceeded by the IMMULITE 2000 assay. At critical concentrations of 20 and 25%, the ADVIA Centaur XP, ARCHITECT i2000, and IMMULITE 2000 assays exceeded the 10% relative difference limit. We have shown significant discordances between the assays included in this study despite the advances in standardization achieved in recent years. Further harmonization efforts are required in order to obtain complete clinical concordance.
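Passing-Bablok regression, the method-comparison technique used in this record, can be sketched directly: the slope is a shifted median of all pairwise slopes, which makes it robust to outliers and to measurement error in both assays. The sketch below omits the confidence intervals and the cusum linearity test of the full procedure, and the simulated "assays" and their 10% bias are invented.

```python
import numpy as np

def passing_bablok(x, y):
    """Passing-Bablok regression (sketch): slope = shifted median of all
    pairwise slopes; intercept = median residual at that slope."""
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                s = (y[j] - y[i]) / dx
                if s != -1.0:            # slopes of exactly -1 are excluded
                    slopes.append(s)
    slopes = np.sort(slopes)
    N = len(slopes)
    K = int(np.sum(slopes < -1.0))       # offset correction
    if N % 2:
        b = slopes[(N - 1) // 2 + K]
    else:
        b = 0.5 * (slopes[N // 2 - 1 + K] + slopes[N // 2 + K])
    a = float(np.median(y - b * x))
    return a, float(b)

rng = np.random.default_rng(4)
x = rng.uniform(0.2, 20.0, 80)                     # "assay A" tPSA, ug/L
y = 1.1 * x + 0.05 + 0.05 * rng.normal(size=80)    # "assay B" reads ~10% high
a, b = passing_bablok(x, y)
```

A slope away from 1 (here ~1.1) is exactly the kind of proportional between-assay bias that shifts samples across the clinical tPSA cutoffs discussed in the abstract.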
Models and Methods for Structural Topology Optimization with Discrete Design Variables
DEFF Research Database (Denmark)
Stolpe, Mathias
Structural topology optimization is a multi-disciplinary research field covering optimal design of load-carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal shape and the topology of the structure. In some cases the optimal material properties can also be determined. Optimal structural design problems are modeled
Energy Technology Data Exchange (ETDEWEB)
Conti, Livio, E-mail: livio.conti@uninettunouniversity.net [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Sgrigna, Vittorio [Dipartimento di Matematica e Fisica, Università Roma Tre, 84 Via della Vasca Navale, I-00146 Rome (Italy); Zilpimiani, David [National Institute of Geophysics, Georgian Academy of Sciences, 1 M. Alexidze St., 009 Tbilisi, Georgia (United States); Assante, Dario [Facoltà di Ingegneria, Università Telematica Internazionale Uninettuno, Corso Vittorio Emanuele II 39, 00186 Rome, Italy INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy)
2014-08-21
An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The main fields of application and the advantages of the method with respect to existing approaches are also discussed.
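The feedback amplification idea — raise the gain while the digitiser's range is under-used, back off when the signal approaches clipping — can be sketched as a simple per-sample update. This is only an illustrative control loop under assumed thresholds, not the paper's electronic implementation; the `target` and `step` values are hypothetical.

```python
def adapt_gain(samples, gain=1.0, full_scale=1.0, target=0.8, step=1.5):
    """Adaptive amplification sketch: multiply the gain up while the
    amplified signal leaves unused headroom below the ADC full scale,
    and divide it down on (near-)clipping, so the digitised signal
    occupies most of the dynamic range."""
    for s in samples:
        amplitude = abs(s) * gain
        if amplitude >= full_scale:
            gain /= step                  # clipping: reduce gain
        elif amplitude < target * full_scale / step:
            gain *= step                  # unused headroom: amplify more
    return gain

g_weak = adapt_gain([0.01] * 40)    # weak input: gain is driven up
g_strong = adapt_gain([5.0] * 40)   # strong input: gain is driven down
```

In both cases the loop settles with the amplified amplitude just below full scale, which is what maximises the effective resolution of a fixed-bit ADC for an a priori unknown signal level.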
Sjogreen, Bjoern; Yee, H. C.
2007-01-01
Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets in other parts, are difficult to capture accurately and efficiently with a single numerical scheme, even within a multiblock-grid or adaptive-grid-refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower-order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low-order and high-order shock-capturing schemes for such flows, a multiblock overlapping grid with different orders of accuracy on different blocks is proposed. Test cases illustrating the performance of the new solver are included.
International Nuclear Information System (INIS)
Conti, Livio; Sgrigna, Vittorio; Zilpimiani, David; Assante, Dario
2014-01-01
An original method of signal conditioning and adaptive amplification is proposed for data acquisition systems of analog signals, conceived to obtain a high-resolution spectrum of any input signal. The procedure is based on a feedback scheme for the signal amplification, with the aim of maximizing the dynamic range and resolution of the data acquisition system. The paper describes the signal conditioning, digitization, and data processing procedures applied to an a priori unknown signal in order to extract its amplitude and frequency content for applications in different environments: on the ground, in space, or in the laboratory. An electronic board implementing the conditioning module has also been constructed and is described. The paper also discusses the main fields of application and the advantages of the method with respect to those known today.
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
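The construction described above can be sketched in a few lines of plain Python: a conditional density modeled as a softmax-weighted combination of Gaussian kernels centred at a subset of training targets. The centres, logits, and bandwidth below are invented for illustration; in the paper the logits come from the outer layer of a deep network and the kernel family may be richer.

```python
import math

def kernel_mixture_density(y, centers, logits, bandwidth=1.0):
    """Evaluate a 1-D kernel mixture density: a softmax-weighted sum of
    Gaussian kernels centred at training points (illustrative sketch)."""
    # Softmax the (network-produced) logits into mixture weights.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of normalized Gaussian kernels.
    norm = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(w * norm * math.exp(-0.5 * ((y - c) / bandwidth) ** 2)
               for w, c in zip(weights, centers))

centers = [0.0, 1.0, 2.0]   # kernel centres = a subset of training targets
logits = [0.2, 1.5, -0.3]   # in the paper these come from the outer layer
density = kernel_mixture_density(1.0, centers, logits)
```

Because the weights sum to one and each kernel is a proper density, the mixture integrates to one, which is what makes it a valid conditional density estimate.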
Gomes, Alberto Regio; Litch, Andrew D.; Wu, Guolian
2016-03-15
A refrigerator appliance (and associated method) that includes a condenser, evaporator and a multi-capacity compressor. The appliance also includes a pressure reducing device arranged within an evaporator-condenser refrigerant circuit, and a valve system for directing or restricting refrigerant flow through the device. The appliance further includes a controller for operating the compressor upon the initiation of a compressor ON-cycle at a priming capacity above a nominal capacity for a predetermined or calculated duration.
Research on a new type of precision cropping method with variable frequency vibration
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Aimed at the cropping operations widely used in industrial production, a new method of bar cropping is presented. The rotational speed of the motor actuating the eccentric blocks is controlled by a frequency changer, and the shearing die applies a controllable force, frequency, and amplitude of vibration to the bar. By exploiting the stress concentration at the bottom of a V-shaped groove on the bar, low-stress bar cropping is realized. Bar cropping experiments on duralumin alloy and steel ...
Frankenstein, Lutz; Zugck, Christian; Nelles, Manfred; Schellberg, Dieter; Katus, Hugo A; Remppis, B Andrew
2009-12-01
To verify whether controlling for indicators of disease severity and confounders resolves the obesity paradox in chronic heart failure (CHF). From a cohort of 1790 patients, we formed 230 nested matched triplets by individually matching patients with body mass index (BMI) > 30 kg/m² (Group 3), BMI 20-24.9 kg/m² (Group 1) and BMI 25-29.9 kg/m² (Group 2), according to NT-proBNP, age, sex, and NYHA class (triplet = one matched patient from each group). Although BMI group was a significant univariable prognostic indicator in the pre-matching cohort, it did not retain significance [hazard ratio (HR): 0.91, 95% CI: 0.78-1.05, χ²: 1.67] when controlled for group propensities as covariates. Furthermore, in the matched cohort, 1-year and 3-year mortality did not differ significantly. Here, BMI again failed to reach statistical significance for prognosis, either as a continuous or a categorical variable, whether crude or adjusted. This result was confirmed in the patients not selected for matching. NT-proBNP, however, remained statistically significant [log(NT-proBNP): HR: 1.49, 95% CI: 1.13-1.97, χ²: 7.82] after multivariable adjustment. The obesity paradox does not appear to persist in a setting matched with respect to indicators of disease severity and other confounders. NT-proBNP remains an independent prognostic indicator of adverse outcome irrespective of obesity status.
Turner, W C; Cizauskas, C A; Getz, W M
2010-03-01
Estimates of parasite intensity within host populations are essential for many studies of host-parasite relationships. Here we evaluated the seasonal, age- and sex-related variability in faecal water content for two wild ungulate species, springbok (Antidorcas marsupialis) and plains zebra (Equus quagga). We then assessed whether or not faecal water content biased conclusions regarding differences in strongyle infection rates by season, age or sex. There was evidence of significant variation in faecal water content by season and age for both species, and by sex in springbok. Analyses of faecal egg counts demonstrated that sex was a near-significant factor in explaining variation in strongyle parasite infection rates in zebra (P = 0.055) and springbok (P = 0.052) using wet-weight faecal samples. However, once these intensity estimates were re-scaled by the percent of dry matter in the faeces, sex was no longer a significant factor (zebra, P = 0.268; springbok, P = 0.234). These results demonstrate that variation in faecal water content may confound analyses and could produce spurious conclusions, as was the case with host sex as a factor in the analysis. We thus recommend that researchers assess whether water variation could be a confounding factor when designing and performing research using faecal indices of parasite intensity.
Directory of Open Access Journals (Sweden)
Yi Cao
2013-06-01
A novel intelligent fault diagnosis method for motor roller bearings operating under unsteady rotating speed and load is proposed in this paper. The pseudo-Wigner-Ville distribution (PWVD) and relative crossing information (RCI) methods are used to extract feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI automatically extracts the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and not correlated with the rotation speed and load. Using the ant colony optimization (ACO) clustering algorithm, synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and that the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially.
Directory of Open Access Journals (Sweden)
Hong-Zhong Huang
2012-02-01
Various uncertainties are inevitable in complex engineered systems and must be carefully treated in design activities. Reliability-Based Multidisciplinary Design Optimization (RBMDO) has been receiving increasing attention in the past decades as a means of designing fully coupled systems while achieving a desired reliability under uncertainty. In this paper, a new formulation of multidisciplinary design optimization, namely Random/Fuzzy/Continuous/Discrete Variable Multidisciplinary Design Optimization (RFCDV-MDO), is developed within the framework of Sequential Optimization and Reliability Assessment (SORA) to deal with multidisciplinary design problems in which both aleatory and epistemic uncertainties are present. In addition, a hybrid discrete-continuous algorithm is put forth to efficiently solve problems in which both discrete and continuous design variables exist. The effectiveness and computational efficiency of the proposed method are demonstrated via a mathematical problem and a pressure vessel design problem.
Sources of variability for the single-comparator method in a heavy-water reactor
International Nuclear Information System (INIS)
Damsgaard, E.; Heydorn, K.
1978-11-01
The well-thermalized flux in the heavy-water-moderated DR 3 reactor at Risø prompted us to investigate to what extent a single comparator could be used for multi-element determination instead of multiple comparators. The reliability of the single-comparator method is limited by the thermal-to-epithermal flux ratio, and experiments were designed to determine the variations in this ratio throughout a reactor operating period (4 weeks, including a shut-down period of 4-5 days). The bi-isotopic method using zirconium as monitor was chosen because ⁹⁴Zr and ⁹⁶Zr exhibit a large difference in their I₀/σ_th values, and would permit determination of the flux ratio with a precision sufficient to detect variations. One of the irradiation facilities comprises a rotating magazine with 3 channels, each of which can hold five aluminium cans. In this rig, five cans, each holding a polyvial with 1 ml of aqueous zirconium solution, were irradiated simultaneously in one channel. Irradiations were carried out in the first and the third week of 4 periods. In another facility, consisting of a pneumatic tube system, two samples were irradiated simultaneously on top of each other in a polyethylene rabbit. Experiments were carried out once a week for 4 periods. All samples were counted on a Ge(Li) detector for ⁹⁵Zr, ⁹⁷ᵐNb and ⁹⁷Nb. The thermal-to-epithermal flux ratio was calculated from the induced activity, the nuclear data for the two zirconium isotopes, and the detector efficiency. By analysis of variance the total variation of the flux ratio was separated into a random variation between reactor periods and systematic differences between the positions, as well as the weeks in the operating period. If the variations are in statistical control, the error resulting from use of the single-comparator method in multi-element determination can be estimated for any combination of irradiation position and day in the operating period. With the measured flux ratio variations in DR
A novel variable baseline visibility detection system and its measurement method
Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan
2017-10-01
As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to contamination of the optical system as well as sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this issue, a novel measurement system was designed based upon multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver and applies a weighted least-squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
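The weighted least-squares step can be illustrated under the standard Beer-Lambert model T_i = exp(-σ·L_i), where σ is the extinction coefficient and L_i a baseline length; visibility then follows from the Koschmieder relation V = 3.912/σ (5% contrast threshold). The baselines, weights, and numerical values below are assumptions for illustration, not details taken from the paper.

```python
import math

def extinction_wls(baselines, transmittances, weights=None):
    """Weighted least-squares estimate of the extinction coefficient sigma
    from transmittance at several baselines (illustrative sketch; the
    paper's actual weighting scheme is not reproduced here).
    Model: T_i = exp(-sigma * L_i)  =>  ln T_i = -sigma * L_i (no intercept)."""
    if weights is None:
        weights = [1.0] * len(baselines)
    num = sum(w * (-math.log(t)) * L
              for w, t, L in zip(weights, transmittances, baselines))
    den = sum(w * L * L for w, L in zip(weights, baselines))
    return num / den

sigma = 0.039  # 1/m, roughly 100 m visibility (made-up value)
baselines = [10.0, 20.0, 40.0]            # movable-receiver positions (m)
T = [math.exp(-sigma * L) for L in baselines]
sigma_hat = extinction_wls(baselines, T)
visibility = 3.912 / sigma_hat            # Koschmieder relation
```

With noisy transmittances, down-weighting the short baselines (where a fixed measurement error corrupts ln T most) is the natural choice of weights.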
Chasing the effects of Pre-analytical Confounders - a Multicentre Study on CSF-AD biomarkers
Directory of Open Access Journals (Sweden)
Maria Joao Leitao
2015-07-01
Core cerebrospinal fluid (CSF) biomarkers (Aβ42, Tau and pTau) have recently been incorporated in the revised criteria for Alzheimer's disease (AD). However, their widespread clinical application lacks standardization. Pre-analytical sample handling and storage play an important role in the reliable measurement of these biomarkers across laboratories. In this study, we aim to extend the efforts of previous studies by employing a multicentre approach to assess the impact of less-studied CSF pre-analytical confounders on AD-biomarker quantification. Four centres participated in this study and followed the same established protocol. CSF samples were analysed for three biomarkers (Aβ42, Tau and pTau) and tested for different spinning conditions (temperature: room temperature (RT) vs. 4 °C; speed: 500 g vs. 2000 g vs. 3000 g), storage volume variations (25%, 50% and 75% of tube total volume) as well as freeze-thaw cycles (up to 5 cycles). The influence of sample routine parameters, inter-centre variability, and the relative value of each biomarker (reported as normal/abnormal) was analysed. Centrifugation conditions did not influence biomarker levels, except for samples with a high CSF total protein content, where either no centrifugation or centrifugation at RT, compared to 4 °C, led to higher Aβ42 levels. Reducing CSF storage volume from 75% to 50% of total tube capacity decreased Aβ42 concentration (within the analytical CV of the assay), whereas no change in Tau or pTau was observed. Moreover, the concentrations of Tau and pTau appear to be stable for up to 5 freeze-thaw cycles, whereas Aβ42 levels decrease if CSF is freeze-thawed more than 3 times. This systematic study reinforces the need for CSF centrifugation at 4 °C prior to storage and highlights the influence of storage conditions on Aβ42 levels. This study contributes to the establishment of harmonized standard operating procedures that will help reduce inter-lab variability of CSF
Variability of dose predictions for cesium-137 and radium-226 using the PRISM method
International Nuclear Information System (INIS)
Bergstroem, U.; Andersson, K.; Roejder, B.
1984-01-01
The uncertainty associated with dose predictions for cesium-137 and radium-226 in a specific ecosystem has been studied. The method used is PRISM, a systematic method for determining the effect of parameter uncertainties on model predictions. The ecosystems studied are different types of lakes, for which the following transport processes are included: runoff of water into the lake, irrigation, and transport in soil, in groundwater and in sediment. The ecosystems are modelled by the compartment principle, using the BIOPATH code. Seven different internal exposure pathways are included. The total dose commitment for both nuclides varies by about two orders of magnitude. For cesium-137 the total dose and the uncertainty are dominated by the consumption of fish. The largest contributor to the total uncertainty is the water-fish concentration factor. For radium-226 the largest contributions to the total dose come from the exposure pathways fish, milk and drinking water. Half of the uncertainty lies in the milk dose, which is dominated by the distribution factor for milk. (orig.)
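PRISM itself is a systematic sensitivity procedure; as a rough illustration of the underlying idea, a plain Monte Carlo propagation shows how a single wide parameter distribution (here a stand-in for the water-fish concentration factor) can dominate the spread of a predicted dose. All distributions and numerical values below are invented for illustration.

```python
import math
import random
import statistics

random.seed(1)

def fish_dose_sample():
    """One random realisation of a simple dose chain:
    water concentration x concentration factor x intake x dose coefficient.
    All parameter values and spreads are invented."""
    c_water = random.lognormvariate(math.log(1.0), 0.3)    # Bq/m3
    cf_fish = random.lognormvariate(math.log(400.0), 0.8)  # widest spread: dominates
    intake = random.lognormvariate(math.log(20.0), 0.2)    # kg/y
    dcf = 1.3e-8                                           # Sv/Bq, held fixed
    return c_water * cf_fish * intake * dcf

doses = [fish_dose_sample() for _ in range(20000)]
gm = math.exp(statistics.fmean(math.log(d) for d in doses))  # geometric mean dose
ds = sorted(doses)
ratio = ds[int(0.975 * len(ds))] / ds[int(0.025 * len(ds))]  # 95% interval width
```

The 95% interval ratio spans well over an order of magnitude here, and shrinking the spread of the concentration factor shrinks it almost proportionally, mirroring the dominance result reported in the abstract.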
Directory of Open Access Journals (Sweden)
Rafdzah Zaki
2013-06-01
Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and 42 ultimately fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by the comparison of means (8; 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
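The review's most popular statistic, in its one-way random-effects form ICC(1,1) = (MSB - MSW) / (MSB + (k-1)·MSW), can be computed directly from the ANOVA mean squares. A minimal sketch with made-up paired measurements:

```python
import statistics

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a subjects-x-raters table,
    computed from the usual ANOVA mean squares (illustrative sketch)."""
    n = len(ratings)       # subjects
    k = len(ratings[0])    # measurements per subject
    grand = statistics.fmean(x for row in ratings for x in row)
    row_means = [statistics.fmean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)       # between-subjects
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))                            # within-subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Two instruments measuring the same five subjects (made-up data):
ratings = [[10.1, 10.3], [12.0, 11.8], [9.5, 9.6], [14.2, 14.0], [11.1, 11.4]]
icc = icc_oneway(ratings)
```

Because the between-subject spread dwarfs the within-subject disagreement in this toy table, the ICC comes out close to 1; as the review notes, reporting which ICC form was used (and its confidence interval) matters, since the two-way forms give different values on the same data.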
Directory of Open Access Journals (Sweden)
Hsu Fang-Han
2012-10-01
Background: Despite initial response to adjuvant chemotherapy, ovarian cancer patients treated with the combination of paclitaxel and carboplatin frequently suffer recurrence after a few cycles of treatment, and the underlying mechanisms causing the chemoresistance remain unclear. Recently, The Cancer Genome Atlas (TCGA) research network concluded an ovarian cancer study and released the dataset to the public. The TCGA dataset has a large sample size, comprehensive molecular profiles, and clinical outcome information; however, because of the unknown molecular subtypes in ovarian cancer and the great diversity of adjuvant treatments TCGA patients went through, studying chemotherapeutic response using the TCGA data is difficult. Additionally, factors such as sample batches, patient age, and tumor stage further confound or suppress the identification of relevant genes, and thus of the biological functions and disease mechanisms. Results: To address these issues, we propose an analysis procedure designed to reduce the suppression effect by focusing on a specific chemotherapeutic treatment, and to remove confounding effects such as batch effects, patient age, and tumor stage. The proposed procedure starts with a batch-effect adjustment, followed by a rigorous sample selection process. Then, the gene expression, copy number, and methylation profiles from the TCGA ovarian cancer dataset are analyzed using a semi-supervised clustering method combined with a novel scoring function. As a result, two molecular classifications, one with poor copy number profiles and one with poor methylation profiles, enriched with unfavorable scores are identified. Compared with the samples enriched with favorable scores, these two classifications exhibit poor progression-free survival (PFS) and might be associated with poor chemotherapy response, specifically to the combination of paclitaxel and carboplatin. Significant genes and biological processes are
Variable cooling circuit for thermoelectric generator and engine and method of control
Prior, Gregory P
2012-10-30
An apparatus is provided that includes an engine, an exhaust system, and a thermoelectric generator (TEG) operatively connected to the exhaust system and configured to allow exhaust gas flow therethrough. A first radiator is operatively connected to the engine. An openable and closable engine valve is configured to open to permit coolant to circulate through the engine and the first radiator when coolant temperature is greater than a predetermined minimum coolant temperature. A first and a second valve are controllable to route cooling fluid from the TEG to the engine through coolant passages under a first set of operating conditions to establish a first cooling circuit, and from the TEG to a second radiator through at least some other coolant passages under a second set of operating conditions to establish a second cooling circuit. A method of controlling a cooling circuit is also provided.
Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco
2018-03-01
This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
Propensity score methodology for confounding control in health care utilization databases
Directory of Open Access Journals (Sweden)
Elisabetta Patorno
2013-06-01
Propensity score (PS) methodology is a common approach to control for confounding in nonexperimental studies of treatment effects using health care utilization databases. This methodology offers researchers many advantages compared with conventional multivariate models: it directly focuses on the determinants of treatment choice, facilitating the researcher's understanding of the clinical decision-making process; it allows for graphical comparisons of the distribution of propensity scores and truncation of subjects without overlapping PS, indicating a lack of equipoise; it allows transparent assessment of the confounder balance achieved by the PS at baseline; and it offers a straightforward approach to reduce the dimensionality of the sometimes large arrays of potential confounders in utilization databases, directly addressing the "curse of dimensionality" in the context of rare events. This article provides an overview of the use of propensity score methodology for pharmacoepidemiologic research with large health care utilization databases, covering recent discussions on covariate selection, the role of automated techniques for addressing unmeasurable confounding via proxies, strategies to maximize clinical equipoise at baseline, and the potential of machine-learning algorithms for optimized propensity score estimation. The appendix discusses the available software packages for PS methodology. Propensity scores are a frequently used and versatile tool for transparent and comprehensive adjustment of confounding in pharmacoepidemiology with large health care databases.
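As a minimal sketch of the first steps of this methodology, the propensity score is the fitted probability of treatment given confounders, which can then be converted into stabilized inverse-probability-of-treatment weights. The tiny gradient-descent logistic fit and simulated one-confounder cohort below are illustrative stand-ins for what would normally be done with dedicated statistical software.

```python
import math
import random

def fit_logistic(X, t, lr=0.1, iters=2000):
    """Tiny gradient-descent logistic regression (illustrative stand-in
    for the PS model)."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one coefficient per covariate
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, ti in zip(X, t):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - ti
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def propensity(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Simulated cohort: one confounder drives treatment assignment.
random.seed(7)
X = [[random.gauss(0, 1)] for _ in range(500)]
t = [1 if random.random() < 1 / (1 + math.exp(-1.5 * x[0])) else 0 for x in X]
w = fit_logistic(X, t)
ps = [propensity(w, xi) for xi in X]
# Stabilized inverse-probability-of-treatment weights:
p_treat = sum(t) / len(t)
iptw = [p_treat / p if ti == 1 else (1 - p_treat) / (1 - p)
        for ti, p in zip(t, ps)]
```

The same fitted scores support the article's other uses: plotting their distributions by treatment group to check overlap, and trimming subjects whose scores fall outside the region of equipoise.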
The performance of random coefficient regression in accounting for residual confounding.
Gustafson, Paul; Greenland, Sander
2006-09-01
Greenland (2000, Biometrics 56, 915-921) describes the use of random coefficient regression to adjust for residual confounding in a particular setting. We examine this setting further, giving theoretical and empirical results concerning the frequentist and Bayesian performance of random coefficient regression. Particularly, we compare estimators based on this adjustment for residual confounding to estimators based on the assumption of no residual confounding. This devolves to comparing an estimator from a nonidentified but more realistic model to an estimator from a less realistic but identified model. The approach described by Gustafson (2005, Statistical Science 20, 111-140) is used to quantify the performance of a Bayesian estimator arising from a nonidentified model. From both theoretical calculations and simulations we find support for the idea that superior performance can be obtained by replacing unrealistic identifying constraints with priors that allow modest departures from those constraints. In terms of point-estimator bias this superiority arises when the extent of residual confounding is substantial, but the advantage is much broader in terms of interval estimation. The benefit from modeling residual confounding is maintained when the prior distributions employed only roughly correspond to reality, for the standard identifying constraints are equivalent to priors that typically correspond much worse.
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen
2018-06-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
International Nuclear Information System (INIS)
Jafari, S.; Hojjati, M.H.; Fathi, A.
2012-01-01
Rotating disks work mostly at high angular velocities; this results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum-weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained by all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of the PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.
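Of the three optimizers compared, simulated annealing is the easiest to sketch. The fragment below minimizes a penalized disk weight over two thickness-profile parameters; the surrogate stress model and all constants are invented for illustration and are not the paper's semi-analytical elastic solution.

```python
import math
import random

random.seed(0)

RHO, OMEGA = 7800.0, 500.0  # density (kg/m3), angular speed (rad/s) - made up
RI, RO = 0.05, 0.30         # inner/outer radii (m) - made up
SIGMA_ALLOW = 2.0e8         # allowable stress (Pa) - made up

def weight(h0, h1):
    """Disk mass 2*pi*rho * integral of h(r)*r dr (midpoint rule);
    thickness h varies linearly from h0 at RI to h0+h1 at RO."""
    n, s = 200, 0.0
    for i in range(n):
        r = RI + (RO - RI) * (i + 0.5) / n
        s += (h0 + h1 * (r - RI) / (RO - RI)) * r
    return 2 * math.pi * RHO * s * (RO - RI) / n

def max_stress(h0, h1):
    """Toy surrogate: centrifugal load carried over the thinnest section.
    NOT the paper's elastic von Mises solution."""
    return RHO * OMEGA ** 2 * RO ** 2 * 0.01 / min(h0, h0 + h1)

def penalized(x):
    h0, h1 = x
    if h0 <= 1e-3 or h0 + h1 <= 1e-3:
        return float("inf")
    return weight(h0, h1) + max(0.0, max_stress(h0, h1) - SIGMA_ALLOW) * 1e-3

# Plain simulated annealing over (h0, h1), geometric cooling.
x = [0.05, 0.0]
fx = penalized(x)
best, fbest = x[:], fx
T = 1.0
for _ in range(5000):
    cand = [x[0] + random.gauss(0, 0.002), x[1] + random.gauss(0, 0.002)]
    fc = penalized(cand)
    if fc < fx or random.random() < math.exp(-(fc - fx) / max(T, 1e-9)):
        x, fx = cand, fc
        if fx < fbest:
            best, fbest = x[:], fx
    T *= 0.999  # cooling schedule
```

The annealer thins the disk until the penalty from the stress constraint balances the weight saving, illustrating why the paper finds SA and PSO easier to set up than KKT: the constraint enters only through a penalty term, with no optimality conditions to derive.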
Energy Technology Data Exchange (ETDEWEB)
Jafari, S. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Hojjati, M.H., E-mail: Hojjati@nit.ac.ir [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of); Fathi, A. [Faculty of Mechanical Engineering, Babol University of Technology, P.O. Box 484, Babol (Iran, Islamic Republic of)
2012-04-15
Rotating disks work mostly at high angular velocities; this results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding optimal disk profiles for minimum-weight design using the Karush-Kuhn-Tucker (KKT) method as a classical optimization method, and simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. Semi-analytical solutions for the elastic stress distribution in a rotating annular disk with uniform and variable thickness and density, proposed by the authors in previous works, have been used. The von Mises failure criterion is used as an inequality constraint to ensure that the rotating disk does not fail. The results show that the minimum weight obtained by all three methods is almost identical. The KKT method gives a profile with slightly less weight (6% less than SA and 1% less than PSO), while the PSO and SA methods are easier to implement and provide more flexibility compared with the KKT method. The effectiveness of the proposed optimization methods is shown. - Highlights: ► Karush-Kuhn-Tucker, simulated annealing and particle swarm methods are used. ► The KKT gives slightly less weight (6% less than SA and 1% less than PSO). ► Implementation of the PSO and SA methods is easier and provides more flexibility. ► The effectiveness of the proposed optimization methods is shown.
Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho
2018-07-15
Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bayat, Bardia; Zahraie, Banafsheh; Taghavi, Farahnaz; Nasseri, Mohsen
2013-08-01
Identification of spatial and spatiotemporal precipitation variations plays an important role in different hydrological applications such as missing data estimation. In this paper, the results of Bayesian maximum entropy (BME) and ordinary kriging (OK) are compared for modeling spatial and spatiotemporal variations of annual precipitation with and without incorporating elevation variations. The study area of this research is the Namak Lake watershed, located in the central part of Iran with an area of approximately 90,000 km². The BME and OK methods have been used to model the spatial and spatiotemporal variations of precipitation in this watershed, and their performances have been evaluated using cross-validation statistics. The results of the case study have shown the superiority of BME over OK in both spatial and spatiotemporal modes. The results have shown that BME estimates are less biased and more accurate than OK. The improvements in the BME estimates are mostly related to incorporating hard and soft data in the estimation process, which resulted in more detailed and reliable results. The estimation error variance for BME is less than that for OK in the study area in both spatial and spatiotemporal modes.
Helin, Tuukka A; Pakkanen, Anja; Lassila, Riitta; Joutsi-Korhonen, Lotta
2013-05-01
Laboratory tests to assess novel oral anticoagulants (NOACs) are under evaluation. Routine monitoring is unnecessary, but under special circumstances bioactivity assessment becomes crucial. We analyzed the effects of NOACs on coagulation tests and the availability of specific assays at different laboratories. Plasma samples spiked with dabigatran (Dabi; 120 and 300 μg/L) or rivaroxaban (Riva; 60, 146, and 305 μg/L) were sent to 115 and 38 European laboratories, respectively. International normalized ratio (INR) and activated partial thromboplastin time (APTT) were analyzed for all samples; thrombin time (TT) was analyzed specifically for Dabi and calibrated anti-activated factor X (anti-Xa) activity for Riva. We compared the results with patient samples. Results of Dabi samples were reported by 73 laboratories (13 INR and 9 APTT reagents) and Riva samples by 22 laboratories (5 INR and 4 APTT reagents). Both NOACs increased INR values; the increase was modest, albeit larger, for Dabi, with higher CV, especially with Quick (vs Owren) methods. Both NOACs dose-dependently prolonged the APTT. Again, the prolongation and CVs were larger for Dabi. The INR and APTT results varied reagent-dependently; the specific TT and anti-Xa assays were available in only a minority of laboratories. The screening tests INR and APTT are suboptimal in assessing NOACs, having high reagent dependence and low sensitivity and specificity. They may provide information, if laboratories recognize their limitations. The variation will likely increase and the sensitivity differ in clinical samples. Specific assays measure NOACs accurately; however, few laboratories applied them. © 2013 American Association for Clinical Chemistry.
Methods for Confirming the Gram Reaction of Gram-variable Bacillus Species Isolated from Tobacco
Directory of Open Access Journals (Sweden)
Morin A
2014-12-01
Full Text Available Bacillus is a predominant genus of bacteria isolated from tobacco. The Gram stain is the most commonly used and most important of all diagnostic staining techniques in microbiology. In order to help confirm the Gram positivity of Bacillus isolates from tobacco, three methods exploiting the chemical differences between the cell wall and membrane of Gram-positive and Gram-negative bacteria were investigated: the KOH (potassium hydroxide), the LANA (L-alanine-4-nitroanilide), and the vancomycin susceptibility tests. When colonies of Gram-negative bacteria are treated with 3% KOH solution, a slimy suspension is produced, probably due to destruction of the cell wall and liberation of deoxyribonucleic acid (DNA). Gram-positive cell walls resist KOH treatment. The LANA test reveals the presence of a cell wall aminopeptidase that hydrolyzes L-alanine-4-nitroanilide in Gram-negative bacteria. This enzyme is absent in Gram-positive bacteria. Vancomycin is a glycopeptide antibiotic inhibiting cell wall peptidoglycan synthesis in Gram-positive microorganisms. Absence of lysis with KOH, absence of hydrolysis of LANA, and susceptibility to vancomycin were used with the Gram reaction to confirm the Gram positivity of various Bacillus species isolated from tobacco. With the exception of B. laevolacticus, all Bacillus species tested showed negative reactions in the KOH and LANA tests, and all species were susceptible to vancomycin (5 and 30 µg).
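The decision logic of the three confirmatory tests can be captured in a small helper; this is a sketch of the combination rule stated above, not software from the study:

```python
def confirm_gram_positive(koh_lysis: bool, lana_hydrolysis: bool,
                          vancomycin_susceptible: bool) -> bool:
    """Combine the three confirmatory tests described in the abstract.

    A Gram-positive Bacillus isolate is expected to show:
      - no lysis in 3% KOH (the cell wall resists KOH),
      - no hydrolysis of LANA (the aminopeptidase is absent), and
      - susceptibility to vancomycin.
    """
    return (not koh_lysis) and (not lana_hydrolysis) and vancomycin_susceptible

# Typical pattern for a Gram-positive tobacco Bacillus isolate:
print(confirm_gram_positive(False, False, True))   # True
# A Gram-negative organism lyses in KOH and hydrolyzes LANA:
print(confirm_gram_positive(True, True, False))    # False
```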
Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2018-01-01
Background and Objectives Labeling instances by domain experts for classification is often time-consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by
Directory of Open Access Journals (Sweden)
Qihao Weng
2013-03-01
Full Text Available The rainfall-runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the Soil Conservation Service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable that is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using linear spectral mixture analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (occurring once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
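The core SCS-CN relation used by such models can be written in a few lines; the curve number below is an assumed composite value for illustration, not one derived from the Guangzhou V-I-S analysis:

```python
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from rainfall p_mm using the SCS-CN
    relation Q = (P - Ia)^2 / (P - Ia + S), with S = 25400/CN - 254
    (metric units) and initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                 # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# CN = 85 is an assumed composite value for a partly impervious urban
# catchment; 57 mm and 81 mm are the two design storms from the paper.
for p in (57.0, 81.0):
    print(round(scs_cn_runoff(p, 85.0), 1))
```

Higher composite CN (more impervious surface) yields more runoff for the same storm, which is why the V-I-S fractions feed directly into the runoff estimate.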
Alternative method for variable aspect ratio vias using a vortex mask
Schepis, Anthony R.; Levinson, Zac; Burbine, Andrew; Smith, Bruce W.
2014-03-01
Historically IC (integrated circuit) device scaling has bridged the gap between technology nodes. Device size reduction is enabled by increased pattern density, enhancing functionality and effectively reducing cost per chip. Exemplifying this trend are aggressive reductions in memory cell sizes that have resulted in systems with diminishing area between bit/word lines. This affords an even greater challenge in the patterning of contact level features that are inherently difficult to resolve because of their relatively small area and complex aerial image. To accommodate these trends, semiconductor device design has shifted toward the implementation of elliptical contact features. This empowers designers to maximize the use of free device space, preserving contact area and effectively reducing the via dimension along a single axis. It is therefore critical to provide methods that enhance the resolving capacity of varying aspect ratio vias for implementation in electronic design systems. Vortex masks, characterized by their helically induced propagation of light and consequent dark core, afford great potential for the patterning of such features when coupled with a high resolution negative tone resist system. This study investigates the integration of a vortex mask in a 193 nm immersion (193i) lithography system and qualifies its ability to augment aspect ratio through feature density using aerial image vector simulation. It was found that vortex fabricated vias provide a distinct resolution advantage over traditionally patterned contact features employing a 6% attenuated phase shift mask (APM). 1:1 features were resolvable at 110 nm pitch with a 38 nm critical dimension (CD) and 110 nm depth of focus (DOF) at 10% exposure latitude (EL). Furthermore, iterative source-mask optimization was executed as a means to augment aspect ratio. By employing mask asymmetries and directionally biased sources, aspect ratios ranging between 1:1 and 2:1 were achievable; however, this
Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert
2017-09-01
Labeling instances by domain experts for classification is often time-consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven
A review of instrumental variable estimators for Mendelian randomization.
Burgess, Stephen; Small, Dylan S; Thompson, Simon G
2017-10-01
Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
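The ratio (Wald) estimator mentioned above can be illustrated with a small simulation; the data-generating model, effect sizes, and sample size are assumptions chosen for illustration, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 200_000, 0.5                 # true causal effect of X on Y

u = rng.normal(size=n)                 # unmeasured confounder
z = rng.normal(size=n)                 # instrument: affects X, not Y directly
x = z + u + rng.normal(size=n)         # exposure (risk factor)
y = beta * x + u + rng.normal(size=n)  # outcome

# Naive OLS slope is biased because U confounds the X-Y association:
beta_ols = np.cov(x, y)[0, 1] / np.var(x)

# Ratio (Wald) estimator: cov(Z, Y) / cov(Z, X). Because Z is unrelated
# to U and acts on Y only through X, this is consistent for beta.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(beta_ols)  # biased upward by the confounder
print(beta_iv)   # close to the true 0.5
```

With a weak instrument (cov(Z, X) near zero) the denominator becomes unstable, which is exactly the bias and coverage concern the review raises.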
Directory of Open Access Journals (Sweden)
Pamuda Pudjisuryadi
2008-01-01
Full Text Available A meshless local Petrov-Galerkin (MLPG) method that employs polygonal sub-domains constructed from several triangular patches rather than the typically used circular sub-domains is presented. Moving least-squares approximation is used to construct the trial displacements, and linear Lagrange interpolation functions are used to construct the test functions. An adaptive technique to improve the accuracy of approximate solutions is developed to minimize the computational cost. Variable domain of influence (VDOI) and effective stress gradient indicator (EK) for local error assessment are the focus of this study. Several numerical examples are presented to verify the efficiency and accuracy of the proposed adaptive MLPG method. The results show that the proposed adaptive technique performs as expected, that is, it refines the problem domain in areas of high stress concentration, where higher accuracy is commonly required.
International Nuclear Information System (INIS)
Canestrari, Francesco; Stimilli, Arianna; Bahia, Hussain U.; Virgili, Amedeo
2015-01-01
Highlights: • Proposal of a new method to analyze low-temperature cracking of bituminous mixtures. • Reliability of the relaxation modulus master curve modeling through Prony series. • Suitability of the pseudo-variables approach for a closed-form solution. - Abstract: Thermal cracking is a critical failure mode for asphalt pavements. Relaxation modulus is the major viscoelastic property that controls the development of thermally induced tensile stresses. Therefore, accurate determination of the relaxation modulus is fundamental for designing long lasting pavements. This paper proposes a reliable analytical solution for constructing the relaxation modulus master curve by measuring stress and strain thermally induced in asphalt mixtures. The solution, based on Boltzmann’s Superposition Principle and pseudo-variables concepts, accounts for the time and temperature dependency of bituminous materials’ modulus, avoiding complex integral transformations. The applicability of the solution is demonstrated by testing a reference mixture using the Asphalt Thermal Cracking Analyzer (ATCA) device. By applying thermal loadings on restrained and unrestrained asphalt beams, ATCA allows the determination of several parameters, but is still unable to provide reliable estimations of relaxation properties. Without them the measurements from ATCA cannot be used in modeling of pavement behavior. Thus, the proposed solution successfully integrates ATCA experimental data. The same methodology can be applied to all test methods that concurrently measure stress and strain. The statistical parameters used to evaluate the goodness of fit show optimum correlation between theoretical and experimental results, demonstrating the accuracy of this mathematical approach.
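A Prony series represents the relaxation modulus as E(t) = E_inf + Σ_i E_i exp(-t/τ_i); once the relaxation times τ_i are fixed a priori (common practice), the coefficients enter linearly and follow from ordinary least squares. The sketch below uses synthetic data and an assumed time grid, not ATCA measurements:

```python
import numpy as np

# Synthetic relaxation-modulus data from an assumed two-term model
# E(t) = E_inf + E_1 exp(-t/tau_1) + E_2 exp(-t/tau_2), with 1% noise.
t = np.logspace(-2, 3, 80)                      # time (s)
true = 50 + 800 * np.exp(-t / 0.5) + 300 * np.exp(-t / 50.0)
rng = np.random.default_rng(1)
data = true * (1 + 0.01 * rng.normal(size=t.size))

# Fix a logarithmically spaced relaxation spectrum; the Prony
# coefficients are then a linear least-squares problem.
taus = np.logspace(-2, 3, 11)
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(A, data, rcond=None)

fit = A @ coef
rel_err = np.max(np.abs(fit - data) / data)
print(rel_err)  # small residual: the series reproduces the master curve
```

In practice the τ_i grid is chosen to span the measured time window, and non-negativity of the coefficients may additionally be enforced for physical consistency.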
Sun, Fei; Xu, Bing; Zhang, Yi; Dai, Shengyun; Yang, Chan; Cui, Xianglong; Shi, Xinyuan; Qiao, Yanjiang
2016-01-01
The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules' properties on the CQAs were fully investigated by different chemometric methods, such as the partial least squares, the orthogonal projection to latent structures, and the multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet.
A method for determining average beach slope and beach slope variability for U.S. sandy coastlines
Doran, Kara S.; Long, Joseph W.; Overbeck, Jacquelyn R.
2015-01-01
The U.S. Geological Survey (USGS) National Assessment of Hurricane-Induced Coastal Erosion Hazards compares measurements of beach morphology with storm-induced total water levels to produce forecasts of coastal change for storms impacting the Gulf of Mexico and Atlantic coastlines of the United States. The wave-induced water level component (wave setup and swash) is estimated by using modeled offshore wave height and period and measured beach slope (from dune toe to shoreline) through the empirical parameterization of Stockdon and others (2006). Spatial and temporal variability in beach slope leads to corresponding variability in predicted wave setup and swash. For instance, seasonal and storm-induced changes in beach slope can lead to differences on the order of 1 meter (m) in wave-induced water level elevation, making accurate specification of this parameter and its associated uncertainty essential to skillful forecasts of coastal change. A method for calculating spatially and temporally averaged beach slopes is presented here along with a method for determining total uncertainty for each 200-m alongshore section of coastline.
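A minimal sketch of the averaging-with-uncertainty idea follows; the survey layout, the numbers, and the quadrature combination of measurement and temporal variability are illustrative assumptions, not values from the USGS method:

```python
import numpy as np

# Hypothetical beach-slope surveys: 5 alongshore 200-m sections (rows)
# observed on 4 survey dates (columns). Values are dune-toe-to-shoreline
# slopes (tan beta); the layout and numbers are illustrative only.
slopes = np.array([
    [0.08, 0.10, 0.07, 0.09],
    [0.12, 0.11, 0.13, 0.12],
    [0.05, 0.06, 0.04, 0.05],
    [0.09, 0.09, 0.10, 0.08],
    [0.11, 0.14, 0.10, 0.12],
])
measurement_sigma = 0.01   # assumed per-survey measurement uncertainty

mean_slope = slopes.mean(axis=1)             # temporal average per section
temporal_sigma = slopes.std(axis=1, ddof=1)  # seasonal/storm variability
# Total uncertainty: measurement and temporal variability in quadrature.
total_sigma = np.sqrt(measurement_sigma**2 + temporal_sigma**2)

for m, s in zip(mean_slope, total_sigma):
    print(f"{m:.3f} +/- {s:.3f}")
```

Propagating this slope uncertainty through a setup/swash parameterization then yields the spread in predicted wave-induced water levels described above.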
Directory of Open Access Journals (Sweden)
Nurbaiti
2017-03-01
Full Text Available Science and technology have evolved rapidly in many fields of knowledge, including mathematics. Such development can contribute to improvements in the learning process that encourage students and teachers to enhance their abilities and performance. In teaching the system of linear equations in two variables (SPLDV), the conventional method, in which the teacher is the center of the learning process, is still widely practiced. This method can cause students to become bored and to have difficulty understanding the concepts they are learning. Therefore, to make learning SPLDV easier, an interesting, interactive medium that students and teachers can use is needed. This medium was designed using the MATLAB GUI and named the students’ electronic worksheet (e-LKS). The program is intended to help students find and understand SPLDV concepts more easily, and it is also expected to improve students’ motivation and creativity in learning the material. Based on testing with the System Usability Scale (SUS), the interactive mathematics learning medium for the system of linear equations in two variables (SPLDV) received grade B (excellent), meaning that it is suitable for use by Junior High School students of grade VIII.
2013-01-01
Background Previous research showed inconsistent results regarding the relationship between the age of patients and preference statements regarding GP care. This study investigates whether elderly patients have different preference scores and ranking orders concerning 58 preference statements for GP care than younger patients. Moreover, this study examines whether patient characteristics and practice location may confound the relationship between age and the categorisation of a preference score as very important. Methods Data of the Consumer Quality Index GP Care were used, which were collected in 32 general practices in the Netherlands. The rank order and preference score were calculated for 58 preference statements for four age groups (0–30, 31–50, 51–74, 75 years and older). Using chi-square tests and logistic regression analyses, it was investigated whether a significant relationship between age and preference score was confounded by patient characteristics and practice location. Results Elderly patients did not have a significantly different ranking order for the preference statements than the other three age groups (r = 0.0193; p = 0.41). However, in 53% of the statements significant differences were found in preference score between the four age groups. Elderly patients categorized significantly fewer preference statements as ‘very important’. In most cases, the significant relationships were not confounded by gender, education, perceived health, the number of GP contacts and location of the GP practice. Conclusion The preferences of elderly patients for GP care concern the same items as younger patients. However, their preferences are less strong, which cannot be ascribed to gender, education, perceived health, the number of GP contacts and practice location. PMID:23800156
Shrestha, Saurav; Hirvonen, Jussi; Hines, Christina S; Henter, Ioline D; Svenningsson, Per; Pike, Victor W; Innis, Robert B
2012-02-15
The serotonin-1A (5-HT(1A)) receptor is of particular interest in human positron emission tomography (PET) studies of major depressive disorder (MDD). Of the eight studies investigating this issue in the brains of patients with MDD, four reported decreased 5-HT(1A) receptor density, two reported no change, and two reported increased 5-HT(1A) receptor density. While clinical heterogeneity may have contributed to these differing results, methodological factors by themselves could also explain the discrepancies. This review highlights several of these factors, including the use of the cerebellum as a reference region and the imprecision of measuring the concentration of parent radioligand in arterial plasma, the method otherwise considered to be the 'gold standard'. Other potential confounds also exist that could restrict or unexpectedly affect the interpretation of results. For example, the radioligand may be a substrate for an efflux transporter - like P-gp - at the blood-brain barrier; furthermore, the binding of the radioligand to the receptor in various stages of cellular trafficking is unknown. Efflux transport and cellular trafficking may also be differentially expressed in patients compared to healthy subjects. We believe that, taken together, the existing disparate findings do not reliably answer the question of whether 5-HT(1A) receptors are altered in MDD or in subgroups of patients with MDD. In addition, useful meta-analysis is precluded because only one of the imaging centers acquired all the data necessary to address these methodological concerns. We recommend that in the future, individual centers acquire more thorough data capable of addressing methodological concerns, and that multiple centers collaborate to meaningfully pool their data for meta-analysis. Published by Elsevier Inc.
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance, in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve these problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte Carlo simulations and compare our results with those obtained by the Wiener-Hopf method with closed-form expressions for the factors.
An education gradient in health, a health gradient in education, or a confounded gradient in both?
Lynch, Jamie L; von Hippel, Paul T
2016-04-01
There is a positive gradient associating educational attainment with health, yet the explanation for this gradient is not clear. Does higher education improve health (causation)? Do the healthy become highly educated (selection)? Or do good health and high educational attainment both result from advantages established early in the life course (confounding)? This study evaluates these competing explanations by tracking changes in educational attainment and Self-rated Health (SRH) from age 15 to age 31 in the National Longitudinal Study of Youth, 1997 cohort. Ordinal logistic regression confirms that high-SRH adolescents are more likely to become highly educated. This is partly because adolescent SRH is associated with early advantages including adolescents' academic performance, college plans, and family background (confounding); however, net of these confounders adolescent SRH still predicts adult educational attainment (selection). Fixed-effects longitudinal regression shows that educational attainment has little causal effect on SRH at age 31. Completion of a high school diploma or associate's degree has no effect on SRH, while completion of a bachelor's or graduate degree has effects that, though significant, are quite small (less than 0.1 points on a 5-point scale). While it is possible that educational attainment would have greater effect on health at older ages, at age 31 what we see is a health gradient in education, shaped primarily by selection and confounding rather than by a causal effect of education on health. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assessing Mediation Using Marginal Structural Models in the Presence of Confounding and Moderation
Coffman, Donna L.; Zhong, Wei
2012-01-01
This article presents marginal structural models with inverse propensity weighting (IPW) for assessing mediation. Generally, individuals are not randomly assigned to levels of the mediator. Therefore, confounders of the mediator and outcome may exist that limit causal inferences, a goal of mediation analysis. Either regression adjustment or IPW…
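The weighting idea described above can be sketched in a self-contained simulation; all variable names, effect sizes, and the sample size are assumptions for illustration, not values from the article. A binary mediator is confounded by a covariate, and inverse propensity weights recover the true contrast; the propensity model is fit by plain Newton-Raphson logistic regression rather than any particular statistics package:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

c = rng.normal(size=n)                      # confounder of mediator & outcome
p_m = 1 / (1 + np.exp(-c))                  # true propensity of the mediator
m = rng.binomial(1, p_m)                    # non-randomized binary mediator
y = 1.0 * m + 2.0 * c + rng.normal(size=n)  # true mediator effect = 1.0

# Naive contrast is confounded: units with M=1 tend to have higher C.
naive = y[m == 1].mean() - y[m == 0].mean()

# Fit the propensity model P(M=1|C) by Newton-Raphson logistic regression.
X = np.column_stack([np.ones(n), c])
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    grad = X.T @ (m - p)                    # score
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    b += np.linalg.solve(hess, grad)

p_hat = 1 / (1 + np.exp(-X @ b))
w = np.where(m == 1, 1 / p_hat, 1 / (1 - p_hat))  # inverse propensity weights

# Weighted contrast: a pseudo-population in which M is unconfounded by C.
ipw = (np.sum(w * m * y) / np.sum(w * m)
       - np.sum(w * (1 - m) * y) / np.sum(w * (1 - m)))
print(naive, ipw)  # naive is inflated; the IPW estimate is near 1.0
```

Stabilized weights (multiplying by the marginal P(M)) are often preferred in practice to reduce the variance caused by extreme propensities.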
Jackson, D.; White, I.; Kostis, J.B.; Wilson, A.C.; Folsom, A.R.; Feskens, E.J.M.
2009-01-01
One difficulty in performing meta-analyses of observational cohort studies is that the availability of confounders may vary between cohorts, so that some cohorts provide fully adjusted analyses while others only provide partially adjusted analyses. Commonly, analyses of the association between an
van der Meer, Hedwig A; Speksnijder, Caroline M; Engelbert, Raoul H H; Lobbezoo, Frank; Nijhuis-van der Sanden, Maria W G; Visscher, Corine M
2017-09-01
The objective of this observational study was to establish the possible influence of confounders on the association between temporomandibular disorders (TMD) and headaches in a patient population from a TMD and Orofacial Pain Clinic. Several subtypes of headaches have been diagnosed: self-reported headache, (probable) migraine, (probable) tension-type headache, and secondary headache attributed to TMD. The presence of TMD was subdivided into 2 subtypes: painful TMD and function-related TMD. The associations between the subtypes of TMD and headaches were evaluated by single regression models. To study the influence of possible confounding factors on this association, the regression models were extended with age, sex, bruxism, stress, depression, and somatic symptoms. Of the included patients (n=203), 67.5% experienced headaches. In the subsample of patients with a painful TMD (n=58), the prevalence of self-reported headaches increased to 82.8%. The associations found between self-reported headache and (1) painful TMD and (2) function-related TMD were confounded by the presence of somatic symptoms. For probable migraine, both somatic symptoms and bruxism confounded the initial association found with painful TMD. The findings of this study imply that there is a central working mechanism overlapping TMD and headache. Health care providers should not regard these disorders separately, but rather look at the bigger picture to appreciate the complex nature of the diagnostic and therapeutic process.
Directory of Open Access Journals (Sweden)
L. L. Davtian
2018-03-01
Full Text Available The influence of variable pharmaceutical factors on the technological processes of drug manufacturing is extremely important. Thus, in developing a new drug in the form of medicinal films, it was necessary to determine the effect of the method of adding the active substances on the effectiveness of the drug. The aim is to rationalize the method of adding the active pharmaceutical ingredients into the composition of the developed drug. Materials and methods. The experimental samples were medicinal films prepared using various methods of adding the active ingredients. The quality of the samples was evaluated by their antimicrobial activity against Clostridium sporogenes and Staphylococcus aureus, determined by the agar diffusion method. Results. The study of the antimicrobial activity of medicinal films prepared with various methods of adding the active ingredients showed that adding metronidazole as an aqueous solution increases the antimicrobial activity of the films by 21.23%, 16.89%, and 28.59%, respectively, compared with films of similar composition in which metronidazole was added as a suspension and the remaining ingredients were added in the same way. Introducing chlorhexidine bigluconate and glucosamine hydrochloride into the film-forming solution last, together with the metronidazole solution, increases the antimicrobial activity by 24.67%, probably owing to the absence of contact between thermolabile ingredients and solutions of film-forming substances that have a high dissolution temperature. Conclusions. The most rational approach is to add metronidazole to the medicinal films as a 0.01% aqueous solution mixed with the chlorhexidine bigluconate and glucosamine hydrochloride solution into the final film-forming solution.
Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone
2018-04-01
Background Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and risk of failure. Objective Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov Chain Monte Carlo framework. For this purpose, we monitored prostate-specific antigen (PSA) in prostate cancer patients treated with external beam radiation therapy. In the presence of rising PSA after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the PSA concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and the PSA trajectory on the risk of failure. Methods We used a two-stage model within a Bayesian framework to assess the role of the PSA profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled PSA using a hierarchical piecewise linear trajectory with a random changepoint. Residual PSA variability was expressed as a function of PSA concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the PSA nadir and its timing and the PSA slopes before and after the nadir, as provided by the longitudinal process. Results We showed positive associations between an increased PSA nadir, an earlier changepoint and a steeper post-nadir slope with an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the PSA trajectory was
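The hierarchical piecewise-linear trajectory with a random changepoint described in the abstract can be illustrated for a single hypothetical patient. The function, parameter names, and values below are illustrative assumptions, not the paper's fitted Bayesian model:

```python
import numpy as np

def psa_trajectory(t, psa0, changepoint, slope_pre, slope_post):
    """Piecewise-linear (log-)PSA profile with a single changepoint (nadir).

    Hypothetical parameterisation: the marker declines at `slope_pre` until
    `changepoint`, then rises at `slope_post` (the post-nadir relapse slope).
    """
    t = np.asarray(t, dtype=float)
    pre = psa0 + slope_pre * np.minimum(t, changepoint)
    post = slope_post * np.maximum(t - changepoint, 0.0)
    return pre + post

# One simulated patient: nadir at 18 months after end of radiotherapy
t = np.linspace(0, 60, 121)
y = psa_trajectory(t, psa0=2.0, changepoint=18.0, slope_pre=-0.08, slope_post=0.05)
nadir_idx = int(np.argmin(y))
```

In the two-stage approach, individual predictions of the nadir value, its timing, and the two slopes from such a longitudinal model would then enter the survival model as covariates.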
Jones, Andrew; Button, Emily; Rose, Abigail K.; Robinson, Eric; Christiansen, Paul; Di Lemma, Lisa; Field, Matt
2015-01-01
Rationale Motivation to drink alcohol can be measured in the laboratory using an ad-libitum 'taste test', in which participants rate the taste of alcoholic drinks whilst their intake is covertly monitored. Little is known about the construct validity of this paradigm. Objective The objective of this study was to investigate variables that may compromise the validity of this paradigm. Methods We re-analysed data from 12 studies from our laboratory that incorporated a...
Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J
2014-10-07
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method proved most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
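The elimination-of-uninformative-variables (UVE) idea mentioned above can be sketched on toy data: artificial pure-noise columns are appended, regression coefficients are jackknifed, and only real variables more "stable" than the best noise column survive. This is a minimal illustration of the principle; ordinary least squares stands in for the PLS step, and all data and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 samples, 5 informative + 10 uninformative "spectral" variables
n, p_inf, p_noise = 60, 5, 10
X = rng.normal(size=(n, p_inf + p_noise))
y = X[:, :p_inf] @ np.array([1.0, -2.0, 1.5, 0.5, -1.0]) + 0.1 * rng.normal(size=n)

# UVE idea: append artificial pure-noise columns, jackknife the regression
# coefficients, and keep only real variables whose stability |mean(b)/std(b)|
# exceeds the best stability achieved by any artificial noise column.
p = X.shape[1]
X_aug = np.hstack([X, rng.normal(size=(n, p))])
B = []
for i in range(n):                       # leave-one-out coefficient estimates
    keep = np.arange(n) != i
    b, *_ = np.linalg.lstsq(X_aug[keep], y[keep], rcond=None)
    B.append(b)
B = np.array(B)
stability = np.abs(B.mean(axis=0) / B.std(axis=0))
cutoff = stability[p:].max()             # best stability among noise columns
selected = np.where(stability[:p] > cutoff)[0]
```

With the strong signal used here, the five informative columns comfortably exceed the noise cutoff and are retained.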
Directory of Open Access Journals (Sweden)
Sátor Ladislav
2014-03-01
Full Text Available A numerical analysis based on the meshless local Petrov-Galerkin (MLPG) method is proposed for a functionally graded material (FGM) beam. The planar bending of the beam is considered with a transversal gradation of Young's modulus and a variable depth of the beam. The collocation formulation is constructed from the equilibrium equations for the mechanical fields. Dirac's delta function is employed as a test function in the derivation of a strong formulation. The Moving Least Squares (MLS) approximation technique is applied for an approximation of the spatial variations of all the physical quantities. An investigation of the accuracy, the convergence of the accuracy, the computational efficiency and the effect of the level of the gradation of Young's modulus on the behaviour of coupled mechanical fields is presented in various boundary value problems for a rectangular beam with a functionally graded Young's modulus.
Energy Technology Data Exchange (ETDEWEB)
Miserev, D. S., E-mail: d.miserev@student.unsw.edu.au, E-mail: erazorheader@gmail.com [University of New South Wales, School of Physics (Australia)
2016-06-15
The problem of localized states in 1D systems with a relativistic spectrum, namely, graphene stripes and carbon nanotubes, is studied analytically. The bound state as a superposition of two chiral states is completely described by their relative phase, which is the foundation of the variable phase method (VPM) developed herein. Based on our VPM, we formulate and prove the relativistic Levinson theorem. The problem of bound states can be reduced to the analysis of closed trajectories of some vector field. Remarkably, the Levinson theorem appears as the Poincaré index theorem for these closed trajectories. The VPM equation is also reduced to the nonrelativistic and semiclassical limits. The limit of a small momentum p_y of transverse quantization is applicable to an arbitrary integrable potential. In this case, a single confined mode is predicted.
Directory of Open Access Journals (Sweden)
Dawei Gong
2017-01-01
Full Text Available The pinning synchronization problem for complex networks with interval delays is studied in this paper. First, by using an inequality derived from the Newton-Leibniz formula, a new synchronization criterion is obtained. Second, combining Finsler's Lemma with homogeneous matrices, convergent linear matrix inequality (LMI) relaxations for synchronization analysis are proposed with matrix-valued coefficients. Third, a new variable subintervals method is applied to extend the obtained results. Different from previous results, the interval delays are divided into subdelays, which introduces more free weighting matrices. Fourth, the results are expressed as LMIs, which can be easily analyzed or tested. Finally, the stability of the networks is proved via Lyapunov's stability theorem, and the simulation of the trajectory confirms the practicality of the proposed pinning control.
Energy Technology Data Exchange (ETDEWEB)
Wang, Xiao; Gao, Wenzhong; Scholbrock, Andrew; Muljadi, Eduard; Gevorgian, Vahan; Wang, Jianhui; Yan, Weihang; Zhang, Huaguang
2017-10-18
To mitigate the degraded power system inertia and undesirable primary frequency response caused by large-scale wind power integration, the frequency support capabilities of variable-speed wind turbines are studied in this work. This is made possible by controlled inertial response, which is demonstrated on a research turbine, the controls advanced research turbine, 3-bladed (CART3). Two distinct inertial control (IC) methods are analysed in terms of their impacts on the grid and the response of the turbine itself. The kinetic energy released in the IC methods is determined by the frequency measurement or by a shaped active power reference in the turbine speed-power plane. The wind turbine model is based on the high-fidelity turbine simulator FAST (fatigue, aerodynamics, structures and turbulence), which constitutes the aggregated wind power plant model together with a simplified power converter model. The IC methods are implemented over the baseline CART3 controller and evaluated in modified 9-bus and 14-bus test power grids considering different wind speeds and different wind power penetration levels. The simulation results provide various insights on designing such ICs. The authors calculate the short-term dynamic equivalent loads and discuss the turbine structural loadings related to the inertial response.
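The basic inertial-response idea, injecting extra active power in proportion to the measured rate of change of frequency, can be sketched with a one-machine swing-equation model. All constants below are illustrative assumptions, not CART3 or FAST values:

```python
def simulate_frequency(ic_gain, t_end=10.0, dt=0.001):
    """Swing-equation sketch of a load step with wind-turbine inertial control.

    `ic_gain` scales a df/dt-based power injection from the turbine; returns
    the frequency nadir (largest deviation, in per unit). Illustrative only.
    """
    H, D = 4.0, 1.0          # system inertia constant (s) and load damping (pu)
    dP_load = -0.1           # 0.1 pu load step (generation deficit)
    f = 0.0                  # frequency deviation (pu)
    rocof = 0.0              # rate of change of frequency
    f_min = 0.0
    for _ in range(int(t_end / dt)):
        dP_wind = -ic_gain * rocof                 # inertial response opposes df/dt
        rocof = (dP_load + dP_wind - D * f) / (2.0 * H)
        f += rocof * dt
        f_min = min(f_min, f)
    return f_min

nadir_no_ic = simulate_frequency(0.0)   # no inertial control
nadir_ic = simulate_frequency(6.0)      # with inertial control
```

The controlled case slows the frequency decline (effectively raising the system inertia from 2H to 2H plus the gain), so its nadir is shallower than the uncontrolled one.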
Directory of Open Access Journals (Sweden)
Xin Lu
2018-03-01
Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, so the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable-order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
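As background, the fractional derivative underlying such models can be sketched with the Grünwald-Letnikov definition at a fixed order; the paper's continuous variable-order battery model is more elaborate, so this is only an illustration of the operator itself:

```python
import numpy as np

def gl_fractional_derivative(y, alpha, dt):
    """Grünwald-Letnikov fractional derivative of order `alpha` on a uniform
    grid. A minimal fixed-order sketch of the fractional calculus behind
    fractional-order battery models; not the paper's variable-order scheme."""
    n = len(y)
    # GL binomial weights w_k = (-1)^k * C(alpha, k), built recursively
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha y(t_i) ~ dt^(-alpha) * sum_k w_k * y_{i-k}
    d = np.array([np.dot(w[:i + 1], y[i::-1]) for i in range(n)])
    return d / dt ** alpha

# Sanity check: order 1 applied to f(t) = t recovers the ordinary derivative
t = np.linspace(0, 1, 101)
d1 = gl_fractional_derivative(t, 1.0, t[1] - t[0])
```

For alpha = 1 the weights collapse to the first-difference stencil, so the result is 1 everywhere past the first grid point.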
Directory of Open Access Journals (Sweden)
Sun F
2016-11-01
Full Text Available The quality of Chinese herbal medicine tablets suffers from batch-to-batch variability due to a lack of manufacturing process understanding. In this paper, the Panax notoginseng saponins (PNS) immediate release tablet was taken as the research subject. By defining the dissolution of five active pharmaceutical ingredients and the tablet tensile strength as critical quality attributes (CQAs), influences of both the manipulated process parameters introduced by an orthogonal experiment design and the intermediate granules' properties on the CQAs were fully investigated by different chemometric methods, such as partial least squares, orthogonal projection to latent structures, and multiblock partial least squares (MBPLS). By analyzing the loadings plots and variable importance in the projection indexes, the granule particle sizes and the minimal punch tip separation distance in tableting were identified as critical process parameters. Additionally, the MBPLS model suggested that the lubrication time in the final blending was also important in predicting tablet quality attributes. From the calculated block importance in the projection indexes, the tableting unit was confirmed to be the critical process unit of the manufacturing line. The results demonstrated that the combinatorial use of different multivariate modeling methods could help in understanding the complex process relationships as a whole. The output of this study can then be used to define a control strategy to improve the quality of the PNS immediate release tablet. Keywords: Panax
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities, without being overly conservative, when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus the number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response, and the 10^{-4} probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
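One simple member of this family of conservative sparse-sample bounds is the classical nonparametric order-statistic (Wilks) bound: the probability that the maximum of n i.i.d. samples exceeds the p-quantile is 1 - p^n, which yields the well-known requirement of 59 samples for a one-sided 95/95 bound. A minimal sketch:

```python
def confidence_max_bounds_quantile(n, p):
    """P(sample maximum >= p-quantile) for n i.i.d. samples: 1 - p**n.

    This holds for any continuous source distribution, which is what makes
    the bound distribution-free (Wilks' nonparametric tolerance bound)."""
    return 1.0 - p ** n

def samples_needed(p, conf):
    """Smallest n such that the sample maximum bounds the p-quantile with at
    least `conf` confidence."""
    n = 1
    while confidence_max_bounds_quantile(n, p) < conf:
        n += 1
    return n

n_9595 = samples_needed(0.95, 0.95)   # classic one-sided 95/95 result
```

The report's methods trade conservatism against sample count more finely than this, but the formula shows why tail quantities are so expensive to bound with sparse data.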
Modelling cardiac signal as a confound in EEG-fMRI and its application in focal epilepsy studies
DEFF Research Database (Denmark)
Liston, A. D.; Ellegaard Lund, Torben; Salek-Haddadi, A
2006-01-01
Cardiac noise has been shown to reduce the sensitivity of functional Magnetic Resonance Imaging (fMRI) to an experimental effect due to its confounding presence in the blood oxygenation level-dependent (BOLD) signal. Its effect is most severe in particular regions of the brain, and a method is yet ... to take it into account in routine fMRI analysis. This paper reports the development of a general and robust technique to improve the reliability of EEG-fMRI studies of BOLD signal correlated with interictal epileptiform discharges (IEDs). In these studies, ECG is routinely recorded, enabling cardiac ... effects to be modelled as effects of no interest. Our model is based on an over-complete basis set covering a linear relationship between cardiac-related MR signal and the phase of the cardiac cycle or time after pulse (TAP). This method showed that, on average, 24.6 +/- 10.9% of grey matter voxels ...
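A cardiac basis set of the kind described can be sketched as Fourier regressors of cardiac phase (in the spirit of RETROICOR-style physiological noise correction); the exact basis used in the paper differs, and the helper below, including its name and arguments, is an illustrative assumption:

```python
import numpy as np

def cardiac_fourier_regressors(scan_times, pulse_times, order=2):
    """Sine/cosine regressors of cardiac phase at each scan, up to the given
    harmonic order. `pulse_times` are detected R-peak times from the ECG;
    phase is the fraction of the current cardiac cycle elapsed. These columns
    can be added to a GLM design matrix as effects of no interest."""
    scan_times = np.asarray(scan_times, dtype=float)
    pulse_times = np.asarray(pulse_times, dtype=float)
    # index of the most recent pulse at or before each scan
    idx = np.searchsorted(pulse_times, scan_times, side="right") - 1
    idx = np.clip(idx, 0, len(pulse_times) - 2)
    cycle = pulse_times[idx + 1] - pulse_times[idx]
    phase = 2 * np.pi * (scan_times - pulse_times[idx]) / cycle
    cols = []
    for m in range(1, order + 1):
        cols.append(np.cos(m * phase))
        cols.append(np.sin(m * phase))
    return np.column_stack(cols)

# Regular 1 Hz heartbeat, slices sampled every 0.25 s
R = cardiac_fourier_regressors(np.arange(0, 9, 0.25), np.arange(0.0, 10.0, 1.0))
```

Each harmonic contributes a cosine/sine pair, so `order=2` yields four confound columns per voxel time series.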
Reggiani, Paolo; Todini, Ezio; Meißner, Dennis
2014-11-01
A wide range of approaches are used for flow routing in hydrological models. One of the most attractive solutions is the variable-parameter Muskingum (VPM) method. Its major advantages are that (i) it can be applied to poorly-gauged basins with unknown channel geometries, (ii) it requires short execution times and (iii) it adequately captures, even in the presence of mild slopes, the most salient features of a dynamic wave, such as the looped rating curve and the steepening of the rising limb of the hydrograph. In addition, the method offers the possibility to derive average water levels for a reach segment, a quantity which is essential in flood forecasting and flood risk assessment. For reasons of computational economy, the method is also appropriate for applications in which hydrological and global circulation models (GCM) are coupled and computational effort becomes an issue. The VPM approach is presented from a philosophical and conceptual perspective, by showing the derivation of its mass and momentum balance properties from the point to the finite scale, and by demonstrating its strengths by means of an application in an operational context. The principal novel contributions of the article relate to (a) the extension of the Muskingum-Cunge-Todini approach to accept uniformly distributed lateral inflow, (b) the use of power law cross sections and (c) the validation of the method through a long-term simulation of a real-world case, including the comparison of results to those obtained using a full Saint Venant equations model.
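The underlying Muskingum routing recursion can be sketched with fixed parameters; the variable-parameter (Muskingum-Cunge-Todini) scheme re-derives K and X at each step from the flow and channel properties, which is omitted here. All numbers are illustrative:

```python
def muskingum_route(inflow, K, X, dt):
    """Classic fixed-parameter Muskingum routing: O2 = C0*I2 + C1*I1 + C2*O1.

    K is the reach travel time, X the storage weighting factor. The three
    coefficients sum to one, which guarantees mass conservation for constant
    parameters. In the variable-parameter scheme, K and X are recomputed at
    every time step (e.g. via Cunge's matching with diffusion)."""
    denom = K * (1 - X) + dt / 2.0
    c0 = (dt / 2.0 - K * X) / denom
    c1 = (dt / 2.0 + K * X) / denom
    c2 = (K * (1 - X) - dt / 2.0) / denom
    out = [inflow[0]]                      # start from steady state
    for i2, i1 in zip(inflow[1:], inflow[:-1]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return out

# Illustrative flood hydrograph (arbitrary units), K in hours, dt = 1 h
hydrograph = [10, 20, 50, 80, 60, 40, 25, 15, 10, 10]
routed = muskingum_route(hydrograph, K=2.0, X=0.2, dt=1.0)
```

The routed hydrograph shows the expected attenuation (a lower peak) and translation (a later peak) relative to the inflow.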
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-10-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of the solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength selection method can identify the solid state fermentation degree more accurately.
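The exponentially decreasing variable-retention loop at the heart of CARS can be sketched on toy data; ordinary least squares stands in for the PLS step, and the Monte Carlo sampling and adaptive reweighted sampling components of the real algorithm are omitted, so this is only the retention-schedule idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectra: 100 samples x 200 wavelengths; 4 wavelengths carry the signal
n, p = 100, 200
X = rng.normal(size=(n, p))
true_idx = [10, 50, 120, 180]
y = X[:, true_idx] @ np.array([3.0, -2.5, 2.0, 3.5]) + 0.1 * rng.normal(size=n)

# CARS-style loop (simplified): an exponentially decreasing retention schedule
# repeatedly refits the model and keeps the wavelengths with the largest
# absolute regression coefficient.
kept = np.arange(p)
n_iter = 30
ratio = (4.0 / p) ** (1.0 / n_iter)      # shrink from p variables down to ~4
for it in range(1, n_iter + 1):
    b, *_ = np.linalg.lstsq(X[:, kept], y, rcond=None)
    k = max(4, int(round(p * ratio ** it)))
    kept = kept[np.argsort(np.abs(b))[::-1][:k]]
```

On this strongly separated toy problem, the schedule converges back to exactly the four informative wavelengths.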
Energy Technology Data Exchange (ETDEWEB)
Gonzalez Martin, M.I.; Vicente Tavera, S.; Revilla Martin, I.; Vivar Quintana, A.M.; Gonzalez Perez, C.; Hernandez Hierro, J.M.; Lobos Ortega, I.A.
2016-07-01
The canonical biplot method (CB) is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among 6 groups or populations of cheeses (combinations of two seasons (winter and summer) with 3 types of cheese (cow, sheep and goat’s milk)). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow’s cheeses, winter (WSh) and summer (SSh) sheep’s cheeses and in winter (WG) and summer (SG) goat’s cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses, the seasonality of the milk, and the separation of the six groups of cheeses, characterizing the specific volatile chemical compounds responsible for such differences. (Author)
Directory of Open Access Journals (Sweden)
Inés Cano-Montalbán
2018-01-01
Full Text Available This systematic review aims to deepen the relation between the sociodemographic variables most associated with suicidal behaviour and the suicide methods used in Europe and America. A search was made for articles and reviews published between 2005 and 2015 in PsycINFO, Medline, Web of Science Core Collection, Scopus, and SciELO. This retrieved 5,222 records, which were screened against the inclusion criteria (e.g., any study design, published in English or Spanish) and quality criteria, leaving 53 studies in the review. The results show that men (36% of the studies) and the elderly (28% of the studies) die by suicide more frequently, while women (30% of the studies) and young people (17% of the studies) show more attempts and suicidal behaviour. The most commonly used methods include hanging (24% of the studies), firearms (17% of the studies), and precipitation (jumping from a height; 6% of the studies); unemployment (17% of the studies), rural life (9% of the studies), a marital status other than marriage (15% of the studies), and low education (23% of the studies) are also closely associated with both suicide and suicidal behaviour. These connections are important when carrying out psychological autopsies and should be taken into account given their clear implications for the personal and material damage that must be elucidated judicially, clarifying whether a specific occurrence is a suicide, homicide, or accident.
Directory of Open Access Journals (Sweden)
Monika Fleischhauer
2013-09-01
Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign, biases that might result, for example, from the IAT’s stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis. However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis, a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to
Asimakopoulou, K G; Hampson, S E; Morrish, N J
2002-04-01
Neuropsychological functioning was examined in a group of 33 older (mean age 62.40 +/- 9.62 years) people with Type 2 diabetes (Group 1) and 33 non-diabetic participants matched with Group 1 on age, sex, premorbid intelligence and presence of hypertension and cardio/cerebrovascular conditions (Group 2). Data from the diabetic group, statistically corrected for confounding factors, were compared with the matched control group. The results suggested small cognitive deficits in diabetic people's verbal memory and mental flexibility (Logical Memory A and SS7). No differences were seen between the two samples in simple and complex visuomotor attention, sustained complex visual attention, attention efficiency, mental double tracking, implicit memory, and self-reported memory problems. These findings indicate minimal cognitive impairment in relatively uncomplicated Type 2 diabetes and demonstrate the importance of controlling and matching for confounding factors.
Wind turbines and idiopathic symptoms: The confounding effect of concurrent environmental exposures.
Blanes-Vidal, Victoria; Schwartz, Joel
2016-01-01
Whether or not wind turbines pose a risk to human health is a matter of heated debate. Personal reactions to other environmental exposures occurring in the same settings as wind turbines may be responsible for the reported symptoms. However, these have not been accounted for in previous studies. We investigated whether there is an association between residential proximity to wind turbines and idiopathic symptoms, after controlling for personal reactions to other environmental co-exposures. We assessed wind turbine exposures in 454 residences as the distance to the closest wind turbine (Dw) and the number of wind turbines nearby. After adjusting for personal reactions to other environmental co-exposures, including agricultural odor, we did not observe a significant relationship between residential proximity to wind turbines and symptoms, and the parameter estimates were attenuated toward zero. Wind turbine-health associations can be confounded by personal reactions to other environmental co-exposures. Isolated associations reported in the literature may be due to confounding bias. Copyright © 2016 Elsevier Inc. All rights reserved.
Syphilis may be a confounding factor, not a causative agent, in syphilitic ALS.
Tuk, Bert
2016-01-01
Based upon a review of published clinical observations regarding syphilitic amyotrophic lateral sclerosis (ALS), I hypothesize that syphilis is actually a confounding factor, not a causative factor, in syphilitic ALS. Moreover, I propose that the successful treatment of ALS symptoms in patients with syphilitic ALS using penicillin G and hydrocortisone is an indirect consequence of the treatment regimen and is not due to the treatment of syphilis. Specifically, I propose that the observed effect is due to the various pharmacological activities of penicillin G (e.g., a GABA receptor antagonist) and/or the multifaceted pharmacological activity of hydrocortisone. The notion that syphilis may be a confounding factor in syphilitic ALS is highly relevant, as it suggests that treating ALS patients with penicillin G and hydrocortisone (regardless of whether they present with syphilitic or non-syphilitic ALS) may be effective at treating this rapidly progressive, highly devastating disease.
Tran, Annelise; Goutard, Flavie; Chamaillé, Lise; Baghdadi, Nicolas; Lo Seen, Danny
2010-02-01
Recent studies have highlighted the potential role of water in the transmission of avian influenza (AI) viruses and the existence of often interacting variables that determine the survival rate of these viruses in water; the two main variables are temperature and salinity. Remote sensing has been used to map and monitor water bodies for several decades. In this paper, we review satellite image analysis methods used for water detection and characterization, focusing on the main variables that influence AI virus survival in water. Optical and radar imagery are useful for detecting water bodies at different spatial and temporal scales. Methods to monitor the temperature of large water surfaces are also available. Current methods for estimating other relevant water variables such as salinity, pH, turbidity and water depth are not presently considered to be effective.
Directory of Open Access Journals (Sweden)
S. H. Chiang
2016-06-01
Full Text Available Forest is a very important ecosystem and natural resource for living things. Based on forest inventories, government is able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially surveying in remote mountainous regions. A reliable forest inventory can give us more accurate and timely information to develop new and efficient approaches of forest management. The remote sensing technology has been recently used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, are unavoidably required to be considered. In this study the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest which covers almost 70% of the total territorial extension of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling, and this study applied the model with two experiments. The first one uses Landsat surface reflectance for tree species classification; the second experiment incorporates terrain variables in addition to the Landsat surface reflectance to perform the tree species classification. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second one, which uses Landsat surface
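The maximum entropy principle behind such classifiers can be sketched as a minimal multinomial-logistic model on toy pixel data. This is not the MaxEnt species-distribution software typically used in such studies, and all features (two spectral bands plus one hypothetical terrain variable) and labels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy pixels: two spectral bands + one terrain variable (e.g. elevation);
# two hypothetical tree-species classes separated mainly by the terrain column.
n = 200
X = rng.normal(size=(n, 3))
labels = (X[:, 2] + 0.3 * X[:, 0] > 0).astype(int)

def train_maxent(X, y, classes=2, lr=0.5, steps=500):
    """Minimal multinomial-logistic (maximum entropy) classifier trained by
    gradient descent on the cross-entropy loss."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append a bias column
    W = np.zeros((Xb.shape[1], classes))
    Y = np.eye(classes)[y]                          # one-hot targets
    for _ in range(steps):
        logits = Xb @ W
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)           # softmax probabilities
        W -= lr * Xb.T @ (P - Y) / len(X)           # gradient step
    return W

W = train_maxent(X, labels)
Xb = np.hstack([X, np.ones((n, 1))])
acc = ((Xb @ W).argmax(axis=1) == labels).mean()
```

Dropping the terrain column from `X` before training degrades this toy classifier, mirroring the study's finding that terrain variables improve the classification.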
Directory of Open Access Journals (Sweden)
González-Martín, M. I.
2016-03-01
Full Text Available The canonical biplot method (CB) is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among 6 groups or populations of cheeses (combinations of two seasons (winter and summer) with 3 types of cheese (cow, sheep and goat’s milk)). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow’s cheeses, winter (WSh) and summer (SSh) sheep’s cheeses and in winter (WG) and summer (SG) goat’s cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses, the seasonality of the milk, and the separation of the six groups of cheeses, characterizing the specific volatile chemical compounds responsible for such differences.
Karmon, Anatte; Sheiner, Eyal
2008-06-01
Preeclampsia is a major cause of maternal morbidity, although its precise etiology remains elusive. A number of studies suggest that urinary tract infection (UTI) during the course of gestation is associated with elevated risk for preeclampsia, while others have failed to prove such an association. In our medical center, pregnant women who were exposed to at least one UTI episode during pregnancy were 1.3 times more likely to have mild preeclampsia and 1.8 times more likely to have severe preeclampsia as compared to unexposed women. Our results are based on univariate analyses and are not adjusted for potential confounders. This editorial aims to discuss the relationship between urinary tract infection and preeclampsia, as well as examine the current problems regarding the interpretation of this association. Although the relationship between UTI and preeclampsia has been demonstrated in studies with various designs, carried out in a variety of settings, the nature of this association is unclear. By taking into account timeline, dose-response effects, treatment influences, and potential confounders, as well as by neutralizing potential biases, future studies may be able to clarify the relationship between UTI and preeclampsia by determining if it is causal, confounded, or spurious.
International Nuclear Information System (INIS)
Liu, L.H.
2004-01-01
A discrete curved ray-tracing method is developed to analyze radiative transfer in a one-dimensional absorbing-emitting semitransparent slab with variable spatial refractive index. The curved ray trajectory is locally treated as a straight line, which cuts down the complicated and time-consuming computation of the ray trajectory. A problem of radiative equilibrium with linearly variable spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions are determined by the proposed method and compared with reference data obtained by other methods. The results show that the discrete curved ray-tracing method has good accuracy in solving radiative transfer in a one-dimensional semitransparent slab with variable spatial refractive index.
Sperling, Milena P R; Simões, Rodrigo P; Caruso, Flávia C R; Mendes, Renata G; Arena, Ross; Borghi-Silva, Audrey
2016-01-01
Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter to determine intensity in dynamic exercise. To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and time domain (rMSSD and RMSM), and blood lactate were determined at rest and during PRE. Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed a consistent agreement between blood lactate threshold (LT) and rMSSD threshold (rMSSDT) and between LT and SD1 threshold (SD1T). Relative values of 1-RM at LT, rMSSDT and SD1T did not differ (29±5% vs 28±5% vs 29±5% of 1-RM, respectively). HRV during PRE could be a feasible noninvasive method of determining AT in CAD patients to plan intensities during cardiac rehabilitation.
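The Poincaré and time-domain indices named in this abstract have standard definitions that are easy to state in code. A minimal illustrative sketch (not code from the study; the RR series in the usage example is made up, and SD1/SD2 follow the usual relations SD1² = ½·Var(ΔRR) and SD2² = 2·SDNN² − SD1²):

```python
import numpy as np

def poincare_indices(rr):
    """SD1, SD2 and rMSSD from a sequence of RR intervals (ms)."""
    rr = np.asarray(rr, dtype=float)
    d = np.diff(rr)                       # successive RR differences
    rmssd = np.sqrt(np.mean(d ** 2))      # time-domain index
    sd1 = np.sqrt(0.5 * np.var(d))        # short-term variability (minor axis)
    sdnn = np.std(rr)
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))  # long-term variability (major axis)
    return sd1, sd2, rmssd

# Usage on a short, slowly lengthening RR series:
sd1, sd2, rmssd = poincare_indices([800, 805, 812, 820, 830, 842])
```

On a trending series like this one, SD2 (spread along the line of identity) exceeds SD1, as expected.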
Purwanta, J.; Marnoto, T.; Setyono, P.; Ramelan, A. H.
2018-03-01
The cement plant impacts on the lives of people around the factory site, one of them on the air quality, especially dust. Cement plant has made various efforts to mitigate dust generated, but the reality on the ground is still a lot of dust flying around either of the cement factory chimneys and transportation. The purpose of this study was to find the optimum condition of nozle diameter from the cement dust catcher, for mitigation the dust spread to around the cement plant. This study uses research methods such as collecting secondary data which includes data intensity rainfall, the average long rains, wind speed and direction as well as data quality monitoring dust around PT. Semen Gresik (Persero) Tbk. Tuban plant. To determine the wind direction propensity models, use a soft Windrose file. To determine the impact on the spread of dust into the environment using secondary data monitoring air quality. Results of the study is that the mitigation of dust around the cement plant is influenced by natural factors, namely the tendency of wind direction, rainfall and rainy days, and the rate of dust emission from the chimney. I try for operate the cement dust catcher with variable of nozle diameter. Finally, I find the optimum condition of nozle diameter for cement dust catcher is 1.40 mm, with line equation is y = 149.09.e 1.6237.x and error 5%. In that condition, nozle can make the fog with a good quality and it can catch the cement dust well.
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method of the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool of the time to frequency and frequency to time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals accuracy of spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Turpin, Jason B.
2004-01-01
One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, and usually the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the interested quantities (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs) by either approximating the spatial derivative terms with numerical techniques or using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately this resulting system of ODEs is bound by a time step constraint so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time step constraint invoked by the MOC, and the occasional need for extremely small time steps in order to obtain stability and/or accuracy, can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check-valve are compared with test data.
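The fixed-time-step constraint of the MOC described above follows from the characteristic grid: with a constant pressure-wave speed a and spatial step Δx, the solution can only advance in increments of Δt = Δx/a. A trivial sketch of that relation (the pipe length, reach count and wave speed in the usage line are illustrative values, not from the study):

```python
def moc_time_step(pipe_length, n_reaches, wave_speed):
    """Fixed time step implied by the Method of Characteristics:
    the characteristic grid forces dt = dx / a, where a is the
    (assumed constant) pressure-wave speed."""
    dx = pipe_length / n_reaches   # spatial step from the pipe discretization
    return dx / wave_speed         # the solution advances only in multiples of this dt

# e.g. a 100 m pipe split into 20 reaches with a = 1200 m/s:
dt = moc_time_step(100.0, 20, 1200.0)
```

It is this fixed dt, set by the discretization rather than by solution dynamics, that clashes with the small steps occasionally needed by dynamic components and motivates the variable-step coupling.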
Directory of Open Access Journals (Sweden)
Arístides Alejandro Legrá-Lobaina
2016-10-01
Full Text Available The local polynomial method is based on the assumption that it is possible to estimate the value of a variable U at any coordinate point P through local polynomials fitted to nearby data. This investigation analyzes the possibility of modeling, in two dimensions, the thickness and the nickel, iron and cobalt concentrations in a block of Cuban laterite ores by using this method. It was also analyzed whether the results of modeling these variables depend on the estimation method used.
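As an illustration of the local polynomial idea described in this abstract, the following sketch estimates U at a query point in two dimensions by a locally weighted first-degree polynomial fit. The Gaussian kernel, the bandwidth and the function name are assumptions chosen for illustration, not details taken from the paper:

```python
import numpy as np

def local_linear_2d(px, py, values, qx, qy, bandwidth=1.0):
    """Estimate U at the query point (qx, qy) from scattered data
    by a locally weighted first-degree polynomial (local linear fit)."""
    px, py, values = map(np.asarray, (px, py, values))
    d2 = (px - qx) ** 2 + (py - qy) ** 2       # squared distances to the query
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    # Design matrix centered at the query: the intercept is the estimate there
    A = np.column_stack([np.ones_like(px), px - qx, py - qy])
    W = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * W[:, None], values * W, rcond=None)
    return coef[0]
```

Because the local model is a plane, data sampled from an exactly planar surface are reproduced exactly at any query point, whatever the kernel weights.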
Bijwaard, Govert E; Myrskylä, Mikko; Tynelius, Per; Rasmussen, Finn
2017-07-01
A negative educational gradient has been found for many causes of death. This association may be partly explained by confounding factors that affect both educational attainment and mortality. We correct the cause-specific educational gradient for observed individual background and unobserved family factors using an innovative method based on months lost due to a specific cause of death re-weighted by the probability of attaining a higher educational level. We use data on men with brothers from the Swedish Military Conscription Registry (1951-1983), linked to administrative registers. This dataset of some 700,000 men allows us to distinguish between five education levels and many causes of death. The empirical results reveal that raising the educational level from primary to tertiary would result in an additional 20 months of survival between ages 18 and 63. This improvement in mortality is mainly attributable to fewer deaths from external causes. The highly educated gain more than nine months due to the reduction in deaths from external causes, but gain only two months due to the reduction in cancer mortality and four months due to the reduction in cardiovascular mortality. Ignoring confounding would lead to an underestimation of the gains by educational attainment, especially for the less educated. Our results imply that if the education distribution of 50,000 Swedish men from the 1951 cohort were replaced with that of the corresponding 1983 cohort, 22% of the person-years that were lost to death between ages 18 and 63 would have been saved for this cohort. Copyright © 2017 Elsevier Ltd. All rights reserved.
Motosugi, Utaroh; Hernando, Diego; Wiens, Curtis; Bannas, Peter; Reeder, Scott. B
2017-01-01
Purpose: To determine whether high signal-to-noise ratio (SNR) acquisitions improve the repeatability of liver proton density fat fraction (PDFF) measurements using confounder-corrected chemical shift-encoded magnetic resonance (MR) imaging (CSE-MRI). Materials and Methods: Eleven fat-water phantoms were scanned with 8 different protocols with varying SNR. After repositioning the phantoms, the same scans were repeated to evaluate the test-retest repeatability. Next, an in vivo study was performed with 20 volunteers and 28 patients scheduled for liver magnetic resonance imaging (MRI). Two CSE-MRI protocols with standard- and high-SNR were repeated to assess test-retest repeatability. MR spectroscopy (MRS)-based PDFF was acquired as a standard of reference. The standard deviation (SD) of the difference (Δ) of PDFF measured in the two repeated scans was defined to ascertain repeatability. The correlation between PDFF of CSE-MRI and MRS was calculated to assess accuracy. The SD of Δ and correlation coefficients of the two protocols (standard- and high-SNR) were compared using F-test and t-test, respectively. Two reconstruction algorithms (complex-based and magnitude-based) were used for both the phantom and in vivo experiments. Results: The phantom study demonstrated that higher SNR improved the repeatability for both complex- and magnitude-based reconstruction. Similarly, the in vivo study demonstrated that the repeatability of the high-SNR protocol (SD of Δ = 0.53 for complex- and = 0.85 for magnitude-based fit) was significantly higher than using the standard-SNR protocol (0.77 for complex, P magnitude-based fit, P = 0.003). No significant difference was observed in the accuracy between standard- and high-SNR protocols. Conclusion: Higher SNR improves the repeatability of fat quantification using confounder-corrected CSE-MRI. PMID:28190853
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least square regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by the comparison of the R2 and the root mean squared error (RMSE). RMSE of the log-transformed OLS model was computed with three different methods of bias-correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample-cross-validation procedure was used to forecast the costs for the one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log- transformed OLS model showed a significant negative impact of age, and only the GLM shows a significant
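The retransformation difficulty mentioned in this abstract (predicting on the cost scale after OLS on log costs) is commonly handled with Duan's nonparametric smearing estimator, one standard bias-correction for log-transformed models. A hedged numpy sketch of that idea, not a reproduction of the study's models (the simulated data in the usage example are made up):

```python
import numpy as np

def log_ols_with_smearing(X, cost):
    """Fit OLS on log(cost) and retransform predictions to the cost
    scale using Duan's nonparametric smearing factor."""
    y = np.log(cost)
    Xd = np.column_stack([np.ones(len(cost)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    smear = np.mean(np.exp(resid))   # Duan's smearing estimator (>= 1 by Jensen)
    naive = np.exp(Xd @ beta)        # naive retransform underestimates E[cost]
    return naive * smear, smear

# Usage on simulated heteroscedastic cost data:
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
cost = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.8, size=n))
preds, smear = log_ols_with_smearing(x, cost)
```

With non-trivial residual variance the smearing factor exceeds 1, which is exactly the correction the naive exp-retransform misses.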
Yong, Chin-Khian
2013-09-01
A partially confounded factorial conjoint choice experiment design was used to examine the monetary value of willingness to pay for e-book reader attributes. Conjoint analysis is an efficient, cost-effective, and widely used quantitative method in marketing research for understanding consumer preferences and value trade-offs. Value can be interpreted by the customer or consumer as the multiple benefits received for the price paid. The monetary values of willingness to pay for battery life, internal memory, external memory, screen size, text-to-speech, touch screen, and handwriting-to-digital-text conversion of an e-book reader were estimated in this study. Due to the significant interaction effects of the attributes with price, the monetary values for the seven attributes were found to differ at different values of the odds of purchasing versus not purchasing. These significant interaction effects are one of the main contributions of the partially confounded factorial conjoint choice experiment.
Energy Technology Data Exchange (ETDEWEB)
Tsili, Athina C., E-mail: a_tsili@yahoo.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Ntorkou, Alexandra, E-mail: alexdorkou@hotmail.com [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Astrakas, Loukas, E-mail: astrakas@uoi.gr [Department of Medical Physics, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Xydis, Vasilis, E-mail: vxydis@cc.uoi.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Tsampalas, Stavros, E-mail: stamp@gmail.com [Department of Urology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Sofikitis, Nikolaos, E-mail: akrosnin@hotmail.com [Department of Urology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece); Argyropoulou, Maria I., E-mail: margyrop@cc.uoi.gr [Department of Clinical Radiology, Medical School, University of Ioannina, University Campus, 45110, Ioannina (Greece)
2017-04-15
Highlights: • Seminomas have lower mean ADC compared to NSGCNs. • Round ROI is accurate in characterizing TGCNs. • ROI shape has no significant effect on interobserver variability. - Abstract: Introduction: To evaluate the differences in apparent diffusion coefficient (ADC) measurements at diffusion-weighted (DW) magnetic resonance imaging of differently shaped regions-of-interest (ROIs) in testicular germ cell neoplasms (TGCNs), the diagnostic ability of differently shaped ROIs in differentiating seminomas from nonseminomatous germ cell neoplasms (NSGCNs), and the interobserver variability. Materials and methods: Thirty-three TGCNs were retrospectively evaluated. Patients underwent MR examinations, including DWI on a 1.5-T MR system. Two observers measured mean tumor ADCs using four distinct ROI methods: round, square, freehand and multiple small, round ROIs. The intraclass correlation coefficient was analyzed to assess interobserver variability. Statistical analysis was used to compare mean ADC measurements among observers, methods and histologic types. Results: All ROI methods showed excellent interobserver agreement, with excellent correlation (P < 0.001). Multiple small ROIs provided the lowest mean ADC in TGCNs. Seminomas had lower mean ADC compared to NSGCNs for each ROI method (P < 0.001). The round ROI proved the most accurate method in characterizing TGCNs. Conclusion: Interobserver agreement in ADC measurement is excellent, irrespective of ROI shape. Multiple small round ROIs and the round ROI proved the most accurate methods for ADC measurement in the characterization of TGCNs and in the differentiation between seminomas and NSGCNs, respectively.
Heterogeneity in white blood cells has potential to confound DNA methylation measurements.
Directory of Open Access Journals (Sweden)
Bjorn T Adalsteinsson
Full Text Available Epigenetic studies are commonly conducted on DNA from tissue samples. However, tissues are ensembles of cells that may each have their own epigenetic profile, and therefore inter-individual cellular heterogeneity may compromise these studies. Here, we explore the potential for such confounding on DNA methylation measurement outcomes when using DNA from whole blood. DNA methylation was measured using pyrosequencing-based methodology in whole blood (n = 50-179) and in two white blood cell fractions (n = 20), isolated using density gradient centrifugation, in four CGIs (CpG Islands) located in genes HHEX (10 CpG sites assayed), KCNJ11 (8 CpGs), KCNQ1 (4 CpGs) and PM20D1 (7 CpGs). Cellular heterogeneity (variation in proportional white blood cell counts of neutrophils, lymphocytes, monocytes, eosinophils and basophils, counted by an automated cell counter) explained up to 40% (p<0.0001) of the inter-individual variation in whole blood DNA methylation levels in the HHEX CGI, but not a significant proportion of the variation in the other three CGIs tested. DNA methylation levels in the two cell fractions, polymorphonuclear and mononuclear cells, differed significantly in the HHEX CGI; specifically, the average absolute difference ranged between 3.4-15.7 percentage points per CpG site. In the other three CGIs tested, methylation levels in the two fractions did not differ significantly, and/or the difference was more moderate. In the examined CGIs, methylation levels were highly correlated between cell fractions. In summary, our analysis detects region-specific differential DNA methylation between white blood cell subtypes, which can confound the outcome of whole blood DNA methylation measurements. Finally, by demonstrating the high correlation between methylation levels in cell fractions, our results suggest a possibility to use a proportional number of a single white blood cell type to correct for this confounding effect in analyses.
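The "variance explained by cellular heterogeneity" quantity reported above can be illustrated as the R² from regressing per-individual methylation levels on cell-type proportions. A minimal sketch with made-up data (the single neutrophil-fraction predictor is an illustrative simplification, not the study's five-cell-type model):

```python
import numpy as np

def variance_explained(meth, cell_props):
    """R^2 from regressing methylation levels on proportional
    white blood cell counts (here a single cell-type fraction)."""
    X = np.column_stack([np.ones(len(meth)), cell_props])
    beta, *_ = np.linalg.lstsq(X, meth, rcond=None)
    resid = meth - X @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((meth - meth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Simulated example: methylation driven by the neutrophil fraction plus noise
rng = np.random.default_rng(1)
n = 100
neut = rng.uniform(0.4, 0.7, n)
meth = 60.0 - 30.0 * neut + rng.normal(scale=1.0, size=n)
r2 = variance_explained(meth, neut)
```

A large R² here is precisely the situation the abstract warns about: whole-blood methylation differences that reflect cell composition rather than true epigenetic variation.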
Strachan, Eric; Poeschla, Brian; Dansie, Elizabeth; Succop, Annemarie; Chopko, Laura; Afari, Niloofar
2015-01-01
Pain is a complex phenomenon influenced by context and person-specific factors. Affective dimensions of pain involve both enduring personality traits and fleeting emotional states. We examined how personality traits and emotional states are linked with clinical and evoked pain in a twin sample. 99 female twin pairs were evaluated for clinical and evoked pain using the McGill Pain Questionnaire (MPQ) and dolorimetry, and completed the 120-item International Personality Item Pool (IPIP), the Positive and Negative Affect Scale (PANAS), and ratings of stress and mood. Using a co-twin control design we examined a) the relationship of personality traits and emotional states with clinical and evoked pain and b) whether genetics and common environment (i.e. familial factors) may account for the associations. Neuroticism was associated with the sensory component of the MPQ; this relationship was not confounded by familial factors. None of the emotional state measures was associated with the MPQ. PANAS negative affect was associated with lower evoked pressure pain threshold and tolerance; these associations were confounded by familial factors. There were no associations between IPIP traits and evoked pain. A relationship exists between neuroticism and clinical pain that is not confounded by familial factors. There is no similar relationship between negative emotional states and clinical pain. In contrast, the relationship between negative emotional states and evoked pain is strong while the relationship with enduring personality traits is weak. The relationship between negative emotional states and evoked pain appears to be non-causal and due to familial factors. Copyright © 2014 Elsevier Inc. All rights reserved.
Bayesian inference in a discrete shock model using confounded common cause data
International Nuclear Information System (INIS)
Kvam, Paul H.; Martz, Harry F.
1995-01-01
We consider redundant systems of identical components for which reliability is assessed statistically using only demand-based failures and successes. Direct assessment of system reliability can lead to gross errors in estimation if there exist external events in the working environment that cause two or more components in the system to fail in the same demand period and that have not been included in the reliability model. We develop a simple Bayesian model for estimating component reliability and the corresponding probability of common cause failure in operating systems for which the data are confounded; that is, the common cause failures cannot be distinguished from multiple independent component failures in the narrative event descriptions.
DEFF Research Database (Denmark)
del Campo, Marta; Mollenhauer, Brit; Bertolotto, Antonio
2012-01-01
Early diagnosis of neurodegenerative disorders such as Alzheimer's (AD) or Parkinson's disease (PD) is needed to slow down or halt the disease at the earliest stage. Cerebrospinal fluid (CSF) biomarkers can be a good tool for early diagnosis. However, their use in clinical practice is challenging...... the need to establish standardized operating procedures. Here, we merge two previous consensus guidelines for preanalytical confounding factors in order to achieve one exhaustive guideline updated with new evidence for Aβ42, total tau and phosphorylated tau, and α-synuclein. The proposed standardized...
International Nuclear Information System (INIS)
Liu Qing; Zhu Jiamin; Hong Bihai
2008-01-01
A modified variable-coefficient projective Riccati equation method is proposed and applied to a (2 + 1)-dimensional simplified and generalized Broer-Kaup system. It is shown that the method presented by Huang and Zhang [Huang DJ, Zhang HQ. Chaos, Solitons and Fractals 2005; 23:601] is a special case of our method. The results obtained in the paper include many new formal solutions besides all the solutions found by Huang and Zhang.
Markus, Keith A
2016-01-01
Nesselroade and Molenaar presented the idiographic filter as a proposal for analyzing lawful regularities in behavioral research. The proposal highlights an inconsistency that poses a challenge for behavioral research more generally. One can distinguish a broadly Humean approach from a broadly non-Humean approach as they relate to variables and to causation. Nesselroade and Molenaar rejected a Humean approach to latent variables that characterizes them as nothing more than summaries of their manifest indicators. By contrast, they tacitly accepted a Humean approach to causes characterized as nothing more than summaries of their manifest causal effects. A non-Humean treatment of variables coupled with a Humean treatment of causation creates a theoretical tension within their proposal. For example, one can interpret the same model elements as simultaneously representing both variables and causes. Future refinement of the idiographic filter proposal to address this tension could follow any of a number of strategies.
Directory of Open Access Journals (Sweden)
Xiaohua Wei
2013-06-01
Full Text Available Forest change and climatic variability are two major drivers of change in watershed hydrology in forest-dominated watersheds. Quantifying their relative contributions is important for fully understanding their individual effects. This review paper summarizes progress on quantifying the relative contributions of forest or land cover change and climatic variability to hydrology in large watersheds using available case studies. It compares the pros and cons of various research methods, identifies research challenges and proposes future research priorities. Our synthesis shows that the relative hydrological effects of forest changes and climatic variability depend largely on their own change magnitudes and on watershed characteristics. In some severely disturbed watersheds, impacts of forest changes or land use changes can be as important as those from climatic variability. This paper provides a brief review of eight selected research methods for this type of research. Because each method or technique has its own strengths and weaknesses, combining two or more methods is a more robust approach than using any single method alone. Future research priorities include conducting more case studies, refining research methods, and considering mechanism-based research using landscape ecology and geochemistry approaches.
International Nuclear Information System (INIS)
Sabry, R.; Zahran, M.A.; Fan Engui
2004-01-01
A generalized expansion method is proposed to uniformly construct a series of exact solutions for general variable-coefficient non-linear evolution equations. The new approach admits the following types of solutions: (a) polynomial solutions, (b) exponential solutions, (c) rational solutions, (d) triangular periodic wave solutions, (e) hyperbolic and solitary wave solutions and (f) Jacobi and Weierstrass doubly periodic wave solutions. The efficiency of the method has been demonstrated by applying it to a generalized variable-coefficient KdV equation. A new and rich variety of exact explicit solutions has then been found.
Christofaro, Diego Giulliano Destro; De Andrade, Selma Maffei; Cardoso, Jefferson Rosa; Mesas, Arthur Eumann; Codogno, Jamile Sanches; Fernandes, Rômulo Araújo
2015-01-01
The aim of this study was to determine whether high blood pressure (HBP) is associated with sedentary behavior in young people even after controlling for potential confounders (gender, age, socioeconomic level, tobacco, alcohol, obesity and physical activity). In this epidemiological study, 1231 adolescents were evaluated. Blood pressure was measured with an oscillometric device and waist circumference with an inextensible tape. Sedentary behavior (watching television, computer use and playing video games) and physical activity were assessed by a questionnaire. Means and standard deviations were used as descriptive statistics, and the association between HBP and sedentary behavior was assessed by the chi-squared test. Binary logistic regression was used to estimate the magnitude of the association, together with cluster analyses (sedentary behavior and abdominal obesity; sedentary behavior and physical inactivity). HBP was associated with sedentary behaviors [odds ratio (OR) = 2.21, 95% confidence interval (CI) = 1.41-3.96], even after controlling for various confounders (OR = 1.68, CI = 1.03-2.75). In the cluster analysis, the combination of sedentary behavior and elevated abdominal obesity contributed significantly to an increased likelihood of having HBP (OR = 13.51, CI = 7.21-23.97). Sedentary behavior was associated with HBP, and excess fat in the abdominal region contributed to the modulation of this association.
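Adjusted odds ratios of the kind reported above come from a logistic regression in which the confounders enter as additional covariates alongside the exposure. A self-contained Newton-Raphson sketch for illustration (the variable names and simulated data are assumptions, not the study's data):

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson;
    returns odds ratios exp(beta) for the intercept and each column of X."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))   # fitted probabilities
        W = p * (1 - p)                        # IRLS weights
        grad = Xd.T @ (y - p)
        hess = (Xd * W[:, None]).T @ Xd
        beta += np.linalg.solve(hess, grad)    # Newton update
    return np.exp(beta)                        # odds ratios

# Simulated exposure (sedentary behavior) plus one continuous confounder;
# the true adjusted OR for the exposure is exp(0.7), about 2.0.
rng = np.random.default_rng(2)
n = 2000
sed = rng.integers(0, 2, n).astype(float)
conf = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.7 * sed + 0.5 * conf)))
y = (rng.random(n) < p).astype(float)
ors = logistic_fit(np.column_stack([sed, conf]), y)
```

Dropping the confounder column from `X` yields the crude OR instead, which is how adjustment attenuated the estimate from 2.21 to 1.68 in the study.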
Everything that you have ever been told about assessment center ratings is confounded.
Jackson, Duncan J R; Michaelides, George; Dewberry, Chris; Kim, Young-Jae
2016-07-01
Despite a substantial research literature on the influence of dimensions and exercises in assessment centers (ACs), the relative impact of these 2 sources of variance continues to raise uncertainties because of confounding. With confounded effects, it is not possible to establish the degree to which any 1 effect, including those related to exercises and dimensions, influences AC ratings. In the current study (N = 698) we used Bayesian generalizability theory to unconfound all of the possible effects contributing to variance in AC ratings. Our results show that ≤1.11% of the variance in AC ratings was directly attributable to behavioral dimensions, suggesting that dimension-related effects have no practical impact on the reliability of ACs. Even when taking aggregation level into consideration, effects related to general performance and exercises accounted for almost all of the reliable variance in AC ratings. The implications of these findings for recent dimension- and exercise-based perspectives on ACs are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kielland, Ø N; Bech, C; Einum, S
2017-01-11
Environmental change may cause phenotypic changes that are inherited across generations through transgenerational plasticity (TGP). If TGP is adaptive, offspring fitness increases with an increasing match between parent and offspring environment. Here we test for adaptive TGP in somatic growth and metabolic rate in response to temperature in the clonal zooplankton Daphnia pulex. Animals of the first focal generation experienced thermal transgenerational 'mismatch' (parental and offspring temperatures differed), whereas conditions of the next two generations matched the (grand)maternal thermal conditions. Adjustments of metabolic rate occurred during the lifetime of the first generation (i.e. within-generation plasticity). However, no further change was observed during the subsequent two generations, as would be expected under TGP. Furthermore, we observed no tendency for increased juvenile somatic growth (a trait highly correlated with fitness in Daphnia) over the three generations when reared at new temperatures. These results are inconsistent with existing studies of thermal TGP, and we describe how previous experimental designs may have confounded TGP with within-generation plasticity and selective mortality. We suggest that the current evidence for thermal TGP is weak. To increase our understanding of the ecological and evolutionary role of TGP, future studies should more carefully identify possible confounding factors. © 2017 The Author(s).