WorldWideScience

Sample records for instrumental variables models

  1. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Building on structural mean models, considerable work has recently been devoted to consistent estimation of the causal relative risk and the causal odds ratio. Such models can suffer from identification issues when instruments are weak, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When multiple genetic variants are available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means of testing the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent owing to the log-linear approximation of the logistic function. Optimality of these estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  2. Instrumental Variables in the Long Run

    DEFF Research Database (Denmark)

    Casey, Gregory; Klemp, Marc Patrick Brag

    2017-01-01

    In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression... We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have... quantitative implications for the field of long-run economic growth.

  3. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.

  4. Instrumental variables estimation under a structural Cox model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

    2017-01-01

    Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heuristic...

  5. On the Interpretation of Instrumental Variables in the Presence of Specification Errors

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2015-01-01

    The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.

  6. Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...

  7. College quality and hourly wages: evidence from the self-revelation model, sibling models and instrumental variables.

    Science.gov (United States)

    Borgen, Nicolai T

    2014-11-01

    This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find increasing returns to college quality over the course of students' work careers, with positive returns only later in students' work careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. A review of instrumental variable estimators for Mendelian randomization.

    Science.gov (United States)

    Burgess, Stephen; Small, Dylan S; Thompson, Simon G

    2017-10-01

    Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
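
    As a concrete illustration of the two simplest estimators reviewed above - the ratio method and two-stage least squares (2SLS) - the following Python sketch simulates a single genetic instrument and recovers a known causal effect. All data, effect sizes and names are invented for illustration; this is not code from the paper. With a single instrument and no covariates, the two estimators coincide.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Simulated data: U confounds exposure X and outcome Y; the variant Z
    # affects Y only through X (the instrumental variable conditions above).
    u = rng.normal(size=n)
    z = rng.binomial(2, 0.3, size=n).astype(float)  # allele count 0/1/2
    x = 0.5 * z + u + rng.normal(size=n)            # exposure
    y = 0.3 * x + u + rng.normal(size=n)            # true causal effect = 0.3

    Z = np.column_stack([np.ones(n), z])

    # Ratio method: instrument-outcome slope over instrument-exposure slope.
    b_zx = np.linalg.lstsq(Z, x, rcond=None)[0][1]
    b_zy = np.linalg.lstsq(Z, y, rcond=None)[0][1]
    print("ratio estimate:", b_zy / b_zx)

    # Two-stage least squares: regress X on Z, then Y on the fitted values.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X2 = np.column_stack([np.ones(n), x_hat])
    print("2SLS estimate: ", np.linalg.lstsq(X2, y, rcond=None)[0][1])
    ```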

  9. Power calculator for instrumental variable analysis in pharmacoepidemiology.

    Science.gov (United States)

    Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M

    2017-10-01

    Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived before being validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association
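
    The paper's own parametrization lives in its online calculator and R/Stata packages; as a generic stand-in, the sketch below computes the textbook asymptotic power of an IV analysis for a continuous outcome, using Var(beta_hat) ~ sigma_eps^2 / (n * sigma_x^2 * rho_xz^2). The function name and example numbers are hypothetical, and this is not the formula derived in the article.

    ```python
    import numpy as np
    from scipy.stats import norm

    def iv_power(n, beta, rho_xz, sigma_x=1.0, sigma_eps=1.0, alpha=0.05):
        """Approximate power of a two-sided test of H0: beta = 0.

        Based on the standard asymptotic variance of the IV estimator,
        Var(beta_hat) ~ sigma_eps^2 / (n * sigma_x^2 * rho_xz^2),
        where rho_xz is the instrument-exposure correlation.
        """
        se = sigma_eps / (np.sqrt(n) * sigma_x * rho_xz)
        z_crit = norm.ppf(1 - alpha / 2)
        shift = abs(beta) / se
        return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

    # Hypothetical study: 20,000 patients, effect of 0.1 SD, rho_xz = 0.3
    # (instruments in pharmacoepidemiology are typically strong).
    print(f"approximate power: {iv_power(20_000, beta=0.1, rho_xz=0.3):.2f}")
    ```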

  10. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.

  11. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    Science.gov (United States)

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  12. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
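
    The article's empirical application comes with STATA code in its appendix; the fragment below merely sketches the general logic of a falsification test in Python with simulated, hypothetical data: regress an outcome that the treatment cannot plausibly affect on the instrument, and treat a clearly nonzero instrument coefficient as evidence against instrument validity.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5_000

    z = rng.binomial(1, 0.5, size=n)             # instrument: prescribing preference
    age = rng.normal(65, 10, size=n)             # measured covariate
    y_falsify = 0.02 * age + rng.normal(size=n)  # outcome the drug cannot affect

    # Falsification regression: under IV validity the instrument should carry
    # no signal for this outcome, conditional on measured covariates.
    X = sm.add_constant(np.column_stack([z, age]))
    fit = sm.OLS(y_falsify, X).fit()
    print("instrument coefficient:", fit.params[1], "p-value:", fit.pvalues[1])
    ```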

  13. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.

    2017-01-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...

  14. The effect of patient satisfaction with pharmacist consultation on medication adherence: an instrumental variable approach

    Directory of Open Access Journals (Sweden)

    Gu NY

    2008-12-01

    There are limited studies quantifying the impact of patient satisfaction with pharmacist consultation on patient medication adherence. Objectives: The objective of this study is to evaluate the effect of patient satisfaction with pharmacist consultation services on medication adherence in a large managed care organization. Methods: We analyzed data from a patient satisfaction survey of 6,916 patients who had used pharmacist consultation services in Kaiser Permanente Southern California from 1993 to 1996. We compared treating patient satisfaction as exogenous, in a single-equation probit model, with a bivariate probit model where patient satisfaction was treated as endogenous. Different sets of instrumental variables were employed, including measures of patients' emotional well-being and patients' propensity to fill their prescriptions at a non-Kaiser Permanente (KP) pharmacy. The Smith-Blundell test was used to test whether patient satisfaction was endogenous. Over-identification tests were used to test the validity of the instrumental variables. The Staiger-Stock weak instrument test was used to evaluate the explanatory power of the instrumental variables. Results: All tests indicated that the instrumental variables method was valid and that the instrumental variables used have significant explanatory power. The single-equation probit model indicated that the effect of patient satisfaction with pharmacist consultation was significant (p<0.010). However, the bivariate probit models revealed that the marginal effect of satisfaction with pharmacist consultation on medication adherence was significantly greater than in the single-equation probit: the effect increased from 7% to 30% (p<0.010) after controlling for endogeneity bias. Conclusion: After appropriate adjustment for endogeneity bias, patients satisfied with their pharmacy services are substantially more likely to adhere to their medication. The results have important policy implications given the increasing focus
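
    A full bivariate probit is too long for a short example, but the closely related control-function (two-stage residual inclusion) probit below conveys the mechanics, including a Rivers-Vuong-style endogeneity check analogous in spirit to the Smith-Blundell test used in the study. All data, names and coefficients are simulated and hypothetical; in practice, stage-2 standard errors would also need correction (e.g., by bootstrap).

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 8_000

    u = rng.normal(size=n)          # unmeasured confounder
    z = rng.normal(size=n)          # instrument, e.g. an emotional well-being score
    satisfaction = 0.6 * z + u + rng.normal(size=n)  # endogenous regressor
    adherent = (0.4 * satisfaction + 0.8 * u
                + rng.normal(size=n) > 0).astype(float)

    # Stage 1: regress satisfaction on the instrument; keep the residuals.
    s1 = sm.OLS(satisfaction, sm.add_constant(z)).fit()
    v_hat = s1.resid

    # Stage 2: probit of adherence on satisfaction plus the stage-1 residual.
    # A significant residual coefficient signals that satisfaction is
    # endogenous, i.e. the naive single-equation probit would be biased.
    X2 = sm.add_constant(np.column_stack([satisfaction, v_hat]))
    s2 = sm.Probit(adherent, X2).fit(disp=0)
    print("coefficients:", s2.params)           # [const, satisfaction, residual]
    print("endogeneity test p-value:", s2.pvalues[2])
    ```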

  15. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, Paul A.; Kleibergen, Frank

    2001-01-01

    The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  16. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships.

    Science.gov (United States)

    Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian

    2009-12-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.
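
    The analogy to a randomized trial can be made concrete with a small randomized-encouragement simulation: the instrument plays the role of random assignment, and the causal effect is the intention-to-treat effect on the outcome rescaled by the effect on treatment uptake (the Wald estimator). The numbers below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    u = rng.normal(size=n)               # unmeasured confounder
    z = rng.binomial(1, 0.5, size=n)     # randomized encouragement (instrument)
    treated = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(float)
    y = 1.0 * treated + 2.0 * u + rng.normal(size=n)  # true effect = 1.0

    # A naive treated-vs-untreated comparison is confounded by U...
    naive = y[treated == 1].mean() - y[treated == 0].mean()

    # ...whereas the Wald/IV estimator rescales the intention-to-treat contrast.
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_t = treated[z == 1].mean() - treated[z == 0].mean()
    print(f"naive: {naive:.2f}   IV: {itt_y / itt_t:.2f}   (truth: 1.00)")
    ```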

  17. A selective review of the first 20 years of instrumental variables models in health-services research and medicine.

    Science.gov (United States)

    Cawley, John

    2015-01-01

    The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citation counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.

  18. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    Science.gov (United States)

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  19. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    Hoogerheide, L.F.; Kaashoek, J.F.; van Dijk, H.K.

    2007-01-01

    Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior

  20. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2005-01-01

    Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours

  1. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    Science.gov (United States)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  2. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  3. Evaluating disease management programme effectiveness: an introduction to instrumental variables.

    Science.gov (United States)

    Linden, Ariel; Adams, John L

    2006-04-01

    This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.

  4. The productivity of mental health care: an instrumental variable approach.

    Science.gov (United States)

    Lu, Mingshan

    1999-06-01

    BACKGROUND: Like many other medical technologies and treatments, there is a lack of reliable evidence on treatment effectiveness of mental health care. Increasingly, data from non-experimental settings are being used to study the effect of treatment. However, as in a number of studies using non-experimental data, a simple regression of outcome on treatment shows a puzzling negative and significant impact of mental health care on the improvement of mental health status, even after including a large number of potential control variables. The central problem in interpreting evidence from real-world or non-experimental settings is, therefore, the potential "selection bias" problem in observational data sets. In other words, the choice/quantity of mental health care may be correlated with other variables, particularly unobserved variables, that influence outcome, and this may lead to a bias in the estimate of the effect of care in conventional models. AIMS OF THE STUDY: This paper addresses the issue of estimating treatment effects using an observational data set. The information in a mental health data set obtained from two waves of data in Puerto Rico is explored. The results from conventional models - in which the potential selection bias is not controlled - are compared with those from instrumental variable (IV) models, which this study proposes in order to correct the contaminated estimates of conventional models. METHODS: Treatment effectiveness is estimated in a production function framework. Effectiveness is measured as the improvement in mental health status. To control for the potential selection bias problem, IV approaches are employed. The essence of the IV method is to use one or more instruments, which are observable factors that influence treatment but do not directly affect patient outcomes, to isolate the effect of treatment variation that is independent of unobserved patient characteristics. The data used in this study are the first (1992

  5. Instrumental variable methods in comparative safety and effectiveness research.

    Science.gov (United States)

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will be often underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.

  6. Instrumental variable methods in comparative safety and effectiveness research†

    Science.gov (United States)

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Summary Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will be often underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968

  7. Econometrics in outcomes research: the use of instrumental variables.

    Science.gov (United States)

    Newhouse, J P; McClellan, M

    1998-01-01

    We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations when a controlled trial has not or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.

  8. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

  9. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
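
    The sensitivity analysis extends the Anderson-Rubin (AR) test; the sketch below shows only the basic AR test it builds on, with simulated data. To test H0: beta = beta0, regress y - beta0*d on the instruments and test whether the instrument coefficients are jointly zero; inverting the test over a grid of beta0 values yields a confidence set that remains valid under weak instruments. The paper's sensitivity parameter is not implemented here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def anderson_rubin_pvalue(y, d, Z, beta0):
        """AR test of H0: causal effect equals beta0, for instrument matrix Z."""
        resid = y - beta0 * d
        fit = sm.OLS(resid, sm.add_constant(Z)).fit()
        # Jointly test that all instrument coefficients are zero.
        hyp = ", ".join(f"x{i + 1} = 0" for i in range(Z.shape[1]))
        return float(fit.f_test(hyp).pvalue)

    rng = np.random.default_rng(4)
    n = 5_000
    u = rng.normal(size=n)
    Z = rng.normal(size=(n, 2))                      # two instruments
    d = Z @ np.array([0.4, 0.2]) + u + rng.normal(size=n)
    y = 0.5 * d + u + rng.normal(size=n)             # true effect = 0.5

    print(anderson_rubin_pvalue(y, d, Z, beta0=0.5))  # large: not rejected
    print(anderson_rubin_pvalue(y, d, Z, beta0=0.0))  # small: rejected
    ```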

  10. Bias and Bias Correction in Multi-Site Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Faith; Zhu, Pei; Bloom, Howard

    2013-01-01

    We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the instrumental variables literature as the…

  11. Causal null hypotheses of sustained treatment strategies: What can be tested with an instrumental variable?

    Science.gov (United States)

    Swanson, Sonja A; Labrecque, Jeremy; Hernán, Miguel A

    2018-05-02

    Sometimes instrumental variable methods are used to test whether a causal effect is null rather than to estimate the magnitude of a causal effect. However, when instrumental variable methods are applied to time-varying exposures, as in many Mendelian randomization studies, it is unclear what causal null hypothesis is tested. Here, we consider different versions of causal null hypotheses for time-varying exposures, show that the instrumental variable conditions alone are insufficient to test some of them, and describe additional assumptions that can be made to test a wider range of causal null hypotheses, including both sharp and average causal null hypotheses. Implications for interpretation and reporting of instrumental variable results are discussed.

  12. The contextual effects of social capital on health: a cross-national instrumental variable analysis.

    Science.gov (United States)

    Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro

    2011-12-01

    Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across

  13. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image when the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates noise-related errors through the fitting and interpolation procedures of GRAPPA, distorting the quality of the final reconstructed image. The basic idea we propose for improving GRAPPA is to remove noise from a system-identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method - instrumental variables (IV) GRAPPA - to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
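
    The MRI reconstruction itself is beyond a short example, but the errors-in-variables intuition behind IV GRAPPA can be shown in a few lines: ordinary least squares on a noise-corrupted regressor is attenuated toward zero, while an instrument - here a second noisy measurement with independent error, a hypothetical stand-in - restores a consistent estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 50_000

    x_true = rng.normal(size=n)
    x_obs = x_true + 0.7 * rng.normal(size=n)  # regressor observed with error
    w = x_true + 0.7 * rng.normal(size=n)      # instrument: independent noisy copy
    y = 2.0 * x_true + rng.normal(size=n)      # true slope = 2.0

    # OLS on the noisy regressor is biased toward zero (attenuation)...
    ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

    # ...while the IV slope cov(w, y) / cov(w, x_obs) is consistent.
    iv = np.cov(w, y)[0, 1] / np.cov(w, x_obs)[0, 1]
    print(f"OLS: {ols:.2f}   IV: {iv:.2f}   (truth: 2.00)")
    ```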

  14. Instrumental variable estimation of treatment effects for duration outcomes

    NARCIS (Netherlands)

    G.E. Bijwaard (Govert)

    2007-01-01

    In this article we propose and implement an instrumental variable estimation procedure to obtain treatment effects on duration outcomes. The method can handle the typical complications that arise with duration data of time-varying treatment and censoring. The treatment effect we

  15. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  16. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    International Nuclear Information System (INIS)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-01-01

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  17. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    Energy Technology Data Exchange (ETDEWEB)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-09-15

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  18. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  19. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc

  20. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, P; Kleibergen, F

    2003-01-01

    We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in a

  1. Finite-sample instrumental variables Inference using an Asymptotically Pivotal Statistic

    NARCIS (Netherlands)

    Bekker, P.; Kleibergen, F.R.

    2001-01-01

    The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  2. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...

  3. The Effect of Birth Weight on Academic Performance: Instrumental Variable Analysis.

    Science.gov (United States)

    Lin, Shi Lin; Leung, Gabriel Matthew; Schooling, C Mary

    2017-05-01

    Observationally, lower birth weight is usually associated with poorer academic performance; whether this association is causal or the result of confounding is unknown. To investigate this question, we obtained an effect estimate, which can have a causal interpretation under specific assumptions, of birth weight on educational attainment using instrumental variable analysis based on single nucleotide polymorphisms determining birth weight combined with results from the Social Science Genetic Association Consortium study of 126,559 Caucasians. We similarly obtained an estimate of the effect of birth weight on academic performance in 4,067 adolescents from Hong Kong's (Chinese) Children of 1997 birth cohort (1997-2016), using twin status as an instrumental variable. Birth weight was not associated with years of schooling (per 100-g increase in birth weight, -0.006 years, 95% confidence interval (CI): -0.02, 0.01) or college completion (odds ratio = 1.00, 95% CI: 0.96, 1.03). Birth weight was also unrelated to academic performance in adolescents (per 100-g increase in birth weight, -0.004 grade, 95% CI: -0.04, 0.04) using instrumental variable analysis, although conventional regression gave a small positive association (0.02 higher grade, 95% CI: 0.01, 0.03). Observed associations of birth weight with academic performance may not be causal, suggesting that interventions should focus on the contextual factors generating this correlation. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
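
    For a single instrument summarized by two association estimates, analyses like this typically reduce to the Wald ratio with a delta-method standard error. The helper below is a generic sketch with hypothetical per-allele numbers; it is not the study's data or pipeline.

    ```python
    import numpy as np
    from scipy.stats import norm

    def wald_ratio(beta_zx, se_zx, beta_zy, se_zy, alpha=0.05):
        """Wald-ratio IV estimate beta_zy / beta_zx with a delta-method CI.

        beta_zx: instrument-exposure association (e.g. per-allele birth weight)
        beta_zy: instrument-outcome association (e.g. per-allele schooling)
        The first-order delta method here ignores their covariance.
        """
        est = beta_zy / beta_zx
        se = np.sqrt(se_zy**2 / beta_zx**2
                     + beta_zy**2 * se_zx**2 / beta_zx**4)
        z = norm.ppf(1 - alpha / 2)
        return est, (est - z * se, est + z * se)

    # Hypothetical summary statistics:
    est, ci = wald_ratio(beta_zx=0.03, se_zx=0.004, beta_zy=-0.001, se_zy=0.002)
    print(f"estimate: {est:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
    ```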

  4. Social interactions and college enrollment: A combined school fixed effects/instrumental variables approach.

    Science.gov (United States)

    Fletcher, Jason M

    2015-07-01

    This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Error-in-variables models in calibration

    Science.gov (United States)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.

  6. Netherlands Hydrological Modeling Instrument

    Science.gov (United States)

    Hoogewoud, J. C.; de Lange, W. J.; Veldhuizen, A.; Prinsen, G.

    2012-04-01

    Netherlands Hydrological Modeling Instrument: a decision support system for water basin management. J.C. Hoogewoud, W.J. de Lange, A. Veldhuizen, G. Prinsen. The Netherlands Hydrological Modeling Instrument (NHI) is the center point of a framework of models used to coherently model the hydrological system and the multitude of functions it supports. The Dutch hydrological institutes Deltares, Alterra, the Netherlands Environmental Assessment Agency, RWS Waterdienst, STOWA and Vewin are cooperating in enhancing the NHI for adequate decision support. The instrument is used by three different ministries involved in national water policy matters, for instance the WFD, drought management, manure policy and climate change issues. The basis of the modeling instrument is a state-of-the-art on-line coupling of the groundwater system (MODFLOW), the unsaturated zone (metaSWAP) and the surface water system (MOZART-DM). It brings together hydro(geo)logical processes from the column to the basin scale, ranging from 250x250 m plots to the river Rhine, and includes salt water flow. The NHI is validated with an eight-year run (1998-2006) covering dry and wet periods. For this run, different parts of the hydrology have been compared with measurements, for instance water demands in dry periods (e.g. for irrigation), discharges at outlets, groundwater levels and evaporation. A validation alone is not enough to get support from stakeholders; involvement of stakeholders in the modeling process is needed. Therefore, to gain sufficient support and trust in the instrument on different (policy) levels, a couple of actions have been taken: 1. a transparent evaluation of modeling results has been set up; 2. an extensive program is running to cooperate with regional water boards and suppliers of drinking water in improving the NHI; 3. (hydrological) data are shared via a newly set up modeling database for local and national models; 4. the NHI is enhanced with "local" information. The NHI is and has been used for many

  7. Bias and Bias Correction in Multisite Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S.

    2014-01-01

    We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.…

  8. Important variables for parents' postnatal sense of security: evaluating a new Swedish instrument (the PPSS instrument).

    Science.gov (United States)

    Persson, Eva K; Dykes, Anna-Karin

    2009-08-01

    Objective: to evaluate dimensions of both parents' postnatal sense of security during the first week after childbirth, and to determine associations between the PPSS instrument and different sociodemographic and situational background variables. Design: evaluative, cross-sectional design. Participants: 113 mothers and 99 fathers with children live-born at term, from five hospitals in southern Sweden. Findings: mothers and fathers had similar feelings concerning postnatal sense of security. Of the dimensions in the PPSS instrument, a sense of midwives'/nurses' empowering behaviour, a sense of one's own general well-being and a sense of the mother's well-being as experienced by the father were the most important dimensions for parents' experienced security. A sense of affinity within the family (for both parents) and a sense of manageable breast feeding (for mothers) were not significantly associated with their experienced security. A sense of participation during pregnancy and general anxiety were significantly associated background variables for postnatal sense of security for both parents. For the mothers, parity and a sense that the father was participating during pregnancy were also significantly associated. Conclusions: more focus on parents' participation during pregnancy as well as midwives'/nurses' empowering behaviour during the postnatal period will be beneficial for both parents' postnatal sense of security.

  9. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    Science.gov (United States)

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
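
    The mechanics described in points 2-3 can be sketched with a simulated Gompertz population: sampling error in each count enters both the growth rate and the regressor, biasing OLS toward spurious density dependence, while an earlier estimate - whose sampling error is independent - serves as an instrument. Dynamics and noise levels below are invented; this is not the Montana elk analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    T = 5_000
    a, b = 0.5, -0.1                 # Gompertz growth: r_t = a + b * log N_t

    logN = np.empty(T)
    logN[0] = -a / b                 # start at equilibrium
    for t in range(T - 1):
        logN[t + 1] = logN[t] + a + b * logN[t] + 0.05 * rng.normal()

    obs = logN + 0.2 * rng.normal(size=T)  # log counts with sampling error

    r = obs[2:] - obs[1:-1]  # observed growth rates
    x = obs[1:-1]            # regressor: shares sampling error with r
    w = obs[:-2]             # instrument: earlier estimate, independent error

    # OLS exaggerates density dependence; the IV slope recovers b.
    ols = np.cov(x, r)[0, 1] / np.var(x, ddof=1)
    iv = np.cov(w, r)[0, 1] / np.cov(w, x)[0, 1]
    print(f"true b: {b}   OLS: {ols:.3f}   IV: {iv:.3f}")
    ```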

  10. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two...

  11. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  12. Invited Commentary: Using Financial Credits as Instrumental Variables for Estimating the Causal Relationship Between Income and Health.

    Science.gov (United States)

    Pega, Frank

    2016-05-01

    Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health.

  13. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
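
    To make the two-step construction concrete, here is a rough Python transcription of the Abadie (2003) approach (the LARF package itself is R; this sketch uses statsmodels and hypothetical variable names): a probit for the probability of the treatment inducement, then kappa-weighted least squares.

```python
import numpy as np
import statsmodels.api as sm

def abadie_kappa(Z, D, X):
    """Step 1: probit for pi(X) = P(Z=1|X), then Abadie's pseudo-weights
    kappa = 1 - D(1-Z)/(1-pi(X)) - (1-D)Z/pi(X)."""
    pi = sm.Probit(Z, sm.add_constant(X)).fit(disp=0).predict()
    return 1 - D * (1 - Z) / (1 - pi) - (1 - D) * Z / pi

def larf_least_squares(Y, D, X, kappa):
    """Step 2: kappa-weighted LS of Y on (1, D, X). Kappa can be negative,
    so solve the weighted normal equations directly rather than using WLS."""
    W = sm.add_constant(np.column_stack([D, X]))
    A = W.T @ (kappa[:, None] * W)
    return np.linalg.solve(A, W.T @ (kappa * Y))
```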

  14. Fasting Glucose and the Risk of Depressive Symptoms: Instrumental-Variable Regression in the Cardiovascular Risk in Young Finns Study.

    Science.gov (United States)

    Wesołowska, Karolina; Elovainio, Marko; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Pitkänen, Niina; Lipsanen, Jari; Tukiainen, Janne; Lyytikäinen, Leo-Pekka; Lehtimäki, Terho; Juonala, Markus; Raitakari, Olli; Keltikangas-Järvinen, Liisa

    2017-12-01

    Type 2 diabetes (T2D) has been associated with depressive symptoms, but the causal direction of this association and the underlying mechanisms, such as increased glucose levels, remain unclear. We used instrumental-variable regression with a genetic instrument (Mendelian randomization) to examine a causal role of increased glucose concentrations in the development of depressive symptoms. Data were from the population-based Cardiovascular Risk in Young Finns Study (n = 1217). Depressive symptoms were assessed in 2012 using a modified Beck Depression Inventory (BDI-I). Fasting glucose was measured concurrently with depressive symptoms. A genetic risk score for fasting glucose (with 35 single nucleotide polymorphisms) was used as an instrumental variable for glucose. Glucose was not associated with depressive symptoms in the standard linear regression (B = -0.04, 95% CI [-0.12, 0.04], p = .34), but the instrumental-variable regression showed an inverse association between glucose and depressive symptoms (B = -0.43, 95% CI [-0.79, -0.07], p = .020). The difference between the estimates of standard linear regression and instrumental-variable regression was significant (p = .026). Conclusion: Our results suggest that the association between T2D and depressive symptoms is unlikely to be caused by increased glucose concentrations. It seems possible that T2D might be linked to depressive symptoms due to low glucose levels.
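
    For readers unfamiliar with the mechanics, a Mendelian randomization analysis of this kind is essentially two-stage least squares with the genetic risk score as the instrument. A minimal Python sketch follows (statsmodels, hypothetical variable names); note that naive second-stage standard errors ignore the generated-regressor step, so dedicated 2SLS routines should be used in practice.

```python
import numpy as np
import statsmodels.api as sm

def mr_two_stage(bdi, glucose, grs, covars):
    """Stage 1: regress fasting glucose on the genetic risk score (+ covariates).
    Stage 2: regress depressive symptoms on the stage-1 fitted glucose."""
    X1 = sm.add_constant(np.column_stack([grs, covars]))
    glucose_hat = sm.OLS(glucose, X1).fit().predict()
    X2 = sm.add_constant(np.column_stack([glucose_hat, covars]))
    return sm.OLS(bdi, X2).fit()  # slope on glucose_hat = causal estimate
```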

  15. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  16. Decoupling Solar Variability and Instrument Trends Using the Multiple Same-Irradiance-Level (MuSIL) Analysis Technique

    Science.gov (United States)

    Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin

    2018-05-01

    The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainty in the instrument degradation corrections, which can be fairly large relative to the amount of solar cycle variability at some wavelengths. The primary objective of this investigation has been to separate solar cycle variability from any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to extract long-term trends in an SSI record; the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. The technique, which was validated against the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated by comparing the new solar cycle variability results across different solar cycles.

  17. Discrete-time modelling of musical instruments

    International Nuclear Information System (INIS)

    Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
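
    As a flavour of the digital waveguide approach mentioned above, the sketch below implements the closely related Karplus-Strong plucked-string algorithm in Python: a delay line models the travelling waves on the string and a two-point average acts as the loop loss filter. It is a textbook simplification for illustration, not the nonlinear string model of the article; parameter values are illustrative.

```python
import numpy as np

def karplus_strong(freq_hz, dur_s, fs=44100, damping=0.996):
    """Plucked string: delay line (waveguide) + averaging loss filter."""
    delay = int(fs / freq_hz)              # delay length sets the pitch
    buf = np.random.uniform(-1, 1, delay)  # initial excitation ("pluck")
    out = np.empty(int(fs * dur_s))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # averaging adjacent samples damps high frequencies each round trip
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out
```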

  18. 8 years of Solar Spectral Irradiance Variability Observed from the ISS with the SOLAR/SOLSPEC Instrument

    Science.gov (United States)

    Damé, Luc; Bolsée, David; Meftah, Mustapha; Irbah, Abdenour; Hauchecorne, Alain; Bekki, Slimane; Pereira, Nuno; Cessateur, Gaël; Marchand, Marion; et al.

    2016-10-01

    Accurate measurements of Solar Spectral Irradiance (SSI) are of primary importance for a better understanding of solar physics and of the impact of solar variability on climate (via Earth's atmospheric photochemistry). The acquisition of a top-of-atmosphere reference solar spectrum and of its temporal and spectral variability during the unusual solar cycle 24 is of prime interest for these studies. These measurements have been performed since April 2008 with the SOLSPEC spectro-radiometer, covering the far ultraviolet to the infrared (166 nm to 3088 nm). This instrument, developed under a fruitful LATMOS/BIRA-IASB collaboration, is part of the Solar Monitoring Observatory (SOLAR) payload, externally mounted on the Columbus module of the International Space Station (ISS). The SOLAR mission, now spanning 8 years, will cover almost the entire solar cycle 24. We present here the in-flight operations and performance of the SOLSPEC instrument, including the engineering corrections, calibrations and improved procedures for aging corrections. Accordingly, an SSI reference spectrum from the UV to the NIR will be presented, together with its variability in the UV, as measured by SOLAR/SOLSPEC over 8 years. Uncertainties on these measurements and comparisons with other instruments will be briefly discussed.

  19. Developing a theoretical model and questionnaire survey instrument to measure the success of electronic health records in residential aged care.

    Science.gov (United States)

    Yu, Ping; Qian, Siyu

    2018-01-01

    Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables in the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two further variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.

  20. Breastfeeding and the risk of childhood asthma: A two-stage instrumental variable analysis to address endogeneity.

    Science.gov (United States)

    Sharma, Nivita D

    2017-09-01

    Several explanations for the inconsistent results on the effects of breastfeeding on childhood asthma have been suggested. The purpose of this study was to investigate one unexplored explanation, which is the presence of a potential endogenous relationship between breastfeeding and childhood asthma. Endogeneity exists when an explanatory variable is correlated with the error term for reasons such as selection bias, reverse causality, and unmeasured confounders. Unadjusted endogeneity will bias the effect of breastfeeding on childhood asthma. To investigate potential endogeneity, a cross-sectional study of breastfeeding practices and incidence of childhood asthma in 87 pediatric patients in Georgia, USA, was conducted using generalized linear modeling and a two-stage instrumental variable analysis. First, the relationship between breastfeeding and childhood asthma was analyzed without considering endogeneity. Second, tests for the presence of endogeneity were performed and, having detected endogeneity between breastfeeding and childhood asthma, a two-stage instrumental variable analysis was performed. The first stage of this analysis estimated the duration of breastfeeding and the second stage estimated the risk of childhood asthma. When endogeneity was not taken into account, duration of breastfeeding was found to significantly increase the risk of childhood asthma (relative risk ratio [RR]=2.020, 95% confidence interval [CI]: [1.143-3.570]). After adjusting for endogeneity, duration of breastfeeding significantly reduced the risk of childhood asthma (RR=0.003, 95% CI: [0.000-0.240]). The findings suggest that researchers should consider evaluating how the presence of endogeneity could affect the relationship between duration of breastfeeding and the risk of childhood asthma.
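
    The endogeneity test and two-stage correction described above can be illustrated with a control-function sketch, a standard way of implementing a Durbin-Wu-Hausman-type check; the authors' exact specification may differ. Python/statsmodels, hypothetical variable names:

```python
import numpy as np
import statsmodels.api as sm

def control_function(y, d, z, covars):
    """Stage 1: regress breastfeeding duration d on instrument(s) z + covariates.
    Stage 2: add the stage-1 residual to the outcome model; a significant
    residual coefficient signals endogeneity of d."""
    X1 = sm.add_constant(np.column_stack([z, covars]))
    resid = sm.OLS(d, X1).fit().resid
    X2 = sm.add_constant(np.column_stack([d, covars, resid]))
    fit = sm.OLS(y, X2).fit()
    return fit, fit.pvalues[-1]  # p-value of the endogeneity check
```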

  1. The dynamic model of choosing an external funding instrument

    Directory of Open Access Journals (Sweden)

    Irena HONKOVA

    2015-06-01

    Making a decision about using a specific funding source is one of the most important tasks of financial management. The utilization of external sources offers numerous advantages, yet staying aware of diverse funding options is not easy for financial managers. Today it is crucial to identify an optimum option quickly and to make sure that all relevant criteria have been considered and no variant has been omitted. Over the long term it is also necessary to consider time, as decisions made today affect not only current variables but also have a significant impact on the future. This article aims to identify the most suitable model for choosing external funding sources, one that captures the dynamics involved. The first part of the paper considers the theoretical background of external funding instruments and of decision criteria. Financial decision-making is a process consisting of weighing the suitable variants, selecting the best variant, and controlling the implementation of the accepted proposals. The second part analyses the results of the research, namely the decision weights of the criteria. A model built around the principal criterion, the Weighted Average Cost of Capital (the dynamic WACC model), is then created, followed by the Dynamic Model of Choosing an External Funding Instrument. The resulting decision-making model facilitates the modelling of changes over time, which is crucial because today's decisions carry future consequences in a turbulent world. Each variant features possible negative and positive changes of varying extent, and the ability to simulate these changes can reveal the optimal variant to a decision-maker.

  2. Job demands and job strain as risk factors for employee wellbeing in elderly care: an instrumental-variables analysis.

    Science.gov (United States)

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Jokela, Markus; Aalto, Anna-Mari; Pekkarinen, Laura; Noro, Anja; Finne-Soveri, Harriet; Kivimäki, Mika; Sinervo, Timo

    2015-02-01

    The association between psychosocial work environment and employee wellbeing has repeatedly been shown. However, as environmental evaluations have typically been self-reported, the observed associations may be attributable to reporting bias. Applying instrumental-variable regression, we used staffing level (the ratio of staff to residents) as an unconfounded instrument for self-reported job demands and job strain to predict various indicators of wellbeing (perceived stress, psychological distress and sleeping problems) among 1525 registered nurses, practical nurses and nursing assistants working in elderly care wards. In ordinary regression, higher self-reported job demands and job strain were associated with increased risk of perceived stress, psychological distress and sleeping problems. The effect estimates for the associations of these psychosocial factors with perceived stress and psychological distress were greater, but less precisely estimated, in an instrumental-variables analysis which took into account only the variation in self-reported job demands and job strain that was explained by staffing level. No association between psychosocial factors and sleeping problems was observed with the instrumental-variable analysis. These results support a causal interpretation of high self-reported job demands and job strain being risk factors for employee wellbeing.

  3. Nonparametric instrumental regression with non-convex constraints

    International Nuclear Information System (INIS)

    Grasmair, M; Scherzer, O; Vanhems, A

    2013-01-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  4. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
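
    In discretized form, the Tikhonov-regularized estimator used in these papers amounts to solving a penalized least-squares problem. A minimal numpy sketch follows, ignoring the shape constraints and the choice of the regularization parameter, which are the technical heart of the paper; K here stands for a discretized estimate of the conditional-expectation operator.

```python
import numpy as np

def tikhonov(K, g, alpha):
    """Solve the ill-posed system K f ~ g by minimizing
    ||K f - g||^2 + alpha * ||f||^2, i.e. f = (K'K + alpha I)^{-1} K'g."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)
```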

  5. STAMINA - Model description. Standard Model Instrumentation for Noise Assessments

    NARCIS (Netherlands)

    Schreurs EM; Jabben J; Verheijen ENG; CMM; mev

    2010-01-01

    This report describes the STAMINA model, which stands for Standard Model Instrumentation for Noise Assessments and was developed by RIVM. The institute uses this standard model to map environmental noise in the Netherlands. The model is based on the Dutch Standard Mapping Method (Standaard Karteringsmethode)

  6. Instrumentation of a prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Hessheimer, M.F.; Rightley, M.J.; Matsumoto, T.

    1995-01-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the U.S. Nuclear Regulatory Commission. At present, two tests are being planned: a test of a model of a steel containment vessel (SCV) that is representative of an improved, boiling water reactor (BWR) Mark II design; and a test of a model of a prestressed concrete containment vessel (PCCV). This paper discusses plans and the results of a preliminary investigation of the instrumentation of the PCCV model. The instrumentation suite for this model will consist of approximately 2000 channels of data to record displacements, strains in the reinforcing steel, prestressing tendons, concrete, steel liner and liner anchors, as well as pressure and temperature. The instrumentation is being designed to monitor the response of the model during prestressing operations, during Structural Integrity and Integrated Leak Rate testing, and during the test to failure of the model. Particular emphasis has been placed on instrumentation of the prestressing system in order to understand the behavior of the prestressing strands at design and beyond-design pressure levels. Current plans are to place load cells at both ends of one third of the tendons in addition to placing strain measurement devices along the length of selected tendons. Strain measurements will be made using conventional bonded foil resistance gages and a wire resistance gage, known as a "Tensmeg"® gage, specifically designed for use with seven-wire strand. The results of preliminary tests of both types of gages, in the laboratory and in a simulated model configuration, are reported and plans for instrumentation of the model are discussed.

  7. Sound production in recorder-like instruments: II. A simulation model

    NARCIS (Netherlands)

    Verge, M.P.; Hirschberg, A.; Causse, R.

    1997-01-01

    A simple one-dimensional representation of recorderlike instruments, that can be used for sound synthesis by physical modeling of flutelike instruments, is presented. This model combines the effects on the sound production by the instrument of the jet oscillations, vortex shedding at the edge of the

  8. Time variability of X-ray binaries: observations with INTEGRAL. Modeling

    International Nuclear Information System (INIS)

    Cabanac, Clement

    2007-01-01

    The exact origin of the observed X- and gamma-ray variability in X-ray binaries is still an open debate in high-energy astrophysics. Among others, these objects show aperiodic and quasi-periodic luminosity variations on timescales as short as a millisecond. This erratic behaviour places constraints on the proposed emission processes occurring in the vicinity of the neutron star or stellar-mass black hole harboured by these objects. We propose here to study their behaviour in three ways: first, we examine the evolution of a particular X-ray source discovered by INTEGRAL, IGR J19140+0951. Using timing and spectral data from different instruments, we show that the source type is plausibly consistent with a High Mass X-ray Binary hosting a neutron star. Subsequently, we propose a new method dedicated to the study of timing data coming from coded-mask aperture instruments. Using it on INTEGRAL/ISGRI data, we detect the presence of periodic and quasi-periodic features in some pulsars and micro-quasars at energies as high as a hundred keV. Finally, we suggest a model designed to describe the low-frequency variability of X-ray binaries in their hardest state. This model is based on thermal Comptonization of soft photons by a warm corona in which a pressure wave propagates in cylindrical geometry. Using both numerical simulations and an analytical solution, we show that this model is suitable for describing some of the typical features observed in the power spectra of X-ray binaries in their hard state, such as aperiodic noise and low-frequency quasi-periodic oscillations, and their evolution.

  9. Latest NASA Instrument Cost Model (NICM): Version VI

    Science.gov (United States)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable for instruments flying on Explorer-like class missions; 2) the new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  10. Changes in atmospheric variability in a glacial climate and the impacts on proxy data: a model intercomparison

    Directory of Open Access Journals (Sweden)

    F. S. R. Pausata

    2009-09-01

    Using four different climate models, we investigate sea level pressure variability in the extratropical North Atlantic in the preindustrial climate (1750 AD) and at the Last Glacial Maximum (LGM, 21 kyrs before present) in order to understand how changes in atmospheric circulation can affect signals recorded in climate proxies.

    In general, the models exhibit a significant reduction in interannual variance of sea level pressure at the LGM compared to pre-industrial simulations and this reduction is concentrated in winter. For the preindustrial climate, all models feature a similar leading mode of sea level pressure variability that resembles the leading mode of variability in the instrumental record: the North Atlantic Oscillation (NAO). In contrast, the leading mode of sea level pressure variability at the LGM is model dependent, but in each model different from that in the preindustrial climate. In each model, the leading (NAO-like) mode of variability explains a smaller fraction of the variance and also less absolute variance at the LGM than in the preindustrial climate.

    The models show that the relationship between atmospheric variability and surface climate (temperature and precipitation) variability changes in different climates. Results are model-specific, but indicate that proxy signals at the LGM may be misinterpreted if changes in the spatial pattern and seasonality of surface climate variability are not taken into account.
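
    The "leading mode of sea level pressure variability" in such analyses is conventionally the first empirical orthogonal function (EOF) of the anomaly field. A compact numpy sketch of the computation, with a hypothetical array layout:

```python
import numpy as np

def leading_eof(slp):
    """slp: (time, space) sea level pressure anomalies.
    Returns the leading spatial pattern, its principal-component time
    series, and the fraction of variance it explains."""
    anom = slp - slp.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    return Vt[0], U[:, 0] * s[0], s[0] ** 2 / np.sum(s ** 2)
```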

  11. The sound of oscillating air jets: Physics, modeling and simulation in flute-like instruments

    Science.gov (United States)

    de La Cuadra, Patricio

    Flute-like instruments share a common mechanism that consists of blowing across one open end of a resonator to produce an air jet that is directed towards a sharp edge. Analysis of its operation involves various research fields including fluid dynamics, aero-acoustics, and physics. An effort has been made in this study to extend this description from instruments with fixed geometry like recorders and organ pipes to flutes played by the lips. An analysis of the jet's response to a periodic excitation is the focus of this study, as are the parameters under the player's control in forming the jet. The jet is excited with a controlled excitation consisting of two loudspeakers in opposite phase. A Schlieren system is used to visualize the jet, and image detection algorithms are developed to extract quantitative information from the images. In order to study the behavior of jets observed in different flute-like instruments, several geometries of the excitation and jet shapes are studied. The obtained data is used to propose analytical models that correctly fit the observed measurements and can be used for simulations. The control exerted by the performer on the instrument is of crucial importance in the quality of the sound produced for a number of flute-like instruments. The case of the transverse flute is experimentally studied. An ensemble of control parameters are measured and visualized in order to describe some aspects of the subtle control attained by an experienced flautist. Contrasting data from a novice flautist are compared. As a result, typical values for several non-dimensional parameters that characterize the normal operation of the instrument have been measured, and data to feed simulations has been collected. The information obtained through experimentation is combined with research developed over the last decades to put together a time-domain simulation. The model proposed is one-dimensional and driven by a single physical input. All the variables in the

  12. Impact of instrumental response on observed ozonesonde profiles: First-order estimates and implications for measures of variability

    Science.gov (United States)

    Clifton, G. T.; Merrill, J. T.; Johnson, B. J.; Oltmans, S. J.

    2009-12-01

    Ozonesondes provide information on the ozone distribution up to the middle stratosphere. Ozone profiles often feature layers, with vertically discrete maxima and minima in the mixing ratio. Layers are especially common in the UT/LS regions and originate from wave breaking, shearing and other transport processes. ECC sondes, however, have a moderate response time to significant changes in ozone. A sonde can ascend over 350 meters before it responds fully to a step change in ozone. This results in an overestimate of the altitude assigned to layers and an underestimate of the underlying variability in the amount of ozone. An estimate of the response time is made for each instrument during the preparation for flight, but the profile data are typically not processed to account for the response. Here we present a method of categorizing the response time of ECC instruments and an analysis of a low-pass filter approximation to the effects on profile data. Exponential functions were fit to the step-up and step-down responses using laboratory data. The resulting response time estimates were consistent with results from standard procedures, with the up-step response time exceeding the down-step value somewhat. A single-pole Butterworth filter that approximates the instrumental effect was used with synthetic layered profiles to make first-order estimates of the impact of the finite response time. Using a layer analysis program previously applied to observed profiles we find that instrumental effects can attenuate ozone variability by 20-45% in individual layers, but that the vertical offset in layer altitudes is moderate, up to about 150 meters. We will present results obtained using this approach, coupled with data on the distribution of layer characteristics found using the layer analysis procedure on profiles from Narragansett, Rhode Island and other US sites to quantify the impact on overall variability estimates given ambient distributions of layer occurrence, thickness
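
    The single-pole Butterworth approximation described above is straightforward to reproduce. A scipy sketch follows, with illustrative parameter values; the response "distance" is treated here as an e-folding scale, which is a simplification of the step-response figures quoted in the abstract.

```python
import numpy as np
from scipy.signal import butter, lfilter

def smooth_like_sonde(o3, dz, tau_m=350.0):
    """Low-pass a true ozone profile to mimic the ECC sensor's lagged
    response. o3 is sampled every dz metres of ascent; tau_m is the
    response scale in metres."""
    fc = 1.0 / (2.0 * np.pi * tau_m)  # cutoff, cycles per metre
    nyquist = 0.5 / dz                # cycles per metre
    b, a = butter(1, fc / nyquist)    # single-pole Butterworth low-pass
    return lfilter(b, a, o3)
```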

  13. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    OpenAIRE

    Albertus Girik Allo

    2016-01-01

    Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. By applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study uses two data periods, namely 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of fin...

  14. An Instrumental Variable Probit (IVP) Analysis on Depressed Mood in Korea: The Impact of Gender Differences and Other Socio-Economic Factors

    Directory of Open Access Journals (Sweden)

    Lara Gitto

    2015-08-01

    Background: Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of the strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great amount of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea and they were either limited to samples of employed women or did not control for individual health status. Moreover, as the likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity, such as, in this case, the depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Methods: Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of which 43% were males and 57% females), aged from 19 to 75 years old, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio

  15. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors.

    Science.gov (United States)

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-04-16

    Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of the strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great amount of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea and they were either limited to samples of employed women or did not control for individual health status. Moreover, as the likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity, such as, in this case, the depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N= 6,751, of which 43% were males and 57% females), aged from 19 to 75 years old, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education

  16. Confirming theoretical pay constructs of a variable pay scheme

    Directory of Open Access Journals (Sweden)

    Sibangilizwe Ncube

    2013-05-01

    Orientation: Return on the investment in variable pay programmes remains controversial because their cost versus contribution cannot be empirically justified. Research purpose: This study validates the findings of the model developed by De Swardt on the factors related to successful variable pay programmes. Motivation for the study: Many organisations blindly implement variable pay programmes without any means to assess the impact these programmes have on the company’s performance. This study was necessary to validate the findings of an existing instrument that validates the contribution of variable pay schemes. Research design, approach and method: The study was conducted using quantitative research. A total of 300 completed questionnaires from a non-purposive sample of 3000 participants in schemes across all South African industries were returned and analysed. Main findings: Using exploratory and confirmatory factor analysis, it was found that the validation instrument developed by De Swardt is still largely valid in evaluating variable pay schemes. The differences between the study and the model were reported. Practical/managerial implications: The study confirmed the robustness of an existing model that enables practitioners to empirically validate the use of variable pay plans. This model assists in the design and implementation of variable pay programmes that meet critical success factors. Contribution/value-add: The study contributed to the development of a measurement instrument that will assess whether a variable pay plan contributes to an organisation’s success.

  17. Sources and Impacts of Modeled and Observed Low-Frequency Climate Variability

    Science.gov (United States)

    Parsons, Luke Alexander

    Here we analyze climate variability using instrumental, paleoclimate (proxy), and the latest climate model data to understand more about the sources and impacts of low-frequency climate variability. Understanding the drivers of climate variability at interannual to century timescales is important for studies of climate change, including analyses of detection and attribution of climate change impacts. Additionally, correctly modeling the sources and impacts of variability is key to the simulation of abrupt change (Alley et al., 2003) and extended drought (Seager et al., 2005; Pelletier and Turcotte, 1997; Ault et al., 2014). In Appendix A, we employ an Earth system model (GFDL-ESM2M) simulation to study the impacts of a weakening of the Atlantic meridional overturning circulation (AMOC) on the climate of the American Tropics. The AMOC drives some degree of local and global internal low-frequency climate variability (Manabe and Stouffer, 1995; Thornalley et al., 2009) and helps control the position of the tropical rainfall belt (Zhang and Delworth, 2005). We find that a major weakening of the AMOC can cause large-scale temperature, precipitation, and carbon storage changes in Central and South America. Our results suggest that possible future changes in AMOC strength alone will not be sufficient to drive a large-scale dieback of the Amazonian forest, but this key natural ecosystem is sensitive to dry-season length and timing of rainfall (Parsons et al., 2014). In Appendix B, we compare a paleoclimate record of precipitation variability in the Peruvian Amazon to climate model precipitation variability. The paleoclimate (Lake Limon) record indicates that precipitation variability in western Amazonia is 'red' (i.e., increasing variability with timescale). By contrast, most state-of-the-art climate models indicate precipitation variability in this region is nearly 'white' (i.e., equal variability across timescales). This paleo-model disagreement in the overall

  18. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    Directory of Open Access Journals (Sweden)

    Albertus Girik Allo

    2016-06-01

    Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. By applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study uses two data periods, namely 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of the financial sector in economic growth, focusing on 67 countries. The second data set, 2000-2013, determines the role of institutions in the financial sector and economic growth by applying the 2SLS estimation method. We define institutional variables as a set of indicators: Control of Corruption, Political Stability and Absence of Violence, and Voice and Accountability. These indicators show a declining impact of FDI on economic growth.

  19. Assessing variable rate nitrogen fertilizer strategies within an extensively instrumented field site using the MicroBasin model

    Science.gov (United States)

    Ward, N. K.; Maureira, F.; Yourek, M. A.; Brooks, E. S.; Stockle, C. O.

    2014-12-01

    The current use of synthetic nitrogen fertilizers in agriculture has many negative environmental and economic costs, necessitating improved nitrogen management. In the highly heterogeneous landscape of the Palouse region in eastern Washington and northern Idaho, crop nitrogen needs vary widely within a field. Site-specific nitrogen management is a promising strategy to reduce excess nitrogen lost to the environment while maintaining current yields by matching crop needs with inputs. This study used in-situ hydrologic, nutrient, and crop yield data from a heavily instrumented field site in the high precipitation zone of the wheat-producing Palouse region to assess the performance of the MicroBasin model. MicroBasin is a high-resolution watershed-scale ecohydrologic model with nutrient cycling and cropping algorithms based on the CropSyst model. Detailed soil mapping conducted at the site was used to parameterize the model and the model outputs were evaluated with observed measurements. The calibrated MicroBasin model was then used to evaluate the impact of various nitrogen management strategies on crop yield and nitrate losses. The strategies include uniform application as well as delineating the field into multiple zones of varying nitrogen fertilizer rates to optimize nitrogen use efficiency. We present how coupled modeling and in-situ data sets can inform agricultural management and policy to encourage improved nitrogen management.

  20. Flute-like musical instruments: A toy model investigated through numerical continuation

    Science.gov (United States)

    Terrien, Soizic; Vergez, Christophe; Fabre, Benoît

    2013-07-01

    Self-sustained musical instruments (bowed string, woodwind and brass instruments) can be modelled by nonlinear lumped dynamical systems. Among these instruments, flutes and flue organ pipes present the particularity to be modelled as a delay dynamical system. In this paper, such a system, a toy model of flute-like instruments, is studied using numerical continuation. Equilibrium and periodic solutions are explored with respect to the blowing pressure, with focus on amplitude and frequency evolutions along the different solution branches, as well as "jumps" between periodic solution branches. The influence of a second model parameter (namely the inharmonicity) on the behaviour of the system is addressed. It is shown that harmonicity plays a key role in the presence of hysteresis or quasiperiodic regime. Throughout the paper, experimental results on a real instrument are presented to illustrate various phenomena, and allow some qualitative comparisons with numerical results.

  1. Variables influencing the use of derivatives in South Africa – the development of a conceptual model

    Directory of Open Access Journals (Sweden)

    Stefan Schwegler

    2011-03-01

    This paper, which is the first in a two-part series, sets out the development of a conceptual model on the variables influencing investors’ decisions to use derivatives in their portfolios. Investor-specific variables include: the investor’s needs, goals and return expectations, the investor’s knowledge of financial markets, familiarity with different asset classes including derivative instruments, and the investor’s level of wealth and level of risk tolerance. Market-specific variables include: the level of volatility, standardisation, regulation and liquidity in a market, the level of information available on derivatives, the transparency of price determination, taxes, brokerage costs and product availability.

  2. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

    Confounding in a three-binary-variable counterfactual model is discussed in this paper. According to the relationship between the control variable and the covariate variable, we investigate three counterfactual models: one in which the control variable is independent of the covariate variable, one in which the control variable affects the covariate variable, and one in which the covariate variable affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions

  3. Frequency-Zooming ARMA Modeling for Analysis of Noisy String Instrument Tones

    Directory of Open Access Journals (Sweden)

    Paulo A. A. Esquef

    2003-09-01

    This paper addresses model-based analysis of string instrument sounds. In particular, it reviews the application of autoregressive (AR) modeling to sound analysis/synthesis purposes. Moreover, a frequency-zooming autoregressive moving average (FZ-ARMA) modeling scheme is described. The performance of the FZ-ARMA method on modeling the modal behavior of isolated groups of resonance frequencies is evaluated for both synthetic and real string instrument tones immersed in background noise. We demonstrate that the FZ-ARMA modeling is a robust tool to estimate the decay time and frequency of partials of noisy tones. Finally, we discuss the use of the method in synthesis of string instrument sounds.
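
    The frequency-zooming step can be sketched as: heterodyne the band of interest down to DC, low-pass and downsample, fit a low-order AR model to the zoomed signal, and convert pole radii to decay times. The Python sketch below is a simplified illustration of that pipeline (crude boxcar antialiasing, plain least-squares AR fit), not the authors' FZ-ARMA implementation; parameter values are illustrative.

```python
import numpy as np

def zoomed_decay_times(x, fs, f_center, zoom=32, order=8):
    """Estimate modal decay times (s) near f_center via frequency zooming."""
    n = np.arange(len(x))
    z = x * np.exp(-2j * np.pi * f_center / fs * n)    # shift band to DC
    kernel = np.ones(zoom) / zoom                      # crude low-pass
    z = np.convolve(z, kernel, mode="valid")[::zoom]   # decimate by 'zoom'
    fs_z = fs / zoom
    # AR fit by linear prediction: z[t] = sum_k a_k z[t-k]
    A = np.column_stack([z[order - k - 1:len(z) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, z[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -a)))
    return -1.0 / (fs_z * np.log(np.abs(poles)))       # tau = -1/(fs ln|p|)
```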

  4. Instrument Modeling and Synthesis

    Science.gov (United States)

    Horner, Andrew B.; Beauchamp, James W.

    During the 1970s and 1980s, before synthesizers based on direct sampling of musical sounds became popular, replicating musical instruments using frequency modulation (FM) or wavetable synthesis was one of the “holy grails” of music synthesis. Synthesizers such as the Yamaha DX7 allowed users great flexibility in mixing and matching sounds, but were notoriously difficult to coerce into producing sounds like those of a given instrument. Instrument design wizards practiced the mysteries of FM instrument design.

  5. Model SH intelligent instrument for thickness measuring

    International Nuclear Information System (INIS)

    Liu Juntao; Jia Weizhuang; Zhao Yunlong

    1995-01-01

    The authors introduce the Model SH intelligent thickness-measuring instrument, which is based on the principle of beta backscattering, and describe its application range, features, principle of operation, system design, calibration and specifications.

  6. Eutrophication Modeling Using Variable Chlorophyll Approach

    International Nuclear Information System (INIS)

    Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.

    2016-01-01

    In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. During the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion during a time horizon of one year. The results of several model verification tests, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables in constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
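
    For reference, the Nash-Sutcliffe coefficient used to verify the model is a one-liner:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """1 is a perfect fit; values <= 0 mean the model is no better than
    predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```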

  7. Association of Body Mass Index with Depression, Anxiety and Suicide - An Instrumental Variable Analysis of the HUNT Study.

    Directory of Open Access Journals (Sweden)

    Johan Håkon Bjørngaard

    While high body mass index is associated with an increased risk of depression and anxiety, cumulative evidence indicates that it is a protective factor for suicide. The associations from conventional observational studies of body mass index with mental health outcomes are likely to be influenced by reverse causality or confounding by ill-health. In the present study, we investigated the associations between offspring body mass index and parental anxiety, depression and suicide in order to avoid problems with reverse causality and confounding by ill-health. We used data from 32,457 mother-offspring and 27,753 father-offspring pairs from the Norwegian HUNT study. Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale and suicide death from national registers. Associations between offspring and own body mass index and symptoms of anxiety and depression and suicide mortality were estimated using logistic and Cox regression. Causal effect estimates were obtained with a two-sample instrumental-variable approach using offspring body mass index as an instrument for parental body mass index. Both own and offspring body mass index were positively associated with depression, while the results did not indicate any substantial association between body mass index and anxiety. Although precision was low, suicide mortality was inversely associated with own body mass index and the results from the analysis using offspring body mass index supported these results. Adjusted odds ratios per standard deviation body mass index from the instrumental variable analysis were 1.22 (95% CI: 1.05, 1.43) for depression, 1.10 (95% CI: 0.95, 1.27) for anxiety, and the instrumental-variable estimated hazard ratio for suicide was 0.69 (95% CI: 0.30, 1.63). The present study's results indicate that suicide mortality is inversely associated with body mass index. We also found support for a positive association between body mass index and depression, but not
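
    With a single instrument, the two-sample instrumental-variable estimate reduces to a Wald ratio of two regression slopes. A minimal sketch follows (first-order delta-method standard error, hypothetical inputs), illustrating the general estimator rather than the authors' exact procedure:

```python
def wald_ratio(beta_zy, se_zy, beta_zx):
    """Two-sample IV: beta_zy = instrument-outcome slope (e.g., offspring
    BMI vs parental depression), beta_zx = instrument-exposure slope
    (offspring BMI vs parental BMI). The SE treats beta_zx as known."""
    return beta_zy / beta_zx, se_zy / abs(beta_zx)
```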

  8. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
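
    The componentwise boosting idea can be shown in miniature. The sketch below uses simple linear base learners rather than the penalized splines, spatial terms, and tensor products of the paper; it illustrates how the algorithm performs variable selection by updating only the best-fitting component at each step. Predictors are assumed centered.

```python
import numpy as np

def l2_boost(X, y, steps=500, nu=0.1):
    """Componentwise L2 boosting: at each step, fit the current residual
    with the single best univariate least-squares learner and take a small
    step of size nu in that direction."""
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()
    resid = y - intercept
    for _ in range(steps):
        betas = np.array([X[:, j] @ resid / (X[:, j] @ X[:, j]) for j in range(p)])
        sse = np.array([np.sum((resid - betas[j] * X[:, j]) ** 2) for j in range(p)])
        j = int(np.argmin(sse))          # best-fitting component only
        coef[j] += nu * betas[j]
        resid -= nu * betas[j] * X[:, j]
    return intercept, coef               # unselected variables keep coef 0
```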

  9. Measuring Instrument Constructs of Return Factors for Green Office Building Investments Variables Using Rasch Measurement Model

    Directory of Open Access Journals (Sweden)

    Isa Mona

    2016-01-01

    This paper is a preliminary study on rationalising green office building investments in Malaysia. It attempts to introduce the application of Rasch measurement model analysis to determine the validity and reliability of each construct in the questionnaire. To achieve this objective, a questionnaire survey consisting of six sections was developed, and a total of 106 responses were received from various investors who own and lease office buildings in Kuala Lumpur. The Rasch measurement analysis is used to control the quality of the item constructs in the instrument by measuring specific objectivity within the same dimension, reducing ambiguous measures, and providing a realistic estimation of precision and implicit quality. The Rasch analysis consists of summary statistics, item unidimensionality and item measures. The results show that item and respondent (person) reliability are 0.91 and 0.95, respectively.
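
    For context, the dichotomous Rasch model underlying the person and item reliability figures is a one-parameter logistic item response function:

```python
import numpy as np

def rasch_prob(theta, b):
    """Probability that a person of ability theta endorses an item of
    difficulty b: exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))
```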

  10. NASA Instrument Cost/Schedule Model

    Science.gov (United States)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
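
    The NICM estimating equations themselves are not reproduced in this record, but the general shape of such a cost-estimating relationship, a log-linear regression validated by bootstrap cross-validation, can be sketched as follows; the cost drivers, exponents, and data are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 140
        mass = rng.uniform(5, 200, n)        # kg (hypothetical driver)
        power = rng.uniform(5, 300, n)       # W  (hypothetical driver)
        cost = 2.0 * mass**0.6 * power**0.3 * rng.lognormal(0, 0.2, n)

        X = np.column_stack([np.ones(n), np.log(mass), np.log(power)])
        y = np.log(cost)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted CER coefficients

        # bootstrap cross validation: fit on a resample, score on out-of-bag points
        errs = []
        for _ in range(500):
            idx = rng.integers(0, n, n)
            oob = np.setdiff1d(np.arange(n), idx)
            b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
            errs.append(np.mean((y[oob] - X[oob] @ b) ** 2))
        print("CER exponents:", beta[1:], " bootstrap CV MSE:", np.mean(errs))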

  11. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
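
    A minimal sketch of the correction described above, under stated assumptions: instrument-reported uncertainties supply the measurement-error variance, a reliability coefficient follows, and the OLS slope is disattenuated by dividing by it. All numbers are simulated, not KXRF data.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1000
        true_pb = rng.normal(15, 5, n)                 # true bone lead
        se = rng.uniform(2, 4, n)                      # per-measurement uncertainty
        obs_pb = true_pb + rng.normal(0, se)           # KXRF-like noisy reading
        outcome = 0.5 * true_pb + rng.normal(0, 3, n)

        # reliability = true-score variance / observed variance
        err_var = np.mean(se**2)
        reliability = (np.var(obs_pb) - err_var) / np.var(obs_pb)

        beta_ols = np.polyfit(obs_pb, outcome, 1)[0]
        beta_eiv = beta_ols / reliability              # disattenuated estimate
        print(f"OLS {beta_ols:.3f} -> EIV {beta_eiv:.3f} (true 0.5)")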

  12. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object...... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...

  13. 26 CFR 1.1275-5 - Variable rate debt instruments.

    Science.gov (United States)

    2010-04-01

    ... nonpublicly traded property. A debt instrument (other than a tax-exempt obligation) that would otherwise... variations in the cost of newly borrowed funds in the currency in which the debt instrument is denominated... on the yield of actively traded personal property (within the meaning of section 1092(d)(1)). (ii...

  14. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights: •Off-line estimation approach for continuous-time domain for non-invertible function. •Model reformulated to multi-input-single-output; nonlinearity described by sigmoid. •Method directly estimates parameters of nonlinear ECM from the measured data. •Iterative on-line technique leads to smoother convergence. •The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
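
    The online stage of this approach relies on recursive least squares; a generic RLS update with a forgetting factor is sketched below. The regressor vector phi would be assembled from the reformulated multi-input, single-output model, which this record does not reproduce, so the function is a placeholder for that step.

        import numpy as np

        def rls_step(theta, P, phi, y, lam=0.999):
            """One recursive least squares update with forgetting factor lam."""
            k = P @ phi / (lam + phi @ P @ phi)      # gain vector
            theta = theta + k * (y - phi @ theta)    # parameter update
            P = (P - np.outer(k, phi @ P)) / lam     # covariance update
            return theta, P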

  15. Perception and Modeling of Affective Qualities of Musical Instrument Sounds across Pitch Registers.

    Science.gov (United States)

    McAdams, Stephen; Douglas, Chelsea; Vempala, Naresh N

    2017-01-01

    Composers often pick specific instruments to convey a given emotional tone in their music, partly due to their expressive possibilities, but also due to their timbres in specific registers and at given dynamic markings. Of interest to both music psychology and music informatics from a computational point of view is the relation between the acoustic properties that give rise to the timbre at a given pitch and the perceived emotional quality of the tone. Musician and nonmusician listeners were presented with 137 tones produced at a fixed dynamic marking (forte) at pitch class D# across each instrument's entire pitch range and with different playing techniques for standard orchestral instruments drawn from the brass, woodwind, string, and pitched percussion families. They rated each tone on six analogical-categorical scales in terms of emotional valence (positive/negative and pleasant/unpleasant), energy arousal (awake/tired), tension arousal (excited/calm), preference (like/dislike), and familiarity. Linear mixed models revealed interactive effects of musical training, instrument family, and pitch register, with non-linear relations between pitch register and several dependent variables. Twenty-three audio descriptors from the Timbre Toolbox were computed for each sound and analyzed in two ways: linear partial least squares regression (PLSR) and nonlinear artificial neural net modeling. These two analyses converged in terms of the importance of various spectral, temporal, and spectrotemporal audio descriptors in explaining the emotion ratings, but some differences also emerged. Different combinations of audio descriptors make major contributions to the three emotion dimensions, suggesting that they are carried by distinct acoustic properties. Valence is more positive with lower spectral slopes, a greater emergence of strong partials, and an amplitude envelope with a sharper attack and earlier decay. Higher tension arousal is carried by brighter sounds
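
    As a pointer to the first of the two analyses, a partial least squares regression of one emotion dimension on a descriptor matrix takes only a few lines; the data below are random stand-ins for the 137 tones and 23 Timbre Toolbox descriptors.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        X = rng.normal(size=(137, 23))        # 23 audio descriptors per tone
        valence = X @ rng.normal(size=23) + rng.normal(size=137)  # fake ratings

        pls = PLSRegression(n_components=5).fit(X, valence)
        print("R^2 on training data:", pls.score(X, valence))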

  16. Instrument evaluation no. 11. ESI nuclear model 271 C contamination monitor

    International Nuclear Information System (INIS)

    Burgess, P.H.; Iles, W.J.

    1978-06-01

    The various radiations encountered in radiological protection cover a wide range of energies, and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to identify the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered, together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the appropriate Recommendations of the International Electrotechnical Commission. The radiations used in the tests are, in general, selected from the range of reference radiations for instrument calibration being drawn up by the International Standards Organisation. Normally, each report deals with the capabilities and limitations of one model of instrument, and no direct comparison with other instruments intended for similar purposes is made, since the significance of particular performance characteristics largely depends on the radiations and environmental conditions in which the instrument is to be used. The results quoted here have all been obtained from tests on instruments in routine production, with the appropriate measurements being made by the NRPB. This report deals with the ESI Nuclear Model 271 C, a general-purpose contamination monitor comprising a GM tube connected by a coiled extensible cable to a ratemeter

  17. New earth system model for optical performance evaluation of space instruments.

    Science.gov (United States)

    Ryu, Dongok; Kim, Sug-Whan; Breault, Robert P

    2017-03-06

    In this study, a new global earth system model is introduced for evaluating the optical performance of space instruments. Simultaneous imaging and spectroscopic results are provided using this global earth system model with fully resolved spatial, spectral, and temporal coverage of sub-models of the Earth. The sun sub-model is a Lambertian scattering sphere with a 6-h scale and 295 lines of solar spectral irradiance. The atmospheric sub-model has a 15-layer three-dimensional (3D) ellipsoid structure. The land sub-model uses spectral bidirectional reflectance distribution functions (BRDF) defined by a semi-empirical parametric kernel model. The ocean is modeled with the ocean spectral albedo after subtracting the total integrated scattering of the sun-glint scatter model. A hypothetical two-mirror Cassegrain telescope with a 300-mm-diameter aperture and a 21.504 mm × 21.504-mm focal plane imaging instrument is designed. The simulated image results are compared with observational data from HRI-VIS measurements during the EPOXI mission for approximately 24 h from UTC Mar. 18, 2008. Next, the defocus mapping and edge spread function (ESF) measurement results show that the distance between the primary and secondary mirror increases by 55.498 μm from the diffraction-limited condition. The shift of the focal plane is determined to be 5.813 mm shorter than that of the defocused focal plane, and this result is confirmed through point spread function (PSF) measurements. This study shows that the earth system model combined with an instrument model is a powerful tool that can greatly help the development phase of instrument missions.

  18. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between
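
    For reference, the normal ogive model links a latent predictor to the response probability through the standard normal CDF; a one-function sketch (the parameter names are generic IRT conventions, not the paper's notation):

        from scipy.stats import norm

        def normal_ogive(theta, a=1.0, b=0.0):
            """P(item response = 1) given latent theta, discrimination a, difficulty b."""
            return norm.cdf(a * (theta - b))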

  19. Effect of corruption on healthcare satisfaction in post-soviet nations: A cross-country instrumental variable analysis of twelve countries.

    Science.gov (United States)

    Habibov, Nazim

    2016-03-01

    There is a lack of consensus about the effect of corruption on healthcare satisfaction in transitional countries. Interpreting the burgeoning literature on this topic has proven difficult due to reverse causality and omitted variable bias. In this study, the effect of corruption on healthcare satisfaction is investigated in a set of 12 Post-Socialist countries using instrumental variable regression on the sample of the 2010 Life in Transition survey (N = 8655). The results indicate that experiencing corruption significantly reduces healthcare satisfaction. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
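
    A bare-bones two-stage least squares routine of the kind used in such cross-country analyses; here z, x, and y stand in for the instrument, experienced corruption, and healthcare satisfaction, and the paper's actual instrument is not reproduced.

        import numpy as np

        def two_sls(y, x, z):
            """Two-stage least squares with one instrument z for one endogenous x."""
            Z = np.column_stack([np.ones_like(z), z])
            # stage 1: regress the endogenous regressor on the instrument
            x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
            X = np.column_stack([np.ones_like(x_hat), x_hat])
            # stage 2: regress the outcome on the fitted values; return the slope
            return np.linalg.lstsq(X, y, rcond=None)[0][1]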

  20. Automatic creation of Markov models for reliability assessment of safety instrumented systems

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2008-01-01

    After the release of new international functional safety standards like IEC 61508, people care more about the safety and availability of safety instrumented systems. Markov analysis is a powerful and flexible technique to assess the reliability measures of safety instrumented systems, but it is fallible and time-consuming to create Markov models manually. This paper presents a new technique to automatically create Markov models for reliability assessment of safety instrumented systems. Many safety related factors, such as failure modes, self-diagnostics, restorations, common cause and voting, are included in the Markov models. A framework is generated first based on voting, failure modes and self-diagnostics. Then, repairs and common-cause failures are incorporated into the framework to build a complete Markov model. Eventual simplification of Markov models can be done by state merging. Examples given in this paper show how explosively the size of a Markov model increases as the system becomes slightly more complicated, as well as the advantages of automatic creation of Markov models
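
    As a toy version of the idea, the sketch below assembles a transition-rate matrix for a two-channel (1oo2) system from invented failure and repair rates and solves for the steady-state probabilities; the paper's framework covers far richer factors (diagnostics, common cause, voting) than this hand-built example.

        import numpy as np

        lam_d, mu = 1e-5, 1e-2    # dangerous failure and repair rates (per hour)
        # states: 0 = both channels OK, 1 = one failed, 2 = both failed (1oo2)
        Q = np.array([
            [-2*lam_d,  2*lam_d,      0.0],
            [mu,       -(mu + lam_d), lam_d],
            [0.0,       mu,          -mu],
        ])

        # steady-state probabilities: pi Q = 0 subject to sum(pi) = 1
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi = np.linalg.lstsq(A, b, rcond=None)[0]
        print("P(system failed) =", pi[2])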

  1. Ares I Scale Model Acoustic Tests Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas D.

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116. The test article included a 5% scale Ares I vehicle model and tower mounted on the Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments located throughout the test article. There were four primary ASMAT instrument suites: ignition overpressure (IOP), lift-off acoustics (LOA), ground acoustics (GA), and spatial correlation (SC). Each instrumentation suite incorporated different sensor models which were selected based upon measurement requirements. These requirements included the type of measurement, exposure to the environment, instrumentation check-outs and data acquisition. The sensors were attached to the test article using different mounts and brackets dependent upon the location of the sensor. This presentation addresses the observed effect of the sensors and mounts on the acoustic and pressure measurements.

  2. Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

    Science.gov (United States)

    Hydro2k Consortium, Pages

    2017-12-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform

  3. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
    - Covers a wide class of important models
    - Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data
    - Includes illustrative examples with real data sets from business, education, medicine, public health and sociology
    - Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques

  4. Climate Informed Economic Instruments to Enhance Urban Water Supply Resilience to Hydroclimatological Variability and Change

    Science.gov (United States)

    Brown, C.; Carriquiry, M.; Souza Filho, F. A.

    2006-12-01

    Hydroclimatological variability presents acute challenges to urban water supply providers. The impact is often most severe in developing nations where hydrologic and climate variability can be very high, water demand is unmet and increasing, and the financial resources to mitigate the social effects of that variability are limited. Furthermore, existing urban water systems face a reduced solution space, constrained by competing and conflicting interests, such as irrigation demand, recreation and hydropower production, and new (relative to system design) demands to satisfy environmental flow requirements. These constraints magnify the impacts of hydroclimatic variability and increase the vulnerability of urban areas to climate change. The high economic and social costs of structural responses to hydrologic variability, such as groundwater utilization and the construction or expansion of dams, create a need for innovative alternatives. Advances in hydrologic and climate forecasting, and the increasing sophistication and acceptance of incentive-based mechanisms for achieving economically efficient water allocation offer potential for improving the resilience of existing water systems to the challenge of variable supply. This presentation will explore the performance of a system of climate informed economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on water-sensitive stakeholders. The system is comprised of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Contract and insurance parameters are linked to forecasts and the evolution of seasonal precipitation and streamflow and designed for financial and political viability. A simulation of system performance is presented based on ongoing work in Metro Manila, Philippines. The

  5. Transient response of level instruments in a research reactor

    International Nuclear Information System (INIS)

    Cheng, Lap Y.

    1989-01-01

    A numerical model has been developed to simulate the dynamics of water level instruments in a research nuclear reactor. A bubbler device, with helium gas as the working fluid, is used to monitor liquid level by sensing the static head pressure due to the height of liquid in the reactor vessel. A finite-difference model is constructed to study the transient response of the water level instruments to pressure perturbations. The field equations which describe the hydraulics of the helium gas in the bubbler device are arranged in the form of a tridiagonal matrix, and the field variables are solved at each time step by the Thomas algorithm. Simulation results indicate that the dynamic response of the helium gas depends mainly on the volume and the inertia of the gas in the level instrument tubing. The anomalies in the simulated level indication are attributed to the inherent lag in the level instrument due to the hydraulics of the system. 1 ref., 5 figs
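
    The tridiagonal solve mentioned in the abstract is the classical Thomas algorithm; a generic implementation follows (the bubbler model's actual coefficients are not given in this record, so the closing check uses an arbitrary small system).

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # quick check against a dense solve on a small system
        n = 5
        A = np.diag(np.full(n, 4.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
        d = np.arange(1.0, n + 1)
        x = thomas(np.r_[0.0, np.ones(n - 1)], np.full(n, 4.0), np.r_[np.ones(n - 1), 0.0], d)
        assert np.allclose(A @ x, d)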

  6. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    Science.gov (United States)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument constructed using the CIPP model for the implementation of portfolio assessment in science learning. The study used the research and development (R&D) method, adapting the 4-D model to the development of a non-test instrument, with the evaluation instrument constructed according to the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of problems and needs, 2) a questionnaire to gauge the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to elicit responses to the portfolio assessment instrument. The data were quantitative, obtained from several validators. The validators consist of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper presents the content validity results obtained from the validators and the analysis of these data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is suitable for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that the instrument is valid and can be used in the limited trial and operational field trial.
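
    Aiken's V, used above for content validity, has a simple closed form: with n raters scoring an item on a scale from lo to hi, V = Σ(r − lo) / (n·(hi − lo)). A minimal computation with hypothetical ratings from the seven validators:

        def aikens_v(ratings, lo=1, hi=5):
            """Aiken's content-validity coefficient for one item."""
            n = len(ratings)
            return sum(r - lo for r in ratings) / (n * (hi - lo))

        # seven hypothetical validators (two experts, two practitioners, three colleagues)
        print(aikens_v([5, 4, 5, 5, 4, 5, 5]))   # -> 0.93, inside the reported 0.86-1.00 band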

  7. The demand-control model for job strain: a commentary on different ways to operationalize the exposure variable

    Directory of Open Access Journals (Sweden)

    Márcia Guimarães de Mello Alves

    2015-01-01

    Full Text Available Demand-control has been the most widely used model to study job strain in various countries. However, researchers have used the model differently, thus hindering the comparison of results. Such heterogeneity appears both in the study instrument used and in the definition of the main exposure variable - high strain. This cross-sectional study aimed to assess differences between various ways of operationalizing job strain through their association with prevalent hypertension in a cohort of workers (Pro-Health Study). No difference in the association between high job strain and hypertension was found according to the different ways of operationalizing exposure, even though prevalence varied widely according to the adopted form, from 19.6% for quadrants to 42% for the subtraction tertile. The authors recommend further studies to define the cutoff for exposure variables using combined subjective and objective data.

  8. Asymptotics of diagonal elements of projection matrices under many instruments/regressors

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Yaskov, P.

    2017-01-01

    Roč. 33, č. 3 (2017), s. 717-738 ISSN 0266-4666 Institutional support: RVO:67985998 Keywords : instrumental variable estimation * inference * models Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 1.011, year: 2016

  9. Comparing surgical trays with redundant instruments with trays with reduced instruments: a cost analysis.

    Science.gov (United States)

    John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W

    2016-01-01

    When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.

  10. The long view: Causes of climate change over the instrumental period

    Science.gov (United States)

    Hegerl, G. C.; Schurer, A. P.; Polson, D.; Iles, C. E.; Bronnimann, S.

    2016-12-01

    The period of instrumentally recorded data has seen remarkable changes in climate, with periods of rapid warming, and periods of stagnation or cooling. A recent analysis of the observed temperature change from the instrumental record confirms that most of the warming recorded since the middle of the 20th century has been caused by human influences, but shows large uncertainty in separating the greenhouse gas from the aerosol response when accounting for model uncertainty. The contribution by natural forcing and internal variability to the recent warming is estimated to be small, but becomes more important when analysing climate change over earlier or shorter time periods. For example, the enigmatic early 20th century warming was a period of strong climate anomalies, including the US Dust Bowl drought and exceptional heat waves, and pronounced Arctic warming. Attribution results suggest that about half of the global warming 1901-1950 was forced by greenhouse gas increases, with an anomalously strong contribution by climate variability, and contributions by natural forcing. Long term variations in circulation are important for some regional climate anomalies. Precipitation is important for impacts of climate change, and precipitation changes are uncertain in models. Analysis of the instrumental record suggests a human influence on mean and heavy precipitation, and supports climate model estimates of the spatial pattern of precipitation sensitivity to warming. Broadly, and particularly over ocean, wet regions are getting wetter and dry regions are getting drier. In conclusion, the historical record provides evidence for a strong response to external forcings, supports climate models, and raises questions about multi-decadal variability.

  11. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
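
    One of the compared selection methods, random forests, ranks candidate variables by impurity-based importance; the sketch below uses synthetic basin data, since the real 918-basin dataset and its characteristics are not reproduced in this record.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(4)
        X = rng.normal(size=(918, 10))               # 10 basin characteristics
        q95 = 2 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=918)  # a percentile flow

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, q95)
        ranking = np.argsort(rf.feature_importances_)[::-1]
        print("variables ranked by importance:", ranking)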

  12. Asymptotics of diagonal elements of projection matrices under many instruments/regressors

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Yaskov, P.

    2017-01-01

    Roč. 33, č. 3 (2017), s. 717-738 ISSN 0266-4666 Institutional support: Progres-Q24 Keywords : instrumental variable estimation * inference * models Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 1.011, year: 2016

  13. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. It discusses the methodology for collecting the data, the definition of the statistical analysis methodology, single-variable model results, testing of historical models, and an introduction of the multi-variable models.

  14. How to get rid of W: a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, H.; Oud, J.

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

  15. How to get rid of W : a latent variables approach to modelling spatially lagged variables

    NARCIS (Netherlands)

    Folmer, Henk; Oud, Johan

    2008-01-01

    In this paper we propose a structural equation model (SEM) with latent variables to model spatial dependence. Rather than using the spatial weights matrix W, we propose to use latent variables to represent spatial dependence and spillover effects, of which the observed spatially lagged variables are

  16. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

    Full Text Available The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve 'model skill' and overall performance. It has been an area of concern that the uncertainty inherent to the stochastical/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to the cases where the model is tuned during an iterative process, where model results are fed back to improve model parameters, such as bathymetry. An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSUs and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the developing time scale is estimated for each region, and great regional differences are found – in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events. A particular event is examined in detail to shed light on how the ensemble 'behaves' in periods with large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) as well as the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate to large ensemble variability. These findings bear

  17. Instrumental variables estimates of peer effects in social networks.

    Science.gov (United States)

    An, Weihua

    2015-03-01

    Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violations of exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimated and imprecise estimates of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  19. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    Energy Technology Data Exchange (ETDEWEB)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Harrington, K.; Rostem, K.; Watts, D. J. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Chuss, D. T. [Department of Physics, Villanova University, 800 E Lancaster, Villanova, PA 19085 (United States); Wollack, E. J.; Fixsen, D. J.; Moseley, S. H.; Switzer, E. R., E-mail: Nathan.J.Miller@nasa.gov [Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guide experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limits the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  20. Dynamic Variables Fail to Predict Fluid Responsiveness in an Animal Model With Pericardial Effusion.

    Science.gov (United States)

    Broch, Ole; Renner, Jochen; Meybohm, Patrick; Albrecht, Martin; Höcker, Jan; Haneya, Assad; Steinfath, Markus; Bein, Berthold; Gruenewald, Matthias

    2016-10-01

    The reliability of dynamic and volumetric variables of fluid responsiveness in the presence of pericardial effusion is still elusive. The aim of the present study was to investigate their predictive power in a porcine model with hemodynamically relevant pericardial effusion. A single-center animal investigation. Twelve German domestic pigs. Pigs were studied before and during pericardial effusion. Instrumentation included a pulmonary artery catheter and a transpulmonary thermodilution catheter in the femoral artery. Hemodynamic variables like cardiac output (COPAC) and stroke volume (SVPAC) derived from the pulmonary artery catheter, global end-diastolic volume (GEDV), stroke volume variation (SVV), and pulse-pressure variation (PPV) were obtained. At baseline, SVV, PPV, GEDV, COPAC, and SVPAC reliably predicted fluid responsiveness (area under the curve 0.81 [p = 0.02], 0.82 [p = 0.02], 0.74 [p = 0.07], 0.74 [p = 0.07], 0.82 [p = 0.02]). After establishment of pericardial effusion, the predictive power of the dynamic variables was impaired, and only COPAC, SVPAC, and GEDV allowed significant prediction of fluid responsiveness (area under the curve 0.77 [p = 0.04], 0.76 [p = 0.05], 0.83 [p = 0.01]), with clinically relevant changes in threshold values. In this porcine model, hemodynamically relevant pericardial effusion abolished the ability of dynamic variables to predict fluid responsiveness. COPAC, SVPAC, and GEDV enabled prediction, but their threshold values were significantly changed. Copyright © 2016 Elsevier Inc. All rights reserved.
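
    The predictive power reported above is the area under the ROC curve; computing it from responder labels and a predictor such as SVV is a single call (the values below are invented, not the study's measurements).

        import numpy as np
        from sklearn.metrics import roc_auc_score

        responder = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0])   # fluid responders
        svv = np.array([14, 16, 8, 13, 9, 7, 15, 10, 12, 6, 17, 11])  # predictor values
        print("AUC:", roc_auc_score(responder, svv))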

  1. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1984-09-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  2. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1985-01-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  3. Determinants of The Application of Macro Prudential Instruments

    Directory of Open Access Journals (Sweden)

    Zakaria Firano

    2017-09-01

    Full Text Available The use of macro prudential instruments today gives rise to a major debate within the walls of central banks and other authorities in charge of financial stability. Contrary to micro prudential instruments, whose effects remain limited, macro prudential instruments are different in nature and can affect the stability of the financial system. Because they influence the financial cycle and the financial structure of financial institutions, such instruments should be used with great vigilance as well as macroeconomic and financial expertise. But the experiences of central banks in this area are sketchy, and only some emerging countries have experience using these types of instruments in different ways. This paper presents an analysis of the instruments of macro prudential policy and attempts to demonstrate empirically that these instruments should be used only in specific economic and financial situations. Indeed, the results obtained, using bivariate panel modeling, confirm that these instruments are more effective when used to mitigate the euphoria of financial and economic cycles. In this sense, the output gap, describing the economic cycle, and the Z-score are the intermediate variables for the activation of capital instruments. Moreover, the liquidity ratio and changes in bank profitability are the two early warning indicators for the activation of liquidity instruments.

  4. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  5. On the explaining-away phenomenon in multivariate latent variable models.

    Science.gov (United States)

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
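
    A tiny numeric illustration of explaining away, with two independent binary latent causes A and B of an observed X: conditioning on X alone leaves A likely, but additionally observing B = 1 "explains away" X and lowers the probability that A = 1.

        # A, B ~ independent Bernoulli(0.5); X = 1 if A or B is 1 (deterministic OR).
        from itertools import product

        p = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}   # joint over (A, B)
        x = {(a, b): int(a or b) for a, b in p}                    # observed X

        def cond(event, given):
            num = sum(p[s] for s in p if event(s) and given(s))
            den = sum(p[s] for s in p if given(s))
            return num / den

        print(cond(lambda s: s[0] == 1, lambda s: x[s] == 1))                 # 2/3
        print(cond(lambda s: s[0] == 1, lambda s: x[s] == 1 and s[1] == 1))   # 1/2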

  6. Collaborative Proposal: Improving Decadal Prediction of Arctic Climate Variability and Change Using a Regional Arctic System Model (RASM)

    Energy Technology Data Exchange (ETDEWEB)

    Maslowski, Wieslaw [Naval Postgraduate School, Monterey, CA (United States)

    2016-10-17

    This project aims to develop, apply and evaluate a regional Arctic System model (RASM) for enhanced decadal predictions. Its overarching goal is to advance understanding of the past and present states of arctic climate and to facilitate improvements in seasonal to decadal predictions. In particular, it will focus on variability and long-term change of energy and freshwater flows through the arctic climate system. The project will also address modes of natural climate variability as well as extreme and rapid climate change in a region of the Earth that is: (i) a key indicator of the state of global climate through polar amplification and (ii) which is undergoing environmental transitions not seen in instrumental records. RASM will readily allow the addition of other earth system components, such as ecosystem or biochemistry models, thus allowing it to facilitate studies of climate impacts (e.g., droughts and fires) and of ecosystem adaptations to these impacts. As such, RASM is expected to become a foundation for more complete Arctic System models and part of a model hierarchy important for improving climate modeling and predictions.

  7. Testing of Environmental Satellite Bus-Instrument Interfaces Using Engineering Models

    Science.gov (United States)

    Gagnier, Donald; Hayner, Rick; Nosek, Thomas; Roza, Michael; Hendershot, James E.; Razzaghi, Andrea I.

    2004-01-01

    This paper discusses the formulation and execution of a laboratory test of the electrical interfaces between multiple atmospheric scientific instruments and the spacecraft bus that carries them. The testing, performed in 2002, used engineering models of the instruments and the Aura spacecraft bus electronics. Aura is one of NASA s Earth Observatory System missions. The test was designed to evaluate the complex interfaces in the command and data handling subsystems prior to integration of the complete flight instruments on the spacecraft. A problem discovered during the flight integration phase of the observatory can cause significant cost and schedule impacts. The tests successfully revealed problems and led to their resolution before the full-up integration phase, saving significant cost and schedule. This approach could be beneficial for future environmental satellite programs involving the integration of multiple, complex scientific instruments onto a spacecraft bus.

  8. A mathematical model for describing the mechanical behaviour of root canal instruments.

    Science.gov (United States)

    Zhang, E W; Cheung, G S P; Zheng, Y F

    2011-01-01

    The purpose of this study was to establish a general mathematical model for describing the mechanical behaviour of root canal instruments by combining a theoretical analytical approach with a numerical finite-element method. Mathematical formulas representing the longitudinal (taper, helical angle and pitch) and cross-sectional configurations and area, the bending and torsional inertia, the curvature of the boundary point and the (geometry of) loading condition were derived. Torsional and bending stresses and the resultant deformation were expressed mathematically as a function of these geometric parameters, the modulus of elasticity of the material and the applied load. As illustrations, three brands of NiTi endodontic files of different cross-sectional configurations (ProTaper, Hero 642, and Mani NRT) were analysed under pure torsion and pure bending by entering the model into a finite-element analysis package (ANSYS). Numerical results confirmed that the mathematical model is a feasible way to analyse the mechanical properties and predict the stress and deformation of root canal instruments during root canal preparation. A mathematical and numerical model can thus be a suitable means of examining mechanical behaviour as a criterion for instrument design and of predicting the stress and strain experienced by endodontic instruments during root canal preparation. © 2010 International Endodontic Journal.

  9. Instrumental variable analysis as a complementary analysis in studies of adverse effects : venous thromboembolism and second-generation versus third-generation oral contraceptives

    NARCIS (Netherlands)

    Boef, Anna G C; Souverein, Patrick C|info:eu-repo/dai/nl/243074948; Vandenbroucke, Jan P; van Hylckama Vlieg, Astrid; de Boer, Anthonius|info:eu-repo/dai/nl/075097346; le Cessie, Saskia; Dekkers, Olaf M

    2016-01-01

    PURPOSE: A potentially useful role for instrumental variable (IV) analysis may be as a complementary analysis to assess the presence of confounding when studying adverse drug effects. There has been discussion on whether the observed increased risk of venous thromboembolism (VTE) for

  10. Development of collaborative-creative learning model using virtual laboratory media for instrumental analytical chemistry lectures

    Science.gov (United States)

    Zurweni, Wibawa, Basuki; Erwin, Tuti Nurian

    2017-08-01

    The framework for teaching and learning in the 21st century was prepared with the 4Cs criteria. One way of providing opportunities for the development of students' creative skills is to implement collaborative learning, in which learners are challenged to compete, to work independently, to achieve individual or group excellence, and to master the learning material. A virtual laboratory is used as the medium for Instrumental Analytical Chemistry (Vis, UV-Vis, AAS, etc.) lectures through computer simulation applications, substituting for the laboratory when equipment and instruments are not available. This research aims to design and develop a collaborative-creative learning model using virtual laboratory media for Instrumental Analytical Chemistry lectures and to determine the effectiveness of this model, which adapts the Dick & Carey and Hannafin & Peck design models. The development steps of this model are: needs analysis, design of collaborative-creative learning, virtual laboratory media using Macromedia Flash, formative evaluation, and testing of learning model effectiveness. The stages of the collaborative-creative learning model are: apperception, exploration, collaboration, creation, evaluation, and feedback. The collaborative-creative learning model using virtual laboratory media can be used to improve the quality of learning in the classroom and to overcome the limited availability of laboratory instruments for real instrumental analysis. Formative test results show that the collaborative-creative learning model developed meets the requirements. The effectiveness test comparing students' pretest and posttest results proves significant at the 95% confidence level, with the t-statistic exceeding the critical value. It can be concluded that this learning model is effective for Instrumental Analytical Chemistry lectures.

  11. A SHARIA RETURN AS AN ALTERNATIVE INSTRUMENT FOR MONETARY POLICY

    Directory of Open Access Journals (Sweden)

    Ashief Hamam

    2011-09-01

    Full Text Available Rapid development in the Islamic financial industry has not been supported by sharia monetary policy instruments. This study looks at the possibility of sharia returns as such an instrument. Using both an error correction model and a vector error correction model to estimate data from 2002(1) to 2010(12), this paper finds that the sharia return has the same effect as the interest rate in the demand for money. The shock effect of the sharia return on the broad money supply, Gross Domestic Product, and the Consumer Price Index is greater than that of the interest rate. In addition, these three variables become stable more quickly following a shock to the sharia return. Keywords: Sharia return, Islamic financial system, vector error correction model. JEL classification numbers: E52, G15

  12. Study of parental models: building an instrument for their exploration

    Directory of Open Access Journals (Sweden)

    José Francisco Martínez Licona

    2014-08-01

    Full Text Available Objective: This research presents the construction of an attributional questionnaire concerning the different parental models and factors involved in family interactions. Method: A mixed methodology was used as a foundation to develop the items and the respective pilot studies that allowed checking the validity and internal consistency of the instrument using expert judgment. Results: An instrument of 36 statements was organized into 12 categories to explore parental models according to the following factors: parental models, child-rearing patterns, attachment bonds, and guidelines for success promoted within family contexts. Analyzing these factors contributes to understanding children's development within the family environment and to opportunities for socio-educational intervention. Conclusion: It is assumed that the family context is as decisive as the school context; therefore, exploring the nature of parental models is required to understand the features and influences that contribute to the development of young people in any social context.

  13. Warped Linear Prediction of Physical Model Excitations with Applications in Audio Compression and Instrument Synthesis

    Science.gov (United States)

    Glass, Alexis; Fukudome, Kimitoshi

    2004-12-01

    A sound recording of a plucked string instrument is encoded and resynthesized using two stages of prediction. In the first stage of prediction, a simple physical model of a plucked string is estimated and the instrument excitation is obtained. The second stage of prediction compensates for the simplicity of the model in the first stage by encoding either the instrument excitation or the model error using warped linear prediction. These two methods of compensation are compared with each other and with the case of single-stage warped linear prediction; adjustments are introduced, and their applications to instrument synthesis and to MPEG-4 audio compression within the structured audio format are discussed.

  14. Use of a life-size three-dimensional-printed spine model for pedicle screw instrumentation training.

    Science.gov (United States)

    Park, Hyun Jin; Wang, Chenyu; Choi, Kyung Ho; Kim, Hyong Nyun

    2018-04-16

Training beginners in the pedicle screw instrumentation technique in the operating room is limited by issues of patient safety and surgical efficiency. Three-dimensional (3D) printing enables training or simulation surgery on a real-size replica of a deformed spine, which is difficult to perform in the usual cadaver or surrogate plastic models. The purpose of this study was to evaluate the educational effect of using a real-size 3D-printed spine model for training beginners in the free-hand pedicle screw instrumentation technique. We asked whether the use of a 3D spine model can improve (1) screw instrumentation accuracy and (2) length of procedure. Twenty life-size 3D-printed lumbar spine models were made from 10 volunteers (two models for each volunteer). Two novice surgeons who had no experience of the free-hand pedicle screw instrumentation technique were instructed by an experienced surgeon, and each inserted 10 pedicle screws for each lumbar spine model. Computed tomography scans of the spine models were obtained to evaluate screw instrumentation accuracy, and the time to complete the procedure was recorded. The results of the latter 10 spine models were compared with those of the former 10 models to evaluate the learning effect. A total of 37/200 screws (18.5%) perforated the pedicle cortex, with a mean perforation of 1.7 mm (range, 1.2-3.3 mm). However, the latter half of the models had significantly fewer violations than the former half (10/100 vs. 27/100; the difference was statistically significant). A life-size 3D-printed spine model can be an excellent tool for training beginners in the free-hand pedicle screw instrumentation technique.

  15. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  16. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  17. The effects of competition on premiums: using United Healthcare's 2015 entry into Affordable Care Act's marketplaces as an instrumental variable.

    Science.gov (United States)

    Agirdas, Cagdas; Krebs, Robert J; Yano, Masato

    2018-01-08

One goal of the Affordable Care Act is to increase insurance coverage by improving competition and lowering premiums. To facilitate this goal, the federal government enacted online marketplaces in the 395 rating areas spanning the 34 states that chose not to establish their own state-run marketplaces. Multivariate regression studies analyzing the effects of competition on premiums typically suffer from endogeneity, due to simultaneity and omitted variable biases. However, United Healthcare's decision to enter these marketplaces in 2015 provides the researcher with an opportunity to address this endogeneity problem. Exploiting the variation caused by United Healthcare's entry decision as an instrument for competition, we study the impact of competition on premiums during the first 2 years of these marketplaces. Combining panel data from five different sources and controlling for 12 variables, we find that one more insurer in a rating area leads to a 6.97% reduction in the second-lowest-priced silver plan premium, which is larger than the estimated effects in the existing literature. Furthermore, we run a threshold analysis and find that competition's effects on premiums become statistically insignificant if there are four or more insurers in a rating area. These findings are robust to alternative measures of premiums, inclusion of a non-linear term in the regression models, and a county-level analysis.
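The identification strategy lends itself to a standard two-stage least squares estimation. A hedged sketch with the linearmodels package follows; the column names (premium, n_insurers, uhc_entry, and the controls) are hypothetical stand-ins, not the authors' actual variables:

```python
import pandas as pd
from linearmodels.iv import IV2SLS

panel = pd.read_csv("rating_area_panel.csv")  # hypothetical rating-area panel

# Second-lowest silver premium on the number of insurers, instrumenting
# competition with United Healthcare's 2015 entry decision.
model = IV2SLS.from_formula(
    "premium ~ 1 + median_income + uninsured_rate + [n_insurers ~ uhc_entry]",
    data=panel,
)
result = model.fit(cov_type="clustered", clusters=panel["rating_area"])
print(result.summary)
```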

  18. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  19. Are revised models better models? A skill score assessment of regional interannual variability

    Science.gov (United States)

    Sperber, Kenneth R.; Participating AMIP Modelling Groups

    1999-05-01

    Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.

  20. Climate in Sweden during the past millennium - Evidence from proxy data, instrumental data and model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Moberg, Anders; Gouirand, Isabelle; Schoning, Kristian; Wohlfarth, Barbara [Stockholm Univ. (Sweden). Dept. of Physical Geography and Quaternary Geology; Kjellstroem, Erik; Rummukainen, Markku [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden). Rossby Centre; Jong, Rixt de [Lund Univ. (Sweden). Dept. of Quaternary Geology; Linderholm, Hans [Goeteborg Univ. (Sweden). Dept. of Earth Sciences; Zorita, Eduardo [GKSS Research Centre, Geesthacht (Germany)

    2006-12-15

Knowledge about climatic variations is essential for SKB in its safety assessments of a geological repository for spent nuclear waste. There is therefore a need for information about possible future climatic variations under a range of possible climatic states. However, predictions of future climate in any deterministic sense are still beyond our reach. We can, nevertheless, try to estimate the magnitude of future climate variability and change due to natural forcing factors, by means of inferences drawn from natural climate variability in the past. Indeed, the climate of the future will be shaped by the sum of natural and anthropogenic climate forcing, as well as the internal climate variability. The aim here is to review and analyse the knowledge about Swedish climate variability, essentially during the past millennium. Available climate proxy data and long instrumental records provide empirical information on past climatic changes. We also demonstrate how climate modelling can be used to extend such knowledge. We use output from a global climate model driven with reconstructed radiative forcings (solar, volcanic and greenhouse gas forcing), to provide boundary conditions for a regional climate model. The regional model provides more details of the climate than the global model, and we develop a simulated climate history for Sweden that is complete in time and space and physically consistent. We use output from a regional model simulation for long periods in the last millennium, to study annual mean temperature, precipitation and runoff for the northern and southern parts of Sweden. The simulated data are used to place corresponding instrumental records for the 20th century into a plausible historical perspective. We also use output from the regional model to study how the frequency distribution of the daily temperature, precipitation, runoff and evaporation at Forsmark and Oskarshamn could have varied between unusually warm and cold 30-year periods during the past millennium.

  1. Climate in Sweden during the past millennium - Evidence from proxy data, instrumental data and model simulations

    International Nuclear Information System (INIS)

    Moberg, Anders; Gouirand, Isabelle; Schoning, Kristian; Wohlfarth, Barbara

    2006-12-01

Knowledge about climatic variations is essential for SKB in its safety assessments of a geological repository for spent nuclear waste. There is therefore a need for information about possible future climatic variations under a range of possible climatic states. However, predictions of future climate in any deterministic sense are still beyond our reach. We can, nevertheless, try to estimate the magnitude of future climate variability and change due to natural forcing factors, by means of inferences drawn from natural climate variability in the past. Indeed, the climate of the future will be shaped by the sum of natural and anthropogenic climate forcing, as well as the internal climate variability. The aim here is to review and analyse the knowledge about Swedish climate variability, essentially during the past millennium. Available climate proxy data and long instrumental records provide empirical information on past climatic changes. We also demonstrate how climate modelling can be used to extend such knowledge. We use output from a global climate model driven with reconstructed radiative forcings (solar, volcanic and greenhouse gas forcing), to provide boundary conditions for a regional climate model. The regional model provides more details of the climate than the global model, and we develop a simulated climate history for Sweden that is complete in time and space and physically consistent. We use output from a regional model simulation for long periods in the last millennium, to study annual mean temperature, precipitation and runoff for the northern and southern parts of Sweden. The simulated data are used to place corresponding instrumental records for the 20th century into a plausible historical perspective. We also use output from the regional model to study how the frequency distribution of the daily temperature, precipitation, runoff and evaporation at Forsmark and Oskarshamn could have varied between unusually warm and cold 30-year periods during the past millennium.

  2. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

To understand how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source code.

  3. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in medical decision-making, the direct use of a medical database without a prior analysis and preprocessing step is often counterproductive. Variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle and generally produces an acceptable set of variables, but it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
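The tutorial's R code is not reproduced here; the following is a minimal Python sketch of the same idea, a GA that evolves boolean inclusion masks with cross-validated AUC as the fitness. All population, selection and mutation settings are chosen only for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Cross-validated AUC of a logistic model on the selected columns.
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=5, scoring="roc_auc").mean()

def ga_select(X, y, pop_size=30, generations=40, p_mut=0.05):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5              # random initial subsets
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut             # bit-flip mutation
            kids.append(child)
        pop = np.vstack([parents, np.array(kids)])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                        # best inclusion mask

# Usage: mask = ga_select(X, y); X_selected = X[:, mask]
```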

  4. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end

  5. Is foreign direct investment good for health in low and middle income countries? An instrumental variable approach.

    Science.gov (United States)

    Burns, Darren K; Jones, Andrew P; Goryakin, Yevgeniy; Suhrcke, Marc

    2017-05-01

    There is a scarcity of quantitative research into the effect of FDI on population health in low and middle income countries (LMICs). This paper investigates the relationship using annual panel data from 85 LMICs between 1974 and 2012. When controlling for time trends, country fixed effects, correlation between repeated observations, relevant covariates, and endogeneity via a novel instrumental variable approach, we find FDI to have a beneficial effect on overall health, proxied by life expectancy. When investigating age-specific mortality rates, we find a stronger beneficial effect of FDI on adult mortality, yet no association with either infant or child mortality. Notably, FDI effects on health remain undetected in all models which do not control for endogeneity. Exploring the effect of sector-specific FDI on health in LMICs, we provide preliminary evidence of a weak inverse association between secondary (i.e. manufacturing) sector FDI and overall life expectancy. Our results thus suggest that FDI has provided an overall benefit to population health in LMICs, particularly in adults, yet investments into the secondary sector could be harmful to health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Modelling the liquidity ratio as macroprudential instrument

    OpenAIRE

    Jan Willem van den End; Mark Kruidhof

    2012-01-01

    The Basel III Liquidity Coverage Ratio (LCR) is a microprudential instrument to strengthen the liquidity position of banks. However, if in extreme scenarios the LCR becomes a binding constraint, the interaction of bank behaviour with the regulatory rule can have negative externalities. We simulate the systemic implications of the LCR by a liquidity stress-testing model, which takes into account the impact of bank reactions on second round feedback effects. We show that a flexible approach of ...

  7. Variable selection in multivariate calibration based on clustering of variable concept.

    Science.gov (United States)

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

Recently we proposed a new variable selection algorithm, based on the clustering of variables concept (CLoVA), for classification problems. With the same idea, this concept has been applied to a regression problem, and the obtained results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are grouped into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses combinations of clusters, has been proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can separate the useful part from the redundant one, so that a stable model can be built from the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.
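A rough Python rendering of the CLoVA idea under stated assumptions (scikit-learn in place of the authors' implementation; cluster and component counts are arbitrary illustration values): cluster the instrument channels by their profiles across samples, fit a PLS model per cluster, and keep the best-scoring cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def clova_pls(X, y, n_clusters=4, n_components=5):
    # Cluster the variables (columns), e.g. spectral channels.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X.T)
    best_score, best_cols = -np.inf, None
    for k in range(n_clusters):
        cols = labels == k
        if cols.sum() < n_components:
            continue
        pls = PLSRegression(n_components=n_components)
        score = cross_val_score(pls, X[:, cols], y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        if score > best_score:
            best_score, best_cols = score, cols
    return best_cols, -best_score   # selected channels and their CV RMSE
```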

  8. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross-validation (RMSECV) is decreasing; the set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model built with the remaining variable set is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared to those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method

  9. From Transition Systems to Variability Models and from Lifted Model Checking Back to UPPAAL

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Wasowski, Andrzej

    2017-01-01

We present efficient lifted (family-based) model checking for real-time variability models. This reduces the cost of maintaining specialized family-based real-time model checkers: real-time variability models can be model checked using the standard UPPAAL. We have implemented the abstractions as syntactic source-to-source transformations.

  10. The Flare Irradiance Spectral Model (FISM) and its Contributions to Space Weather Research, the Flare Energy Budget, and Instrument Design

    Science.gov (United States)

    Chamberlin, Phillip

    2008-01-01

The Flare Irradiance Spectral Model (FISM) is an empirical model of the solar irradiance spectrum from 0.1 to 190 nm at 1 nm spectral resolution and on a 1-minute time cadence. The goal of FISM is to provide accurate solar spectral irradiances over the vacuum ultraviolet (VUV: 0-200 nm) range as input for ionospheric and thermospheric models. The seminar will begin with a brief overview of the FISM model, and also how the Solar Dynamics Observatory (SDO) EUV Variability Experiment (EVE) will contribute to improving FISM. Some current studies will then be presented that use FISM estimations of the solar VUV irradiance to quantify the contributions of the increased irradiance from flares to Earth's increased thermospheric and ionospheric densities. Initial results will also be presented from a study looking at the electron density increases in the Martian atmosphere during a solar flare. Results will also be shown quantifying the VUV contributions to the total flare energy budget for both the impulsive and gradual phases of solar flares. Lastly, an example of how FISM can be used to simplify the design of future solar VUV irradiance instruments will be discussed, using the future NOAA GOES-R Extreme Ultraviolet and X-Ray Sensors (EXIS) space weather instrument.

  11. Investigation on Motorcyclist Riding Behaviour at Curve Entry Using Instrumented Motorcycle

    Science.gov (United States)

    Yuen, Choon Wah; Karim, Mohamed Rehan; Saifizul, Ahmad

    2014-01-01

This paper details the study of changes in riding behaviour, such as changes in speed as well as the brake force and throttle force applied, when motorcyclists ride over a curve section of road, using an instrumented motorcycle. In this study, an instrumented motorcycle equipped with various types of sensors, on-board cameras, and data loggers was developed in order to collect riding data on the study site. Results from the statistical analysis showed that riding characteristics, such as changes in speed, brake force, and throttle force applied, are influenced by the distance from the curve entry, riding experience, and travel mileage of the riders. Structural equation modeling was used to study the impact of these variables on the change of riding behaviour in the curve entry section. Four regression equations were formed to study the relationship between the four dependent variables (speed, throttle force, front brake force, and rear brake force applied) and the independent variables. PMID:24523660

  12. Mobile Music, Sensors, Physical Modeling, and Digital Fabrication: Articulating the Augmented Mobile Instrument

    Directory of Open Access Journals (Sweden)

    Romain Michon

    2017-12-01

Full Text Available Two concepts are presented, extended, and unified in this paper: mobile device augmentation for musical instrument design and the concept of hybrid instruments. The first consists of using mobile devices at the heart of novel musical instruments. Smartphones and tablets are augmented with passive and active elements that can take part in the production of sound (e.g., resonators, exciters, etc.), add new affordances to the device, or change its global aesthetics and shape. Hybrid instruments combine physical/acoustical and “physically informed” virtual/digital elements. Recent progress in physical modeling of musical instruments and digital fabrication is exploited to treat instrument parts in a multidimensional way, allowing any physical element to be substituted with a virtual one and vice versa (as long as it is physically possible). A wide range of tools to design mobile hybrid instruments is introduced and evaluated. Aesthetic and design considerations when making such instruments are also presented through a series of examples.

  13. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  14. Flyover Modeling of Planetary Pits - Undergraduate Student Instrument Project

    Science.gov (United States)

    Bhasin, N.; Whittaker, W.

    2015-12-01

On the surface of the moon and Mars there are hundreds of skylights, which are collapsed holes that are believed to lead to underground caves. This research uses vision, inertial, and LIDAR sensors to build a high-resolution model of a skylight as a landing vehicle flies overhead. We design and fabricate a pit modeling instrument to accomplish this task, implement software, and demonstrate sensing and modeling capability on a suborbital reusable launch vehicle flying over a simulated pit. Future missions on other planets and moons will explore pits and caves, led by the technology developed by this research. The sensor software utilizes modern graph-based optimization techniques to build 3D models using camera, LIDAR, and inertial data. The modeling performance was validated with a test flyover of a planetary skylight analog structure on the Masten Xombie sRLV. The trajectory profile closely follows that of autonomous planetary powered descent, including translational and rotational dynamics as well as shock and vibration. A hexagonal structure made of shipping containers provides a terrain feature that serves as an appropriate analog for the rim and upper walls of a cylindrical planetary skylight. The skylight analog floor, walls, and rim are modeled in elevation with a 96% coverage rate at 0.25 m² resolution. The inner skylight walls have 5.9 cm² color image resolution and the rims 6.7 cm², with measurement precision better than 1 m. The multidisciplinary student team included students of all experience levels, with backgrounds in robotics, physics, computer science, systems, and mechanical and electrical engineering. The team was committed to authentic scientific experimentation, and defined specific instrument requirements and measurable experiment objectives to verify successful completion. This work was made possible by the NASA Undergraduate Student Instrument Project Educational Flight Opportunity 2013 program. Additional support was provided by the sponsorship of an

  15. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
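As a toy illustration of the multi-variable approach, a power-law cost model such as cost = a · aperture^b · e^(c·year) can be fit by ordinary least squares on logs. The file name, columns, and functional form below are assumptions for the sketch, not the paper's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

tel = pd.read_csv("telescope_costs.csv")    # hypothetical: cost, aperture_m, year

X = sm.add_constant(np.column_stack([
    np.log(tel["aperture_m"]),              # aperture scaling exponent
    tel["year"],                            # technology-era trend term
]))
fit = sm.OLS(np.log(tel["cost"]), X).fit()
print(fit.params)   # intercept, aperture exponent, yearly cost-change rate
```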

  16. Development of Flexible Software Process Lines with Variability Operations

    DEFF Research Database (Denmark)

    Dohrmann, Patrick; Schramm, Joachim; Kuhrmann, Marco

    2016-01-01

Objective: To support the development of flexible software process lines. Method: We conducted a longitudinal study in which we studied 5 variants of the V-Modell XT process line for 2 years. Results: Our results show that the variability operations instrument is feasible in practice. We analyzed 616 operation exemplars addressing various...

  17. Hydroclimate variability: comparing dendroclimatic records and future GCM scenarios

    International Nuclear Information System (INIS)

    Lapp, S.

    2008-01-01

Drought events of the 20th century in western North America have been linked to teleconnections that influence climate variability on inter-annual and decadal to multi-decadal time scales. These teleconnections represent changes in sea surface temperatures (SSTs) in the tropical and extra-tropical regions of the Pacific Ocean (ENSO: El Niño-Southern Oscillation; PDO: Pacific Decadal Oscillation, respectively) and the Atlantic Ocean (AMO: Atlantic Multidecadal Oscillation), and also relate to atmospheric circulation patterns (PNA: Pacific-North American). A network of precipitation-sensitive tree-ring chronologies from Montana, Alberta, Saskatchewan and NWT correlates highly with the climate moisture index (CMI) of precipitation minus potential evapotranspiration (P-PET), thus capturing the long-term hydroclimatic variability of the region. Reconstructions of annual and seasonal CMI identify drought events in previous centuries that are more extreme in magnitude, frequency and duration than recorded during the instrumental period. Variability in the future climate will include these natural climate cycles as well as modulations of these cycles affected by human-induced global warming. The proxy hydroclimate records derived from tree rings provide information on decadal and multi-decadal hydroclimatic variability for the past millennium, thereby offering a unique opportunity to validate the climate variability simulated by GCMs (Global Climate Models) on longer time scales than the shorter observational records allow. Developing scenarios of future variability depends on: 1) our understanding of the interaction of these teleconnections; and 2) identifying climate models that are able to accurately simulate the hydroclimatic variability detected in the instrumental and proxy records. (author)

  18. Asteroid electrostatic instrumentation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Aplin, K L; Bowles, N E; Urbak, E [Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Keane, D; Sawyer, E C, E-mail: k.aplin1@physics.ox.ac.uk [RAL Space, R25, Harwell Oxford, Didcot OX11 0QX (United Kingdom)

    2011-06-23

Asteroid surface material is expected to become photoelectrically charged, and is likely to be transported through electrostatic levitation. Understanding any movement of the surface material is relevant to proposed space missions to return samples to Earth for detailed isotopic analysis. Motivated by preparations for the Marco Polo sample return mission, we present electrostatic modelling for a real asteroid, Itokawa, for which detailed shape information is available, and verify that charging effects are likely to be significant at the terminator and at the edges of shadow regions for the Marco Polo baseline asteroid, 1999 JU3. We also describe the Asteroid Charge Experiment electric field instrumentation intended for Marco Polo. Finally, we find that the differing asteroid and spacecraft potentials on landing could perturb sample collection for the short landing time of 20 min that is currently planned.

  19. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

Full Text Available OBJECTIVE: To analyze the factors associated with breastfeeding duration using two statistical models. METHODS: A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on feeding and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed with Cox regression models, considering duration of breastfeeding as the dependent variable, and with logistic regression models, in which the dependent variable was the presence of a breastfed child at different post-natal ages. RESULTS: In the logistic regression model, pacifier sucking (adjusted odds ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted odds ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted hazard ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted hazard ratio 2.0; 95%CI 1.2-3.5). However, the protective factors (maternal age and family income) differed between the two models. CONCLUSIONS: Risk and protective factors associated with cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate for analyzing such factors in longitudinal studies.
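A compact sketch of the paper's two-model comparison in Python; the column names (weaned_by_12m, bf_duration_months, weaned, pacifier, bottle, maternal_age, income) are hypothetical stand-ins for the study variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("breastfeeding_cohort.csv")   # hypothetical cohort file

# Logistic regression: odds of weaning before one year of age.
logit = smf.logit("weaned_by_12m ~ pacifier + bottle + maternal_age + income",
                  data=df).fit()
print(np.exp(logit.params))   # odds ratios

# Cox regression: breastfeeding duration as the time-to-event outcome.
cph = CoxPHFitter()
cph.fit(df[["bf_duration_months", "weaned", "pacifier", "bottle",
            "maternal_age", "income"]],
        duration_col="bf_duration_months", event_col="weaned")
cph.print_summary()           # hazard ratios
```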

  20. A decision model for financial assurance instruments in the upstream petroleum sector

    International Nuclear Information System (INIS)

    Ferreira, Doneivan; Suslick, Saul; Farley, Joshua; Costanza, Robert; Krivov, Sergey

    2004-01-01

The main objective of this paper is to deepen the discussion regarding the application of financial assurance instruments (bonds) in the upstream oil sector. This paper will also attempt to explain the current choice of instruments within the sector. The concepts of environmental damages and internalization of environmental and regulatory costs are briefly explored. Bonding mechanisms are presently being adopted by several governments with the objective of guaranteeing the availability of funds for end-of-leasing operations. Regulators are mainly concerned with the prospect of inheriting liabilities from lessees. Several forms of bonding instruments currently available were identified and a new instrument classification was proposed. Ten commonly used instruments were selected and analyzed from the perspective of both regulators and industry (surety bonds, paid-in and periodic-payment collateral accounts, letters of credit, self-guarantees, investment-grade securities, real estate collateral, insurance policies, pools, and special funds). A multi-attribute value function model was then proposed to examine current instrument preferences. Preliminary simulations confirm the current scenario, in which regulators are likely to require surety bonds, letters of credit, and periodic-payment collateral account tools.

  1. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
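A minimal PyMC sketch in the spirit of the extension described, a Bayesian errors-in-variables Poisson regression in which w is the mismeasured surrogate of the latent true covariate x. The priors, the assumed-known measurement-error scale, and the simulated data are all illustrative assumptions:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, 200)             # unobserved true exposure
w = x_true + rng.normal(0.0, 0.3, 200)         # observed mismeasured surrogate
y = rng.poisson(np.exp(0.5 + 0.8 * x_true))    # Poisson outcome

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 10.0)
    beta = pm.Normal("beta", 0.0, 10.0)
    x = pm.Normal("x", mu=0.0, sigma=5.0, shape=len(w))   # latent covariate
    pm.Normal("w_obs", mu=x, sigma=0.3, observed=w)       # measurement model
    pm.Poisson("y_obs", mu=pm.math.exp(alpha + beta * x), observed=y)
    trace = pm.sample(2000, tune=1000)
```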

  2. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, QBO, the solar cycle, and volcanoes, to then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, where MLP performs significantly better than the remaining models. However, variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January, early February 2012.

  3. A simple model explaining super-resolution in absolute optical instruments

    Science.gov (United States)

    Leonhardt, Ulf; Sahebdivan, Sahar; Kogan, Alex; Tyc, Tomáš

    2015-05-01

We develop a simple, one-dimensional model for super-resolution in absolute optical instruments that is able to describe the interplay between sources and detectors. Our model explains the subwavelength sensitivity of a point detector to a point source reported in previous computer simulations and experiments (Miñano 2011 New J. Phys. 13 125009; Miñano 2014 New J. Phys. 16 033015).

  4. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
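A sketch of the core computation with scikit-learn, fitting a three-component mixture to an RR-interval series, mirroring the paper's finding that three Gaussians suffice; the series below is synthetic stand-in data, not physiological recordings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
rr = np.concatenate([rng.normal(0.80, 0.03, 600),   # synthetic RR intervals (s)
                     rng.normal(0.95, 0.05, 300),
                     rng.normal(1.10, 0.04, 100)])

gmm = GaussianMixture(n_components=3).fit(rr.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
```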

  5. Verification of models for ballistic movement time and endpoint variability.

    Science.gov (United States)

    Lin, Ray F; Drury, Colin G

    2013-01-01

A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smartphone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.

  6. Latent variable modeling

    Institute of Scientific and Technical Information of China (English)

    蔡力

    2012-01-01

A latent variable model, as the name suggests, is a statistical model that contains latent, that is, unobserved, variables. Their roots go back to Spearman's 1904 seminal work [1] on factor analysis, which is arguably the first well-articulated latent variable model to be widely used in psychology, mental health research, and allied disciplines. Because of the association of factor analysis with early studies of human intelligence, the fact that key variables in a statistical model are, on occasion, unobserved has been a point of lingering contention and controversy. The reader is assured, however, that a latent variable, defined in the broadest manner, is no more mysterious than an error term in a normal theory linear regression model or a random effect in a mixed model.

  7. Construction of the descriptive system for the Assessment of Quality of Life AQoL-6D utility instrument.

    Science.gov (United States)

    Richardson, Jeffrey R J; Peacock, Stuart J; Hawthorne, Graeme; Iezzi, Angelo; Elsworth, Gerald; Day, Neil A

    2012-04-17

Multi-attribute utility (MAU) instruments are used to include health-related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL)-6D MAU instrument. The AQoL program introduced the use of psychometric methods in the construction of health-related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed exploratory factor analysis and structural equation modelling (SEM) to meet these dual requirements. The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health-related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  8. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing a particle spectrometer and list the main objectives to be achieved with the help of this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProAsic3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan 3 XC3S1500 FPGA.

  9. Galactic models with variable spiral structure

    International Nuclear Information System (INIS)

    James, R.A.; Sellwood, J.A.

    1978-01-01

    A series of three-dimensional computer simulations of disc galaxies has been run in which the self-consistent potential of the disc stars is supplemented by that arising from a small uniform Population II sphere. The models show variable spiral structure, which is more pronounced for thin discs. In addition, the thin discs form weak bars. In one case variable spiral structure associated with this bar has been seen. The relaxed discs are cool outside resonance regions. (author)

  10. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, is also offered.
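The abstract does not spell out the operator; one plausible formalization, assuming Chen's fractal derivative with the constant order promoted to a time-dependent order α(t), is the diffusion model below.

```latex
% Assumed form: Chen-style fractal derivative with variable order \alpha(t).
\[
\frac{\partial u}{\partial t^{\alpha(t)}}
  = \lim_{t_1 \to t} \frac{u(t_1) - u(t)}{t_1^{\alpha(t)} - t^{\alpha(t)}},
\qquad
\frac{\partial u}{\partial t^{\alpha(t)}} = D\,\frac{\partial^2 u}{\partial x^2}.
\]
```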

  11. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

We have studied five-dimensional homogeneous cosmological models with variable gravitational constant and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and the physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...

  12. The Standard, Power, and Color Model of Instrument Combination in Romantic-Era Symphonic Works

    Directory of Open Access Journals (Sweden)

    Randolph Johnson

    2011-08-01

Full Text Available The Standard, Power, and Color (SPC) model describes the nexus between musical instrument combination patterns and expressive goals in music. Instruments within each SPC group tend to attract each other and work as a functional unit to create orchestral gestures. Standard instruments establish a timbral groundwork; Power instruments create contrast through loud dynamic climaxes; and Color instruments catch listeners' attention by means of their sparing use. Examples within these three groups include the violin (Standard), piccolo (Power), and harp (Color). The SPC theory emerges from analyses of nineteenth-century symphonic works. Multidimensional scaling analysis of instrument combination frequencies maps instrument relationships; hierarchical clustering analysis indicates three SPC groups within the map. The SPC characterization is found to be moderately robust through the results of hypothesis testing: (1) Color instruments are included less often in symphonic works; (2) when Color instruments are included, they perform less often than the average instrument; and (3) Color and non-Color instruments have equal numbers of solo occurrences. Additionally, (4) Power instruments are positively associated with louder dynamic levels; and (5) when Power instruments are present in the musical texture, the pitch range spanned by the entire orchestra does not become more extreme.
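The analysis pipeline described (multidimensional scaling plus hierarchical clustering) can be sketched as follows; the instrument co-occurrence matrix here is random placeholder data, and the similarity-to-distance conversion is an assumption of the sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
co = rng.random((12, 12)); co = (co + co.T) / 2    # placeholder co-occurrence
dist = co.max() - co                                # similarity -> distance
np.fill_diagonal(dist, 0.0)

# 2-D map of instrument relationships.
coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)

# Cut an average-linkage tree into three candidate SPC groups.
groups = fcluster(linkage(squareform(dist), method="average"),
                  t=3, criterion="maxclust")
print(coords[:3], groups)
```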

  13. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available -mail: fengelbrecht@csir.co.za Multi-scale climate modelling over Southern Africa using a variable-resolution global model FA Engelbrecht1, 2*, WA Landman1, 3, CJ Engelbrecht4, S Landman5, MM Bopape1, B Roux6, JL McGregor7 and M Thatcher7 1 CSIR Natural... improvement. Keywords: multi-scale climate modelling, variable-resolution atmospheric model Introduction Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Dynamic...

  14. Modelling the co-evolution of indirect genetic effects and inherited variability.

    Science.gov (United States)

    Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter

    2018-03-28

When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGEs and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. In contrast to the existing quantitative genetic models, our model allows for the co-evolution of IGEs and inherited variability.
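Read literally, the verbal description of the model corresponds to an expression like the one below, where the symbols are our labels rather than the authors' notation: A_i and E_i are the focal individual's genetic and environmental effects, P_j is the social partner's trait value, and psi_i is the heritable regression coefficient (negative for competition, positive for cooperation).

```latex
\[
P_i = A_i + E_i + \psi_i \,\bigl(P_j - P_i\bigr)
\]
```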

  15. Satellite-based Analysis of CO Variability over the Amazon Basin

    Science.gov (United States)

    Deeter, M. N.; Emmons, L. K.; Martinez-Alonso, S.; Tilmes, S.; Wiedinmyer, C.

    2017-12-01

    Pyrogenic emissions from the Amazon Basin exert significant influence on both climate and air quality but are highly variable from year to year. The ability of models to simulate the impact of biomass burning emissions on downstream atmospheric concentrations depends on (1) the quality of surface flux estimates (i.e., emissions inventories), (2) model dynamics (e.g., horizontal winds, large-scale convection and mixing) and (3) the representation of atmospheric chemical processes. With an atmospheric lifetime of a few months, carbon monoxide (CO) is a commonly used diagnostic for biomass burning. CO products are available from several satellite instruments and allow analyses of CO variability over extended regions such as the Amazon Basin with useful spatial and temporal sampling characteristics. The MOPITT ('Measurements of Pollution in the Troposphere') instrument was launched on the NASA Terra platform near the end of 1999 and is still operational. MOPITT is uniquely capable of measuring tropospheric CO concentrations using both thermal-infrared and near-infrared observations, resulting in the ability to independently retrieve lower- and upper-troposphere CO concentrations. We exploit the 18-year MOPITT record and related datasets to analyze the variability of CO over the Amazon Basin and evaluate simulations performed with the CAM-chem chemical transport model. We demonstrate that observed differences between MOPITT observations and model simulations provide important clues regarding emissions inventories, convective mixing and long-range transport.

  16. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  17. The necessity of connection structures in neural models of variable binding.

    Science.gov (United States)

    van der Velde, Frank; de Kamps, Marc

    2015-08-01

    In his review of neural binding problems, Feldman (Cogn Neurodyn 7:1-11, 2013) addressed two types of models as solutions of (novel) variable binding. The one type uses labels such as phase synchrony of activation. The other ('connectivity based') type uses dedicated connection structures to achieve novel variable binding. Feldman argued that label (synchrony) based models are the only possible candidates to handle novel variable binding, whereas connectivity based models lack the flexibility required for that. We argue and illustrate that Feldman's analysis is incorrect. Contrary to his conclusion, connectivity based models are the only viable candidates for models of novel variable binding because they are the only type of models that can produce behavior. We will show that the label (synchrony) based models analyzed by Feldman are in fact examples of connectivity based models. Feldman's analysis that novel variable binding can be achieved without existing connection structures seems to result from analyzing the binding problem in a wrong frame of reference, in particular in an outside instead of the required inside frame of reference. Connectivity based models can be models of novel variable binding when they possess a connection structure that resembles a small-world network, as found in the brain. We will illustrate binding with this type of model with episode binding and the binding of words, including novel words, in sentence structures.

  18. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  20. Finite Difference Time Domain Modeling at USA Instruments, Inc.

    Science.gov (United States)

    Curtis, Richard

    2003-10-01

    Due to the competitive nature of the commercial MRI industry, it is essential for the financial health of a participating company to innovate new coil designs and bring product to market rapidly in response to ever-changing market conditions. However, the technology of MRI coil design is still early in its stage of development and its principles are yet evolving. As a result, it is not always possible to know the relevant electromagnetic effects of a given design since the interaction of coil elements is complex and often counter-intuitive. Even if the effects are known qualitatively, the quantitative results are difficult to obtain. At USA Instruments, Inc., the acquisition of the XFDTD electromagnetic simulation tool from REMCOM, Inc., has been helpful in determining the electromagnetic performance characteristics of existing coil designs in the prototype stage before the coils are released for production. In the ideal case, a coil design would be modeled earlier at the conceptual stage, so that only good designs will make it to the prototyping stage and the electromagnetic characteristics are better understood very early in the design process and before the testing stage has begun. This paper is a brief overview of using FDTD modeling for MRI coil design at USA Instruments, Inc., and shows some of the highlights of recent FDTD modeling efforts on Birdcage coils, a staple of the MRI coil design portfolio.
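
    For readers unfamiliar with the method, a toy one-dimensional FDTD (Yee) update loop is sketched below; it is illustrative only and bears no relation to the commercial XFDTD solver, which handles full 3-D coil geometries:

    ```python
    import numpy as np

    # Toy 1D FDTD sketch: leapfrog E and H updates on a staggered grid.
    nx, nt = 200, 500
    ez = np.zeros(nx)          # electric field samples
    hy = np.zeros(nx - 1)      # magnetic field samples (staggered)
    courant = 0.5              # Courant number (<= 1 for stability)

    for n in range(nt):
        hy += courant * np.diff(ez)          # update H from the curl of E
        ez[1:-1] += courant * np.diff(hy)    # update E from the curl of H
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    ```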

  1. Methods and Models of Market Risk Stress-Testing of the Portfolio of Financial Instruments

    Directory of Open Access Journals (Sweden)

    Alexander M. Karminsky

    2015-01-01

    Full Text Available Amid instability of financial markets and the macroeconomic situation, the need to improve bank risk-management instruments arises. The new economic reality demands a search for more advanced approaches to estimating banks' vulnerability to exceptional but plausible events. Stress-testing belongs to such instruments. The paper reviews and compares models of market risk stress-testing for portfolios of different financial instruments. The topic is highly relevant today because stress-testing is becoming an integral part of anti-crisis risk-management amid macroeconomic instability and the appearance of new risks, together with close interest in the problem of risk aggregation. The paper outlines the notion of stress-testing and covers the goals and functions of stress-tests and the main criteria for classifying market risk stress-tests. The paper also stresses special aspects of scenario analysis. The novelty of the research lies in elaborating a programme of aggregated complex multifactor stress-testing of portfolio risk based on scenario analysis. The paper highlights modern Russian and foreign models of stress-testing, both on a solo basis and complex. Emphasis is laid on the results of stress-testing and revaluations of positions for all three complex models: the Central Bank's methodology for stress-testing portfolio risk, a model relying on correlation analysis, and a copula model. The models of stress-testing on a solo basis differ for each financial instrument. A parametric StressVaR model is applicable to stress-testing shares and options; a model based on the "Greek" indicators is used for options; for Eurobonds a regional factor model is used. Finally, some theoretical recommendations about managing the market risk of the portfolio are given.
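
    A minimal sketch of parametric (variance-covariance) VaR under a stressed correlation scenario, in the spirit of the correlation-based complex model mentioned above; the weights, volatilities and correlation matrices are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])        # portfolio weights (hypothetical)
    vols = np.array([0.02, 0.015, 0.03])       # daily return volatilities

    def parametric_var(corr, z=2.33):          # z = 99% normal quantile
        cov = np.outer(vols, vols) * corr      # covariance from vols and correlations
        port_sigma = np.sqrt(weights @ cov @ weights)
        return z * port_sigma                  # 1-day VaR as a return fraction

    base = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.4], [0.2, 0.4, 1.0]])
    stressed = np.array([[1.0, 0.8, 0.7], [0.8, 1.0, 0.8], [0.7, 0.8, 1.0]])
    print(parametric_var(base), parametric_var(stressed))  # correlations spike in crises
    ```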

  2. Industrial instrumentation principles and design

    CERN Document Server

    Padmanabhan, Tattamangalam R

    2000-01-01

    Pneumatic, hydraulic and allied instrumentation schemes have given way to electronic schemes in recent years thanks to the rapid strides in electronics and allied areas. Principles, design and applications of such state-of-the-art instrumentation schemes form the subject matter of this book. Through representative examples, the basic building blocks of instrumentation schemes are identified and each of these building blocks discussed in terms of its design and interface characteristics. The common generic schemes synthesized with such building blocks are dealt with subsequently. This forms the scope of Part I. The focus in Part II is on application. Displacement and allied instrumentation, force and allied instrumentation and process instrumentation in terms of temperature, flow, pressure level and other common process variables are dealt with separately and exhaustively. Despite the diversity in the sensor principles and characteristics and the variety in the applications and their environments, it is possib...

  3. An instrumental electrode model for solving EIT forward problems.

    Science.gov (United States)

    Zhang, Weida; Li, David

    2014-10-01

    An instrumental electrode model (IEM) capable of describing the performance of electrical impedance tomography (EIT) systems in the MHz frequency range has been proposed. Compared with the commonly used Complete Electrode Model (CEM), which assumes ideal front-end interfaces, the proposed model considers the effects of non-ideal components in the front-end circuits. This introduces an extra boundary condition in the forward model and offers more accurate modelling of EIT systems. We have demonstrated its performance using simple geometry structures and compared the results with the CEM and full Maxwell methods. The IEM can provide a significantly more accurate approximation than the CEM in the MHz frequency range, where the full Maxwell methods are favoured over the quasi-static approximation. The improved electrode model will facilitate the future characterization and front-end design of real-world EIT systems.

  4. Manufacturing challenge: An employee perception of the impact of BEM variables on motivation

    Science.gov (United States)

    Nyaude, Alaster

    The study examines the impact of Thomas F. Gilbert's Behavior Engineering Model (BEM) variables on employee perception of motivation at an aerospace equipment manufacturing plant in Georgia. The research process involved a literature review and the determination of an appropriate survey instrument for the study. The Hersey-Chevalier modified PROBE instrument (Appendix C) was used with Dr Roger Chevalier's validation. The participants' responses were further examined to determine the influence of the demographic control variables of age, gender, length of service with the company and education on employee perception of motivation. The results indicated that the top three highly motivating variables were knowledge and skills, capacity and resources. Knowledge and skills was perceived as the most motivating, capacity as the second and resources as the third. Interestingly, the fourth most motivating variable was information, the fifth was motives and the sixth was incentives. The results also showed that the demographic control variables had no influence on employee perception of motivation. Further research may be required to understand to what extent these BEM variables impact employee perceptions of motivation.

  5. Design, Modelling and Teleoperation of a 2 mm Diameter Compliant Instrument for the da Vinci Platform.

    Science.gov (United States)

    Francis, P; Eastwood, K W; Bodani, V; Looi, T; Drake, J M

    2018-05-07

    This work explores the feasibility of creating and accurately controlling an instrument for robotic surgery with a 2 mm diameter and a three degree-of-freedom (DoF) wrist which is compatible with the da Vinci platform. The instrument's wrist is composed of a two DoF bending notched-nitinol tube pattern, for which a kinematic model has been developed. A base mechanism for controlling the wrist is designed for integration with the da Vinci Research Kit. A basic teleoperation task is successfully performed using two of the miniature instruments. The performance and accuracy of the instrument suggest that creating and accurately controlling a 2 mm diameter instrument is feasible and the design and modelling proposed in this work provide a basis for future miniature instrument development.

  6. Construction of the descriptive system for the assessment of quality of life AQoL-6D utility instrument

    Directory of Open Access Journals (Sweden)

    Richardson Jeffrey RJ

    2012-04-01

    Full Text Available Abstract Background Multi attribute utility (MAU) instruments are used to include health related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL-6D) MAU instrument. Methods The AQoL program introduced the use of psychometric methods in the construction of health related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed Exploratory Factor Analysis and Structural Equation Modelling (SEM) to meet these dual requirements. Results and Discussion The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. Conclusions The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  7. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters, and which may change during the variable reduction process. In these methods variable reduction is made on the variables ranked in descending order for a given variable property. The methods start with full spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. Therefore we denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain
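
    A simplified sketch of the SVR-PPRV idea with constant model complexity, using the absolute PLS regression coefficient as the ranking property; scikit-learn is assumed, and the complexity-adaptation step of the PPRVR-CAM methods is omitted:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def svr_pprv(X, y, n_components=2, n_keep=10):
        """Backward elimination of variables ranked by |PLS coefficient|."""
        kept = list(range(X.shape[1]))
        while len(kept) > n_keep:
            pls = PLSRegression(n_components=n_components).fit(X[:, kept], y)
            coef = np.abs(pls.coef_).ravel()       # predictive property per variable
            kept.pop(int(np.argmin(coef)))         # drop the least informative one
        return kept                                # indices of retained variables
    ```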

  8. Does the Early Bird Catch the Worm? Instrumental Variable Estimates of Educational Effects of Age of School Entry in Germany

    OpenAIRE

    Puhani, Patrick A.; Weber, Andrea M.

    2006-01-01

    We estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained based on instrumental variable estimation exploiting the exogenous variation in month of birth. We find robust and significant positive effects on educational outcomes for pupils who enter school at seven instead of six years of age: Test scores at the end of primary school incr...
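
    A generic two-stage least squares sketch of the kind of estimator used in such studies; the variable names (instrument Z such as month of birth, endogenous regressor X such as age at school entry, outcome y such as a test score) are illustrative, and naive second-stage standard errors would need correction:

    ```python
    import numpy as np

    def two_stage_least_squares(y, X, Z):
        """IV slope estimate for y on scalar X, instrumented by Z."""
        Z1 = np.column_stack([np.ones(len(y)), Z])        # first stage: X on Z
        x_hat = Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]
        X1 = np.column_stack([np.ones(len(y)), x_hat])    # second stage: y on X_hat
        return np.linalg.lstsq(X1, y, rcond=None)[0][1]   # IV estimate of the slope
    ```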

  9. Modeling of carbon sequestration in coal-beds: A variable saturated simulation

    International Nuclear Information System (INIS)

    Liu Guoxiang; Smirnov, Andrei V.

    2008-01-01

    Storage of carbon dioxide in deep coal seams is a profitable method to reduce the concentration of greenhouse gases in the atmosphere, while methane as a byproduct can be extracted during carbon dioxide injection into the coal seam. In this procedure, the key element is to keep carbon dioxide in the coal seam, without escape, over the long term. This depends on many factors such as the properties of the coal basin, fracture state, phase equilibrium, etc., especially the porosity, permeability and saturation of the coal seam. In this paper, a variable saturation model was developed to predict the capacity of carbon dioxide sequestration and coal-bed methane recovery. This variable saturation model can be used to track saturation variability with the partial pressure changes caused by carbon dioxide injection. Saturation variability is a key factor in predicting the capacity of carbon dioxide storage and methane recovery. Based on this variable saturation model, a set of related variables including capillary pressure, relative permeability, porosity, a coupled adsorption model, and concentration and temperature equations were solved. In the simulation results, historical data agree with the variable saturation model as well as the adsorption model constructed from Langmuir equations. As an example, carbon dioxide sequestration in the Appalachian basin was modelled in this paper. The results of the study and the developed models can provide projections for CO2 sequestration and methane recovery in coal-beds within different regional specifics.
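
    As an illustration of the Langmuir-type closure mentioned above, here is a binary extended Langmuir isotherm sketch for competitive CO2/CH4 adsorption on coal; the parameter values are hypothetical, not the paper's calibration:

    ```python
    # Extended Langmuir isotherm for a CO2/CH4 mixture; vl = Langmuir
    # volumes (m3/t), pl = Langmuir pressures (MPa), both hypothetical.
    def extended_langmuir(p_co2, p_ch4, vl=(25.0, 15.0), pl=(2.0, 4.0)):
        """Adsorbed volumes of CO2 and CH4 at the given partial pressures."""
        denom = 1.0 + p_co2 / pl[0] + p_ch4 / pl[1]
        v_co2 = vl[0] * (p_co2 / pl[0]) / denom
        v_ch4 = vl[1] * (p_ch4 / pl[1]) / denom
        return v_co2, v_ch4

    print(extended_langmuir(3.0, 1.0))  # CO2 out-competes CH4 for sorption sites
    ```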

  10. Design and Implementation of Data Collection Instruments for Neonatology Research

    Directory of Open Access Journals (Sweden)

    Monica G. HĂŞMĂŞANU

    2014-12-01

    Full Text Available Aim: The aim of our research was to design and implement data collection instruments to be used in the context of an observational prospective clinical study with follow-up, conducted on newborns with intrauterine growth restriction. Methods: The structure of the data collection forms (paper-based and electronic-based) was first identified, and for each variable the best type to accomplish the research aim was established. The coding for categorical variables was also decided, as well as the units of measurement for quantitative variables. In line with good practice, a set of confounding factors (e.g., gender, date of birth) were also identified and integrated in the data collection instruments. Data-entry validation rules were implemented for each variable to reduce data input errors when the electronic data collection instrument was created. Results: Two data collection instruments have been developed and successfully implemented: a paper-based form and an electronic data collection instrument. The developed forms included demographics, neonatal complications (e.g., hypoglycemia, hypocalcemia), biochemical data at birth and follow-up, immunological data, as well as basal and follow-up echocardiographic data. Data-entry validation criteria were implemented in the electronic data collection instrument to assure validity and precision when paper-based data are transcribed into electronic form. Furthermore, to assure subjects' confidentiality, careful attention was given to HIPAA identifiers when the electronic data collection instrument was developed. Conclusion: The data collection instruments were successfully developed and implemented as an a priori step in clinical research, to assist data collection and management in an observational prospective study with follow-up visits.
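
    A minimal sketch of data-entry validation rules of the kind described above; the field names and plausible-value ranges are hypothetical:

    ```python
    # Per-field validation rules: each maps a field name to a predicate.
    RULES = {
        "birth_weight_g": lambda v: 300 <= v <= 6000,
        "gestational_age_wk": lambda v: 22 <= v <= 44,
        "sex": lambda v: v in {"M", "F"},
    }

    def validate(record: dict) -> list:
        """Return the list of fields that fail their validation rule."""
        return [f for f, ok in RULES.items() if f in record and not ok(record[f])]

    print(validate({"birth_weight_g": 150, "sex": "M"}))  # ['birth_weight_g']
    ```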

  11. Instrumentation and testing of a prestressed concrete containment vessel model

    International Nuclear Information System (INIS)

    Hessheimer, M.F.; Pace, D.W.; Klamerus, E.W.

    1997-01-01

    Static overpressurization tests of two scale models of nuclear containment structures - a steel containment vessel (SCV) representative of an improved, boiling water reactor (BWR) Mark II design and a prestressed concrete containment vessel (PCCV) for pressurized water reactors (PWR) - are being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the U.S. Nuclear Regulatory Commission. This paper discusses plans for instrumentation and testing of the PCCV model. 6 refs., 2 figs., 2 tabs

  12. Results of the first tests of the SIDRA satellite-borne instrument breadboard model

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Avilov, A.M.; Titov, K.G.; Prieto, M; Sanchez, S.; Spassky, A.V.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    In this work, the results of the calibration of the solid-state detectors and electronic channels of the SIDRA satellite-borne energetic charged particle spectrometer-telescope breadboard model are presented. The block schemes and experimental equipment used to conduct the thermal vacuum and electromagnetic compatibility tests of the assemblies and modules of the compact satellite equipment are described. The results of the measured thermal conditions of operation of the signal analog and digital processing critical modules of the SIDRA instrument prototype are discussed. Finally, the levels of conducted interference generated by the instrument model in the primary vehicle-borne power circuits are presented.

  13. Treating pre-instrumental data as "missing" data: using a tree-ring-based paleoclimate record and imputations to reconstruct streamflow in the Missouri River Basin

    Science.gov (United States)

    Ho, M. W.; Lall, U.; Cook, E. R.

    2015-12-01

    Advances in paleoclimatology in the past few decades have provided opportunities to expand the temporal perspective of the hydrological and climatological variability across the world. The North American region is particularly fortunate in this respect where a relatively dense network of high resolution paleoclimate proxy records have been assembled. One such network is the annually-resolved Living Blended Drought Atlas (LBDA): a paleoclimate reconstruction of the Palmer Drought Severity Index (PDSI) that covers North America on a 0.5° × 0.5° grid based on tree-ring chronologies. However, the use of the LBDA to assess North American streamflow variability requires a model by which streamflow may be reconstructed. Paleoclimate reconstructions have typically used models that first seek to quantify the relationship between the paleoclimate variable and the environmental variable of interest before extrapolating the relationship back in time. In contrast, the pre-instrumental streamflow is here considered as "missing" data. A method of imputing the "missing" streamflow data, prior to the instrumental record, is applied through multiple imputation using chained equations for streamflow in the Missouri River Basin. In this method, the distribution of the instrumental streamflow and LBDA is used to estimate sets of plausible values for the "missing" streamflow data resulting in a ~600 year-long streamflow reconstruction. Past research into external climate forcings, oceanic-atmospheric variability and its teleconnections, and assessments of rare multi-centennial instrumental records demonstrate that large temporal oscillations in hydrological conditions are unlikely to be captured in most instrumental records. The reconstruction of multi-centennial records of streamflow will enable comprehensive assessments of current and future water resource infrastructure and operations under the existing scope of natural climate variability.
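
    A minimal sketch of the "streamflow as missing data" idea using chained-equations imputation; synthetic placeholders stand in for the LBDA grid cells and the gauge record, and a full multiple-imputation analysis would repeat the fit with several random seeds:

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    years = 600
    pdsi = rng.standard_normal((years, 5))          # paleo predictors (synthetic)
    flow = np.full(years, np.nan)                   # pre-instrumental flow = missing
    flow[-100:] = pdsi[-100:] @ np.ones(5) + 0.3 * rng.standard_normal(100)

    X = np.column_stack([pdsi, flow])
    imputed = IterativeImputer(sample_posterior=True,
                               random_state=0).fit_transform(X)
    reconstructed_flow = imputed[:, -1]             # one ~600-year flow realisation
    ```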

  14. Mediterranean climate modelling: variability and climate change scenarios

    International Nuclear Information System (INIS)

    Somot, S.

    2005-12-01

    Air-sea fluxes, open-sea deep convection and cyclogenesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes, and their climate variabilities are quantified and studied. The regional coupling shows a significant impact on the number of intense winter cyclogenesis events as well as on the associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are understood. The climate change response is then analysed for the 21st century with the non-coupled models: cyclogenesis decreases, associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes on the necessity of using AORCMs to assess climate change impacts on the Mediterranean. (author)

  15. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour-map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  16. Strategies to Enhance Online Learning Teams. Team Assessment and Diagnostics Instrument and Agent-based Modeling

    Science.gov (United States)

    2010-08-12

    Strategies to Enhance Online Learning Teams: Team Assessment and Diagnostics Instrument and Agent-based Modeling. Tristan E. Johnson, Ph.D. (Report dated August 2010.)

  17. Focus on variability : New tools to study intra-individual variability in developmental data

    NARCIS (Netherlands)

    van Geert, P; van Dijk, M

    2002-01-01

    In accordance with dynamic systems theory, we assume that variability is an important developmental phenomenon. However, the standard methodological toolkit of the developmental psychologist offers few instruments for the study of variability. In this article we will present several new methods that

  18. Organizational learning, pilot test of Likert-type instruments

    Directory of Open Access Journals (Sweden)

    Manuel Alfonso Garzón Castrillón

    2010-09-01

    Full Text Available This paper presents the results obtained in the pilot study of instruments created to meet the specific objective of designing and validating instruments to study the capacity for organizational learning. The Likert measurement scale was used because it allowed us to establish the pertinence of each dimension as a variable in the context of organizational learning. A one-way analysis of variance (ANOVA) was used, with the statistical package SPSS. The 138 variables were simplified into 3 factors and 40 statements.
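
    A one-way ANOVA of the kind used in the pilot test can be sketched in a few lines; the Likert scores below are toy data, not the study's responses:

    ```python
    from scipy.stats import f_oneway

    # Compare mean Likert scores of one item across three respondent groups.
    group_a = [4, 5, 4, 3, 5]
    group_b = [3, 3, 4, 2, 3]
    group_c = [5, 4, 5, 5, 4]
    f_stat, p_value = f_oneway(group_a, group_b, group_c)
    print(f_stat, p_value)   # reject equal means if p_value < 0.05
    ```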

  19. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    Science.gov (United States)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
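
    To illustrate the numerical scheme, here is a MacCormack predictor-corrector sketch on a scalar 1-D conservation law (Burgers' equation); the quasi-1D nozzle model applies the same two-step stencil to the area-weighted Euler equations:

    ```python
    import numpy as np

    def maccormack_step(u, dt, dx, flux):
        """One MacCormack step for du/dt + d f(u)/dx = 0 (fixed boundaries)."""
        f = flux(u)
        u_pred = u.copy()
        u_pred[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])       # forward predictor
        f_pred = flux(u_pred)
        u_new = u.copy()
        u_new[1:-1] = 0.5 * (u[1:-1] + u_pred[1:-1]
                             - dt / dx * (f_pred[1:-1] - f_pred[:-2]))  # backward corrector
        return u_new

    u = np.where(np.linspace(0, 1, 101) < 0.5, 1.0, 0.5)        # step initial condition
    for _ in range(50):
        u = maccormack_step(u, dt=0.004, dx=0.01, flux=lambda q: 0.5 * q**2)
    ```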

  20. Analytical Model for LLC Resonant Converter With Variable Duty-Cycle Control

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    are identified and discussed. The proposed model enables a better understanding of the operation characteristics and fast parameter design of the LLC converter, which otherwise cannot be achieved by the existing simulation based methods and numerical models. The results obtained from the proposed model......In LLC resonant converters, the variable duty-cycle control is usually combined with a variable frequency control to widen the gain range, improve the light-load efficiency, or suppress the inrush current during start-up. However, a proper analytical model for the variable duty-cycle controlled LLC...... converter is still not available due to the complexity of operation modes and the nonlinearity of steady-state equations. This paper makes the efforts to develop an analytical model for the LLC converter with variable duty-cycle control. All possible operation models and critical operation characteristics...

  1. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
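
    The first-order structure function, the variability measure compiled above, can be computed as sketched below for an evenly sampled toy light curve (a random walk in magnitudes):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mag = np.cumsum(0.01 * rng.standard_normal(2000))   # daily magnitudes

    def structure_function(m, taus):
        """SF(tau) = sqrt(<(m(t + tau) - m(t))^2>) for integer-day lags."""
        return np.array([np.sqrt(np.mean((m[tau:] - m[:-tau]) ** 2))
                         for tau in taus])

    print(structure_function(mag, [1, 10, 100, 1000]))  # SF grows with time-scale
    ```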

  2. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined

  3. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts: one clinical part, studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part, investigating how to predict scores of latent variables so these can be used in subsequent regression analyses of patients' characteristics. Commonly applied diagnostic methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogenous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS, the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to statistically evaluate the number of phenotypes and test for mixing of HALS with other processes...

  4. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end task in itself. For others, testing hypothesized relationships of latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
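
    The decomposition quoted above is simple path arithmetic, using the effect sizes reported in the abstract:

    ```python
    # Direct, indirect and total effects of natural variability on growth
    # potential, as reported above.
    direct = -0.3251            # direct path coefficient
    indirect = 0.4509           # effect mediated through biodiversity
    total = direct + indirect   # total effect = direct + indirect
    print(total)                # 0.1258: net positive despite the negative direct path
    ```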

  5. Understanding Hydrological Processes in Variable Source Areas in the Glaciated Northeastern US Watersheds under Variable Climate Conditions

    Science.gov (United States)

    Steenhuis, T. S.; Azzaino, Z.; Hoang, L.; Pacenka, S.; Worqlul, A. W.; Mukundan, R.; Stoof, C.; Owens, E. M.; Richards, B. K.

    2017-12-01

    The New York City source watersheds in the Catskill Mountains' humid, temperate climate have long-term hydrological and water quality monitoring data. This is one of the few catchments where implementation of source and landscape management practices has led to decreased phosphorus concentration in the receiving surface waters. One of the reasons is that landscape measures correctly targeted the saturated variable source runoff areas (VSA) in the valley bottoms as the location where most of the runoff and other nonpoint pollutants originated. Measures targeting these areas were instrumental in lowering phosphorus concentration. Further improvements in water quality can be made based on a better understanding of the flow processes and water table fluctuations in the VSA. For that reason, we instrumented a self-contained upland variable source watershed with a landscape characteristic of the Catskill watersheds: a soil underlain by glacial till at shallow depth. In this presentation, we will discuss our experimental findings and present a mathematical model. Variable source areas have a small slope, making gravity the driving force for the flow and greatly simplifying the simulation of the flow processes. The experimental data and the model simulations agreed for both outflow and water table fluctuations. We found that while the flows to the outlet were similar throughout the year, the discharge of the VSA varied greatly. This was due to transpiration by the plants, which became active when soil temperatures rose above 10oC. We found that shortly after the temperature increased above 10oC the baseflow stopped, and surface runoff occurred only when rainstorms exceeded the storage capacity of the soil in at least a portion of the variable source area. Since plant growth in the variable source area was a major variable determining the base flow behavior, changes in temperature in the future - affecting the duration of the growing season - will affect baseflow and

  6. Goal-directed behaviour and instrumental devaluation: a neural system-level computational model

    Directory of Open Access Journals (Sweden)

    Francesco Mannella

    2016-10-01

    Full Text Available Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviours guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on, and shows the computational soundness of, the hypothesis that the internal representation of instrumental manipulanda (e.g., levers) activates the representation of rewards (or 'action-outcomes', e.g. foods) while attributing to them a value which depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex, acquiring the manipulanda-outcomes associations and attributing the current value to the outcomes; (b) the three basal ganglia-cortical loops, selecting respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways supporting the selection, and selection learning, of actions based on habits and goals. The model reproduces and integrates the results of different devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behaviour.

  7. Instrumental variable estimation of the causal effect of plasma 25-hydroxy-vitamin D on colorectal cancer risk: a mendelian randomization analysis.

    Directory of Open Access Journals (Sweden)

    Evropi Theodoratou

    Full Text Available Vitamin D deficiency has been associated with several common diseases, including cancer, and is being investigated as a possible risk factor for these conditions. We reported the striking prevalence of vitamin D deficiency in Scotland. Previous epidemiological studies have reported an association between low dietary vitamin D and colorectal cancer (CRC). Using a case-control study design, we tested the association between plasma 25-hydroxy-vitamin D (25-OHD) and CRC (2,001 cases, 2,237 controls). To determine whether plasma 25-OHD levels are causally linked to CRC risk, we applied the control function instrumental variable (IV) method of the mendelian randomization (MR) approach using four single nucleotide polymorphisms (rs2282679, rs12785878, rs10741657, rs6013897) previously shown to be associated with plasma 25-OHD. Low plasma 25-OHD levels were associated with CRC risk in the crude model (odds ratio (OR): 0.76, 95% confidence interval (CI): 0.71, 0.81, p: 1.4×10^-14) and after adjusting for age, sex and other confounding factors. Using an allele score that combined all four SNPs as the IV, the estimated causal effect was OR 1.16 (95% CI 0.60, 2.23), whilst it was 0.94 (95% CI 0.46, 1.91) and 0.93 (95% CI 0.53, 1.63) when using an upstream (rs12785878, rs10741657) and a downstream allele score (rs2282679, rs6013897), respectively. 25-OHD levels were inversely associated with CRC risk, in agreement with recent meta-analyses. The fact that this finding was not replicated when the MR approach was employed might be due to weak instruments, giving low power to demonstrate an effect (<0.35). The prevalence and degree of vitamin D deficiency amongst individuals living in northerly latitudes is of considerable importance because of its relationship to disease. To elucidate the effect of vitamin D on CRC risk, additional large studies of vitamin D and CRC risk are required and/or the application of alternative methods that are less sensitive to weak instruments.
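
    A sketch of the control function IV idea on synthetic data: regress the exposure on the allele score (first stage), then include the first-stage residual alongside the exposure in a logistic model for the outcome. statsmodels is assumed, the data are simulated, and this is illustrative rather than the paper's exact estimator:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    score = rng.binomial(8, 0.3, n)                 # allele score (0-8), the IV
    confounder = rng.standard_normal(n)             # unmeasured confounder
    vitd = 0.5 * score - confounder + rng.standard_normal(n)
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.2 * vitd + confounder))))

    stage1 = sm.OLS(vitd, sm.add_constant(score)).fit()
    resid = vitd - stage1.fittedvalues              # the control function
    X2 = sm.add_constant(np.column_stack([vitd, resid]))
    stage2 = sm.Logit(y, X2).fit(disp=0)
    print(stage2.params[1])                         # causal log-odds estimate for vitd
    ```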

  8. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives process parameter estimate calculated as a function of other plant measurements which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto associative kernel regression (AAKR) by introducing a correlation coefficient weighting on kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression
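
    A minimal AAKR sketch with a correlation-weighted distance metric; the exact weighting scheme of the paper is not reproduced here, so the per-signal weights below are an assumption:

    ```python
    import numpy as np

    def aakr_predict(X_memory, x_query, bandwidth=1.0):
        """Estimate the fault-free signal values for a query observation."""
        w_corr = np.abs(np.corrcoef(X_memory, rowvar=False))  # sensor correlations
        w = w_corr.mean(axis=0)                               # per-signal weights
        d2 = (((X_memory - x_query) ** 2) * w).sum(axis=1)    # weighted distances
        k = np.exp(-d2 / (2 * bandwidth ** 2))                # Gaussian kernel
        return k @ X_memory / k.sum()                         # corrected estimate

    memory = np.random.default_rng(2).normal(size=(200, 4))   # fault-free history
    query = memory[0] + np.array([0.5, 0, 0, 0])              # sensor 0 has drifted
    print(aakr_predict(memory, query))                        # drift shows as residual
    ```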

  9. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model, for vectors of limited dependent variables, I suggest a model, which follows from explicitly using fixed costs, if appropriate of course, in the utility function of the decision-maker.

  10. Contribution to the discussion of P.M. Fayers and David J. Hand: Causal variables, indicator variables and measurement scales: an example from quality of life

    DEFF Research Database (Denmark)

    Keiding, Niels

    2002-01-01

    Causal variables; Clinimetric scales; Composite scales; Construct validity; Measurement scales; Multi-item scales; Quality-of-life instruments

  11. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
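
    A rough sketch of the SASEA pipeline: sparse principal components of the environmental variables feed a logistic model with a smooth spatial term. The thin plate regression spline of the paper is approximated here by polynomial terms in the coordinates, purely for illustration, and all data are synthetic:

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(3)
    env = rng.normal(size=(1000, 50))            # block-group environment variables
    lonlat = rng.uniform(size=(1000, 2))         # geocoded coordinates
    y = rng.binomial(1, 0.1, 1000)               # asthma diagnosis indicator

    comps = SparsePCA(n_components=4, random_state=0).fit_transform(env)
    spatial = PolynomialFeatures(degree=3, include_bias=False).fit_transform(lonlat)
    X = np.hstack([comps, spatial])              # components + smooth spatial term
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(model.coef_[0, :4])                    # effects of the sparse components
    ```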

  12. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  13. Instrumentation development

    International Nuclear Information System (INIS)

    Ubbes, W.F.; Yow, J.L. Jr.

    1988-01-01

    Instrumentation is developed for the Civilian Radioactive Waste Management Program to meet several different (and sometimes conflicting) objectives. This paper addresses instrumentation development for data needs that are related either directly or indirectly to a repository site, but does not touch on instrumentation for work with waste forms or other materials. Consequently, this implies a relatively large scale for the measurements, and an in situ setting for instrument performance. In this context, instruments are needed for site characterization to define phenomena, develop models, and obtain parameter values, and for later design and performance confirmation testing in the constructed repository. The former set of applications is more immediate, and is driven by the needs of program design and performance assessment activities. A host of general technical and nontechnical issues have arisen to challenge instrumentation development. Instruments can be classed into geomechanical, geohydrologic, or other specialty categories, but these issues cut across artificial classifications. These issues are outlined. Despite this imposing list of issues, several case histories are cited to evaluate progress in the area

  14. Cassini Radar EQM Model: Instrument Description and Performance Status

    Science.gov (United States)

    Borgarelli, L.; Faustini, E. Zampolini; Im, E.; Johnson, W. T. K.

    1996-01-01

    The spacecraft of the Cassini Mission is planned to be launched towards Saturn in October 1997. The mission is designed to study the physical structure and chemical composition of Titan. The results of the tests performed on the Cassini radar engineering qualification model (EQM) are summarized. The approach followed in the verification and evaluation of the performance of the radio frequency subsystem EQM is presented. The results show that the instrument satisfies the relevant mission requirements.

  15. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
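
    The linear propagation of variance underlying the method can be sketched in a few lines: for a metric m = f(p) of compact model parameters p, Var(m) ~= J * Sigma * J^T with J the sensitivity (Jacobian) row vector. The sensitivities and covariances below are hypothetical:

    ```python
    import numpy as np

    def propagate_variance(jacobian, cov_params):
        """First-order variance of a metric given parameter covariances."""
        return jacobian @ cov_params @ jacobian.T

    J = np.array([2.0e-4, -1.5e-5, 3.0e-6])   # d(metric)/d(Vth, mobility, length)
    Sigma = np.diag([25e-6, 1e-4, 4e-9])      # parameter covariance matrix
    print(propagate_variance(J, Sigma))       # variance of the electrical metric
    ```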

  16. Interannual modes of variability of Southern Hemisphere atmospheric circulation in CMIP3 models

    International Nuclear Information System (INIS)

    Grainger, S; Frederiksen, C S; Zheng, X

    2010-01-01

    The atmospheric circulation acts as a bridge between large-scale sources of climate variability, and climate variability on regional scales. Here a statistical method is applied to monthly mean Southern Hemisphere 500hPa geopotential height to separate the interannual variability of the seasonal mean into intraseasonal and slowly varying (time scales of a season or longer) components. Intraseasonal and slow modes of variability are estimated from realisations of models from the Coupled Model Intercomparison Project Phase 3 (CMIP3) twentieth century coupled climate simulation (20c3m) and are evaluated against those estimated from reanalysis data. The intraseasonal modes of variability are generally well reproduced across all CMIP3 20c3m models for both Southern Hemisphere summer and winter. The slow modes are in general less well reproduced than the intraseasonal modes, and there are larger differences between realisations than for the intraseasonal modes. New diagnostics are proposed to evaluate model variability. It is found that differences between realisations from each model are generally less than inter-model differences. Differences between model-mean diagnostics are found. The results obtained are applicable to assessing the reliability of changes in atmospheric circulation variability in CMIP3 models and for their suitability for further studies of regional climate variability.

  17. A novel methodology improves reservoir characterization models using geologic fuzzy variables

    Energy Technology Data Exchange (ETDEWEB)

    Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

    2004-07-01

    One of the research projects carried out in the Cusiana field to explain its rapid production decline in recent years aimed to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells following the same trend of the sand depositions for each facies or layer, according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means that the geologic fuzzy variables could explain the fabric, grain size and pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the existing model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a far more promising way to obtain better permeability models. The use of these models has had a high impact on the explanation of well performance and workovers, and on reservoir simulation models. (author)

  18. Impact on mortality of prompt admission to critical care for deteriorating ward patients: an instrumental variable analysis using critical care bed strain.

    Science.gov (United States)

    Harris, Steve; Singer, Mervyn; Sanderson, Colin; Grieve, Richard; Harrison, David; Rowan, Kathryn

    2018-05-07

    To estimate the effect of prompt admission to critical care on mortality for deteriorating ward patients, we performed a prospective cohort study of consecutive ward patients assessed for critical care. Prompt admissions (within 4 h of assessment) were compared to a 'watchful waiting' cohort. We used critical care strain (bed occupancy) as a natural randomisation event that would predict prompt transfer to critical care. Strain was classified as low, medium or high (2+, 1 or 0 empty beds). This instrumental variable (IV) analysis was repeated for the subgroup of referrals with a recommendation for critical care once assessed. Risk-adjusted 90-day survival models were also constructed. A total of 12,380 patients from 48 hospitals were available for analysis. There were 2411 (19%) prompt admissions (median delay 1 h, IQR 1-2) and 9969 (81%) controls; 1990 (20%) controls were admitted later (median delay 11 h, IQR 6-26). Prompt admissions became less frequent as strain increased, both overall and among referrals recommended for critical care. In the risk-adjusted survival model, 90-day mortality was similar. After allowing for unobserved prognostic differences between the groups, we find that prompt admission to critical care leads to lower 90-day mortality for patients assessed and recommended to critical care.
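
    Since this record turns on the IV design itself, a minimal two-stage least squares sketch may help fix ideas. It is a generic toy on simulated data, with a binary instrument standing in for bed strain; the authors' actual estimator handles risk adjustment and survival outcomes, which this does not.

```python
import numpy as np

# Toy two-stage least squares (2SLS): instrument z (think: bed strain)
# shifts treatment d (prompt admission) but affects outcome y only
# through d, so the second-stage slope recovers the causal effect even
# though the confounder u is unobserved.
rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                          # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)                # instrument
d = (0.8 * z - 0.5 * u + rng.normal(size=n) > 0).astype(float)
y = -0.3 * d + 0.5 * u + rng.normal(size=n)     # true effect of d: -0.3

X1 = np.column_stack([np.ones(n), z])
d_hat = X1 @ np.linalg.lstsq(X1, d, rcond=None)[0]   # first stage
X2 = np.column_stack([np.ones(n), d_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]         # second stage
print(beta[1])   # close to -0.3; naive OLS of y on d would be biased
```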

  19. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  20. MANU. Instrumentation of Buffer Demo. Preliminary Study

    International Nuclear Information System (INIS)

    Laaksonen, R.

    2010-01-01

    The purpose of this work is to describe feasible measuring and monitoring alternatives which can be used, if needed, in medium to full scale nuclear waste repository deposition hole mock-up tests. The focus of the work was to determine which variables can actually be measured, how to achieve the measurements, and what demands arise from the modelling, scientific and technical points of view. This project includes a review of previous waste repository mock-up tests carried out in several European countries such as Belgium, the Czech Republic, Spain and Sweden. Information was also gathered by interviewing domestic and foreign scientists specialized in the fields of measurement instrumentation and related in-situ and laboratory work. On the basis of this review, recommendations were developed for the actions needed, from the instrumentation point of view, for future tests. It is possible to measure and monitor the processes going on in a deposition hole under in situ conditions. The data received during a test under real repository conditions make it possible to follow the processes and to verify the hypotheses made on the behaviour of the various components of the repository: buffer, canister, rock and backfill. Because full scale testing is expensive, the objectives and hypotheses must be carefully set, and the test itself with its instrumentation must serve very specific objectives. The main purpose of mock-up tests is to verify that the conditions surrounding the canister are according to the design requirements. A whole mock-up test and demonstration process requires a lot of time and effort. The instrumentation part of the work must therefore start at an early stage to ensure that the instrumentation itself does not become a bottleneck or suffer from low-quality solutions. The planning of the instrumentation work could be done in collaboration with foreign scientists who have participated in previous instrumentation projects. (orig.)

  1. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application contexts. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work on Service Variability Modelling presented in a previous publication, where we described an approach for customizing services for individual application contexts by creating simplified variants based on model-driven variability management. The present article presents our revised service variability metamodel, new features of the variability tools and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  2. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and 42 finally fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparison of means (8 studies, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
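
    For readers unfamiliar with the statistic this review centres on, here is a minimal sketch of one widely used variant, the two-way random-effects ICC(2,1) in Shrout and Fleiss notation, computed from an n-subjects by k-measurements matrix; the data values are invented for illustration.

```python
import numpy as np

# Two-way random-effects intra-class correlation, ICC(2,1):
# (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n).
def icc_2_1(x):
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical data: 5 subjects, each measured twice on one instrument.
x = np.array([[10.1, 10.3], [12.0, 11.8], [9.5, 9.9],
              [11.2, 11.1], [10.8, 10.6]])
print(icc_2_1(x))
```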

  3. Understanding and Measuring Evaluation Capacity: A Model and Instrument Validation Study

    Science.gov (United States)

    Taylor-Ritzler, Tina; Suarez-Balcazar, Yolanda; Garcia-Iriarte, Edurne; Henry, David B.; Balcazar, Fabricio E.

    2013-01-01

    This study describes the development and validation of the Evaluation Capacity Assessment Instrument (ECAI), a measure designed to assess evaluation capacity among staff of nonprofit organizations that is based on a synthesis model of evaluation capacity. One hundred and sixty-nine staff of nonprofit organizations completed the ECAI. The 68-item…

  4. Unit-specific calibration of Actigraph accelerometers in a mechanical setup - is it worth the effort? The effect on random output variation caused by technical inter-instrument variability in the laboratory and in the field

    DEFF Research Database (Denmark)

    Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L

    2008-01-01

    BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefore...

  5. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  6. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    Science.gov (United States)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide

  7. Development of a Symptom-Based Patient-Reported Outcome Instrument for Functional Dyspepsia: A Preliminary Conceptual Model and an Evaluation of the Adequacy of Existing Instruments.

    Science.gov (United States)

    Taylor, Fiona; Reasner, David S; Carson, Robyn T; Deal, Linda S; Foley, Catherine; Iovin, Ramon; Lundy, J Jason; Pompilus, Farrah; Shields, Alan L; Silberg, Debra G

    2016-10-01

    The aim was to document, from the perspective of the empirical literature, the primary symptoms of functional dyspepsia (FD), evaluate the extent to which existing questionnaires target those symptoms, and, finally, identify any missing evidence that would impact the questionnaires' use in regulated clinical trials to assess treatment efficacy claims intended for product labeling. A literature review was conducted to identify the primary symptoms of FD and existing symptom-based FD patient-reported outcome (PRO) instruments. Following a database search, abstracts were screened and articles were retrieved for review. The primary symptoms of FD were organized into a conceptual model and the PRO instruments were evaluated for conceptual coverage as well as compared against evidentiary requirements presented in the FDA's PRO Guidance for Industry. Fifty-six articles and 16 instruments assessing FD symptoms were reviewed. Concepts listed in the Rome III criteria for FD (n = 7), those assessed by existing FD instruments (n = 34), and symptoms reported by patients in published qualitative research (n = 6) were summarized in the FD conceptual model. Except for vomiting, all of the identified symptoms from the published qualitative research reports were also specified in the Rome III criteria. Only three of the 16 instruments, the Dyspepsia Symptom Severity Index (DSSI), Nepean Dyspepsia Index (NDI), and Short-Form Nepean Dyspepsia Index (SF-NDI), measure all seven FD symptoms defined by the Rome III criteria. Among these three, each utilizes a 2-week recall period and 5-point Likert-type scale, and had evidence of patient involvement in development. Despite their coverage, when these instruments were evaluated in light of regulatory expectations, several issues jeopardized their potential qualification for substantiation of a labeling claim. No existing PRO instruments that measured all seven symptoms adhered to the regulatory principles necessary to support product

  8. Instrumenting an upland research catchment in Canterbury, New Zealand to study controls on variability of soil moisture, shallow groundwater and streamflow

    Science.gov (United States)

    McMillan, Hilary; Srinivasan, Ms

    2015-04-01

    Hydrologists recognise the importance of vertical drainage and deep flow paths in runoff generation, even in headwater catchments. Both soil and groundwater stores are highly variable over multiple scales, and the distribution of water has a strong control on flow rates and timing. In this study, we instrumented an upland headwater catchment in New Zealand to measure the temporal and spatial variation in unsaturated and saturated-zone responses. In NZ, upland catchments are the source of much of the water used in lowland agriculture, but the hydrology of such catchments and their role in water partitioning, storage and transport is poorly understood. The study area is the Langs Gully catchment in the North Branch of the Waipara River, Canterbury: this catchment was chosen to be representative of the foothills environment, with lightly managed dryland pasture and native Matagouri shrub vegetation cover. Over a period of 16 months we measured continuous soil moisture at 32 locations and near-surface water table, comparing riparian versus hillslope locations and convergent versus divergent hillslopes. We found that temporal variability is strongly controlled by the climatic seasonal cycle, for both soil moisture and water table, and for both the mean and extremes of their distributions. Groundwater is a larger water storage component than soil moisture, and the difference increases with catchment wetness. The spatial standard deviation of both soil moisture and groundwater is larger in winter than in summer. It peaks during rainfall events due to partial saturation of the catchment, and also rises in spring as different locations dry out at different rates. The most important controls on spatial variability are aspect and distance from stream. South-facing and near-stream locations have higher water tables and more, larger soil moisture wetting events. Typical hydrological models do not explicitly account for aspect, but our results suggest that it is an important factor in hillslope

  9. Sulfur dioxide in the Venus atmosphere: I. Vertical distribution and variability

    Science.gov (United States)

    Vandaele, A. C.; Korablev, O.; Belyaev, D.; Chamberlain, S.; Evdokimova, D.; Encrenaz, Th.; Esposito, L.; Jessup, K. L.; Lefèvre, F.; Limaye, S.; Mahieux, A.; Marcq, E.; Mills, F. P.; Montmessin, F.; Parkinson, C. D.; Robert, S.; Roman, T.; Sandor, B.; Stolzenbach, A.; Wilson, C.; Wilquet, V.

    2017-10-01

    Recent observations of sulfur containing species (SO2, SO, OCS, and H2SO4) in Venus' mesosphere have generated controversy and great interest in the scientific community. These observations revealed unexpected spatial patterns and spatial/temporal variability that have not been satisfactorily explained by models. Sulfur oxide chemistry on Venus is closely linked to the global-scale cloud and haze layers, which are composed primarily of concentrated sulfuric acid. Sulfur oxide observations therefore provide important insight into the on-going chemical evolution of Venus' atmosphere, atmospheric dynamics, and possible volcanism. This paper is the first of a series of two investigating the SO2 and SO variability in the Venus atmosphere. This first part of the study focuses on the vertical distribution of SO2, considering mostly observations performed by instruments and techniques providing accurate vertical information. This comprises instruments in space (the SPICAV/SOIR suite on board Venus Express) and Earth-based instruments (JCMT). The most noticeable feature of the vertical profile of the SO2 abundance in the Venus atmosphere is the presence of an inversion layer located at about 70-75 km, with VMRs increasing above. The observations presented in this compilation indicate that at least one other significant sulfur reservoir (in addition to SO2 and SO) must be present throughout the 70-100 km altitude region to explain the inversion in the SO2 vertical profile. No photochemical model has an explanation for this behaviour. GCM modelling indicates that dynamics may play an important role in generating an inflection point at 75 km altitude but does not provide a definitive explanation of the source of the inflection at all local times or latitudes. The current study has been carried out within the frame of the International Space Science Institute (ISSI) International Team entitled 'SO2 variability in the Venus atmosphere'.

  10. Exploring mouthfeel in model wines: Sensory-to-instrumental approaches.

    Science.gov (United States)

    Laguna, Laura; Sarkar, Anwesha; Bryant, Michael G; Beadling, Andrew R; Bartolomé, Begoña; Victoria Moreno-Arribas, M

    2017-12-01

    Wine creates a group of oral-tactile stimulations not related to taste or aroma, such as astringency or fullness, better known as mouthfeel. During wine consumption, mouthfeel is affected by ethanol content, phenolic compounds and their interactions with the oral components. Mouthfeel arises through changes in the salivary film when wine is consumed. In order to understand the role of each wine component, eight different model wines with/without ethanol (8%), glycerol (10 g/L) and commercial tannins (1 g/L) were described using a trained panel. Descriptive analysis techniques were used to train the panel and measure the intensity of the mouthfeel attributes. Alongside, the suitability of different instrumental techniques (rheology, particle size, tribology and microstructure, using Transmission Electron Microscopy (TEM)) to measure wine mouthfeel sensation was investigated. Panelists discriminated samples based on their tactile-related components (ethanol, glycerol and tannins) at the levels found naturally in wine. Higher scores were found for all sensory attributes in the samples containing ethanol. Sensory astringency was associated mainly with the addition of tannins to the model wine, and glycerol did not seem to play a discriminating role at the levels found in red wines. Visual viscosity was correlated with instrumental viscosity (R=0.815, p=0.014). The hydrodynamic diameter of saliva increased in the presence of tannins (almost 2.5- to 3-fold), whereas the presence of ethanol or glycerol decreased it. These results were related to sensory astringency and earthiness, as well as to the formation of nano-complexes as observed by TEM. Rheologically, the most viscous samples were those containing glycerol or tannins. Tribology results showed that, in the boundary lubrication regime, differences in traction coefficient were due to the presence of glycerol. However, no differences in traction coefficients were observed in the presence

  11. On Studying Common Factor Dominance and Approximate Unidimensionality in Multicomponent Measuring Instruments with Discrete Items

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2018-01-01

    This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…

  12. Internal variability in a regional climate model over West Africa

    Energy Technology Data Exchange (ETDEWEB)

    Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

    2008-02-15

    Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)

  13. THE EVOLUTION OF ANNUAL MEAN TEMPERATURE AND PRECIPITATION QUANTITY VARIABILITY BASED ON ESTIMATED CHANGES BY THE REGIONAL CLIMATIC MODELS

    Directory of Open Access Journals (Sweden)

    Paula Furtună

    2013-03-01

    Full Text Available Climatic changes represent one of the major challenges of our century; they are forecast according to climate scenarios and models, which represent plausible and concrete images of future climatic conditions. The comparison of climate model results regarding future water resources and temperature trends can become a useful instrument for decision makers in choosing the most effective decisions at the economic, social and ecologic levels. The aim of this article is the analysis of temperature and pluviometric variability at the grid point closest to Cluj-Napoca, based on data provided by six different regional climate models (RCMs). Analysed over 30-year periods (2001-2030, 2031-2060 and 2061-2090), the mean temperature has a generally ascending trend, with great variability between periods. The precipitation, expressed through percentage deviation, shows a generally descending trend, which is more emphasized during 2031-2060 and 2061-2090.

  14. Impulsive synchronization and parameter mismatch of the three-variable autocatalator model

    International Nuclear Information System (INIS)

    Li, Yang; Liao, Xiaofeng; Li, Chuandong; Huang, Tingwen; Yang, Degang

    2007-01-01

    The synchronization problems of the three-variable autocatalator model via an impulsive control approach are investigated, and several theorems on the stability of impulsive control systems are established. These theorems are then used to find the conditions under which the three-variable autocatalator model can be asymptotically controlled to the equilibrium point. This Letter derives some sufficient conditions for the stabilization and synchronization of a three-variable autocatalator model via impulsive control with varying impulsive intervals. Furthermore, we address chaos quasi-synchronization in the presence of single-parameter mismatch. To illustrate the effectiveness of the new scheme, several numerical examples are given.

  15. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of the traffic control system in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model optimizes cycle length considering traffic capacity and delay; the lower-level model is a dynamic signal control decision model based on fairness analysis. A Hybrid Intelligent Optimization Algorithm is then proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
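
    For orientation, the classical Webster formula below gives the flavour of what an upper-level cycle-length optimization computes; it is only a textbook baseline, not the paper's bi-level program over the lane-group Cell Transmission Model.

```python
# Webster's classical optimal fixed cycle length, shown purely as a
# baseline illustration of upper-level cycle optimization (the paper
# itself solves a bi-level program, not this formula).
def webster_cycle(total_lost_time_s, critical_flow_ratios):
    y_sum = sum(critical_flow_ratios)      # sum of critical flow ratios
    if y_sum >= 1.0:
        raise ValueError("intersection is oversaturated")
    return (1.5 * total_lost_time_s + 5.0) / (1.0 - y_sum)

# Hypothetical four-phase intersection with 16 s of total lost time.
print(webster_cycle(16.0, [0.25, 0.20, 0.15, 0.10]))   # ~96.7 s
```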

  16. Predicting College Women's Career Plans: Instrumentality, Work, and Family

    Science.gov (United States)

    Savela, Alexandra E.; O'Brien, Karen M.

    2016-01-01

    This study examined how college women's instrumentality and expectations about combining work and family predicted early career development variables. Specifically, 177 undergraduate women completed measures of instrumentality (i.e., traits such as ambition, assertiveness, and risk taking), willingness to compromise career for family, anticipated…

  17. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use. © 2016, The International Biometric Society.
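
    A generic building block behind such models is the forward recursion for the likelihood of a hidden Markov chain. The sketch below, with two hypothetical states (say, addiction and abstinence) and binary monthly-use observations, shows only that component; the authors' model adds latent variables and MCECM estimation on top.

```python
import numpy as np

# Scaled forward algorithm for a discrete HMM log-likelihood.
def hmm_loglik(pi, A, B, obs):
    """pi: initial state probs (S,); A: transition matrix (S, S);
    B: emission probs (S, n_symbols); obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    loglik, alpha = np.log(s), alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()                 # rescale to avoid underflow
        loglik, alpha = loglik + np.log(s), alpha / s
    return loglik

pi = np.array([0.5, 0.5])               # hypothetical two-state chain
A = np.array([[0.9, 0.1], [0.2, 0.8]])  # bidirectional transitions
B = np.array([[0.8, 0.2], [0.1, 0.9]])  # P(observed use | state)
print(hmm_loglik(pi, A, B, [0, 0, 1, 1, 0]))
```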

  18. Using and Developing Measurement Instruments in Science Education: A Rasch Modeling Approach. Science & Engineering Education Sources

    Science.gov (United States)

    Liu, Xiufeng

    2010-01-01

    This book meets a demand in the science education community for a comprehensive and introductory measurement book in science education. It describes measurement instruments reported in refereed science education research journals, and introduces the Rasch modeling approach to developing measurement instruments in common science assessment domains,…
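
    The dichotomous Rasch model that the book builds on has a one-line mathematical core: the probability of a correct response depends only on the difference between person ability θ and item difficulty b. A minimal sketch:

```python
import numpy as np

# Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A person of average ability (theta = 0) on easy, medium and hard items.
for b in (-1.0, 0.0, 1.0):
    print(b, round(rasch_prob(0.0, b), 2))   # 0.73, 0.5, 0.27
```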

  19. Model instruments of effective segmentation of the fast food market

    Directory of Open Access Journals (Sweden)

    Mityaeva Tetyana L.

    2013-03-01

    Full Text Available The article presents the results of stepwise optimisation calculations of the economic effectiveness of fast food promotion, with consideration of the key parameters for assessing the efficiency of a segmentation-based marketing strategy. The article justifies the development of a mathematical model on the basis of 3D representations and a three-dimensional system of management variables. Modern applied mathematical packages allow the formation not only of one-dimensional and two-dimensional arrays for analysing links between variables, but also of three-dimensional ones; moreover, the more links and parameters are taken into account, the more adequate and adaptive the modelling results become and, as a result, the more informative and strategically valuable they are. The article shows modelling possibilities that allow strategies and reactions to be taken into account in forming the marketing strategy under the conditions of entering fast food market segments.

  20. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    Science.gov (United States)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioning was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector form to incorporate the metric terms in the calculation of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high
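
    The departure-point idea at the heart of any semi-Lagrangian scheme can be shown in one dimension. The sketch below (constant wind, periodic domain, linear interpolation) is a didactic reduction only; the thesis model is three-dimensional, semi-implicit and runs on a variable-resolution spherical grid.

```python
import numpy as np

# 1D semi-Lagrangian advection: trace each grid point upstream to its
# departure point and interpolate the field there. Stable even for
# Courant numbers above one, which is the scheme's main attraction.
nx, L, u, dt = 100, 1.0, 0.3, 0.05
x = np.linspace(0.0, L, nx, endpoint=False)
q = np.exp(-200.0 * (x - 0.5) ** 2)          # initial tracer blob

for _ in range(50):
    x_dep = (x - u * dt) % L                 # departure points
    q = np.interp(x_dep, x, q, period=L)     # upstream interpolation

print(q.max())   # blob advected (and slightly diffused by interpolation)
```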

  1. Microprocessor-based, on-line decision aid for resolving conflicting nuclear reactor instrumentation

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1981-01-01

    We describe one design for a microprocessor-based, on-line decision aid for identifying and resolving false, conflicting, or misleading instrument indications resulting from certain systems interactions for a pressurized water reactor. The system processes sensor signals from groups of instruments that track together under nominal transient and certain accident conditions, and alarms when they do not track together. We examine multiple-casualty systems interactions and formulate a trial grouping of variables that track together under specified conditions. A two-of-three type redundancy check of key variables provides alarm and indication of conflicting information when one signal suddenly tracks in opposition due to a multiple casualty, instrument failure, and/or locally abnormal conditions. Since a vote count of two of three variables in conflict is inconclusive evidence, the system is not designed to provide tripping or corrective action; instead, it improves the operator/instrument interface by providing additional and partially digested information.
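
    The two-of-three tracking check described above is simple to state in code. The sketch below is a schematic reconstruction from the abstract, with an arbitrary tolerance, not the original microprocessor logic:

```python
# Two-of-three tracking check: three signals expected to track together
# are compared pairwise; if exactly one channel disagrees with the other
# two, it is flagged. Information only -- no trip or corrective action.
def two_of_three_conflict(a, b, c, tol):
    ab, ac, bc = abs(a - b) <= tol, abs(a - c) <= tol, abs(b - c) <= tol
    if ab and ac and bc:
        return None                      # all channels track together
    if bc and not (ab or ac):
        return "channel A conflicts"
    if ac and not (ab or bc):
        return "channel B conflicts"
    if ab and not (ac or bc):
        return "channel C conflicts"
    return "inconclusive"                # more than one pair disagrees

print(two_of_three_conflict(100.0, 101.0, 130.0, tol=5.0))  # channel C
```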

  2. Realization of computer-controlled CAMAC model through the technology of virtual instrument

    International Nuclear Information System (INIS)

    Le Yi; Li Cheng; Liao Juanjuan; Zhou Xin

    1997-01-01

    The author introduces the virtual instrument system and the basic features of its typical software development platform, and demonstrates the system's superiority and suitability for physics experiments using the example of the CAMAC module ADC2249A, which is often used in nuclear physics experiments.

  3. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  4. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of, e.g., pressures, temperatures, pressure differences and single phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However, some special instrumentation, such as online concentration measurement for boric acid in the water phase or for non-condensibles in a steam atmosphere, as well as flow visualization techniques, was further developed and successfully applied during recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in the local instrumentation technology for two-phase flow by the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermohydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.

  5. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within a larger set of m+k candidate variables (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  6. Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables

    Directory of Open Access Journals (Sweden)

    Jéssica Bassi da Silva

    2018-03-01

    Full Text Available The process of mucoadhesion has been widely studied using a variety of methods, which are influenced by instrumental variables and experiment design, making the comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and gave different adhesion values when a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It is suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.

  7. External forcing as a metronome for Atlantic multidecadal variability

    Science.gov (United States)

    Otterå, Odd Helge; Bentsen, Mats; Drange, Helge; Suo, Lingling

    2010-10-01

    Instrumental records, proxy data and climate modelling show that multidecadal variability is a dominant feature of North Atlantic sea-surface temperature variations, with potential impacts on regional climate. To understand the observed variability and to gauge any potential for climate predictions it is essential to identify the physical mechanisms that lead to this variability, and to explore the spatial and temporal characteristics of multidecadal variability modes. Here we use a coupled ocean-atmosphere general circulation model to show that the phasing of the multidecadal fluctuations in the North Atlantic during the past 600 years is, to a large degree, governed by changes in the external solar and volcanic forcings. We find that volcanoes play a particularly important part in the phasing of the multidecadal variability through their direct influence on tropical sea-surface temperatures, on the leading mode of northern-hemisphere atmosphere circulation and on the Atlantic thermohaline circulation. We suggest that the implications of our findings for decadal climate prediction are twofold: because volcanic eruptions cannot be predicted a decade in advance, longer-term climate predictability may prove challenging, whereas the systematic post-eruption changes in ocean and atmosphere may hold promise for shorter-term climate prediction.

  8. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
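
    As a point of reference, plain IOV (the starting point that dIOV generalizes) is easy to simulate: an individual parameter gets a subject-level random effect plus a fresh occasion-level random effect on each occasion. The sketch below uses invented clearance values; the paper's dIOV and SDE approaches additionally let the parameter evolve over time.

```python
import numpy as np

# Simulating a clearance parameter with between-subject variability (eta)
# and inter-occasion variability (kappa): CL_ij = TVCL * exp(eta_i + kappa_ij).
rng = np.random.default_rng(1)
tvcl, omega, piov = 5.0, 0.3, 0.2     # typical value, BSV sd, IOV sd
n_subjects, n_occasions = 4, 3

eta = rng.normal(0.0, omega, size=n_subjects)
kappa = rng.normal(0.0, piov, size=(n_subjects, n_occasions))
cl = tvcl * np.exp(eta[:, None] + kappa)   # one value per subject/occasion
print(np.round(cl, 2))
```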

  9. Influences of variables on ship collision probability in a Bayesian belief network model

    International Nuclear Information System (INIS)

    Hänninen, Maria; Kujala, Pentti

    2012-01-01

    The influences of the variables in a Bayesian belief network model for estimating the role of human factors in ship collision probability in the Gulf of Finland are studied in order to discover the variables with the largest influences and to examine the validity of the network. The change in the so-called causation probability is examined while observing each state of the network variables and by utilizing sensitivity and mutual information analyses. Changing course in an encounter situation is the most influential variable in the model, followed by variables such as the Officer of the Watch's action, situation assessment, danger detection, personal condition and incapacitation. The least influential variables are the other distractions on the bridge, the bridge view, maintenance routines and the officer's fatigue. In general, the methods are found to agree on the order of the model variables, although some disagreements arise due to slightly dissimilar approaches to the concept of variable influence. The relative values and the ranking of variables based on those values are discovered to be more valuable than the actual numerical values themselves. Although the most influential variables seem plausible, there are some discrepancies between the influences indicated in the model and the literature. Thus, improvements to the network are suggested.
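
    Of the two diagnostics mentioned, mutual information is the easier to illustrate: given the joint probability table of a candidate variable and the outcome, MI measures how much observing one reduces uncertainty about the other. The numbers below are invented, not taken from the Gulf of Finland model.

```python
import numpy as np

# Mutual information (in bits) from a joint probability table.
def mutual_information(joint):
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Illustrative joint table: rows = course changed / not changed,
# columns = collision avoided / not avoided.
joint = np.array([[0.45, 0.05],
                  [0.15, 0.35]])
print(mutual_information(joint))   # higher MI = more influential variable
```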

  10. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    Science.gov (United States)

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

  11. Natural climate variability in a coupled model

    International Nuclear Information System (INIS)

    Zebiak, S.E.; Cane, M.A.

    1990-01-01

    Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual el Nino/Southern Oscillation signal that the model originally was designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities that are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

  12. AMOC decadal variability in Earth system models: Mechanisms and climate impacts

    Energy Technology Data Exchange (ETDEWEB)

    Fedorov, Alexey [Yale Univ., New Haven, CT (United States)

    2017-09-06

    This is the final report for the project titled "AMOC decadal variability in Earth system models: Mechanisms and climate impacts". The central goal of this one-year research project was to understand the mechanisms of decadal and multi-decadal variability of the Atlantic Meridional Overturning Circulation (AMOC) within a hierarchy of climate models ranging from realistic ocean GCMs to Earth system models. The AMOC is a key element of ocean circulation responsible for oceanic transport of heat from low to high latitudes and controlling, to a large extent, climate variations in the North Atlantic. The questions of the AMOC stability, variability and predictability, directly relevant to the questions of climate predictability, were at the center of the research work.

  13. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    variable G and bulk viscosity in Lyra geometry. Exact solutions for ... a comparative study of Robertson–Walker models with a constant deceleration ... where H is defined as H = (Ȧ/A) + (1/3)(Ḃ/B), and β₀ and H₀ represent the present values of β ...

  14. Modeling temporal and spatial variability of traffic-related air pollution: Hourly land use regression models for black carbon

    Science.gov (United States)

    Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc

    2013-08-01

    Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). The dataset with hourly concentrations of black carbon can be used to recalibrate the annual model, but this results in many of the original explanatory variables losing their statistical significance and in certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
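
    The dummy-variable strategy is the simplest of the three to sketch: regress the pollutant on static predictors plus one intercept shift per hour. The code below is a synthetic toy with invented covariates, not the Flanders dataset or model.

```python
import numpy as np

# Hourly LUR via hour-of-day dummies: each hour gets its own intercept
# shift on top of a static land-use predictor (synthetic data).
rng = np.random.default_rng(2)
n = 2000
traffic = rng.uniform(0.0, 1.0, n)        # e.g., traffic load near the site
hour = rng.integers(0, 24, n)

dummies = np.eye(24)[hour][:, 1:]         # hour 0 is the reference level
X = np.column_stack([np.ones(n), traffic, dummies])
y = (1.0 + 2.0 * traffic                  # "true" land-use effect
     + 0.5 * np.sin(2 * np.pi * hour / 24)   # diurnal cycle
     + rng.normal(0.0, 0.3, n))

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[:2])   # intercept and traffic coefficient; rest are hour shifts
```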

  15. Modeling 13.3nm Fe XXIII Flare Emissions Using the GOES-R EXIS Instrument

    Science.gov (United States)

    Rook, H.; Thiemann, E.

    2017-12-01

    The solar EUV spectrum is dominated by atomic transitions in ionized atoms in the solar atmosphere. As solar flares evolve, plasma temperatures and densities change, influencing abundances of various ions, changing intensities of different EUV wavelengths observed from the sun. Quantifying solar flare spectral irradiance is important for constraining models of Earth's atmosphere, improving communications quality, and controlling satellite navigation. However, high time cadence measurements of flare irradiance across the entire EUV spectrum were not available prior to the launch of SDO. The EVE MEGS-A instrument aboard SDO collected 0.1nm EUV spectrum data from 2010 until 2014, when the instrument failed. No current or future instrument is capable of similar high resolution and time cadence EUV observation. This necessitates a full EUV spectrum model to study EUV phenomena at Earth. It has been recently demonstrated that one hot flare EUV line, such as the 13.3nm Fe XXIII line, can be used to model cooler flare EUV line emissions, filling the role of MEGS-A. Since unblended measurements of Fe XXIII are typically unavailable, a proxy for the Fe XXIII line must be found. In this study, we construct two models of this line, first using the GOES 0.1-0.8nm soft x-ray (SXR) channel as the Fe XXIII proxy, and second using a physics-based model dependent on GOES emission measure and temperature data. We determine that the more sophisticated physics-based model shows better agreement with Fe XXIII measurements, although the simple proxy model also performs well. We also conclude that the high correlation between Fe XXIII emissions and the GOES 0.1-0.8nm band is because both emissions tend to peak near the GOES emission measure peak despite large differences in their contribution functions.

  16. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods, however, have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion-time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods, and we illustrate their use with data from a study of childhood wheezing. © The Author(s) 2016.
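
    The selection mechanism itself is easy to demonstrate outside the survival setting. The toy below runs an ordinary LASSO on synthetic linear data, showing how irrelevant coefficients shrink exactly to zero; the paper embeds the same penalty in mixture and promotion-time cure models for censored data.

```python
import numpy as np
from sklearn.linear_model import Lasso

# LASSO on synthetic data: only the truly relevant covariates survive.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))   # features 0 and 3 nonzero, rest ~ 0
```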

  17. Development of a Conceptual Model and Survey Instrument to Measure Conscientious Objection to Abortion Provision.

    Directory of Open Access Journals (Sweden)

    Laura Florence Harris

    Full Text Available Conscientious objection to abortion, clinicians' refusal to perform legal abortions because of their religious or moral beliefs, has been the subject of increasing debate among bioethicists, policymakers, and public health advocates in recent years. Conscientious objection policies are intended to balance reproductive rights and clinicians' beliefs. However, in practice, clinician objection can act as a barrier to abortion access, impinging on reproductive rights and increasing unsafe abortion and related morbidity and mortality. There is little information about conscientious objection from a medical or public health perspective. A quantitative instrument is needed to assess the prevalence of conscientious objection and to provide insight into its practice. This paper describes the development of a survey instrument to measure conscientious objection to abortion provision. A literature review and in-depth formative interviews with stakeholders in Colombia were used to develop a conceptual model of conscientious objection. This model led to the development of a survey, which was piloted and then administered in Ghana. The model posits three domains of conscientious objection that form the basis for the survey instrument: 1) beliefs about abortion and conscientious objection; 2) actions related to conscientious objection and abortion; and 3) self-identification as a conscientious objector. The instrument is intended to be used to assess prevalence among clinicians trained to provide abortions, and to gain insight into how conscientious objection is practiced in a variety of settings. Its results can inform more effective and appropriate strategies to regulate conscientious objection.

  18. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  19. A Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for the Surface of Mars: An Instrument for the Planetary Science Community

    Science.gov (United States)

    Edmunson, J.; Gaskin, J. A.; Danilatos, G.; Doloboff, I. J.; Effinger, M. R.; Harvey, R. P.; Jerman, G. A.; Klein-Schoder, R.; Mackie, W.; Magera, B.

    2016-01-01

    The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Science (ROSES) program, will build upon previous miniaturized SEM designs for lunar and International Space Station (ISS) applications and recent advancements in variable pressure SEMs to design and build a SEM to complete analyses of samples on the surface of Mars using the atmosphere as an imaging medium. By the end of the PICASSO work, a prototype of the primary proof-of-concept components (i.e., the electron gun, focusing optics and scanning system) will be assembled and preliminary testing in a Mars analog chamber at the Jet Propulsion Laboratory will be completed to partially fulfill Technology Readiness Level 5 requirements for those components. The team plans to have Secondary Electron Imaging (SEI), Backscattered Electron (BSE) detection, and Energy Dispersive Spectroscopy (EDS) capabilities through the MVP-SEM.

  20. The VUV instrument SPICE for Solar Orbiter: performance ground testing

    Science.gov (United States)

    Caldwell, Martin E.; Morris, Nigel; Griffin, Douglas K.; Eccleston, Paul; Anderson, Mark; Pastor Santos, Carmen; Bruzzi, Davide; Tustain, Samuel; Howe, Chris; Davenne, Jenny; Grundy, Timothy; Speight, Roisin; Sidher, Sunil D.; Giunta, Alessandra; Fludra, Andrzej; Philippon, Anne; Auchere, Frederic; Hassler, Don; Davila, Joseph M.; Thompson, William T.; Schuehle, Udo H.; Meining, Stefan; Walls, Buddy; Phelan, P.; Dunn, Greg; Klein, Roman M.; Reichel, Thomas; Gyo, Manfred; Munro, Grant J.; Holmes, William; Doyle, Peter

    2017-08-01

    SPICE is an imaging spectrometer operating at vacuum ultraviolet (VUV) wavelengths, 70.4-79.0 nm and 97.3-104.9 nm. It is a facility instrument on the Solar Orbiter mission, which carries 10 science instruments in all, to make observations of the Sun's atmosphere and heliosphere at close proximity to the Sun, i.e., down to 0.28 AU at perihelion. SPICE's role is to make VUV measurements of plasma in the solar atmosphere. SPICE is designed to achieve spectral imaging at a spectral resolution >1500, a spatial resolution of several arcsec, and a two-dimensional FOV of 11 x 16 arcmin. The many strong constraints on the instrument design imposed by the mission requirements prevent the imaging performance from exceeding those of previous instruments, but being closer to the Sun yields a gain in spatial resolution. The price paid is the harsher environment, particularly thermal. This leads to some novel features in the design, which needed to be proven by ground test programs. These include a dichroic solar-transmitting primary mirror to dump the solar heat, a high in-flight temperature (60 °C) and gradients in the optics box, and a bespoke variable-line-spacing grating to minimise the number of reflective components used. The tests culminate in the system-level test of VUV imaging performance and pointing stability. We describe how our dedicated facility, with heritage from previous solar instruments, is used to make these tests, and show the results, firstly on the Engineering Model of the optics unit, and more recently on the Flight Model.

  1. Model Predictive Control of a Nonlinear System with Known Scheduling Variable

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Model predictive control (MPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Consequently...... the control problem of the nonlinear system is simplified into a quadratic programming problem. A wind turbine is chosen as the case study and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine, therefore the scheduling variable is known for the entire prediction horizon....
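
    A minimal sketch of the idea follows: because the scheduling sequence is known over the horizon, the LPV prediction model is simply a time-varying linear system, and the MPC problem reduces to a quadratic program. The matrices, horizon, and weights below are invented placeholders; the paper's wind-turbine model is not reproduced.

    ```python
    import cvxpy as cp
    import numpy as np

    nx, nu, N = 2, 1, 10                       # states, inputs, horizon

    # Known scheduling preview (e.g., measured upstream wind speed)
    # mapped to time-varying system matrices A(theta_k), B(theta_k).
    theta = np.linspace(8.0, 12.0, N)           # placeholder preview values
    A_seq = [np.array([[0.9, 0.1 * th / 10.0], [0.0, 0.8]]) for th in theta]
    B_seq = [np.array([[0.0], [0.1]]) for _ in theta]

    x = cp.Variable((N + 1, nx))
    u = cp.Variable((N, nu))
    Q, R = np.eye(nx), 0.1 * np.eye(nu)

    cost, constraints = 0, [x[0] == np.array([1.0, 0.0])]
    for k in range(N):
        cost += cp.quad_form(x[k], Q) + cp.quad_form(u[k], R)
        constraints += [x[k + 1] == A_seq[k] @ x[k] + B_seq[k] @ u[k],
                        cp.abs(u[k]) <= 1.0]

    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u.value[0])                           # first move of the receding horizon
    ```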

  2. A geometric model for magnetizable bodies with internal variables

    Directory of Open Access Journals (Sweden)

    Restuccia, L

    2005-11-01

    Full Text Available In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

  3. Examples of EOS Variables as compared to the UMM-Var Data Model

    Science.gov (United States)

    Cantrell, Simon; Lynnes, Chris

    2016-01-01

    In an effort to provide EOSDIS clients a way to discover and use variable data from different providers, a Unified Metadata Model for Variables is being created. This presentation gives an overview of the model and the use cases we are handling.

  4. A new model of wheezing severity in young children using the validated ISAAC wheezing module: A latent variable approach with validation in independent cohorts.

    Science.gov (United States)

    Brunwasser, Steven M; Gebretsadik, Tebeb; Gold, Diane R; Turi, Kedir N; Stone, Cosby A; Datta, Soma; Gern, James E; Hartert, Tina V

    2018-01-01

    The International Study of Asthma and Allergies in Children (ISAAC) Wheezing Module is commonly used to characterize pediatric asthma in epidemiological studies, including nearly all airway cohorts participating in the Environmental Influences on Child Health Outcomes (ECHO) consortium. However, there is no consensus model for operationalizing wheezing severity with this instrument in explanatory research studies. Severity is typically measured using coarsely-defined categorical variables, reducing power and potentially underestimating etiological associations. More precise measurement approaches could improve testing of etiological theories of wheezing illness. We evaluated a continuous latent variable model of pediatric wheezing severity based on four ISAAC Wheezing Module items. Analyses included subgroups of children from three independent cohorts whose parents reported past wheezing: infants ages 0-2 in the INSPIRE birth cohort study (Cohort 1; n = 657), 6-7-year-old North American children from Phase One of the ISAAC study (Cohort 2; n = 2,765), and 5-6-year-old children in the EHAAS birth cohort study (Cohort 3; n = 102). Models were estimated using structural equation modeling. In all cohorts, covariance patterns implied by the latent variable model were consistent with the observed data, as indicated by non-significant χ2 goodness of fit tests (no evidence of model misspecification). Cohort 1 analyses showed that the latent factor structure was stable across time points and child sexes. In both cohorts 1 and 3, the latent wheezing severity variable was prospectively associated with wheeze-related clinical outcomes, including physician asthma diagnosis, acute corticosteroid use, and wheeze-related outpatient medical visits when adjusting for confounders. We developed an easily applicable continuous latent variable model of pediatric wheezing severity based on items from the well-validated ISAAC Wheezing Module. This model prospectively associates with

  5. Speech-discrimination scores modeled as a binomial variable.

    Science.gov (United States)

    Thornton, A R; Raffin, M J

    1978-09-01

    Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
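
    A short illustration of the model's practical use, assuming only that scores are binomial proportions; the thresholds below are computed directly with scipy, not taken from Thornton and Raffin's published table.

    ```python
    from scipy import stats

    def critical_range(score, n_words=50, conf=0.95):
        """Range of retest scores consistent with a true ability equal to `score`.

        Under the binomial model, a second score outside this range differs
        significantly from the first at the given confidence level.
        """
        p = score / 100.0
        lo, hi = stats.binom.interval(conf, n_words, p)
        return 100.0 * lo / n_words, 100.0 * hi / n_words

    print(critical_range(80, n_words=25))   # wider range for a 25-word list
    print(critical_range(80, n_words=50))   # narrower range for a 50-word list
    ```

    The comparison makes the 25- vs 50-word argument concrete: halving the list width increases the critical range, so fewer genuine differences are detectable.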

  6. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite-difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane-wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)
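
    The Taylor-expansion step generalizes the familiar staggered-grid coefficients. The sketch below solves the resulting linear system for the conventional (non-optimized) coefficients of a 2M-th order staggered first-derivative operator, which is the generic starting point such schemes refine; it is an illustration, not the authors' optimal variable-grid scheme.

    ```python
    import numpy as np

    def staggered_fd_coefficients(M):
        """Taylor-based coefficients a_1..a_M for the staggered first derivative:
        f'(x) ~ (1/dx) * sum_m a_m * [f(x+(2m-1)dx/2) - f(x-(2m-1)dx/2)].
        Matching Taylor terms gives sum_m a_m * (2m-1)^(2k+1) = delta_{k,0}."""
        k = np.arange(M)
        m = np.arange(1, M + 1)
        A = (2 * m - 1)[np.newaxis, :] ** (2 * k + 1)[:, np.newaxis]
        b = np.zeros(M)
        b[0] = 1.0
        return np.linalg.solve(A, b)

    print(staggered_fd_coefficients(1))  # [1.0]            -> 2nd order
    print(staggered_fd_coefficients(2))  # [9/8, -1/24]     -> 4th order
    ```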

  7. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  8. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas – a review

    OpenAIRE

    E. Cristiano; M.-C. ten Veldhuis; N. van de Giesen

    2017-01-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological res...

  9. A critical appraisal of instruments to measure outcomes of interprofessional education.

    Science.gov (United States)

    Oates, Matthew; Davidson, Megan

    2015-04-01

    Interprofessional education (IPE) is believed to prepare health professional graduates for successful collaborative practice. A range of instruments have been developed to measure the outcomes of IPE. An understanding of the psychometric properties of these instruments is important if they are to be used to measure the effectiveness of IPE. This review set out to identify instruments available to measure outcomes of IPE and collaborative practice in pre-qualification health professional students and to critically appraise the psychometric properties of validity, responsiveness and reliability against contemporary standards for instrument design. Instruments were selected from a pool of extant instruments and subjected to critical appraisal to determine whether they satisfied inclusion criteria. The qualitative and psychometric attributes of the included instruments were appraised using a checklist developed for this review. Nine instruments were critically appraised, including the widely adopted Readiness for Interprofessional Learning Scale (RIPLS) and the Interdisciplinary Education Perception Scale (IEPS). Validity evidence for instruments was predominantly based on test content and internal structure. Ceiling effects and lack of scale width contribute to the inability of some instruments to detect change in variables of interest. Limited reliability data were reported for two instruments. Scale development and scoring protocols were generally reported by instrument developers, but the inconsistent application of scoring protocols for some instruments was apparent. A number of instruments have been developed to measure outcomes of IPE in pre-qualification health professional students. Based on reported validity evidence and reliability data, the psychometric integrity of these instruments is limited. The theoretical test construction paradigm on which instruments have been developed may be contributing to the failure of some instruments to detect change in

  10. Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver

    Science.gov (United States)

    Kang, Ling; Zhou, Liwei

    2018-02-01

    The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented to improve the effectiveness of the Muskingum model in recent decades. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. According to the results of two real and frequently-used case studies with various models, the NVPNLMM obtained better values of the evaluation criteria used to assess the quality of the estimated outflows and to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of the other models.
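
    To make the routing and calibration concrete, here is a hedged sketch of a fixed-parameter nonlinear Muskingum model, with storage S = K[xI + (1-x)O]^m, calibrated by minimizing the sum of squared errors in the same spirit as a solver-based approach. The paper's variable-parameter extension (NVPNLMM) and its case-study data are not reproduced; the hydrographs are placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Placeholder observed inflow/outflow hydrographs (m^3/s), time step dt (h).
    I_obs = np.array([22, 35, 50, 68, 60, 47, 36, 28, 24, 22], dtype=float)
    O_obs = np.array([21, 25, 34, 48, 57, 55, 46, 37, 30, 25], dtype=float)
    dt = 6.0

    def route(params):
        """Route I_obs through S = K * (x*I + (1-x)*O)**m."""
        K, x, m = params
        S = K * (x * I_obs[0] + (1 - x) * O_obs[0]) ** m    # initial storage
        O = [O_obs[0]]
        for t in range(len(I_obs) - 1):
            S += dt * (I_obs[t] - O[-1])                    # continuity: dS/dt = I - O
            S = max(S, 1e-6)                                # guard against negative storage
            O.append(((S / K) ** (1 / m) - x * I_obs[t + 1]) / (1 - x))
        return np.array(O)

    sse = lambda p: np.sum((route(p) - O_obs) ** 2)
    res = minimize(sse, x0=[1.0, 0.2, 1.2], method="Nelder-Mead",
                   bounds=[(0.1, 50.0), (0.01, 0.49), (1.01, 2.5)])
    print(res.x)   # calibrated K, x, m
    ```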

  11. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    Science.gov (United States)

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
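
    For readers unfamiliar with the second of these designs, a bare-bones two-stage least squares (2SLS) sketch is given below. The arrays are synthetic placeholders, not the cohort data used in the paper; the point is only that projecting the exposure onto the instruments removes the bias from the unmeasured confounder.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    z = rng.normal(size=(n, 2))                  # instruments: workplace entitlements
    u = rng.normal(size=n)                       # unmeasured confounder
    x = z @ np.array([0.5, 0.3]) + u + rng.normal(size=n)   # endogenous job quality
    y = 1.3 * x - 2.0 * u + rng.normal(size=n)   # MHI-5-like mental health outcome

    def add_const(a):
        return np.column_stack([np.ones(len(a)), a])

    # Stage 1: project the exposure onto the instruments.
    Z = add_const(z)
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

    # Stage 2: regress the outcome on the fitted exposure.
    beta = np.linalg.lstsq(add_const(x_hat), y, rcond=None)[0]
    print(beta[1])   # close to the true effect 1.3 despite confounding by u
    ```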

  12. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  13. BehavePlus fire modeling system, version 5.0: Variables

    Science.gov (United States)

    Patricia L. Andrews

    2009-01-01

    This publication has been revised to reflect updates to version 5.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July, 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

  14. Electrical Bioimpedance-Controlled Surgical Instrumentation.

    Science.gov (United States)

    Brendle, Christian; Rein, Benjamin; Niesche, Annegret; Korff, Alexander; Radermacher, Klaus; Misgeld, Berno; Leonhardt, Steffen

    2015-10-01

    A bioimpedance-controlled concept for bone cement milling during revision total hip replacement is presented. Normally, the surgeon manually removes bone cement using a hammer and chisel. However, this procedure is relatively rough and unintended harm may occur to tissue at any time. The proposed bioimpedance-controlled surgical instrumentation improves this process because, for example, most risks associated with bone cement removal are avoided. The electrical bioimpedance measurements enable online process control by using the milling head as both a cutting tool and a measurement electrode at the same time. Furthermore, a novel integrated surgical milling tool is introduced, which allows acquisition of electrical bioimpedance data for online control; these data are used as a process variable. Process identification is based on finite element method simulation and on experimental studies with a rapid control prototyping system. The control loop design includes the identified process model, the characterization of the noise as normally distributed, and the filtering necessary for sufficient accuracy (±0.5 mm). Also, in a comparative study, noise suppression is investigated in silico with a moving average filter and a Kalman filter. Finally, performance analysis shows that the bioimpedance-controlled surgical instrumentation may also perform effectively at a higher feed rate (e.g., 5 mm/s).
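
    As a toy version of the noise-suppression comparison, the sketch below smooths a noisy scalar measurement with a moving average and with a random-walk Kalman filter. The signal, noise levels, and filter tuning are invented, not the paper's identified process model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true = np.cumsum(0.05 * rng.standard_normal(200))   # slowly drifting signal
    meas = true + 0.5 * rng.standard_normal(200)        # noisy bioimpedance-like data

    # Moving average filter (window of 10 samples).
    ma = np.convolve(meas, np.ones(10) / 10, mode="same")

    # Scalar Kalman filter for a random-walk state model.
    q, r = 0.05 ** 2, 0.5 ** 2      # process / measurement noise variances
    xhat, p, kf = 0.0, 1.0, []
    for z in meas:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        xhat += k * (z - xhat)      # update
        p *= (1 - k)
        kf.append(xhat)

    print(np.std(ma - true), np.std(np.array(kf) - true))   # residual errors
    ```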

  15. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the

  16. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Sciences Meeting to be held in January 2005. Since the submittal of these abstracts, we have continued refining the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  17. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed
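
    For context, the independent (IBC-style) correction of a single variable is often a simple quantile mapping, as sketched below. The joint method additionally corrects the precipitation-temperature dependence (for example, through a rank-based reordering step), which this univariate sketch deliberately omits; the arrays are placeholders.

    ```python
    import numpy as np

    def quantile_map(model_future, model_hist, obs_hist):
        """Map each modeled value through the empirical CDFs:
        model value -> its quantile in the model climate -> observed value there."""
        ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
        return np.quantile(obs_hist, np.clip(ranks, 0.0, 1.0))

    rng = np.random.default_rng(2)
    obs_hist = rng.gamma(2.0, 3.0, 5000)      # observed precipitation (reference period)
    model_hist = rng.gamma(2.5, 2.0, 5000)    # biased model precipitation (same period)
    model_future = rng.gamma(2.5, 2.2, 1000)  # model projection to be corrected

    corrected = quantile_map(model_future, model_hist, obs_hist)
    print(model_future.mean(), corrected.mean(), obs_hist.mean())
    ```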

  18. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
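
    A compact sklearn sketch of the overall pipeline shape (impute, select, forecast) follows. The imputation strategy, feature count, and synthetic data are placeholders standing in for the five imputation methods and the factor-analysis-based selection compared in the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 12))                          # atmospheric predictors
    y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=500)  # water level proxy
    X[rng.random(X.shape) < 0.05] = np.nan                  # inject missing values

    model = make_pipeline(
        SimpleImputer(strategy="mean"),                     # one of several options
        SelectKBest(f_regression, k=5),                     # stand-in for FA-based selection
        RandomForestRegressor(n_estimators=200, random_state=0),
    )
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model.fit(X_tr, y_tr)
    print(model.score(X_te, y_te))                          # R^2 on held-out days
    ```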

  19. Designing means and specifications for model FT-619 kidney function instrument

    International Nuclear Information System (INIS)

    Yu Yongding

    1988-04-01

    In this paper, it is pointed out that the model FT-619 Kidney Function Equipment is a new, cost-effective nuclear medicine instrument that takes a leading position in China. The performance of the model FT-619, especially its lead-collimated scintillation detector, has reached the level of advanced equipment on the world market. The paper also describes in detail how the design of the lead collimator and the shielding, as well as the detection efficiency, have been optimized, and compares the instrument with foreign products.

  20. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer-valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields...... and their potential applications. The purpose of this paper is to summarize useful probabilistic results, study stochastic constructions and simulation techniques, and discuss some examples of α-permanental random fields. This should provide a useful basis for discussing the statistical aspects in future work....

  1. Interacting ghost dark energy models with variable G and Λ

    Science.gov (United States)

    Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

    2013-12-01

    In this paper we consider several phenomenological models of variable Λ. A model of a flat Universe with variable Λ and G is adopted. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different manipulations and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between the fluids with energy densities ρ1 and ρ2 is assumed to be Q = 3Hb(ρ1+ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.

  2. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. © Springer Science+Business Media, LLC 2007.

  3. Inter-model variability and biases of the global water cycle in CMIP3 coupled climate models

    International Nuclear Information System (INIS)

    Liepert, Beate G; Previdi, Michael

    2012-01-01

    Observed changes such as increasing global temperatures and the intensification of the global water cycle in the 20th century are robust results of coupled general circulation models (CGCMs). In spite of these successes, model-to-model variability and biases that are small in first-order climate responses nevertheless have considerable implications for climate predictability, especially when multi-model means are used. We show that most climate simulations of the 20th and 21st century A2 scenario performed with CMIP3 (Coupled Model Inter-comparison Project Phase 3) models have deficiencies in simulating the global atmospheric moisture balance. Large biases of only a few models (some biases reach the magnitude of the simulated global precipitation changes in the 20th and 21st centuries) affect the multi-model mean global moisture budget. An imbalanced flux of −0.14 Sv exists, while the multi-model median imbalance is only −0.02 Sv. Moreover, for most models the detected imbalance changes over time. As a consequence, in 13 of the 18 CMIP3 models examined, global annual mean precipitation exceeds global evaporation, indicating that there should be a ‘leaking’ of moisture from the atmosphere, whereas for the remaining five models a ‘flooding’ is implied. Nonetheless, in all models, the actual atmospheric moisture content and its variability correctly increases during the course of the 20th and 21st centuries. These discrepancies therefore imply an unphysical and hence ‘ghost’ sink/source of atmospheric moisture in the models whose atmospheres flood/leak. The ghost source/sink of moisture can also be regarded as atmospheric latent heating/cooling and hence as a positive/negative perturbation of the atmospheric energy budget or non-radiative forcing in the range of −1 to +6 W m^-2 (median +0.1 W m^-2). The inter-model variability of the global atmospheric moisture transport from oceans to land areas, which impacts the terrestrial water cycle, is also quite high and ranges

  4. How ocean lateral mixing changes Southern Ocean variability in coupled climate models

    Science.gov (United States)

    Pradal, M. A. S.; Gnanadesikan, A.; Thomas, J. L.

    2016-02-01

    The lateral mixing of tracers represents a major uncertainty in the formulation of coupled climate models. The mixing of tracers along density surfaces in the interior and horizontally within the mixed layer is often parameterized using a mixing coefficient ARedi. The models used in the Coupled Model Intercomparison Project 5 exhibit more than an order of magnitude range in the values of this coefficient used within the Southern Ocean. The impacts of such uncertainty on Southern Ocean variability have remained unclear, even as recent work has shown that this variability differs between different models. In this poster, we change the lateral mixing coefficient within GFDL ESM2Mc, a coarse-resolution Earth System model that nonetheless has a reasonable circulation within the Southern Ocean. As the coefficient varies from 400 to 2400 m^2/s, the amplitude of the variability varies significantly. The low-mixing case shows strong decadal variability with an annual mean RMS temperature variability exceeding 1 °C in the Circumpolar Current. The highest-mixing case shows a very similar spatial pattern of variability, but with amplitudes only about 60% as large. The suppression of variability is larger in the Atlantic sector of the Southern Ocean relative to the Pacific sector. We examine the salinity budgets of convective regions, paying particular attention to the extent to which high mixing prevents the buildup of low-saline waters that are capable of shutting off deep convection entirely.

  5. The PROactive instruments to measure physical activity in patients with chronic obstructive pulmonary disease

    Science.gov (United States)

    Gimeno-Santos, Elena; Raste, Yogini; Demeyer, Heleen; Louvaris, Zafeiris; de Jong, Corina; Rabinovich, Roberto A.; Hopkinson, Nicholas S.; Polkey, Michael I.; Vogiatzis, Ioannis; Tabberer, Maggie; Dobbels, Fabienne; Ivanoff, Nathalie; de Boer, Willem I.; van der Molen, Thys; Kulich, Karoly; Serra, Ignasi; Basagaña, Xavier; Troosters, Thierry; Puhan, Milo A.; Karlsson, Niklas

    2015-01-01

    No current patient-centred instrument captures all dimensions of physical activity in chronic obstructive pulmonary disease (COPD). Our objective was item reduction and initial validation of two instruments to measure physical activity in COPD. Physical activity was assessed in a 6-week, randomised, two-way cross-over, multicentre study using PROactive draft questionnaires (daily and clinical visit versions) and two activity monitors. Item reduction followed an iterative process including classical and Rasch model analyses, and input from patients and clinical experts. 236 COPD patients from five European centres were included. Results indicated the concept of physical activity in COPD had two domains, labelled “amount” and “difficulty”. After item reduction, the daily PROactive instrument comprised nine items and the clinical visit contained 14. Both demonstrated good model fit (person separation index >0.7). Confirmatory factor analysis supported the bidimensional structure. Both instruments had good internal consistency (Cronbach's α>0.8), test–retest reliability (intraclass correlation coefficient ≥0.9) and exhibited moderate-to-high correlations (r>0.6) with related constructs and very low correlations (r<0.3) with unrelated constructs, providing evidence for construct validity. Daily and clinical visit “PROactive physical activity in COPD” instruments are hybrid tools combining a short patient-reported outcome questionnaire and two activity monitor variables which provide simple, valid and reliable measures of physical activity in COPD patients. PMID:26022965
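
    Since internal-consistency figures like these are straightforward to reproduce, here is a small Cronbach's α helper; the response matrix is a made-up placeholder, not PROactive data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(4)
    ability = rng.normal(size=(300, 1))                      # shared latent trait
    responses = ability + 0.6 * rng.normal(size=(300, 9))    # 9 correlated items
    print(cronbach_alpha(responses))                         # typically > 0.8 here
    ```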

  6. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
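
    The core device is easy to state: instead of the full multivariate likelihood, one maximizes a sum of bivariate marginal log-likelihoods. In generic notation (not the paper's exact symbols),

    \[
    c\ell(\theta) = \sum_{i=1}^{n} \sum_{j<k} \log P\bigl(y_{ij}, y_{ik};\, \theta\bigr),
    \]

    where \(y_{ij}\) is subject i's ordinal response at occasion j. Each bivariate probability involves only a low-dimensional integral over the latent variables, which is what makes estimation tractable relative to full maximum likelihood.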

  7. Energy conserving schemes for the simulation of musical instrument contact dynamics

    Science.gov (United States)

    Chatziioannou, Vasileios; van Walstijn, Maarten

    2015-03-01

    Collisions are an innate part of the function of many musical instruments. Due to the nonlinear nature of contact forces, special care has to be taken in the construction of numerical schemes for simulation and sound synthesis. Finite difference schemes and other time-stepping algorithms used for musical instrument modelling purposes are normally arrived at by discretising a Newtonian description of the system. However because impact forces are non-analytic functions of the phase space variables, algorithm stability can rarely be established this way. This paper presents a systematic approach to deriving energy conserving schemes for frictionless impact modelling. The proposed numerical formulations follow from discretising Hamilton's equations of motion, generally leading to an implicit system of nonlinear equations that can be solved with Newton's method. The approach is first outlined for point mass collisions and then extended to distributed settings, such as vibrating strings and beams colliding with rigid obstacles. Stability and other relevant properties of the proposed approach are discussed and further demonstrated with simulation examples. The methodology is exemplified through a case study on tanpura string vibration, with the results confirming the main findings of previous studies on the role of the bridge in sound generation with this type of string instrument.
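
    The flavor of such schemes can be illustrated on a point mass with Hamiltonian H(p, q) = p^2/(2m) + V(q), where V is the (non-analytic) contact potential. A standard discrete-gradient discretization, shown here as a generic illustration rather than the paper's exact scheme, is

    \[
    \frac{q^{n+1}-q^{n}}{\Delta t} = \frac{p^{n+1}+p^{n}}{2m},
    \qquad
    \frac{p^{n+1}-p^{n}}{\Delta t} = -\,\frac{V(q^{n+1})-V(q^{n})}{q^{n+1}-q^{n}}.
    \]

    Multiplying the two relations together shows that H(p^{n+1}, q^{n+1}) = H(p^n, q^n) exactly, for any step size Δt; because V is nonlinear at contact, each step is an implicit equation in (q^{n+1}, p^{n+1}), solved with Newton's method as the abstract describes.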

  8. Instrumentation for the follow-up of severe accidents

    International Nuclear Information System (INIS)

    Munoz Sanchez, A.; Nino Perote, R.

    2000-01-01

    During severe accidents, it is foreseeable that the instrumentation installed in a plant is subjected to conditions which are more hostile than those for which the instrumentation was designed and qualified. Moreover, new, specific instrumentation is required to monitor variables which have not been considered until now, and to control systems which lessen the consequences of severe accidents. Both existing instrumentation used to monitor critical functions in design basis accident conditions and additional instrumentation which provides the information necessary to control and mitigate the consequences of severe accidents, have to be designed to withstand such conditions, especially in terms of measurements range, functional characteristics and qualification to withstand pressure and temperature loads resulting from steam explosion, hydrogen combustion/explosion and high levels of radiation over long periods of time. (Author)

  9. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction. (paper)
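
    A minimal mutual-information ranking in sklearn illustrates the first step of such a selection scheme. Note that the PMI method in the paper is *partial* mutual information, which additionally discounts variables redundant with already-selected inputs, a refinement this sketch omits; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(5)
    X = rng.normal(size=(2000, 6))                # candidate flowmeter signals
    y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + 0.1 * rng.normal(size=2000)  # target

    mi = mutual_info_regression(X, y, random_state=0)
    ranking = np.argsort(mi)[::-1]
    print(ranking[:3], np.round(mi[ranking[:3]], 3))   # most informative inputs first
    ```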

  10. Modelling of XCO2 Surfaces Based on Flight Tests of TanSat Instruments

    Directory of Open Access Journals (Sweden)

    Li Li Zhang

    2016-11-01

    Full Text Available The TanSat carbon satellite is to be launched at the end of 2016. In order to verify the performance of its instruments, a flight test of TanSat instruments was conducted in Jilin Province in September, 2015. The flight test covered a total area of about 11,000 km^2 and the underlying surface cover included several lakes, forest land, grassland, wetland, farmland, a thermal power plant and numerous cities and villages. We modeled the column-averaged dry-air mole fraction of atmospheric carbon dioxide (XCO2) surface based on flight test data which measured the near- and short-wave infrared (NIR) reflected solar radiation in the absorption bands at around 760 and 1610 nm. However, it is difficult to directly analyze the spatial distribution of XCO2 in the flight area using the limited flight test data, and the approximate XCO2 surface obtained by regression modeling is not very accurate either. We therefore used the high accuracy surface modeling (HASM) platform to fill the gaps where there is no information on XCO2 in the flight test area, taking the approximate surface of XCO2 as its driving field and the XCO2 observations retrieved from the flight test as its optimum control constraints. High accuracy surfaces of XCO2 were constructed with HASM based on the flight's observations. The results showed that the mean XCO2 in the flight test area is about 400 ppm and that XCO2 over urban areas is much higher than in other places. Compared with OCO-2's XCO2, the mean difference is 0.7 ppm and the standard deviation is 0.95 ppm. Therefore, the modelling of the XCO2 surface based on the flight test of the TanSat instruments fell within an expected and acceptable range.

  11. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
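
    A schematic of the subsampling loop follows, using plain lasso regression as the base learner for brevity (the paper's base learner is the lasso-penalized Cox model, and choosing the Λ grid carefully is exactly the issue the authors address). The data and thresholds are placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(6)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(size=n)

    lambdas = np.logspace(-1.5, 0, 10)     # regularization region "Lambda"
    freq = np.zeros(p)
    B = 100
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)   # subsample half the data
        selected = np.zeros(p, dtype=bool)
        for lam in lambdas:
            coef = Lasso(alpha=lam, max_iter=10000).fit(X[idx], y[idx]).coef_
            selected |= coef != 0           # selected anywhere along the path
        freq += selected

    stable = np.where(freq / B >= 0.7)[0]   # selection-probability threshold
    print(stable)                           # indices of stably selected variables
    ```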

  12. ExoMars Trace Gas Orbiter Instrument Modelling Approach to Streamline Science Operations

    Science.gov (United States)

    Munoz Fernandez, Michela; Frew, David; Ashman, Michael; Cardesin Moinelo, Alejandro; Garcia Beteta, Juan Jose; Geiger, Bernhard; Metcalfe, Leo; Nespoli, Federico; Muniz Solaz, Carlos

    2018-05-01

    ExoMars Trace Gas Orbiter (TGO) science operations activities are centralised at ESAC's Science Operations Centre (SOC). The SOC receives the inputs from the principal investigators (PIs) in order to implement and deliver the spacecraft pointing requests and instrument timelines to the Mission Operations Centre (MOC). The high number of orbits per planning cycle has made it necessary to abstract the planning interactions between the SOC and the PI teams at the observation level. This paper describes the modelling approach we have conducted for TGO's instruments to streamline science operations. We have created dynamic observation types that scale to adapt to the conditions specified by the PI teams, including observation timing, and pointing block parameters calculated from the observation geometry. This approach is considered an improvement with respect to previous missions, where the generation of the observation pointing and commanding requests was performed manually by the instrument teams. Automation software assists us in effectively handling the high density of planned orbits with an increasing volume of scientific data and in successfully meeting opportunistic scientific goals and objectives. Our planning tool combines the instrument observation definition files provided by the PIs together with the flight dynamics products to generate the Pointing Requests and the instrument timeline (ITL). The ITL contains all the validated commands at the TC sequence level and computes the resource envelopes (data rate, power, data volume) within the constraints. At the SOC, our main goal is to maximise the science output while minimising the number of iterations among the teams, ensuring that the timeline does not violate the state transitions allowed in the Mission Operations Rules and Constraints Document.

  13. Suprathermal ions in the solar wind from the Voyager spacecraft: Instrument modeling and background analysis

    International Nuclear Information System (INIS)

    Randol, B M; Christian, E R

    2015-01-01

    Using publicly available data from the Voyager Low Energy Charged Particle (LECP) instruments, we investigate the form of the solar wind ion suprathermal tail in the outer heliosphere inside the termination shock. This tail has a commonly observed form in the inner heliosphere, that is, a power law with a particular spectral index. The Voyager spacecraft have taken data beyond 100 AU, farther than any other spacecraft. However, during extended periods of time, the data appears to be mostly background. We have developed a technique to self-consistently estimate the background seen by LECP due to cosmic rays using data from the Voyager cosmic ray instruments and a simple, semi-analytical model of the LECP instruments

  14. Multiscale thermohydrologic model: addressing variability and uncertainty at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T; Rosenberg, N D; Gansemer, J D; Sun, Y

    2000-01-01

    Performance assessment and design evaluation require a modeling tool that simultaneously accounts for processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and for behavior at the scale of the mountain. Many processes and features must be considered, including non-isothermal, multiphase flow in rock of variable saturation and thermal radiation in open cavities. Also, given the nature of the fractured rock at Yucca Mountain, a dual-permeability approach is needed to represent permeability. A monolithic numerical model with all these features requires too large a computational cost to be an effective simulation tool, one that is used to examine sensitivity to key model assumptions and parameters. We have developed a multi-scale modeling approach that effectively simulates 3D discrete-heat-source, mountain-scale thermohydrologic behavior at Yucca Mountain and captures the natural variability of the site consistent with what we know from site characterization, as well as waste-package-to-waste-package variability in heat output. We describe this approach and present results examining the role of infiltration flux, the most important natural-system parameter with respect to how thermohydrologic behavior influences the performance of the repository

  15. Exploratory and Creative Properties of Physical-Modeling-based Musical Instruments

    DEFF Research Database (Denmark)

    Gelineck, Steven

    Digital musical instruments are developed to enable musicians to find new ways of expressing themselves. The development and evaluation of these instruments can be approached from many different perspectives depending on which capabilities one wants the musicians to have. This thesis attempts...... to approach development and evaluation of these instruments with the notion that instruments today are able to facilitate the creative process that is so crucial for creating music. The fundamental question pursued throughout the thesis is how creative work processes of composers of electronic music can...... be supported and even challenged by the instruments they use. What is it that makes one musical instrument more creatively inspiring than another, and how do we evaluate how well it succeeds? In order to present answers to these questions, the thesis focusses on the sound synthesis technique of physical...

  16. NASA Instrument Cost Model for Explorer-Like Mission Instruments (NICM-E)

    Science.gov (United States)

    Habib-Agahi, Hamid; Fox, George; Mrozinski, Joe; Ball, Gary

    2013-01-01

    NICM-E is a cost estimating relationship that supplements the traditional NICM system-level CERs for instruments flown on NASA Explorer-like missions, which have the following three characteristics: 1) they fly on Class C missions, 2) major development is led and performed by universities or research foundations, and 3) they have a significant level of inheritance.

  17. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare with the listing method in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.

  18. The radiation budget of stratocumulus clouds measured by tethered balloon instrumentation: Variability of flux measurements

    Science.gov (United States)

    Duda, David P.; Stephens, Graeme L.; Cox, Stephen K.

    1990-01-01

    Measurements of longwave and shortwave radiation were made using an instrument package on the NASA tethered balloon during the FIRE Marine Stratocumulus experiment. Radiation data from two pairs of pyranometers were used to obtain vertical profiles of the near-infrared and total solar fluxes through the boundary layer, while a pair of pyrgeometers supplied measurements of the longwave fluxes in the cloud layer. The radiation observations were analyzed to determine heating rates and to measure the radiative energy budget inside the stratocumulus clouds during several tethered balloon flights. The radiation fields in the cloud layer were also simulated by a two-stream radiative transfer model, which used cloud optical properties derived from microphysical measurements and Mie scattering theory.

  19. Out- and insourcing, an analysis model for use of instrumented techniques

    DEFF Research Database (Denmark)

    Bang, Henrik Peter; Grønbæk, Niels; Larsen, Claus Richard

    2017-01-01

    We sketch an outline of a model for analyzing the use of ICT-tools, in particular CAS, in teaching designs employed by ‘generic’ teachers. Our model uses the business economics concepts out- and insourcing as metaphors within the dialectics of tool and content in planning of teaching. Outsourcing...... is done in order to enhance outcome through external partners. The converse concept of insourcing refers to internal sourcing. We shall adhere to the framework of the anthropological theory of the didactic, viewing out- and insourcing primarily as decisions about the technology component of praxeologies....... We use the model on a concrete example from Danish upper secondary mathematics to uncover what underlies teachers’ decisions (deliberate or colloquial) on incorporating instrumented approaches....

  20. Adaptation of endothelial cells to physiologically-modeled, variable shear stress.

    Directory of Open Access Journals (Sweden)

    Joseph S Uzarski

    Full Text Available Endothelial cell (EC) function is mediated by variable hemodynamic shear stress patterns at the vascular wall, where complex shear stress profiles directly correlate with blood flow conditions that vary temporally based on metabolic demand. The interactions of these more complex and variable shear fields with EC have not been represented in hemodynamic flow models. We hypothesized that EC exposed to pulsatile shear stress that changes in magnitude and duration, modeled directly from real-time physiological variations in heart rate, would elicit phenotypic changes as relevant to their critical roles in thrombosis, hemostasis, and inflammation. Here we designed a physiological flow (PF) model based on short-term temporal changes in blood flow observed in vivo and compared it to static culture and steady flow (SF) at a fixed pulse frequency of 1.3 Hz. Results show significant changes in gene regulation as a function of temporally variable flow, indicating a reduced wound phenotype more representative of quiescence. EC cultured under PF exhibited significantly higher endothelial nitric oxide synthase (eNOS) activity (PF: 176.0±11.9 nmol/10⁵ EC; SF: 115.0±12.5 nmol/10⁵ EC, p = 0.002) and lower TNF-α-induced HL-60 leukocyte adhesion (PF: 37±6 HL-60 cells/mm²; SF: 111±18 HL-60/mm², p = 0.003) than cells cultured under SF, which is consistent with a more quiescent anti-inflammatory and anti-thrombotic phenotype. In vitro models have become increasingly adept at mimicking natural physiology and in doing so have clarified the importance of both chemical and physical cues that drive cell function. These data illustrate that the variability in metabolic demand and subsequent changes in perfusion resulting in constantly variable shear stress plays a key role in EC function that has not previously been described.

  1. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant...... and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations......, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub Gaussianity of the error terms thereby generalizing the results...

  2. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency]; Sims, Ryan [Environmental Protection Agency]; Stenhouse, Jeb [Environmental Protection Agency]; Donohoo-Vallett, Paul [U.S. Department of Energy]

    2017-11-03

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops on VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and lessons learned from the two workshops. We emphasize the areas where there is still a need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

  3. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    Science.gov (United States)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3) ⊗ so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  4. Minimally invasive instrumentation without fusion during posterior thoracic corpectomies: a comparison of percutaneously instrumented nonfused segments with open instrumented fused segments.

    Science.gov (United States)

    Lau, Darryl; Chou, Dean

    2017-07-01

    OBJECTIVE During the mini-open posterior corpectomy, percutaneous instrumentation without fusion is performed above and below the corpectomy level. In this study, the authors' goal was to compare the perioperative and long-term implant failure rates of patients who underwent nonfused percutaneous instrumentation with those of patients who underwent traditional open instrumented fusion. METHODS Adult patients who underwent posterior thoracic corpectomies with cage reconstruction between 2009 and 2014 were identified. Patients who underwent mini-open corpectomy had percutaneous instrumentation without fusion, and patients who underwent open corpectomy had instrumented fusion above and below the corpectomy site. The authors compared perioperative outcomes and rates of implant failure requiring reoperation between the open (fused) and mini-open (unfused) groups. RESULTS A total of 75 patients were identified, and 53 patients (32 open and 21 mini-open) were available for follow-up. The mean patient age was 52.8 years, and 56.6% of patients were male. There were no significant differences in baseline variables between the 2 groups. The overall perioperative complication rate was 15.1%, and there was no significant difference between the open and mini-open groups (18.8% vs 9.5%; p = 0.359). The mean hospital stay was 10.5 days. The open group required a significantly longer stay than the mini-open group (12.8 vs 7.1 days). Implant failure rates did not differ significantly between the open and mini-open groups at 6 months (3.1% vs 0.0%, p = 0.413), 1 year (10.7% vs 6.2%, p = 0.620), or 2 years (18.2% vs 8.3%, p = 0.438). The overall mean follow-up was 29.2 months. CONCLUSIONS These findings suggest that percutaneous instrumentation without fusion in mini-open transpedicular corpectomies offers similar implant failure and reoperation rates as open instrumented fusion as far out as 2 years of follow-up.

  5. Importance of Intrinsic and Instrumental Value of Education in Pakistan

    Science.gov (United States)

    Kumar, Mahendar

    2017-01-01

    Normally, the effectiveness of any object or thing is judged by two values: intrinsic and instrumental. To compare the intrinsic value of education with its instrumental value, this study used the following variables: getting knowledge for its own sake, getting knowledge for social status, getting knowledge for job or business endeavor, and getting…

  6. Using the Rasch measurement model to design a report writing assessment instrument.

    Science.gov (United States)

    Carlson, Wayne R

    2013-01-01

    This paper describes how the Rasch measurement model was used to develop an assessment instrument designed to measure student ability to write law enforcement incident and investigative reports. The ability to write reports is a requirement of all law enforcement recruits in the state of Michigan and is a part of the state's mandatory basic training curriculum, which is promulgated by the Michigan Commission on Law Enforcement Standards (MCOLES). Recently, MCOLES conducted research to modernize its training and testing in the area of report writing. A structured validation process was used, which included: a) an examination of the job tasks of a patrol officer, b) input from content experts, c) a review of the professional research, and d) the creation of an instrument to measure student competency. The Rasch model addressed several measurement principles that were central to construct validity, which were particularly useful for assessing student performances. Based on the results of the report writing validation project, the state established a legitimate connectivity between the report writing standard and the essential job functions of a patrol officer in Michigan. The project also produced an authentic instrument for measuring minimum levels of report writing competency, which generated results that are valid for inferences of student ability. Ultimately, the state of Michigan must ensure the safety of its citizens by licensing only those patrol officers who possess a minimum level of core competency. Maintaining the validity and reliability of both the training and testing processes can ensure that the system for producing such candidates functions as intended.
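
    For readers unfamiliar with the measurement model named here, the dichotomous Rasch model gives the probability of a correct response as a logistic function of person ability minus item difficulty. A minimal sketch (the symbols theta and b are standard notation, not taken from the record):

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(success) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A recruit whose ability sits 0.5 logits above an item's difficulty
# succeeds on that item about 62% of the time.
print(rasch_prob(theta=0.5, b=0.0))
```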

  7. Does social trust increase willingness to pay taxes to improve public healthcare? Cross-sectional cross-country instrumental variable analysis.

    Science.gov (United States)

    Habibov, Nazim; Cheung, Alex; Auchynnikava, Alena

    2017-09-01

    The purpose of this paper is to investigate the effect of social trust on the willingness to pay more taxes to improve public healthcare in post-communist countries. The well-documented association between higher levels of social trust and better health has traditionally been assumed to reflect the notion that social trust is positively associated with support for public healthcare system through its encouragement of cooperative behaviour, social cohesion, social solidarity, and collective action. Hence, in this paper, we have explicitly tested the notion that social trust contributes to an increase in willingness to financially support public healthcare. We use micro data from the 2010 Life-in-Transition survey (N = 29,526). Classic binomial probit and instrumental variables ivprobit regressions are estimated to model the relationship between social trust and paying more taxes to improve public healthcare. We found that an increase in social trust is associated with a greater willingness to pay more taxes to improve public healthcare. From the perspective of policy-making, healthcare administrators, policy-makers, and international donors should be aware that social trust is an important factor in determining the willingness of the population to provide much-needed financial resources to supporting public healthcare. From a theoretical perspective, we found that estimating the effect of trust on support for healthcare without taking confounding and measurement error problems into consideration will likely lead to an underestimation of the true effect of trust. Copyright © 2017 Elsevier Ltd. All rights reserved.
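
    To make the instrumental-variables logic concrete, here is a self-contained sketch of a linear two-stage least squares (2SLS) analog on synthetic data. The record's own analysis used probit and ivprobit estimators; the variable names, instrument, and effect sizes below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                             # instrument
u = rng.normal(size=n)                             # unobserved confounder
trust = 0.8 * z + u + rng.normal(size=n)           # endogenous regressor
willing = 0.5 * trust + u + rng.normal(size=n)     # outcome; true effect is 0.5

X = np.column_stack([np.ones(n), trust])
Z = np.column_stack([np.ones(n), z])

beta_ols = np.linalg.lstsq(X, willing, rcond=None)[0]       # biased by the confounder u

X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]            # first stage: project on instrument
beta_2sls = np.linalg.lstsq(X_hat, willing, rcond=None)[0]  # second stage

print("OLS estimate:", beta_ols[1])    # ~0.88, distorted by confounding
print("2SLS estimate:", beta_2sls[1])  # ~0.5, close to the true effect
```

    Note that the direction of the OLS bias depends on the sign of the confounding: this toy example biases upward, while the record reports underestimation in its setting.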

  8. Modelling the effects of spatial variability on radionuclide migration

    International Nuclear Information System (INIS)

    1998-01-01

    The NEA workshop reflects the present status of national waste management programmes, specifically regarding spatial variability in the performance assessment of geologic disposal sites for deep repository systems. The four sessions were: Spatial Variability: Its Definition and Significance to Performance Assessment and Site Characterisation; Experience with the Modelling of Radionuclide Migration in the Presence of Spatial Variability in Various Geological Environments; New Areas for Investigation: Two Personal Views; What is Wanted and What is Feasible: Views and Future Plans in Selected Waste Management Organisations. The 26 papers presented in the four oral sessions and the poster session have been abstracted and indexed individually for the INIS database. (R.P.)

  9. An Undergraduate Research Experience on Studying Variable Stars

    Science.gov (United States)

    Amaral, A.; Percy, J. R.

    2016-06-01

    We describe and evaluate a summer undergraduate research project and experience by one of us (AA), under the supervision of the other (JP). The aim of the project was to sample current approaches to analyzing variable star data, and topics related to the study of Mira variable stars and their astrophysical importance. This project was done through the Summer Undergraduate Research Program (SURP) in astronomy at the University of Toronto. SURP allowed undergraduate students to explore and learn about many topics within astronomy and astrophysics, from instrumentation to cosmology. SURP introduced students to key skills which are essential for students hoping to pursue graduate studies in any scientific field. Variable stars proved to be an excellent topic for a research project. For beginners to independent research, it introduces key concepts in research such as critical thinking and problem solving, while illuminating previously learned topics in stellar physics. The focus of this summer project was to compare observations with structural and evolutionary models, including modelling the random walk behavior exhibited in the (O-C) diagrams of most Mira stars. We found that the random walk could be modelled by using random fluctuations of the period. This explanation agreed well with observations.
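
    A minimal simulation of the mechanism described: cycle-to-cycle random fluctuations of the period make the (O-C) curve wander like an integrated random walk. The 330-day period and the fluctuation size below are arbitrary choices, not values from the project:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cycles, p0, sigma_p = 400, 330.0, 0.6          # days; sigma_p is an assumed value

periods = p0 + np.cumsum(rng.normal(0.0, sigma_p, n_cycles))  # random-walk period
observed = np.cumsum(periods)                    # epochs of observed maxima
computed = p0 * np.arange(1, n_cycles + 1)       # constant-period ephemeris
o_minus_c = observed - computed                  # wanders, as in many Mira (O-C) diagrams

print(o_minus_c[::100])                          # spread grows with cycle number
```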

  10. AeroPropulsoServoElasticity: Dynamic Modeling of the Variable Cycle Propulsion System

    Science.gov (United States)

    Kopasakis, George

    2012-01-01

    This presentation was made at the 2012 Fundamental Aeronautics Program Technical Conference and covers research on dynamic modeling of the variable cycle propulsion system done under the Supersonics Project, in the area of AeroPropulsoServoElasticity. The presentation covers the objective of the propulsion system dynamic modeling work, followed by the work done so far to model the variable cycle engine, the modeling of the inlet and the nozzle, the modeling of the effects of flow distortion, and finally some concluding remarks and future plans.

  11. Ocean carbon and heat variability in an Earth System Model

    Science.gov (United States)

    Thomas, J. L.; Waugh, D.; Gnanadesikan, A.

    2016-12-01

    Ocean carbon and heat content are very important for regulating global climate. Furthermore, due to lack of observations and dependence on parameterizations, there has been little consensus in the modeling community on the magnitude of realistic ocean carbon and heat content variability, particularly in the Southern Ocean. We assess the differences between global oceanic heat and carbon content variability in GFDL ESM2Mc using a 500-year, pre-industrial control simulation. The global carbon and heat content are directly out of phase with each other; however, in the Southern Ocean the heat and carbon content are in phase. The global heat multi-decadal variability is primarily explained by variability in the tropics and mid-latitudes, while the variability in global carbon content is primarily explained by Southern Ocean variability. In order to test the robustness of this relationship, we use three additional pre-industrial control simulations using different mesoscale mixing parameterizations. Three pre-industrial control simulations are conducted with the along-isopycnal diffusion coefficient (Aredi) set to constant values of 400, 800 (control) and 2400 m² s⁻¹. These values for Aredi are within the range of parameter settings commonly used in modeling groups. Finally, one pre-industrial control simulation is conducted where the minimum in the Gent-McWilliams parameterization closure scheme (AGM) is increased to 600 m² s⁻¹. We find that the different simulations have very different multi-decadal variability, especially in the Weddell Sea where the characteristics of deep convection are drastically changed. While the temporal frequency and amplitude of the global heat and carbon content variability change significantly, the overall spatial pattern of variability remains unchanged between the simulations.

  12. Two experimental tests of relational models of procedural justice: non-instrumental voice and authority group membership.

    Science.gov (United States)

    Platow, Michael J; Eggins, Rachael A; Chattopadhyay, Rachana; Brewer, Greg; Hardwick, Lisa; Milsom, Laurin; Brocklebank, Jacinta; Lalor, Thérèse; Martin, Rowena; Quee, Michelle; Vassallo, Sara; Welsh, Jenny

    2013-06-01

    In both a laboratory experiment (in Australia) using university as the basis of group membership, and a scenario experiment (in India) using religion as the basis of group membership, we observe more favourable respect and fairness ratings in response to an in-group authority than an out-group authority who administers non-instrumental voice. Moreover, we observe in our second experiment that reported likelihood of protest (herein called "social-change voice") was relatively high following non-instrumental voice from an out-group authority, but relatively low following non-instrumental voice from an in-group authority. Our findings are consistent with relational models of procedural justice, and extend the work by examining likely use of alternative forms of voice as well as highlighting the relative importance of instrumentality. ©2012 The British Psychological Society.

  13. Predictive-property-ranked variable reduction in partial least squares modelling with final complexity adapted models: comparison of properties for ranking.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2013-01-14

    The calibration performance of partial least squares regression for one response (PLS1) can be improved by eliminating uninformative variables. Many variable-reduction methods are based on so-called predictor-variable properties or predictive properties, which are functions of various PLS-model parameters, and which may change during the steps of the variable-reduction process. Recently, a new predictive-property-ranked variable reduction method with final complexity adapted models, denoted as PPRVR-FCAM or simply FCAM, was introduced. It is a backward variable elimination method applied to the predictive-property-ranked variables. The variable number is first reduced, with constant PLS1 model complexity A, until A variables remain, followed by a further decrease in PLS complexity, allowing the final selection of small numbers of variables. In this study, the utility and effectiveness of six individual and nine combined predictor-variable properties, when used in the FCAM method, are investigated for three data sets. The individual properties include the absolute value of the PLS1 regression coefficient (REG), the significance of the PLS1 regression coefficient (SIG), the norm of the loading weight (NLW) vector, the variable importance in the projection (VIP), the selectivity ratio (SR), and the squared correlation coefficient of a predictor variable with the response y (COR). The selective and predictive performances of the models resulting from the use of these properties are statistically compared using the one-tailed Wilcoxon signed rank test. The results indicate that the models, resulting from variable reduction with the FCAM method, using individual or combined properties, have similar or better predictive abilities than the full spectrum models. After mean-centring of the data, REG and SIG provide low numbers of informative variables, with a meaning relevant to the response, lower than the other individual properties, while the predictive abilities are
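
    A rough sketch of the backward-elimination idea on synthetic data, ranking variables by the REG property (absolute PLS1 regression coefficient). The stopping rule and data sizes are placeholders, and the final complexity-adaptation step of FCAM is omitted:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                                # stand-in spectra
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=100)    # 5 informative variables

keep = np.arange(X.shape[1])
while keep.size > 10:                                         # placeholder stopping rule
    pls = PLSRegression(n_components=3).fit(X[:, keep], y)
    reg = np.abs(pls.coef_).ravel()                           # REG property
    keep = keep[reg > reg.min()]                              # eliminate the weakest variable

print(sorted(keep))                                           # mostly variables 0..4 survive
cv_r2 = cross_val_score(PLSRegression(n_components=3), X[:, keep], y, cv=5).mean()
print("cross-validated R^2 of reduced model:", cv_r2)
```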

  14. Effect of climate variables on cocoa black pod incidence in Sabah using ARIMAX model

    Science.gov (United States)

    Ling Sheng Chang, Albert; Ramba, Haya; Mohd. Jaaffar, Ahmad Kamil; Kim Phin, Chong; Chong Mun, Ho

    2016-06-01

    Cocoa black pod disease is one of the major diseases affecting cocoa production in Malaysia and around the world. Studies have shown that climate variables influence black pod disease incidence, and it is important to quantify the variation in the disease due to the effect of these variables. Time series analysis, especially the auto-regressive moving average (ARIMA) model, has been widely used in economic studies and can be used to quantify the effect of climate variables on black pod incidence and to forecast the right time to control it. However, the ARIMA model does not capture some turning points in cocoa black pod incidence. To improve forecasting performance, other explanatory variables such as climate variables can be included in the ARIMA model, yielding an ARIMAX model. This paper therefore studies the effect of climate variables on cocoa black pod disease incidence using an ARIMAX model. The findings show that an ARIMAX model using MA(1) and relative humidity at a lag of 7 days, RH(t-7), gave a better R-squared value than the ARIMA model using MA(1), and could be used to forecast black pod incidence to help farmers time the application of fungicide spraying and cultural practices to control the disease.
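
    The ARIMAX structure the study reports, MA(1) errors plus a lagged-humidity regressor, can be sketched with statsmodels' SARIMAX by passing the lagged series as an exogenous variable. The synthetic data below are invented, not the Sabah observations:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 300
rh = 70 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
rh_lag7 = pd.Series(rh).shift(7)                        # relative humidity at lag 7 days
incidence = 0.3 * rh_lag7 + rng.normal(0, 1, n)         # synthetic black pod incidence

df = pd.DataFrame({"incidence": incidence, "rh_lag7": rh_lag7}).dropna()
# ARIMAX(0,0,1): MA(1) errors with RH(t-7) as an exogenous regressor
fit = SARIMAX(df["incidence"], exog=df[["rh_lag7"]], order=(0, 0, 1)).fit(disp=False)
print(fit.params)                                       # exog coefficient should be near 0.3
```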

  15. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  16. A Modified Intervention Model for Gross Domestic Product Variable

    African Journals Online (AJOL)

    observations on a variable that have been measured at ... assumption that successive values in the data file ... these interventions, one may try to evaluate the effect of ... generalized series by comparing the distinct periods. A ... the process of checking for adequacy of the model based .... As a result, the model's forecast will.

  17. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. First-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically—often dismissed as due to insufficient/incorrect data or circumvented by conversion to tick time—and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
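
    A minimal simulation, under assumed parameters, of first-passage times for a walk whose diffusion amplitude switches between two intra-day regimes, in the spirit of the two-stage variable-diffusion model; the regime magnitudes, switch point, and barrier are all made up:

```python
import numpy as np

rng = np.random.default_rng(7)

def first_passage_times(barrier=0.01, n_steps=390, n_paths=20000):
    """First minute at which |cumulative log-return| crosses the barrier."""
    # Two diffusion regimes across a 390-minute "trading day" (assumed values)
    sigma = np.where(np.arange(n_steps) < 195, 0.0012, 0.0006)
    paths = np.cumsum(rng.normal(0.0, 1.0, (n_paths, n_steps)) * sigma, axis=1)
    crossed = np.abs(paths) >= barrier
    fpt = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)
    return fpt[fpt >= 0]                 # drop paths that never cross

fpt = first_passage_times()
hist, edges = np.histogram(fpt, bins=39)
print(hist)                              # empirical first-passage-time distribution
```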

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
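
    The selection-bias point can be illustrated in a few lines: choosing variables on the full data and then scoring the reduced model is optimistic, because the selection step has already seen every label; honest evaluation repeats the selection inside each training fold. A sketch with synthetic data (all sizes arbitrary):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 212))                       # many weak/noise predictors
y = (X[:, 0] + X[:, 1] + rng.normal(size=400) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-10:]      # selection used ALL the labels

rf_sel = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
oob = rf_sel.fit(X[:, keep], y).oob_score_            # upwardly biased estimate
cv = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                     X[:, keep], y, cv=5).mean()      # still biased: selection was outside the folds
print(oob, cv)  # an honest estimate would re-run the selection within each training fold
```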

  19. Changes in Southern Hemisphere circulation variability in climate change modelling experiments

    International Nuclear Information System (INIS)

    Grainger, Simon; Frederiksen, Carsten; Zheng, Xiaogu

    2007-01-01

    Full text: The seasonal mean of a climate variable can be considered as a statistical random variable, consisting of signal and noise components (Madden 1976). The noise component consists of internal intraseasonal variability, and is not predictable on time-scales of a season or more ahead. The signal consists of slowly varying external and internal variability, and is potentially predictable on seasonal time-scales. The method of Zheng and Frederiksen (2004) has been applied to monthly time series of 500hPa geopotential height from models submitted to the Coupled Model Intercomparison Project (CMIP3) experiment to obtain covariance matrices of the intraseasonal and slow components of covariability for summer and winter. The Empirical Orthogonal Functions (EOFs) of the intraseasonal and slow covariance matrices for the second half of the 20th century are compared with those observed by Frederiksen and Zheng (2007). The leading EOF in summer and winter for both the intraseasonal and slow components of covariability is the Southern Annular Mode (see, e.g., Kiladis and Mo 1998). This is generally reproduced by the CMIP3 models, although with different variance amounts. The observed secondary intraseasonal covariability modes of wave 4 patterns in summer and wave 3 or blocking in winter are also generally seen in the models, although the actual spatial pattern is different. For the slow covariability, the models are less successful in reproducing the two observed ENSO modes, with generally only one of them being represented among the leading EOFs. However, most models reproduce the observed South Pacific wave pattern. The intraseasonal and slow covariance matrices of 500hPa geopotential height under three climate change scenarios are also analysed and compared with those found for the second half of the 20th century. Through aggregating the results from a number of CMIP3 models, a consensus estimate of the changes in Southern Hemisphere variability, and their

  20. Estimations of natural variability between satellite measurements of trace species concentrations

    Science.gov (United States)

    Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.

    2017-12-01

    In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.

  1. Modeling key processes causing climate change and variability

    Energy Technology Data Exchange (ETDEWEB)

    Henriksson, S.

    2013-09-01

    Greenhouse gas warming, internal climate variability and aerosol climate effects are studied, and the importance of understanding these key processes and of being able to separate their influence on the climate is discussed. The aerosol-climate model ECHAM5-HAM and the COSMOS millennium model, consisting of atmospheric, ocean, carbon cycle and land-use models, are applied and the results compared to measurements. Topics in focus are climate sensitivity, quasiperiodic variability with a period of 50-80 years and variability at other timescales, climate effects due to aerosols over India, and climate effects of northern hemisphere mid- and high-latitude volcanic eruptions. The main findings of this work are (1) pointing out the remaining challenges in reducing climate sensitivity uncertainty from observational evidence, (2) estimates for the amplitude of a 50-80 year quasiperiodic oscillation in global mean temperature ranging from 0.03 K to 0.17 K and for its phase progression, as well as the synchronising effect of external forcing, (3) identifying a power law shape S(f) ∝ f^(−α) for the spectrum of global mean temperature with α ≈ 0.8 between multidecadal and El Niño timescales, with a smaller exponent in modelled climate without external forcing, (4) separating aerosol properties and climate effects in India by season and location, (5) the more efficient dispersion of secondary sulfate aerosols than primary carbonaceous aerosols in the simulations, (6) an increase in monsoon rainfall in northern India due to aerosol light absorption and a probably larger decrease due to aerosol dimming effects, and (7) an estimate of mean maximum cooling of 0.19 K due to larger northern hemisphere mid- and high-latitude volcanic eruptions. The results could be applied or useful in better isolating the human-caused climate change signal, in studying the processes further and in more detail, in decadal climate prediction, in model evaluation and in emission policy.

  2. Latent variable models are network models.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-06-01

    Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.

  3. Solar Variability Magnitudes and Timescales

    Science.gov (United States)

    Kopp, Greg

    2015-08-01

    The Sun’s net radiative output varies on timescales of minutes to many millennia. The former are directly observed as part of the ongoing 37-year-long total solar irradiance climate data record, while the latter are inferred from solar proxy and stellar evolution models. Since the Sun provides nearly all the energy driving the Earth’s climate system, changes in the sunlight reaching our planet can have - and have had - significant impacts on life and civilizations. Total solar irradiance has been measured from space since 1978 by a series of overlapping instruments. These have shown changes in the spatially- and spectrally-integrated radiant energy at the top of the Earth’s atmosphere from timescales as short as minutes to as long as a solar cycle. The Sun’s ~0.01% variations over a few minutes are caused by the superposition of convection and oscillations, and even occasionally by a large flare. Over days to weeks, changing surface activity affects solar brightness at the ~0.1% level. The 11-year solar cycle has comparable irradiance variations with peaks near solar maxima. Secular variations are harder to discern, being limited by instrument stability and the relatively short duration of the space-borne record. Proxy models of the Sun based on cosmogenic isotope records and inferred from Earth climate signatures indicate solar brightness changes over decades to millennia, although the magnitude of these variations depends on many assumptions. Stellar evolution affects yet longer timescales and is responsible for the greatest solar variabilities. In this talk I will summarize the Sun’s variability magnitudes over different temporal ranges, showing examples relevant for climate studies as well as detections of exo-solar planets transiting Sun-like stars.

  4. Remote and Virtual Instrumentation Platform for Distance Learning

    Directory of Open Access Journals (Sweden)

    Tom Eppes

    2010-08-01

    Full Text Available This article presents distance learning using the National Instruments ELVIS II and shows how Multisim can be combined with ELVIS II for distance learning. National Instruments' ELVIS II is a new version that can easily be used for e-learning. It features 12 of the instruments commonly used in engineering and science laboratories, including an oscilloscope, a function generator, a variable power supply, and an isolated digital multimeter, in a low-cost and easy-to-use platform, and offers complete integration with Multisim software for SPICE simulation, which simplifies the teaching of circuit design. As NI ELVIS II is based on LabVIEW, designers can easily customize the 12 instruments or create their own using the provided source code.

  5. A Latent Variable Path Analysis Model of Secondary Physics Enrollments in New York State.

    Science.gov (United States)

    Sobolewski, Stanley John

    The Percentage of Enrollment in Physics (PEP) at the secondary level nationally has been approximately 20% for the past few decades. For a more scientifically literate citizenry as well as specialists to continue scientific research and development, it is desirable that more students enroll in physics. Some of the predictor variables for physics enrollment and physics achievement that have been identified previously include a community's socioeconomic status, the availability of physics, the sex of the student, the curriculum, as well as teacher and student data. This study isolated and identified predictor variables for PEP of secondary schools in New York. Data gathered by the State Education Department for the 1990-1991 school year were used. The source of this data included surveys completed by teachers and administrators on student characteristics and school facilities. A data analysis similar to that done by Bryant (1974) was conducted to determine if the relationships between a set of predictor variables related to physics enrollment had changed in the past 20 years. Variables which were isolated included: community, facilities, teacher experience, number and type of science courses, school size and school science facilities. When these variables were isolated, latent variable path diagrams were proposed and verified by the Linear Structural Relations computer modeling program (LISREL). These diagrams differed from those developed by Bryant in that there were more manifest variables used, which included achievement scores in the form of Regents exam results. Two criterion variables were used: the percentage of students enrolled in physics (PEP) and the percent of enrolled students passing the Regents physics exam (PPP). The first model treated school and community level variables as exogenous while the second model treated only the community level variables as exogenous. The goodness-of-fit indices were 0.77 for the first model and 0.83 for the second model.

  6. College education and wages in the UK : Estimating conditional average structural functions in nonadditive models with binary endogenous variables

    NARCIS (Netherlands)

    Klein, T.J.

    2013-01-01

    Recent studies debate how the unobserved dependence between the monetary return to college education and selection into college can be characterised. This paper examines this question using British data. We develop a semiparametric local instrumental variables estimator for identified features of a

  7. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and deficiencies in phenomena understanding. This report provides several methodologies for estimating an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pre-test and post-test uncertainty.
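
    The report's specific methodologies are not reproduced in the abstract; as a generic illustration, independent error components are often combined by root-sum-of-squares propagation. A minimal sketch (not necessarily the report's method):

```python
import math

def rss_uncertainty(components):
    """Combine independent 1-sigma error components by root-sum-of-squares."""
    return math.sqrt(sum(u * u for u in components))

# e.g. sensor (0.5), calibration (0.3) and readout (0.2) errors, in the same units
print(rss_uncertainty([0.5, 0.3, 0.2]))   # ~0.62
```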

  8. Variability of concrete properties: experimental characterisation and probabilistic modelling for calcium leaching

    International Nuclear Information System (INIS)

    De Larrard, Th.

    2010-09-01

    Evaluating the durability of structures requires taking into account the variability of material properties. The thesis has two main aspects: on the one hand, an experimental campaign aimed at quantifying the variability of many indicators of concrete behaviour; on the other hand, a simple numerical model for calcium leaching developed in order to implement probabilistic methods for estimating the lifetime of structures such as those related to radioactive waste disposal. The experimental campaign consisted of following two real construction sites, quantifying the variability of these indicators, studying their correlations, and characterising the variability of the random fields for the considered variables (especially the correlation length). To draw conclusions from the accelerated leaching tests with ammonium nitrate while overcoming the effects of temperature, an inverse analysis tool based on artificial neural networks was developed. Simple numerical tools are presented to investigate the propagation of variability in durability issues, quantify the influence of this variability on the lifespan of structures, and explain the variability of the input parameters of the numerical model and of the measurable physical quantities of the material. (author)

  9. Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing

    Directory of Open Access Journals (Sweden)

    Soo Ji Kim

    2017-09-01

    Full Text Available Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person’s repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants’ cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided, using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of such timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the parameters related to synchronization errors. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing time for bimanual coordination due to aging. Significant correlations were observed between variability of

  10. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    Science.gov (United States)

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  11. On-ground calibration of the BEPICOLOMBO/SIMBIO-SYS at instrument level

    Science.gov (United States)

    Rodriguez-Ferreira, J.; Poulet, F.; Eng, P.; Longval, Y.; Dassas, K.; Arondel, A.; Langevin, Y.; Capaccioni, F.; Filacchione, G.; Palumbo, P.; Cremonese, G.; Dami, M.

    2012-04-01

    The Mercury Planetary Orbiter/BepiColombo carries an integrated suite of instruments, the Spectrometer and Imagers for MPO BepiColombo-Integrated Observatory SYStem (SIMBIO-SYS). SIMBIO-SYS has 3 channels: a stereo imaging system (STC), a high-resolution imager (HRIC) and a visible-near-infrared imaging spectrometer (VIHI). SIMBIO-SYS will scan the surface of Mercury with these three channels and determine the physical, morphological and compositional properties of the entire planet. Before integration on the S/C, an on-ground calibration at the channel and instrument levels will be performed so as to describe the instrumental responses as a function of various parameters that might evolve while the instruments are operating [1]. The Institut d'Astrophysique Spatiale (IAS) is responsible for the on-ground calibration at the instrument level. During the 4-week calibration campaign planned for June 2012, the instrument will be maintained in a mechanical and thermal environment simulating space conditions. Four optical stimuli (a QTH lamp, an integrating sphere, a blackbody with temperature variable from 50 to 1200°C, and a monochromator) are placed on an optical bench to illuminate the channels so as to perform the radiometric calibration, stray-light monitoring, and spectral proofing based on laboratory mineral samples. The instrument will be mounted on a hexapod placed inside a thermal vacuum chamber during the calibration campaign. The hexapod will move the channels within the well-characterized incoming beam. We will present the key activities of the preparation of this calibration: the derivation of the instrument radiometric model, the implementation of the optical, mechanical and software interfaces of the calibration assembly, the characterization of the optical bench and the definition of the calibration procedures.

  12. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
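
    For a single predictor, statsmodels' lowess (a locally weighted, degree-1 polynomial smoother) illustrates the flavor of local polynomial regression; the synthetic soil-moisture/TOC data below are invented and the bandwidth is arbitrary:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
soil_moisture = rng.uniform(0.1, 0.5, 300)                  # hypothetical predictor
toc = 2 + 8 * soil_moisture**2 + rng.normal(0, 0.3, 300)    # nonlinear TOC response

fitted = lowess(toc, soil_moisture, frac=0.3)   # returns (x, fitted y) pairs, sorted by x
print(fitted[:5])
```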

  13. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, test task scheduling can be analyzed to prevent deadlocks or resource conflicts. Finally, the paper analyzes the feasibility of the method.
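
    A toy Petri-net interpreter shows how such a model supports the deadlock and resource-conflict analysis mentioned here. The places, transitions, and markings are invented for illustration:

```python
# Places hold tokens; a transition is enabled when each input place has enough tokens.
def enabled(marking, pre):
    return [t for t, needs in pre.items()
            if all(marking[p] >= n for p, n in needs.items())]

def fire(marking, pre, post, t):
    for p, n in pre[t].items():
        marking[p] -= n
    for p, n in post[t].items():
        marking[p] += n

# Hypothetical test tasks competing for one shared instrument resource
pre  = {"acquire": {"idle": 1, "instrument": 1}, "release": {"busy": 1}}
post = {"acquire": {"busy": 1},                  "release": {"idle": 1, "instrument": 1}}
marking = {"idle": 2, "instrument": 1, "busy": 0}

for _ in range(6):                         # explore a few firings of the net
    ts = enabled(marking, pre)
    if not ts:                             # no enabled transition: deadlock detected
        print("deadlock at", marking)
        break
    fire(marking, pre, post, ts[0])
    print(ts[0], "->", marking)
```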

  14. Linking Inflammation, Cardiorespiratory Variability, and Neural Control in Acute Inflammation via Computational Modeling.

    Science.gov (United States)

    Dick, Thomas E; Molkov, Yaroslav I; Nieman, Gary; Hsieh, Yee-Hsee; Jacono, Frank J; Doyle, John; Scheff, Jeremy D; Calvano, Steve E; Androulakis, Ioannis P; An, Gary; Vodovotz, Yoram

    2012-01-01

    Acute inflammation leads to organ failure by engaging catastrophic feedback loops in which stressed tissue evokes an inflammatory response and, in turn, inflammation damages tissue. Manifestations of this maladaptive inflammatory response include cardio-respiratory dysfunction that may be reflected in reduced heart rate and ventilatory pattern variabilities. We have developed signal-processing algorithms that quantify non-linear deterministic characteristics of variability in biologic signals. Now, coalescing under the aegis of the NIH Computational Biology Program and the Society for Complexity in Acute Illness, two research teams performed iterative experiments and computational modeling on inflammation and cardio-pulmonary dysfunction in sepsis as well as on neural control of respiration and ventilatory pattern variability. These teams, with additional collaborators, have recently formed a multi-institutional, interdisciplinary consortium, whose goal is to delineate the fundamental interrelationship between the inflammatory response and physiologic variability. Multi-scale mathematical modeling and complementary physiological experiments will provide insight into autonomic neural mechanisms that may modulate the inflammatory response to sepsis and simultaneously reduce heart rate and ventilatory pattern variabilities associated with sepsis. This approach integrates computational models of neural control of breathing and cardio-respiratory coupling with models that combine inflammation, cardiovascular function, and heart rate variability. The resulting integrated model will provide mechanistic explanations for the phenomena of respiratory sinus-arrhythmia and cardio-ventilatory coupling observed under normal conditions, and the loss of these properties during sepsis. This approach holds the potential of modeling cross-scale physiological interactions to improve both basic knowledge and clinical management of acute inflammatory diseases such as sepsis and trauma.

  15. An Atmospheric Variability Model for Venus Aerobraking Missions

    Science.gov (United States)

    Tolson, Robert T.; Prince, Jill L. H.; Konopliv, Alexander A.

    2013-01-01

    Aerobraking has proven to be an enabling technology for planetary missions to Mars and has been proposed to enable low cost missions to Venus. Aerobraking saves a significant amount of propulsion fuel mass by exploiting atmospheric drag to reduce the eccentricity of the initial orbit. The solar arrays have been used as the primary drag surface and only minor modifications have been made in the vehicle design to accommodate the relatively modest aerothermal loads. However, if atmospheric density is highly variable from orbit to orbit, the mission must either accept higher aerothermal risk, a slower pace for aerobraking, or a tighter corridor likely with increased propulsive cost. Hence, knowledge of atmospheric variability is of great interest for the design of aerobraking missions. The first planetary aerobraking was at Venus during the Magellan mission. After the primary Magellan science mission was completed, aerobraking was used to provide a more circular orbit to enhance gravity field recovery. Magellan aerobraking took place between local solar times of 1100 and 1800 hrs, and it was found that the Venusian atmospheric density during the aerobraking phase had less than 10% (1-sigma) orbit-to-orbit variability. On the other hand, at some latitudes and seasons, Martian variability can be as high as 40% (1-sigma). From both the MGN and PVO missions it was known that the atmosphere, above aerobraking altitudes, showed greater variability at night, but this variability was never quantified in a systematic manner. This paper proposes a model for atmospheric variability that can be used for aerobraking mission design until more complete data sets become available.

  16. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model, based on the stress - crack opening displacement relationship, which describes the behaviour of randomly oriented steel fibre composites subjected to uniaxial tension, has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  17. Developing a workplace resilience instrument.

    Science.gov (United States)

    Mallak, Larry A; Yildiz, Mustafa

    2016-05-27

    Resilience benefits from the use of protective factors, as opposed to risk factors, which are associated with vulnerability. Considerable research and instrument development have been conducted in clinical settings for patients. The need existed for an instrument to be developed in a workplace setting to measure resilience of employees. This study developed and tested a resilience instrument for employees in the workplace. The research instrument was distributed to executives and nurses working in the United States in hospital settings. Five hundred forty completed and usable responses were obtained. The instrument contained an inventory of workplace resilience, a job stress questionnaire, and relevant demographics. The resilience items were written based on previous work by the lead author and inspired by Weick's [1] sense-making theory. A four-factor model yielded an instrument having psychometric properties showing good model fit. Twenty items were retained for the resulting Workplace Resilience Instrument (WRI). Parallel analysis was conducted with successive iterations of exploratory and confirmatory factor analyses. Respondents were classified based on their employment with either a rural or an urban hospital. Executives had significantly higher WRI scores than nurses, controlling for gender. WRI scores were positively and significantly correlated with years of experience and the Brief Job Stress Questionnaire. An instrument to measure individual resilience in the workplace (WRI) was developed. The WRI's four factors identify dimensions of workplace resilience for use in subsequent investigations: Active Problem-Solving, Team Efficacy, Confident Sense-Making, and Bricolage.
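
    The parallel analysis mentioned above, used to decide how many factors to retain before the factor analyses, compares eigenvalues of the observed item correlation matrix against eigenvalues obtained from random data of the same shape. A minimal sketch on synthetic data (the WRI items themselves are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 540, 20                       # respondents, items (mirrors the study size)
      X = rng.normal(size=(n, p))
      X[:, :5] += rng.normal(size=(n, 1))  # inject one common factor for illustration

      def eigenvalues(data):
          # Descending eigenvalues of the item correlation matrix.
          return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

      obs = eigenvalues(X)
      # 95th percentile of eigenvalues from many random data sets of the same shape.
      rand = np.percentile(
          [eigenvalues(rng.normal(size=(n, p))) for _ in range(200)], 95, axis=0)
      n_factors = int(np.sum(obs > rand))  # retain factors exceeding the random baseline
      print("factors to retain:", n_factors)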

  18. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    Science.gov (United States)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2018-04-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and, to a lesser degree, with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistent with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  19. Instrument evaluation no. 11. ESI nuclear model 271 C contamination monitor

    CERN Document Server

    Burgess, P H

    1978-01-01

    The various radiations encountered in radiological protection cover a wide range of energies and radiation measurements have to be carried out under an equally broad spectrum of environmental conditions. This report is one of a series intended to give information on the performance characteristics of radiological protection instruments, to assist in the selection of appropriate instruments for a given purpose, to interpret the results obtained with such instruments, and, in particular, to know the likely sources and magnitude of errors that might be associated with measurements in the field. The radiation, electrical and environmental characteristics of radiation protection instruments are considered together with those aspects of the construction which make an instrument convenient for routine use. To provide consistent criteria for instrument performance, the range of tests performed on any particular class of instrument, the test methods and the criteria of acceptable performance are based broadly on the a...

  20. The Scanning Theremin Microscope: A Model Scanning Probe Instrument for Hands-On Activities

    Science.gov (United States)

    Quardokus, Rebecca C.; Wasio, Natalie A.; Kandel, S. Alex

    2014-01-01

    A model scanning probe microscope, designed using similar principles of operation to research instruments, is described. Proximity sensing is done using a capacitance probe, and a mechanical linkage is used to scan this probe across surfaces. The signal is transduced as an audio tone using a heterodyne detection circuit analogous to that used in…

  1. The ARM Cloud Radar Simulator for Global Climate Models: Bridging Field Data and Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuying [Lawrence Livermore National Laboratory, Livermore, California; Xie, Shaocheng [Lawrence Livermore National Laboratory, Livermore, California; Klein, Stephen A. [Lawrence Livermore National Laboratory, Livermore, California; Marchand, Roger [University of Washington, Seattle, Washington; Kollias, Pavlos [Stony Brook University, Stony Brook, New York; Clothiaux, Eugene E. [The Pennsylvania State University, University Park, Pennsylvania; Lin, Wuyin [Brookhaven National Laboratory, Upton, New York; Johnson, Karen [Brookhaven National Laboratory, Upton, New York; Swales, Dustin [CIRES and NOAA/Earth System Research Laboratory, Boulder, Colorado; Bodas-Salcedo, Alejandro [Met Office Hadley Centre, Exeter, United Kingdom; Tang, Shuaiqi [Lawrence Livermore National Laboratory, Livermore, California; Haynes, John M. [Cooperative Institute for Research in the Atmosphere/Colorado State University, Fort Collins, Colorado; Collis, Scott [Argonne National Laboratory, Argonne, Illinois; Jensen, Michael [Brookhaven National Laboratory, Upton, New York; Bharadwaj, Nitin [Pacific Northwest National Laboratory, Richland, Washington; Hardin, Joseph [Pacific Northwest National Laboratory, Richland, Washington; Isom, Bradley [Pacific Northwest National Laboratory, Richland, Washington

    2018-01-01

    Clouds play an important role in Earth’s radiation budget and hydrological cycle. However, current global climate models (GCMs) have had difficulties in accurately simulating clouds and precipitation. To improve the representation of clouds in climate models, it is crucial to identify where simulated clouds differ from real world observations of them. This can be difficult, since significant differences exist between how a climate model represents clouds and what instruments observe, both in terms of spatial scale and the properties of the hydrometeors which are either modeled or observed. To address these issues and minimize impacts of instrument limitations, the concept of instrument “simulators”, which convert model variables into pseudo-instrument observations, has evolved with the goal of improving and facilitating the comparison of modeled clouds with observations. Many simulators have been developed (and continue to be developed) for a variety of instruments and purposes. A community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP; Bodas-Salcedo et al. 2011), contains several independent satellite simulators and is being widely used in the global climate modeling community to exploit satellite observations for model cloud evaluation (e.g., Klein et al. 2013; Zhang et al. 2010). This article introduces a ground-based cloud radar simulator developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program for comparing climate model clouds with ARM observations from its vertically pointing 35-GHz radars. As compared to CloudSat radar observations, ARM radar measurements occur with higher temporal resolution and finer vertical resolution. This enables users to investigate more fully the detailed vertical structures within clouds, resolve thin clouds, and quantify the diurnal variability of clouds. Particularly, ARM radars are sensitive to low-level clouds, which are
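
    At the heart of any radar simulator is a forward operator that maps model hydrometeor fields to pseudo-observed reflectivity. The toy sketch below is not the ARM/COSP code: it assumes Rayleigh scattering and an exponential drop size distribution so that Z = ∫ N(D) D⁶ dD has a closed form, and it ignores attenuation and instrument sensitivity.

      import numpy as np

      def reflectivity_dbz(lwc_gm3, n0=8.0e6):
          """Toy Rayleigh forward operator: liquid water content -> dBZ.

          Assumes an exponential DSD N(D) = N0 exp(-lambda D) with a fixed
          intercept N0 [m^-4]; an illustrative stand-in, not the actual ARM
          cloud radar simulator.
          """
          rho_w = 1.0e6                    # g m^-3, density of liquid water
          lwc = np.asarray(lwc_gm3, dtype=float)
          # For this DSD, LWC = pi * rho_w * N0 / lambda^4  =>  invert for lambda.
          lam = (np.pi * rho_w * n0 / lwc) ** 0.25
          z = 720.0 * n0 * lam ** -7       # Z = int N(D) D^6 dD = 720 N0 / lambda^7
          z_mm6_m3 = z * 1.0e18            # convert m^6 m^-3 -> mm^6 m^-3
          return 10.0 * np.log10(z_mm6_m3)

      print(reflectivity_dbz([0.01, 0.1, 1.0]))   # dBZ from thin to thick cloud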

  2. Template-Directed Instrumentation Reduces Cost and Improves Efficiency for Total Knee Arthroplasty: An Economic Decision Analysis and Pilot Study.

    Science.gov (United States)

    McLawhorn, Alexander S; Carroll, Kaitlin M; Blevins, Jason L; DeNegre, Scott T; Mayman, David J; Jerabek, Seth A

    2015-10-01

    Template-directed instrumentation (TDI) for total knee arthroplasty (TKA) may streamline operating room (OR) workflow and reduce costs by preselecting implants and minimizing instrument tray burden. A decision model simulated the economics of TDI. Sensitivity analyses determined thresholds for model variables to ensure TDI success. A clinical pilot was reviewed. The accuracy of preoperative templates was validated, and 20 consecutive primary TKAs were performed using TDI. The model determined that preoperative component size estimation should be accurate to ±1 implant size for 50% of TKAs to implement TDI. The pilot showed that preoperative template accuracy exceeded 97%. There were statistically significant improvements in OR turnover time and in-room time for TDI compared to an historical cohort of TKAs. TDI reduces costs and improves OR efficiency. Copyright © 2015 Elsevier Inc. All rights reserved.
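
    The decision-model logic can be compressed into a breakeven calculation: TDI saves money only when the preoperative template lands within ±1 implant size often enough. The sketch below uses invented dollar figures, not the paper's cost inputs.

      def expected_saving(p_accurate, saving_if_hit=300.0, penalty_if_miss=250.0):
          """Expected per-case saving of TDI vs. conventional instrumentation.

          p_accurate: probability the preoperative template is within +/-1 size.
          Dollar figures are hypothetical placeholders, not the paper's inputs.
          """
          return p_accurate * saving_if_hit - (1.0 - p_accurate) * penalty_if_miss

      # Breakeven accuracy: p* such that the expected saving is zero.
      p_star = 250.0 / (300.0 + 250.0)
      print(f"breakeven accuracy: {p_star:.0%}")            # ~45% with these inputs
      print(f"saving at 97% accuracy: ${expected_saving(0.97):.0f}")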

  3. Viscous cosmological models with a variable cosmological term ...

    African Journals Online (AJOL)

    Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ described by full Israel-Stewart theory are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust free model ...

  4. Automatic Welding Control Using a State Variable Model.

    Science.gov (United States)

    1979-06-01

    OCR-damaged DTIC record (Naval Postgraduate School, Monterey, CA; June 1979; unclassified). The recoverable fragments describe automatic welding control using a state variable model: a traverse drive unit, a joint path on a fixed track (servomotor positioning), and additional controls of heave (vertical) and roll (angular rotation about the

  5. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    Science.gov (United States)

    Patel, Deepak

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice, Cloud and land Elevation Satellite-2 (ICESat-2), is an upcoming Earth Science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at GSFC (Goddard Space Flight Center) and is now ready to be shipped to the spacecraft vendor for integration and testing. This topic covers the analysis leading up to the test setup for ATLAS thermal testing as well as model correlation to flight predictions. The test setup analysis section includes areas where ATLAS could not meet flight-like conditions and the associated limitations. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  6. On the ""early-time"" evolution of variables relevant to turbulence models for the Rayleigh-Taylor instability

    Energy Technology Data Exchange (ETDEWEB)

    Rollin, Bertrand [Los Alamos National Laboratory; Andrews, Malcolm J [Los Alamos National Laboratory

    2010-01-01

    We present our progress toward setting initial conditions in variable density turbulence models. In particular, we concentrate our efforts on the BHR turbulence model for turbulent Rayleigh-Taylor instability. Our approach is to predict profiles of relevant variables before the fully turbulent regime and use them as initial conditions for the turbulence model. We use an idealized model of mixing between two interpenetrating fluids to define the initial profiles for the turbulence model variables. Velocities and volume fractions used in the idealized mixing model are obtained respectively from a set of ordinary differential equations modeling the growth of the Rayleigh-Taylor instability and from an idealization of the density profile in the mixing layer. A comparison between predicted profiles for the turbulence model variables and profiles of the variables obtained from low Atwood number three-dimensional simulations shows reasonable agreement.
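
    For the growth-ODE ingredient, a common self-similar description of the Rayleigh-Taylor mixing-layer half-width is h(t) = α A g t², equivalently dh/dt = 2√(α A g h), which can be integrated numerically. The sketch below is a stand-in for the fuller ODE set referenced above, with illustrative parameter values.

      import numpy as np
      from scipy.integrate import solve_ivp

      alpha, A, g = 0.06, 0.5, 9.81   # growth constant, Atwood number, gravity
      h0 = 1.0e-4                     # small initial perturbation amplitude [m]

      # dh/dt = 2*sqrt(alpha*A*g*h) recovers the self-similar law h = alpha*A*g*t^2.
      sol = solve_ivp(lambda t, h: 2.0 * np.sqrt(alpha * A * g * h),
                      t_span=(0.0, 2.0), y0=[h0], dense_output=True)

      t = 2.0
      print(f"h(t={t}) = {sol.sol(t)[0]:.4f} m "
            f"(self-similar limit {alpha * A * g * t**2:.4f} m)")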

  7. Feedback control of acoustic musical instruments: collocated control using physical analogs.

    Science.gov (United States)

    Berdahl, Edgar; Smith, Julius O; Niemeyer, Günter

    2012-01-01

    Traditionally, the average professional musician has owned numerous acoustic musical instruments, many of them having distinctive acoustic qualities. However, a modern musician could prefer to have a single musical instrument whose acoustics are programmable by feedback control, where acoustic variables are estimated from sensor measurements in real time and then fed back in order to influence the controlled variables. In this paper, theory is presented that describes stable feedback control of an acoustic musical instrument. The presentation should be accessible to members of the musical acoustics community who may have limited or no experience with feedback control. First, the only control strategy guaranteed to be stable subject to any musical instrument mobility is described: the sensors and actuators must be collocated, and the controller must emulate a physical analog system. Next, the most fundamental feedback controllers and the corresponding physical analog systems are presented. The effects that these controllers have on acoustic musical instruments are described. Finally, practical design challenges are discussed. A proof explains why changing the resonance frequency of a musical resonance requires much more control power than changing the decay time of the resonance. © 2012 Acoustical Society of America.
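
    A minimal sketch of the central idea, collocated feedback that emulates a passive physical element (here a virtual damper) and thereby shortens the decay time of a resonance without destabilizing it. The instrument is reduced to a single mass-spring-damper mode, which is an illustration under invented parameters, not the paper's full mobility analysis.

      import numpy as np

      # Single vibrating mode of an instrument: m x'' + c x' + k x = F_control
      m, c, k = 0.01, 0.02, 4.0e4        # kg, N s/m, N/m (illustrative values)
      dt, n = 1.0e-5, 200000

      def simulate(c_virtual):
          """Collocated velocity feedback F = -c_virtual * x' (a virtual damper).

          Emulating a passive physical element keeps the closed loop stable,
          per the collocation argument summarized above.
          """
          x, v = 1.0e-3, 0.0             # initial pluck
          env = []
          for _ in range(n):
              f = -c_virtual * v          # sensor and actuator at the same point
              a = (f - c * v - k * x) / m
              v += a * dt                 # semi-implicit Euler integration
              x += v * dt
              env.append(abs(x))
          return np.max(env[-20000:])     # late-time envelope tracks the decay

      for cv in (0.0, 0.05, 0.2):
          print(f"c_virtual={cv:4.2f}  late amplitude {simulate(cv):.2e} m")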

  8. Health insurance and the demand for medical care: Instrumental variable estimates using health insurer claims data.

    Science.gov (United States)

    Dunn, Abe

    2016-07-01

    This paper takes a different approach to estimating demand for medical care that uses the negotiated prices between insurers and providers as an instrument. The instrument is viewed as a textbook "cost shifting" instrument that impacts plan offerings, but is unobserved by consumers. The paper finds a price elasticity of demand of around -0.20, matching the elasticity found in the RAND Health Insurance Experiment. The paper also studies within-market variation in demand for prescription drugs and other medical care services and obtains comparable price elasticity estimates. Published by Elsevier B.V.
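
    A hedged sketch of the two-stage least squares logic underlying such estimates, on synthetic data standing in for the confidential claims data; all variable names and coefficients are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      z = rng.normal(size=n)                 # instrument: negotiated price shifter
      u = rng.normal(size=n)                 # unobserved health/taste confounder
      log_p = 0.8 * z + 0.5 * u + rng.normal(size=n)        # price correlates with u
      log_q = -0.20 * log_p - 0.9 * u + rng.normal(size=n)  # true elasticity -0.20

      def ols(y, X):
          X1 = np.column_stack([np.ones(len(y)), X])
          return np.linalg.lstsq(X1, y, rcond=None)[0]

      # Stage 1: project the endogenous price on the instrument.
      log_p_hat = np.column_stack([np.ones(n), z]) @ ols(log_p, z)
      # Stage 2: regress quantity on the fitted price; the slope is the IV elasticity.
      print("naive OLS elasticity:", round(ols(log_q, log_p)[1], 3))
      print("2SLS elasticity:     ", round(ols(log_q, log_p_hat)[1], 3))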

  9. Sensitivity Modeling of On-chip Capacitances : Parasitics Extraction for Manufacturing Variability

    NARCIS (Netherlands)

    Bi, Y.

    2012-01-01

    With each new generation of IC process technologies, the impact of manufacturing variability is increasing. As such, design optimality is harder and harder to achieve and effective modeling tools and methods are needed to capture the effects of variability in such a way that it is understandable and

  10. Instruments to assess integrated care

    DEFF Research Database (Denmark)

    Lyngsø, Anne Marie; Godtfredsen, Nina Skavlan; Høst, Dorte

    2014-01-01

    INTRODUCTION: Although several measurement instruments have been developed to measure the level of integrated health care delivery, no standardised, validated instrument exists covering all aspects of integrated care. The purpose of this review is to identify the instruments concerning how to measure the level of integration across health-care sectors and to assess and evaluate the organisational elements within the instruments identified. METHODS: An extensive, systematic literature review in PubMed, CINAHL, PsycINFO, Cochrane Library, Web of Science for the years 1980-2011. Selected ... was prevalent. It is uncertain whether development of a single 'all-inclusive' model for assessing integrated care is desirable. We emphasise the continuing need for validated instruments embedded in theoretical contexts.

  11. Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability

    Science.gov (United States)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan

    2017-10-01

    Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and a coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation is likely the main reason that the land model ensembles do not improve precipitation simulation. However, if there are big biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.

  12. On the Use of Variability Operations in the V-Modell XT Software Process Line

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

    2016-01-01

    Software process lines provide a systematic approach to develop and manage software processes. A process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying ... In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used, and we provide an initial catalog of variability operations as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process ...

  13. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  14. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  15. Combining climate and energy policies: synergies or antagonism? Modeling interactions with energy efficiency instruments

    International Nuclear Information System (INIS)

    Lecuyer, Oskar; Bibas, Ruben

    2012-01-01

    In addition to the already present Climate and Energy package, the European Union (EU) plans to include a binding target to reduce energy consumption. We analyze the rationales the EU invokes to justify such an overlap and develop a minimal common framework to study interactions arising from the combination of instruments reducing emissions, promoting renewable energy (RE) production and reducing energy demand through energy efficiency (EE) investments. We find that although all instruments tend to reduce GHG emissions, and although a price on carbon also tends to give the right incentives for RE and EE, the combination of more than one instrument leads to significant antagonisms regarding major objectives of the policy package. The model makes it possible to show, in a single framework, and to quantify the antagonistic effects of the joint promotion of RE and EE. We also show and quantify the effects of this joint promotion on the ETS permit price, on the wholesale market price and on energy production levels. (authors)

  16. Instrumental and ethical aspects of experimental research with animal models

    Directory of Open Access Journals (Sweden)

    Mirian Watanabe

    2014-02-01

    Experimental animal models offer insights into physiology, the pathogenesis of disease and the action of drugs that are directly related to quality nursing care. This integrative review describes the current state of the instrumental and ethical aspects of experimental research with animal models, including the main recommendations of ethics committees that focus on animal welfare, and raises questions about the impact of their findings on nursing care. Data show that, in Brazil, progress in the ethics of using animals for scientific purposes was consolidated with Law No. 11.794/2008, which establishes ethical procedures addressing health, genetic and experimental parameters. The application of ethics in the handling of animals for scientific and educational purposes, and the obtaining of consistent, quality data, bring unquestionable contributions to nursing, as they provide grounds for relating pathophysiological mechanisms to the clinical condition of the patient.

  17. Modelling the Spatial Isotope Variability of Precipitation in Syria

    Energy Technology Data Exchange (ETDEWEB)

    Kattan, Z.; Kattaa, B. [Department of Geology, Atomic Energy Commission of Syria (AECS), Damascus (Syrian Arab Republic)

    2013-07-15

    Attempts were made to model the spatial variability of environmental isotope (¹⁸O, ²H and ³H) compositions of precipitation in Syria. Rainfall samples periodically collected on a monthly basis from 16 different stations were used for processing and demonstrating the spatial distributions of these isotopes, together with those of deuterium excess (d) values. Mathematically, the modelling process was based on applying simple polynomial models that take into consideration the effects of major geographic factors (Lon. E., Lat. N., and altitude). The modelling results for the spatial distribution of stable isotopes (¹⁸O and ²H) were generally good, as shown by the high correlation coefficients (R² = 0.7-0.8) calculated between the observed and predicted values. In the case of the deuterium excess and tritium distributions, the results were approximate (R² = 0.5-0.6). Improving the simulation of spatial isotope variability probably requires the incorporation of other local meteorological factors, such as relative air humidity, precipitation amount and vapour pressure, which are supposed to play an important role in such an arid country. (author)

  18. Simple model for crop photosynthesis in terms of weather variables ...

    African Journals Online (AJOL)

    A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

  19. Model for expressing leaf photosynthesis in terms of weather variables

    African Journals Online (AJOL)

    A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...

  20. Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter

    Science.gov (United States)

    Hoffman, F. E.

    1984-01-01

    The following objectives were undertaken: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of various payload classes on the costs of reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability enhancing activities.

  1. Sulfur Dioxide Analyzer Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Springston, Stephen R. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-05-01

    The Sulfur Dioxide Analyzer measures sulfur dioxide based on absorbance of UV light at one wavelength by SO2 molecules, which then decay to a lower energy state by emitting UV light at a longer wavelength. Specifically, SO2 + hν1 → SO2* → SO2 + hν2. The emitted light is proportional to the concentration of SO2 in the optical cell. External communication with the analyzer is available through an Ethernet port configured through the instrument network of the AOS systems. The Model 43i-TLE is part of the i-series of Thermo Scientific instruments. The i-series instruments are designed to interface with external computers through the proprietary Thermo Scientific iPort software. However, this software is somewhat cumbersome and inflexible. Brookhaven National Laboratory (BNL) has written an interface program in National Instruments LabVIEW that both controls the Model 43i-TLE Analyzer and queries the unit for all measurement and housekeeping data. The LabVIEW vi (the software program written by BNL) ingests all raw data from the instrument and outputs raw data files in a uniform data format similar to other instruments in the AOS, as described more fully in Section 6.0 below.

  2. College Education and Wages in the U.K. : Estimating Conditional Average Structural Functions in Nonadditive Models with Binary Endogenous Variables

    NARCIS (Netherlands)

    Klein, T.J.

    2009-01-01

    Recent studies debate how the unobserved dependence between the monetary return to college education and selection into college can be characterized. This paper examines this question using British data. We develop a semiparametric local instrumental variables estimator for identified features of a

  3. Diurnal variation of stratospheric and lower mesospheric HOCl, ClO and HO2 at the equator: comparison of 1-D model calculations with measurements by satellite instruments

    Directory of Open Access Journals (Sweden)

    M. Khosravi

    2013-08-01

    The diurnal variation of HOCl and the related species ClO, HO2 and HCl measured by satellites has been compared with the results of a one-dimensional photochemical model. The study compares the data from various limb-viewing instruments with model simulations from the middle stratosphere to the lower mesosphere. Data from three sub-millimetre instruments and two infrared spectrometers are used, namely from the Sub-Millimetre Radiometer (SMR) on board Odin, the Microwave Limb Sounder (MLS) on board Aura, the Superconducting Submillimeter-wave Limb-Emission Sounder (SMILES) on the International Space Station, the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on board ENVISAT, and the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on board SCISAT. Inter-comparison of the measurements from instruments on sun-synchronous satellites (SMR, MLS, MIPAS) and measurements from solar occultation instruments (ACE-FTS) is challenging since the measurements correspond to different solar zenith angles (or local times). However, using a model which covers all solar zenith angles and data from the SMILES instrument which measured at all local times over a period of several months provides the possibility to verify the model and to indirectly compare the diurnally variable species. The satellite data were averaged for latitudes of 20° S to 20° N for the SMILES observation period from November 2009 to April 2010 and were compared at three altitudes: 35, 45 and 55 km. Besides presenting the SMILES data, the study also shows a first comparison of the latest MLS data (version 3.3) of HOCl, ClO, and HO2 with other satellite observations, as well as a first evaluation of HO2 observations made by Odin/SMR. The MISU-1D model has been carefully initialised and run for conditions and locations of the observations. The diurnal cycle features for the species investigated here are generally well reproduced by the model. The satellite

  4. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the Weather Research and Forecasting (WRF) model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improve our understanding of both past and future regional climate related to fine

  5. Robotic-surgical instrument wrist pose estimation.

    Science.gov (United States)

    Fabel, Stephan; Baek, Kyungim; Berkelman, Peter

    2010-01-01

    The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
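
    A simplified sketch of the pose-estimation step: rather than the P3P initialization described above, it minimizes reprojection error directly with Levenberg-Marquardt over an axis-angle pose. The marker geometry and focal length are synthetic assumptions, not the robot's actual calibration.

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      # Known 3D marker points on the instrument wrist, in its own frame (meters).
      P = np.array([[0, 0, 0], [0.01, 0, 0], [0, 0.01, 0], [0, 0, 0.01]])
      f = 800.0                                  # focal length in pixels (assumed)

      def project(pose, pts):
          rvec, t = pose[:3], pose[3:]
          cam = Rotation.from_rotvec(rvec).apply(pts) + t
          return f * cam[:, :2] / cam[:, 2:3]    # pinhole projection

      true_pose = np.array([0.1, -0.2, 0.05, 0.01, 0.02, 0.30])
      uv = project(true_pose, P) + np.random.default_rng(3).normal(0, 0.3, (4, 2))

      def residual(pose):
          return (project(pose, P) - uv).ravel()

      # Levenberg-Marquardt from a rough initial guess, ensuring convergence.
      fit = least_squares(residual, x0=np.array([0, 0, 0, 0, 0, 0.25]), method='lm')
      print("estimated pose:", np.round(fit.x, 3))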

  6. Assessment of small-scale integrated water vapour variability during HOPE

    Science.gov (United States)

    Steinke, S.; Eikenberg, S.; Löhnert, U.; Dick, G.; Klocke, D.; Di Girolamo, P.; Crewell, S.

    2015-03-01

    The spatio-temporal variability of integrated water vapour (IWV) on small scales of less than 10 km and hours is assessed with data from the 2 months of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE). The statistical intercomparison of the unique set of observations during HOPE (microwave radiometer (MWR), Global Positioning System (GPS), sun photometer, radiosondes, Raman lidar, infrared and near-infrared Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellites Aqua and Terra) measuring close together reveals a good agreement in terms of random differences (standard deviation ≤ 1 kg m⁻²) and correlation coefficient (≥ 0.98). The exception is MODIS, which appears to suffer from insufficient cloud filtering. For a case study during HOPE featuring a typical boundary layer development, the IWV variability in time and space on scales of less than 10 km and less than 1 h is investigated in detail. For this purpose, the measurements are complemented by simulations with the novel ICOsahedral Nonhydrostatic modelling framework (ICON), which for this study has a horizontal resolution of 156 m. These runs show that differences in space of 3-4 km or time of 10-15 min induce IWV variabilities on the order of 0.4 kg m⁻². This model finding is confirmed by observed time series from two MWRs approximately 3 km apart with a comparable temporal resolution of a few seconds. Standard deviations of IWV derived from MWR measurements reveal a high variability (> 1 kg m⁻²) even at very short time scales of a few minutes. These cannot be captured by the temporally lower-resolved instruments and by operational numerical weather prediction models such as COSMO-DE (an application of the Consortium for Small-scale Modelling covering Germany) of Deutscher Wetterdienst, which is included in the comparison. However, for time scales larger than 1 h, a sampling resolution of 15 min is

  7. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    Science.gov (United States)

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition to the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  8. A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable

    OpenAIRE

    Huidong Wang; Shifan He; Xiaohong Pan

    2018-01-01

    To solve the multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variable, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert uncertain linguistic variable and provide a new projection model, subsequently. Then, to depict the bi-directional projection method, the formative vectors of alternatives and ideal alternatives are defined. Furthermore, a comparative analysis with projection model is co...

  9. Shared Variable Oriented Parallel Precompiler for SPMD Model

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    For the moment, commercial parallel computer systems with distributed memory architecture are usually provided with parallel FORTRAN or parallel C compilers, which are just traditional sequential FORTRAN or C compilers expanded with communication statements. Programmers suffer from writing parallel programs with communication statements. The Shared Variable Oriented Parallel Precompiler (SVOPP) proposed in this paper can automatically generate appropriate communication statements based on shared variables for the SPMD (Single Program Multiple Data) computation model and greatly eases parallel programming with high communication efficiency. The core function of the parallel C precompiler has been successfully verified on a transputer-based parallel computer. Its prominent performance shows that SVOPP is probably a breakthrough in parallel programming technique.

  10. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified using ... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate ...

  11. Continuous-variable protocol for oblivious transfer in the noisy-storage model

    DEFF Research Database (Denmark)

    Furrer, Fabian; Gehring, Tobias; Schaffner, Christian

    2018-01-01

    We present a protocol for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations which we derive for continuous-variable systems, and which differ from the ones used in quantum key distribution. We experimentally demonstrate, in a proof-of-principle experiment, the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced ...

  12. Soil Cd, Cr, Cu, Ni, Pb and Zn sorption and retention models using SVM: Variable selection and competitive model.

    Science.gov (United States)

    González Costa, J J; Reigosa, M J; Matías, J M; Covelo, E F

    2017-09-01

    The aim of this study was to model the sorption and retention of Cd, Cr, Cu, Ni, Pb and Zn in soils. To that end, the sorption and retention of these metals were studied and the soil characterization was performed separately. Multiple stepwise regression was used to produce multivariate models with linear techniques and with support vector machines, all of which included 15 explanatory variables characterizing soils. When the R-squared values are represented, two different groups are noticed. Cr, Cu and Pb sorption and retention show a higher R-squared; the most explanatory variables being humified organic matter, Al oxides and, in some cases, cation-exchange capacity (CEC). The other group of metals (Cd, Ni and Zn) shows a lower R-squared, and clays are the most explanatory variables, including the percentages of vermiculite and silt. In some cases, quartz, plagioclase or hematite percentages also show some explanatory capacity. Support Vector Machine (SVM) regression shows that the different models are not as regular as in multiple regression in terms of the number of variables, the regression for nickel adsorption being the one with the highest number of variables in its optimal model. On the other hand, there are cases where the most explanatory variables are the same for two metals, as happens with Cd and Cr adsorption. A similar adsorption mechanism is thus postulated. These patterns of the introduction of variables into the model allow us to create explainability sequences. Those most similar to the selectivity sequences obtained by Covelo (2005) are Mn oxides in multiple regression and cation-exchange capacity in SVM. Among all the variables, the only one that is explanatory for all the metals after applying the maximum parsimony principle is the percentage of sand in the retention process. In the competitive model arising from the aforementioned sequences, the most intense competitiveness for the adsorption and retention of different metals appears between
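
    A sketch of the general recipe, stepwise variable introduction wrapped around a support vector regressor, using scikit-learn on synthetic data in place of the soil data set (the column interpretations in the comment are hypothetical):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(120, 6))          # e.g. OM, Al oxides, CEC, clay, silt, sand
      y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.3 * rng.normal(size=120)  # nonlinear target

      selected, remaining, best = [], list(range(X.shape[1])), -np.inf
      while remaining:
          # Greedy forward step: add the variable that most improves CV R^2.
          scores = {j: cross_val_score(
                        make_pipeline(StandardScaler(), SVR(C=10.0)),
                        X[:, selected + [j]], y, cv=5, scoring='r2').mean()
                    for j in remaining}
          j_best = max(scores, key=scores.get)
          if scores[j_best] <= best:          # stop when no improvement
              break
          best = scores[j_best]
          selected.append(j_best)
          remaining.remove(j_best)

      print("introduction sequence:", selected, " CV R^2:", round(best, 3))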

  13. Variable-coefficient higher-order nonlinear Schroedinger model in optical fibers: Variable-coefficient bilinear form, Baecklund transformation, brightons and symbolic computation

    International Nuclear Information System (INIS)

    Tian Bo; Gao Yitian; Zhu Hongwu

    2007-01-01

    Symbolically investigated in this Letter is a variable-coefficient higher-order nonlinear Schroedinger (vcHNLS) model for ultrafast signal-routing, fiber laser systems and optical communication systems with distributed dispersion and nonlinearity management. Of physical and optical interest, with the bilinear method extended, the vcHNLS model is transformed into a variable-coefficient bilinear form, and then an auto-Baecklund transformation is constructed. Constraints on coefficient functions are analyzed. Potentially observable with future optical-fiber experiments, variable-coefficient brightons are illustrated. Relevant properties and features are discussed as well. The Baecklund transformation and other results of this Letter will be of certain value to the studies on inhomogeneous fiber media, core of dispersion-managed brightons, fiber amplifiers, laser systems and optical communication links with distributed dispersion and nonlinearity management

  14. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
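
    A sketch of the two steps using scikit-learn's ℓ1-penalized QuantileRegressor; the adaptive second step is emulated by rescaling columns by first-step coefficient magnitudes, a standard trick rather than the authors' exact implementation.

      import numpy as np
      from sklearn.linear_model import QuantileRegressor

      rng = np.random.default_rng(5)
      n, p = 200, 500                            # p >> n, as in the paper's setting
      X = rng.normal(size=(n, p))
      y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_t(df=3, size=n)

      # Step 1: LASSO-penalized median regression screens down to a small model.
      step1 = QuantileRegressor(quantile=0.5, alpha=0.01, solver='highs').fit(X, y)
      keep = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

      # Step 2: adaptive LASSO on the reduced model; weights 1/|beta_hat_j| are
      # emulated by rescaling each kept column, then unscaling the coefficients.
      w = np.abs(step1.coef_[keep])
      step2 = QuantileRegressor(quantile=0.5, alpha=0.01, solver='highs')
      step2.fit(X[:, keep] * w, y)
      beta = step2.coef_ * w                     # back on the original scale

      nz = np.abs(beta) > 1e-8
      print("selected:", keep[nz], "coefficients:", np.round(beta[nz], 2))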

  15. Probabilistic risk assessment modeling of digital instrumentation and control systems using two dynamic methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, T., E-mail: aldemir.1@osu.ed [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Guarro, S. [ASCA, Inc., 1720 S. Catalina Avenue, Suite 220, Redondo Beach, CA 90277-5501 (United States); Mandelli, D. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Kirschenbaum, J. [Ohio State University, Department of Computer Science and Engineering, Columbus, OH 43210 (United States); Mangan, L.A. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Bucci, P. [Ohio State University, Department of Computer Science and Engineering, Columbus, OH 43210 (United States); Yau, M. [ASCA, Inc., 1720 S. Catalina Avenue, Suite 220, Redondo Beach, CA 90277-5501 (United States); Ekici, E. [Ohio State University, Department of Electrical and Computer Engineering, Columbus, OH 43210 (United States); Miller, D.W.; Sun, X. [Ohio State University, Nuclear Engineering Program, Columbus, OH 43210 (United States); Arndt, S.A. [U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001 (United States)

    2010-10-15

    The Markov/cell-to-cell mapping technique (CCMT) and the dynamic flowgraph methodology (DFM) are two system logic modeling methodologies that have been proposed to address the dynamic characteristics of digital instrumentation and control (I and C) systems and provide risk-analytical capabilities that supplement those provided by traditional probabilistic risk assessment (PRA) techniques for nuclear power plants. Both methodologies utilize a discrete-state, multi-valued logic representation of the digital I and C system. For probabilistic quantification purposes, both techniques require the estimation of the probabilities of basic system failure modes, including digital I and C software failure modes, that appear in the prime implicants identified as contributors to a given system event of interest. As in any other system modeling process, the accuracy and predictive value of the models produced by the two techniques depend not only on the intrinsic features of the modeling paradigm, but also, to a considerable extent, on the information and knowledge available to the analyst concerning the system behavior and operation rules under normal and off-nominal conditions, and the associated controlled/monitored process dynamics. The application of the two methodologies is illustrated using a digital feedwater control system (DFWCS) similar to that of an operating pressurized water reactor. This application was carried out to demonstrate how the use of either technique, or both, can facilitate the updating of an existing nuclear power plant PRA model following an upgrade of the instrumentation and control system from analog to digital. Because of scope limitations, the focus of the demonstration of the methodologies was intentionally limited to aspects of digital I and C system behavior for which probabilistic data was on hand or could be generated within the existing project bounds of time and resources. The data used in the probabilistic quantification portion of the

  16. A model of the demand for Islamic banks debt-based financing instrument

    Science.gov (United States)

    Jusoh, Mansor; Khalid, Norlin

    2013-04-01

    This paper presents a theoretical analysis of the demand for debt-based financing instruments of the Islamic banks. Debt-based financing, such as through bai bithaman ajil and al-murabahah, is by far the most prominent form of Islamic bank financing, and yet it has been largely ignored in the Islamic economics literature. Most studies have instead focused on the equity-based financing of al-mudharabah and al-musyarakah. The Islamic bank offers debt-based financing through various instruments derived under the principle of exchange (ukud al-mu'awadhat) or, more specifically, the contract of deferred sale. Under such an arrangement, Islamic debt is created when goods are purchased and the payments are deferred. Thus, unlike conventional bank debt, which is a form of financial loan contract to facilitate demand for liquid assets, this Islamic debt is created in response to the demand to purchase goods by deferred payment. In this paper we set up an analytical framework based on an infinitely lived representative agent model (ILRA model) to analyze the demand for goods to be purchased by deferred payment. The resulting demand will then be used to derive the demand for Islamic debt. We also investigate theoretically, factors that may have an impact on the demand for Islamic debt.

  17. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

    Science.gov (United States)

    Sperber, K. R.; Palmer, T. N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño- Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations.A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall

  18. Environmental versus demographic variability in stochastic predator–prey models

    International Nuclear Information System (INIS)

    Dobramysl, U; Täuber, U C

    2013-01-01

    In contrast to the neutral population cycles of the deterministic mean-field Lotka–Volterra rate equations, including spatial structure and stochastic noise in models for predator–prey interactions yields complex spatio-temporal structures associated with long-lived erratic population oscillations. Environmental variability in the form of quenched spatial randomness in the predation rates results in more localized activity patches. Our previous study showed that population fluctuations in rare favorable regions in turn cause a remarkable increase in the asymptotic densities of both predators and prey. Very intriguing features are found when variable interaction rates are affixed to individual particles rather than lattice sites. Stochastic dynamics with demographic variability in conjunction with inheritable predation efficiencies generate non-trivial time evolution for the predation rate distributions, yet with overall essentially neutral optimization. (paper)
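
    A minimal Gillespie-type sketch of a stochastic Lotka-Volterra system with demographic noise, in which each predator carries its own inheritable predation efficiency subject to mutation, loosely mirroring the individual-based variability described above. All rates and the mutation scale are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(11)
      sigma, mu, K = 0.1, 0.05, 1000          # prey birth rate, predator death, capacity
      prey = 200
      lam = list(rng.uniform(0.002, 0.008, size=100))  # per-predator predation rates

      t, t_end = 0.0, 50.0
      while t < t_end and prey > 0 and lam:
          rates = [sigma * prey * (1 - prey / K),      # prey birth (logistic)
                   mu * len(lam),                      # predator death
                   prey * sum(lam)]                    # predation events
          total = sum(rates)
          t += rng.exponential(1.0 / total)            # Gillespie waiting time
          r = rng.uniform(0, total)
          if r < rates[0]:
              prey += 1
          elif r < rates[0] + rates[1]:
              lam.pop(int(rng.integers(len(lam))))     # a random predator dies
          else:
              # Predation: prey eaten; the offspring inherits the parent's
              # efficiency with a small mutation (selection acts on lam).
              prey -= 1
              parent = int(rng.choice(len(lam), p=np.array(lam) / sum(lam)))
              lam.append(max(1e-4, lam[parent] + rng.normal(0, 0.0005)))

      print(f"t={t:.1f}: prey={prey}, predators={len(lam)}, "
            f"mean efficiency={np.mean(lam):.4f}")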

  19. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    Science.gov (United States)

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-05-01

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments is conducted to address knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been proven a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

  20. Importance analysis for models with correlated variables and its sparse grid solution

    International Nuclear Information System (INIS)

    Li, Luyi; Lu, Zhenzhou

    2013-01-01

    For structural models involving correlated input variables, a novel interpretation for variance-based importance measures is proposed based on the contribution of the correlated input variables to the variance of the model output. After the novel interpretation of the variance-based importance measures is compared with the existing ones, two solutions of the variance-based importance measures of the correlated input variables are built on the sparse grid numerical integration (SGI): double-loop nested sparse grid integration (DSGI) method and single loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by decreasing the dimensionality of the input variables procedurally, while SSGI method performs importance analysis through extending the dimensionality of the inputs. Both of them can make full use of the advantages of the SGI, and are well tailored for different situations. By analyzing the results of several numerical and engineering examples, it is found that the novel proposed interpretation about the importance measures of the correlated input variables is reasonable, and the proposed methods for solving importance measures are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built

  1. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  2. An agent-based model of cellular dynamics and circadian variability in human endotoxemia.

    Directory of Open Access Journals (Sweden)

    Tung T Nguyen

    As cellular variability and circadian rhythmicity play critical roles in immune and inflammatory responses, we present in this study an agent-based model of human endotoxemia to examine the interplay between circadian controls, cellular variability and stochastic dynamics of inflammatory cytokines. The model is qualitatively validated by its ability to reproduce circadian dynamics of inflammatory mediators and critical inflammatory responses after endotoxin administration in vivo. Novel computational concepts are proposed to characterize the cellular variability and synchronization of inflammatory cytokines in a population of heterogeneous leukocytes. Our results suggest that there is a decrease in cell-to-cell variability of inflammatory cytokines while their synchronization is increased after endotoxin challenge. Model parameters that are responsible for IκB production stimulated by NFκB activation and for the production of anti-inflammatory cytokines have large impacts on system behaviors. Additionally, examining time-dependent systemic responses revealed that the system is least vulnerable to endotoxin in the early morning and most vulnerable around midnight. Although much remains to be explored, proposed computational concepts and the model we have pioneered will provide important insights for future investigations and extensions, especially for single-cell studies to discover how cellular variability contributes to clinical implications.

  3. Vernier Caliper and Micrometer Computer Models Using Easy Java Simulation and Its Pedagogical Design Features--Ideas for Augmenting Learning with Real Instruments

    Science.gov (United States)

    Wee, Loo Kang; Ning, Hwee Tiang

    2014-01-01

    This paper presents the customization of Easy Java Simulation models, used with actual laboratory instruments, to create active experiential learning for measurements. The laboratory instruments are the vernier caliper and the micrometer. Three computer model design ideas that complement real equipment are discussed. These ideas involve (1) a…

  4. Application of Peleg's equation to describe creep responses of potatoes under constant and variable storage conditions.

    Science.gov (United States)

    Solomon, W K; Jindal, V K

    2017-06-01

    The application of Peleg's equation to characterize creep behavior of potatoes during storage was investigated. Potatoes were stored at 25, 15, 5C, and variable (fluctuating) temperature for 16 or 26 weeks. The Peleg equation adequately described the creep response of potatoes during storage at all storage conditions (R2 = .97 to .99). Peleg constant k1 exhibited a significant (p creep responses during storage or processing will be potentially helpful to better understand the phenomenon. The model parameters from such model could be used to relate rheological properties of raw and cooked potatoes. Moreover, the model parameters could be used to establish relationship between instrumental and sensory attributes which will help in the prediction of sensory attributes from instrumental data. © 2016 Wiley Periodicals, Inc.
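
    For reference, the two-parameter Peleg model named in this record is usually written in a hyperbolic form. The block below is a sketch of that common form rather than a restatement of the paper's exact equation; the symbols (Y for the creep response, Y_0 for its initial value) are assumptions made here for illustration.

    ```latex
    % Hyperbolic two-parameter Peleg form, as commonly fitted to creep data:
    Y(t) = Y_0 + \frac{t}{k_1 + k_2\, t}
    % 1/k_1 : initial rate of deformation (t -> 0)
    % 1/k_2 : asymptotic added deformation (t -> infinity)
    ```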

  5. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.

  7. The IKARUS instrument

    International Nuclear Information System (INIS)

    Gerster, H.J.; Stein, G.

    1994-01-01

    When the Federal Government decided in 1990 on a 25% reduction of CO2 emissions by 2005, it became necessary to develop an instrument for the analysis and assessment of the ecological, economic and energetic impacts of different reduction strategies. The development task was awarded by the BMFT to the Research Centre Juelich, in cooperation with well-known institutions of energy system research. The complete instrument is scheduled to be finished by the end of 1994. For decentralized use of the instrument by a wide specialist public, the developed models and data banks, which are equipped with a user-friendly interface, are suited to larger PCs (486, 16 MB RAM, 500-1000 MB ROM). (orig.) [de

  8. Modelling Inter-relationships among water, governance, human development variables in developing countries with Bayesian networks.

    Science.gov (United States)

    Dondeynaz, C.; Lopez-Puga, J.; Carmona-Moreno, C.

    2012-04-01

    Improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, requires collaboration and coordination across different sectors (environment, health, economic activities, governance, and international cooperation). This inter-dependency has been recognised with the adoption of the "Integrated Water Resources Management" principles, which push for the integration of the various dimensions involved in WSS delivery to ensure efficient and sustainable management. Understanding these interrelations is crucial for decision makers in the water sector, in particular in developing countries where WSS still represent an important lever for livelihood improvement. In this framework, the Joint Research Centre of the European Commission has developed a coherent database (WatSan4Dev database) containing 29 indicators of environmental, socio-economic, governance and financial aid flows data, focusing on developing countries (Celine et al., 2011, under publication). The aim of this work is to model the WatSan4Dev dataset using probabilistic models to identify the key variables influencing, or being influenced by, water supply and sanitation access levels. Bayesian Network Models are suitable for mapping the conditional dependencies between variables and also allow ordering variables by their level of influence on the dependent variable. Separate models have been built for water supply and for sanitation because of their different behaviour. The models are validated against statistical criteria as well as against scientific knowledge and literature. A two-step approach has been adopted to build the structure of the model; a Bayesian network is first built for each thematic cluster of variables (e.g. governance, agricultural pressure, or human development), keeping a detailed level for later interpretation. A global model is then built based on significant indicators of each cluster modelled previously. The structure of the

  9. Application of a user-friendly comprehensive circulatory model for estimation of hemodynamic and ventricular variables

    NARCIS (Netherlands)

    Ferrari, G.; Kozarski, M.; Gu, Y. J.; De Lazzari, C.; Di Molfetta, A.; Palko, K. J.; Zielinski, K.; Gorczynska, K.; Darowski, M.; Rakhorst, G.

    2008-01-01

    Purpose: Application of a comprehensive, user-friendly, digital computer circulatory model to estimate hemodynamic and ventricular variables. Methods: The closed-loop lumped parameter circulatory model represents the circulation at the level of large vessels. A variable elastance model reproduces

  10. Radon-Instrumentation

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.

    2003-01-01

    The objectives of this work are to present the active and passive methods for radon identification and measurement, together with the associated instrumentation and its characteristics. Active detectors: the Active Alpha Cam Continuous Air Monitor, Model 758 of Victoreen, and the Model CMR-510 Continuous Radon Monitor of the Signature Femto-Tech. Passive detectors: SSNTD (solid-state nuclear track) detectors; measurement using charcoal canisters, i.e. a disk of activated charcoal deposited in a metallic box; and the electrets methodology. (Author)

  11. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
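
    The coupling step described above, combining per-model score-test evidence for one covariate by Fisher's method, amounts to combining p-values via the statistic -2 * sum(log p_i), which is chi-squared with 2k degrees of freedom under the null. A minimal sketch with scipy (the p-values below are invented for illustration):

    ```python
    import numpy as np
    from scipy.stats import combine_pvalues

    # Hypothetical score-test p-values for one covariate across three
    # cause-specific Cox models (values invented for illustration).
    p_per_model = np.array([0.04, 0.12, 0.31])

    stat, p_combined = combine_pvalues(p_per_model, method="fisher")
    print(f"Fisher chi2 = {stat:.2f}, combined p = {p_combined:.4f}")
    # The combined p-value can then drive stepwise forward selection
    # across the coupled Cox models.
    ```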

  12. Einstein x-ray observations of cataclysmic variables

    International Nuclear Information System (INIS)

    Mason, K.O.; Cordova, F.A.

    1982-01-01

    Observations with the imaging x-ray detectors on the Einstein Observatory have led to a large increase in the number of low luminosity x-ray sources known to be associated with cataclysmic variable stars (CVs). The high sensitivity of the Einstein instrumentation has permitted study of their short timescale variability and spectra. The data are adding significantly to our knowledge of the accretion process in cataclysmic variables and forcing some revision in our ideas concerning the origin of the optical variability in these stars

  13. Transcultural adaptation into Portuguese of an instrument for pain evaluation based on the biopsychosocial model

    Directory of Open Access Journals (Sweden)

    Monique Rocha Peixoto dos Santos

    Introduction: Pain is an individual experience influenced by multiple interacting factors. The “biopsychosocial” care model has gained popularity in response to growing research evidence indicating the influence of biological, psychological, and social factors on the pain experience. The implementation of this model is a challenge in the practice of the health professional. Objective: To perform the transcultural adaptation of the SCEBS method into Brazilian Portuguese. Methods: The instrument was translated and applied to 50 healthy subjects and 50 participants with non-specific chronic pain in the spine. The process of cross-cultural adaptation included the following steps: transcultural adaptation, content analysis of the scale, pre-test, revision, back-translation review, cross-cultural adaptation, revised text correction and final report. Results: The translated and adapted 51-item Portuguese version of the SCEBS method produced an instrument called SCEBS-BR. In the assessment by the target population, 50 adult users of the Brazilian Unified Health System answered the questionnaire and showed good understanding of the instrument on the verbal rating scale. Conclusion: The SCEBS-BR was proved to be easily understandable, showing good semantic validation regardless of schooling level or age, and can be considered adequate for clinical use.

  14. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
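
    As a concrete reference point for the matching estimator discussed in this record, the following is a minimal propensity-score matching sketch: a logistic-regression propensity model followed by 1-nearest-neighbor matching on the estimated score. All data are simulated and the effect size is an arbitrary choice, not drawn from the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    # Simulated observational data: X confounds both treatment and outcome.
    n = 2000
    X = rng.normal(size=(n, 3))
    treat = rng.binomial(1, 1 / (1 + np.exp(-X @ [0.8, -0.5, 0.3])))
    y = 2.0 * treat + X @ [1.0, 1.0, -1.0] + rng.normal(size=n)

    # Stage 1: estimate propensity scores.
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

    # Stage 2: match each treated unit to the nearest control on the score.
    treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = controls[idx.ravel()]

    # Average treatment effect on the treated (ATT) from matched pairs.
    att = (y[treated] - y[matched]).mean()
    print(f"Estimated ATT: {att:.2f} (true effect 2.0)")
    ```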

  15. Variable Stars in the Field of V729 Aql

    Science.gov (United States)

    Cagaš, P.

    2017-04-01

    Wide field instruments can be used to acquire light curves of tens or even hundreds of variable stars per night, which increases the probability of new discoveries of interesting variable stars and generally increases the efficiency of observations. At the same time, wide field instruments produce a large amount of data, which must be processed using advanced software. The traditional approach, typically used by amateur astronomers, requires an unacceptable amount of time to process each data set. New functionality, built into the SIPS software package, can shorten the time needed to obtain light curves by several orders of magnitude. Also, the newly introduced SILICUPS software is intended for post-processing of stored light curves. It can be used to visualize observations from many nights, to find variable star periods, to evaluate types of variability, etc. This work provides an overview of the tools used to process data from the large field of view around the variable star V729 Aql and demonstrates the results.

  16. Analyzing and leveraging self-similarity for variable resolution atmospheric models

    Science.gov (United States)

    O'Brien, Travis; Collins, William

    2015-04-01

    Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in global atmospheric models without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution. We provide two such examples. In the first example we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds in the high resolution region of variable resolution simulations with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (this artifact is significantly reduced in CAM5). In the second example we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison.

  17. Remote sensing of the Canadian Arctic: Modelling biophysical variables

    Science.gov (United States)

    Liu, Nanfeng

    It is anticipated that Arctic vegetation will respond in a variety of ways to altered temperature and precipitation patterns expected with climate change, including changes in phenology, productivity, biomass, cover and net ecosystem exchange. Remote sensing provides data and data processing methodologies for monitoring and assessing Arctic vegetation over large areas. The goal of this research was to explore the potential of hyperspectral and high spatial resolution multispectral remote sensing data for modelling two important Arctic biophysical variables: Percent Vegetation Cover (PVC) and the fraction of Absorbed Photosynthetically Active Radiation (fAPAR). A series of field experiments were conducted to collect PVC and fAPAR at three Canadian Arctic sites: (1) Sabine Peninsula, Melville Island, NU; (2) Cape Bounty Arctic Watershed Observatory (CBAWO), Melville Island, NU; and (3) Apex River Watershed (ARW), Baffin Island, NU. Linear relationships between biophysical variables and Vegetation Indices (VIs) were examined at different spatial scales using field spectra (for the Sabine Peninsula site) and high spatial resolution satellite data (for the CBAWO and ARW sites). At the Sabine Peninsula site, hyperspectral VIs exhibited a better performance for modelling PVC than multispectral VIs due to their capacity for sampling fine spectral features. The optimal hyperspectral bands were located at important spectral features observed in Arctic vegetation spectra, including leaf pigment absorption in the red wavelengths and at the red-edge, leaf water absorption in the near infrared, and leaf cellulose and lignin absorption in the shortwave infrared. At the CBAWO and ARW sites, field PVC and fAPAR exhibited strong correlations (R2 > 0.70) with the NDVI (Normalized Difference Vegetation Index) derived from high-resolution WorldView-2 data. Similarly, high spatial resolution satellite-derived fAPAR was correlated to MODIS fAPAR (R2 = 0.68), with a systematic
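
    The NDVI used throughout this record is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red). A minimal computation of NDVI and a linear PVC ~ NDVI fit is sketched below; the reflectance and cover values are made up for illustration, since the record's data are not reproduced here.

    ```python
    import numpy as np

    # Made-up red and near-infrared reflectances for a handful of plots.
    red = np.array([0.08, 0.10, 0.12, 0.05, 0.20])
    nir = np.array([0.40, 0.35, 0.30, 0.45, 0.25])

    # Normalized Difference Vegetation Index.
    ndvi = (nir - red) / (nir + red)

    # Percent Vegetation Cover observed in the field for the same plots.
    pvc = np.array([75.0, 64.0, 51.0, 88.0, 22.0])

    # Ordinary least-squares fit of the linear PVC ~ NDVI relationship.
    slope, intercept = np.polyfit(ndvi, pvc, 1)
    r2 = np.corrcoef(ndvi, pvc)[0, 1] ** 2
    print(f"PVC = {slope:.1f} * NDVI + {intercept:.1f}, R^2 = {r2:.2f}")
    ```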

  18. Statistical Time Series Models of Pilot Control with Applications to Instrument Discrimination

    Science.gov (United States)

    Altschul, R. E.; Nagel, P. M.; Oliver, F.

    1984-01-01

    Contained in this report are a general description of the methodology used in obtaining the transfer function models and verifying model fidelity, frequency-domain plots of the modeled transfer functions, numerical results obtained from an analysis of poles and zeroes obtained from z-plane to s-plane conversions of the transfer functions, and the results of a study on the sequential introduction of other variables, both exogenous and endogenous, into the loop.

  19. Intrajudge and Interjudge Reliability of the Stuttering Severity Instrument-Fourth Edition.

    Science.gov (United States)

    Davidow, Jason H; Scott, Kathleen A

    2017-11-08

    The Stuttering Severity Instrument (SSI) is a tool used to measure the severity of stuttering. Previous versions of the instrument have known limitations (e.g., Lewis, 1995). The present study examined the intra- and interjudge reliability of the newest version, the Stuttering Severity Instrument-Fourth Edition (SSI-4) (Riley, 2009). Twelve judges who were trained on the SSI-4 protocol participated. Judges collected SSI-4 data while viewing 4 videos of adults who stutter at Time 1 and 4 weeks later at Time 2. Data were analyzed for intra- and interjudge reliability of the SSI-4 subscores (for Frequency, Duration, and Physical Concomitants), total score, and final severity rating. Intra- and interjudge reliability across the subscores and total score concurred with the manual's reported reliability when reliability was calculated using the methods described in the manual. New calculations of judge agreement produced different values from those in the manual (for the 3 subscores, total score, and final severity rating) and provided data absent from the manual. Clinicians and researchers who use the SSI-4 should carefully consider the limitations of the instrument. Investigation into the multitasking demands of the instrument may provide information on whether separating the collection of data for specific variables will improve intra- and interjudge reliability of those variables.

  20. Oscillating shells: A model for a variable cosmic object

    OpenAIRE

    Nunez, Dario

    1997-01-01

    A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.

  1. Degree of multicollinearity and variables involved in linear dependence in additive-dominant models

    Directory of Open Access Journals (Sweden)

    Juliana Petrini

    2012-12-01

    The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indexes and eigenvalues from the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated to the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03 - 70.20 for RM and 1.03 - 60.70 for R. Collinearity increased with the increase of variables in the model and the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partially due to the natural connection between these covariables as fractions of the biological types in breed composition.
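
    A minimal sketch of the VIF-based part of such a multicollinearity diagnosis, using statsmodels' variance_inflation_factor on simulated covariates (the data and the usual "VIF >> 10" rule of thumb are illustrative only, not taken from the study):

    ```python
    import numpy as np
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(42)

    # Simulated design matrix with two deliberately collinear columns.
    n = 500
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    x3 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # nearly a copy of x1
    X = np.column_stack([x1, x2, x3])

    # VIF for each explanatory variable; large values flag collinearity.
    for j in range(X.shape[1]):
        print(f"VIF(x{j + 1}) = {variance_inflation_factor(X, j):.1f}")
    ```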

  2. Use of the mathematical modelling method for the investigation of dynamic characteristics of acoustical measuring instruments

    Science.gov (United States)

    Vasilyev, Y. M.; Lagunov, L. F.

    1973-01-01

    A schematic diagram of a noise-measuring device is presented. The device uses pulse-expansion modeling, according to the peak or any other measured value, to obtain instrument readings with a very low noise error.

  3. Development and evaluation of a stochastic daily rainfall model with long-term variability

    Science.gov (United States)

    Kamal Chowdhury, A. F. M.; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony S.; Parana Manage, Nadeeka

    2017-12-01

    The primary objective of this study is to develop a stochastic rainfall generation model that can match not only the short resolution (daily) variability but also the longer resolution (monthly to multiyear) variability of observed rainfall. This study has developed a Markov chain (MC) model, which uses a two-state MC process with two parameters (wet-to-wet and dry-to-dry transition probabilities) to simulate rainfall occurrence and a gamma distribution with two parameters (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. Starting with the traditional MC-gamma model with deterministic parameters, this study has developed and assessed four other variants of the MC-gamma model with different parameterisations. The key finding is that if the parameters of the gamma distribution are randomly sampled each year from fitted distributions rather than fixed parameters with time, the variability of rainfall depths at both short and longer temporal resolutions can be preserved, while the variability of wet periods (i.e. number of wet days and mean length of wet spell) can be preserved by decadally varied MC parameters. This is a straightforward enhancement to the traditional simplest MC model and is both objective and parsimonious.
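
    A minimal sketch of the variant the authors favour: a two-state Markov chain for occurrence, with gamma-distributed wet-day depths whose mean and standard deviation are re-sampled each year to inject low-frequency variability. All parameter values below are invented for illustration, not fitted values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    p_ww, p_dd = 0.65, 0.80      # wet-to-wet / dry-to-dry transition probabilities
    mu_dist = (8.0, 2.0)         # yearly-sampled mean of wet-day rainfall (mm)
    sd_dist = (9.0, 2.5)         # yearly-sampled std dev of wet-day rainfall (mm)

    def simulate_year(days=365, wet=False):
        # Re-sample the gamma parameters once per year, as in the stochastic
        # parameter variant of the MC-gamma model.
        mu = max(rng.normal(*mu_dist), 0.1)
        sd = max(rng.normal(*sd_dist), 0.1)
        shape, scale = (mu / sd) ** 2, sd ** 2 / mu   # gamma moment matching
        rain = np.zeros(days)
        for d in range(days):
            wet = rng.random() < (p_ww if wet else 1.0 - p_dd)
            if wet:
                rain[d] = rng.gamma(shape, scale)
        return rain

    annual_totals = [simulate_year().sum() for _ in range(100)]
    print(f"mean annual rainfall: {np.mean(annual_totals):.0f} mm, "
          f"std: {np.std(annual_totals):.0f} mm")
    ```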

  4. Geochemical Modeling Of F Area Seepage Basin Composition And Variability

    International Nuclear Information System (INIS)

    Millings, M.; Denham, M.; Looney, B.

    2012-01-01

    From the 1950s through 1989, the F Area Seepage Basins at the Savannah River Site (SRS) received low level radioactive wastes resulting from processing nuclear materials. Discharges of process wastes to the F Area Seepage Basins followed by subsequent mixing processes within the basins and eventual infiltration into the subsurface resulted in contamination of the underlying vadose zone and downgradient groundwater. For simulating contaminant behavior and subsurface transport, a quantitative understanding of the interrelated discharge-mixing-infiltration system along with the resulting chemistry of fluids entering the subsurface is needed. An example of this need emerged as the F Area Seepage Basins was selected as a key case study demonstration site for the Advanced Simulation Capability for Environmental Management (ASCEM) Program. This modeling evaluation explored the importance of the wide variability in bulk wastewater chemistry as it propagated through the basins. The results are intended to generally improve and refine the conceptualization of infiltration of chemical wastes from seepage basins receiving variable waste streams and to specifically support the ASCEM case study model for the F Area Seepage Basins. Specific goals of this work included: (1) develop a technically-based 'charge-balanced' nominal source term chemistry for water infiltrating into the subsurface during basin operations, (2) estimate the nature of short term and long term variability in infiltrating water to support scenario development for uncertainty quantification (i.e., UQ analysis), (3) identify key geochemical factors that control overall basin water chemistry and the projected variability/stability, and (4) link wastewater chemistry to the subsurface based on monitoring well data. Results from this study provide data and understanding that can be used in further modeling efforts of the F Area groundwater plume. As identified in this study, key geochemical factors affecting basin

  5. Instrumental variable estimation in a survival context

    DEFF Research Database (Denmark)

    Tchetgen Tchetgen, Eric J; Walter, Stefan; Vansteelandt, Stijn

    2015-01-01

    The instrumental variable (IV) approach is very well developed in the context of linear regression and also for certain generalized linear models with a nonlinear link function. However, IV methods are not as well developed for regression analysis with a censored survival outcome. In this article, we develop the IV approach for regression analysis in a survival context, primarily under an additive hazards model, for which we describe 2 simple methods for estimating causal effects. The first method is a straightforward 2-stage regression approach analogous to 2-stage least squares commonly used for IV analysis in linear regression. In this approach, the fitted value from a first-stage regression of the exposure on the IV is entered in place of the exposure in the second-stage hazard model to recover a valid estimate of the treatment effect of interest. The second method is a so-called control function approach, which entails adding
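
    A sketch of the first (2-stage) method, assuming lifelines' Aalen additive hazards fitter as the second-stage model. The data-generating process and all variable names are invented here; the article's own estimators and asymptotics are more careful than this illustration.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import AalenAdditiveFitter

    rng = np.random.default_rng(3)

    # Simulated data: Z is the instrument, U an unmeasured confounder,
    # A the exposure, T an exponential survival time depending on A and U.
    n = 2000
    Z = rng.binomial(1, 0.5, n)
    U = rng.normal(size=n)
    A = 0.7 * Z + 0.5 * U + rng.normal(size=n)
    T = rng.exponential(1.0 / np.exp(0.05 + 0.3 * A + 0.3 * U))
    E = np.ones(n, dtype=int)              # no censoring, for simplicity

    # Stage 1: regress the exposure on the instrument, keep fitted values.
    beta = np.polyfit(Z, A, 1)
    A_hat = np.polyval(beta, Z)

    # Stage 2: additive hazards model with the fitted exposure in place of A.
    df = pd.DataFrame({"A_hat": A_hat, "T": T, "E": E})
    aaf = AalenAdditiveFitter()
    aaf.fit(df, duration_col="T", event_col="E")
    print(aaf.cumulative_hazards_.tail())  # cumulative effect of A_hat
    ```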

  6. High-Frequency X-ray Variability Detection in A Black Hole Transient with USA.

    Energy Technology Data Exchange (ETDEWEB)

    Shabad, Gayane

    2000-10-16

    Studies of high-frequency variability (above ~100 Hz) in X-ray binaries provide a unique opportunity to explore the fundamental physics of spacetime and matter, since the orbital timescale on the order of several milliseconds is a timescale of the motion of matter through the region located in close proximity to a compact stellar object. The detection of weak high-frequency signals in X-ray binaries depends on how well we understand the level of Poisson noise due to the photon counting statistics, i.e. how well we can understand and model the detector deadtime and other instrumental systematic effects. We describe the preflight timing calibration work performed on the Unconventional Stellar Aspect (USA) X-ray detector to study deadtime and timing issues. We developed a Monte Carlo deadtime model and deadtime correction methods for the USA experiment. The instrumental noise power spectrum can be estimated within ~0.1% accuracy in the case when no energy-dependent instrumental effect is present. We also developed correction techniques to account for an energy-dependent instrumental effect. The developed methods were successfully tested on USA Cas A and Cygnus X-1 data. This work allowed us to make a detection of a weak signal in a black hole candidate (BHC) transient.
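
    As a toy version of the kind of deadtime Monte Carlo described here, one can simulate Poisson photon arrivals, drop events falling within a fixed non-paralyzable deadtime, and compare the observed rate with the classic correction m = n / (1 + n*tau). All numbers are illustrative; the USA detector model itself was more involved and energy-dependent.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    true_rate = 5000.0   # incident photons per second (illustrative)
    tau = 50e-6          # non-paralyzable deadtime, 50 microseconds
    t_end = 100.0        # seconds of simulated exposure

    # Poisson arrivals: cumulative sums of exponential inter-arrival times.
    arrivals = np.cumsum(
        rng.exponential(1.0 / true_rate, int(true_rate * t_end * 1.2)))
    arrivals = arrivals[arrivals < t_end]

    # Apply non-paralyzable deadtime: an event is recorded only if it falls
    # at least tau after the previously *recorded* event.
    recorded, last = 0, -np.inf
    for t in arrivals:
        if t - last >= tau:
            recorded += 1
            last = t

    observed_rate = recorded / t_end
    predicted = true_rate / (1.0 + true_rate * tau)   # analytic expectation
    print(f"observed {observed_rate:.0f} /s vs analytic {predicted:.0f} /s")
    ```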

  7. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Directory of Open Access Journals (Sweden)

    Alberto Muñoz

    2016-10-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural

  8. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Science.gov (United States)

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks.

  9. Reconstructing pre-instrumental streamflow in Eastern Australia using a water balance approach

    Science.gov (United States)

    Tozer, C. R.; Kiem, A. S.; Vance, T. R.; Roberts, J. L.; Curran, M. A. J.; Moy, A. D.

    2018-03-01

    Streamflow reconstructions based on paleoclimate proxies provide much longer records than the short instrumental period records on which water resource management plans are currently based. In Australia there is a lack of in-situ high resolution paleoclimate proxy records, but remote proxies with teleconnections to Australian climate have utility in producing streamflow reconstructions. Here we investigate, via a case study for a catchment in eastern Australia, the novel use of an Antarctic ice-core based rainfall reconstruction within a Budyko-framework to reconstruct ∼1000 years of annual streamflow. The resulting streamflow reconstruction captures interannual to decadal variability in the instrumental streamflow, validating both the use of the ice core rainfall proxy record and the Budyko-framework method. In the preinstrumental era the streamflow reconstruction shows longer wet and dry epochs and periods of streamflow variability that are higher than observed in the instrumental era. Importantly, for both the instrumental record and preinstrumental reconstructions, the wet (dry) epochs in the rainfall record are shorter (longer) in the streamflow record and this non-linearity must be considered when inferring hydroclimatic risk or historical water availability directly from rainfall proxy records alone. These insights provide a better understanding of present infrastructure vulnerability in the context of past climate variability for eastern Australia. The streamflow reconstruction presented here also provides a better understanding of the range of hydroclimatic variability possible, and therefore represents a more realistic baseline on which to quantify the potential impacts of anthropogenic climate change on water security.
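
    The water-balance step can be sketched with one common Budyko-type curve. Fu's equation is used below purely as an illustrative choice of curve (the paper's exact formulation may differ); it turns annual rainfall P and potential evapotranspiration PET into evaporation E, and hence streamflow Q = P - E.

    ```python
    import numpy as np

    def fu_streamflow(P, PET, w=2.6):
        """Annual streamflow from a Budyko-type water balance (Fu's curve).

        E/P = 1 + PET/P - (1 + (PET/P)**w)**(1/w); Q = P - E.
        w is a catchment parameter (illustrative default).
        """
        phi = PET / P                          # aridity index
        evap_ratio = 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)
        return P * (1.0 - evap_ratio)          # Q = P - E

    # Example: reconstructed annual rainfall (mm) and an assumed PET.
    P = np.array([650.0, 820.0, 480.0, 910.0])
    Q = fu_streamflow(P, PET=1100.0)
    print(np.round(Q, 1))
    ```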

  10. Quantifying intrinsic and extrinsic variability in stochastic gene expression models.

    Science.gov (United States)

    Singh, Abhyudai; Soltani, Mohammad

    2013-01-01

    Genetically identical cell populations exhibit considerable intercellular variation in the level of a given protein or mRNA. Both intrinsic and extrinsic sources of noise drive this variability in gene expression. More specifically, extrinsic noise is the expression variability that arises from cell-to-cell differences in cell-specific factors such as enzyme levels, cell size and cell cycle stage. In contrast, intrinsic noise is the expression variability that is not accounted for by extrinsic noise, and typically arises from the inherent stochastic nature of biochemical processes. Two-color reporter experiments are employed to decompose expression variability into its intrinsic and extrinsic noise components. Analytical formulas for intrinsic and extrinsic noise are derived for a class of stochastic gene expression models, where variations in cell-specific factors cause fluctuations in model parameters, in particular, transcription and/or translation rate fluctuations. Assuming mRNA production occurs in random bursts, transcription rate is represented by either the burst frequency (how often the bursts occur) or the burst size (number of mRNAs produced in each burst). Our analysis shows that fluctuations in the transcription burst frequency enhance extrinsic noise but do not affect the intrinsic noise. On the contrary, fluctuations in the transcription burst size or mRNA translation rate dramatically increase both intrinsic and extrinsic noise components. Interestingly, simultaneous fluctuations in transcription and translation rates arising from randomness in ATP abundance can decrease intrinsic noise measured in a two-color reporter assay. Finally, we discuss how these formulas can be combined with single-cell gene expression data from two-color reporter experiments for estimating model parameters.
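
    For the two-color decomposition mentioned above, the standard estimators (in the style of Elowitz et al. 2002) are eta_int^2 = <(x - y)^2> / (2<x><y>) and eta_ext^2 = (<xy> - <x><y>) / (<x><y>), where x and y are the two reporter levels measured in the same cells. A minimal sketch on simulated data:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulate two-color reporter data: a shared extrinsic factor scales both
    # reporters; independent Poisson-like noise is intrinsic.
    n = 5000
    extrinsic = rng.lognormal(0.0, 0.3, n)
    x = rng.poisson(100 * extrinsic)
    y = rng.poisson(100 * extrinsic)

    mx, my = x.mean(), y.mean()
    eta_int2 = np.mean((x - y) ** 2) / (2 * mx * my)    # intrinsic noise^2
    eta_ext2 = (np.mean(x * y) - mx * my) / (mx * my)   # extrinsic noise^2
    print(f"intrinsic^2 = {eta_int2:.4f}, extrinsic^2 = {eta_ext2:.4f}")
    ```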

  11. Stochastic modeling of the Fermi/LAT γ-ray blazar variability

    Energy Technology Data Exchange (ETDEWEB)

    Sobolewska, M. A.; Siemiginowska, A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93107 (United States); Nalewajko, K., E-mail: malgosia@camk.edu.pl [JILA, University of Colorado and National Institute of Standards and Technology, 440 UCB, Boulder, CO 80309 (United States)

    2014-05-10

    We study the γ-ray variability of 13 blazars observed with the Fermi/Large Area Telescope (LAT). These blazars have the most complete light curves collected during the first four years of the Fermi sky survey. We model them with the Ornstein-Uhlenbeck (OU) process or a mixture of OU processes. The OU process has power spectral density (PSD) proportional to 1/f^α with α changing at a characteristic timescale, τ0, from 0 (τ >> τ0) to 2 (τ << τ0). The PSD of the mixed OU process has two characteristic timescales and an additional intermediate region with 0 < α < 2. We show that the OU model provides a good description of the Fermi/LAT light curves of three blazars in our sample. For the first time, we constrain a characteristic γ-ray timescale of variability in two BL Lac sources, 3C 66A and PKS 2155-304 (τ0 ≅ 25 days and τ0 ≅ 43 days, respectively, in the observer's frame), which are longer than the soft X-ray timescales detected in blazars and Seyfert galaxies. We find that the mixed OU process approximates the light curves of the remaining 10 blazars better than the OU process. We derive limits on their long and short characteristic timescales, and infer that their Fermi/LAT PSDs resemble power-law functions. We constrain the PSD slopes for all but one source in the sample. We find hints for sub-hour Fermi/LAT variability in four flat spectrum radio quasars. We discuss the implications of our results for theoretical models of blazar variability.
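
    An exact-discretization simulator for the OU process underlying these fits is sketched below; the mean-reversion timescale tau0 sets the PSD bend from flat (slope 0) to 1/f^2. Parameter values are arbitrary illustrative choices, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    def simulate_ou(n, dt, tau0, mu=0.0, sigma=1.0, x0=0.0):
        """Exact updates of dX = -(X - mu)/tau0 dt + sigma dW."""
        x = np.empty(n)
        x[0] = x0
        decay = np.exp(-dt / tau0)
        sd = sigma * np.sqrt(tau0 / 2.0 * (1.0 - decay ** 2))
        for i in range(1, n):
            x[i] = mu + (x[i - 1] - mu) * decay + sd * rng.normal()
        return x

    # Daily sampling over 4 years with a 25-day characteristic timescale,
    # roughly mimicking the cadence discussed in the record.
    lc = simulate_ou(n=4 * 365, dt=1.0, tau0=25.0)
    print(lc[:5])
    ```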

  12. A MODEL FOR (QUASI-)PERIODIC MULTIWAVELENGTH PHOTOMETRIC VARIABILITY IN YOUNG STELLAR OBJECTS

    Energy Technology Data Exchange (ETDEWEB)

    Kesseli, Aurora Y. [Boston University, 725 Commonwealth Ave, Boston, MA 02215 (United States); Petkova, Maya A.; Wood, Kenneth; Gregory, Scott G. [SUPA, School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife, KY16 9AD (United Kingdom); Whitney, Barbara A. [Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter St, Madison, WI 53706 (United States); Hillenbrand, L. A. [Astronomy Department, California Institute of Technology, Pasadena, CA 91125 (United States); Stauffer, J. R.; Morales-Calderon, M.; Rebull, L. [Spitzer Science Center, California Institute of Technology, CA 91125 (United States); Alencar, S. H. P., E-mail: aurorak@bu.com [Departamento de Física—ICEx—UFMG, Av. Antônio Carlos, 6627, 30270-901, Belo Horizonte, MG (Brazil)

    2016-09-01

    We present radiation transfer models of rotating young stellar objects (YSOs) with hot spots in their atmospheres, inner disk warps, and other three-dimensional effects in the nearby circumstellar environment. Our models are based on the geometry expected from magneto-accretion theory, where material moving inward in the disk flows along magnetic field lines to the star and creates stellar hot spots upon impact. Due to rotation of the star and magnetosphere, the disk is variably illuminated. We compare our model light curves to data from the Spitzer YSOVAR project to determine if these processes can explain the variability observed at optical and mid-infrared wavelengths in young stars. We focus on those variables exhibiting “dipper” behavior that may be periodic, quasi-periodic, or aperiodic. We find that the stellar hot-spot size and temperature affects the optical and near-infrared light curves, while the shape and vertical extent of the inner disk warp affects the mid-IR light curve variations. Clumpy disk distributions with non-uniform fractal density structure produce more stochastic light curves. We conclude that magneto-accretion theory is consistent with certain aspects of the multiwavelength photometric variability exhibited by low-mass YSOs. More detailed modeling of individual sources can be used to better determine the stellar hot-spot and inner disk geometries of particular sources.

  13. Micro-macro multilevel latent class models with multiple discrete individual-level variables

    NARCIS (Netherlands)

    Bennink, M.; Croon, M.A.; Kroon, B.; Vermunt, J.K.

    2016-01-01

    An existing micro-macro method for a single individual-level variable is extended to the multivariate situation by presenting two multilevel latent class models in which multiple discrete individual-level variables are used to explain a group-level outcome. As in the univariate case, the

  14. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  15. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, a Beta and fractional logit transformation. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...

  17. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby; Mai, Paul Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran Kumar

    2015-01-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
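
    A minimal version of the MDS step, embedding a precomputed model-to-model dissimilarity matrix with scikit-learn; the distance values below are invented stand-ins for the paper's normalized squared or grey-scale metrics.

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical pairwise dissimilarities between four rupture models,
    # e.g. from a normalized squared or grey-scale misfit metric.
    D = np.array([
        [0.0, 0.3, 0.8, 0.9],
        [0.3, 0.0, 0.7, 0.8],
        [0.8, 0.7, 0.0, 0.2],
        [0.9, 0.8, 0.2, 0.0],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)
    print(coords)   # the spread of points reflects model variability
    ```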

  18. Phonation Quotient in Women: A Measure of Vocal Efficiency Using Three Aerodynamic Instruments.

    Science.gov (United States)

    Joshi, Ashwini; Watts, Christopher R

    2017-03-01

    The purpose of this study was to examine measures of vital capacity and phonation quotient across three age groups in women using three different aerodynamic instruments representing low-tech and high-tech options. This study has a prospective, repeated measures design. Fifteen women in each age group of 25-39 years, 40-59 years, and 60-79 years were assessed using maximum phonation time and vital capacity obtained from three aerodynamic instruments: a handheld analog windmill type spirometer, a handheld digital spirometer, and the Phonatory Aerodynamic System (PAS), Model 6600. Phonation quotient was calculated using vital capacity from each instrument. Analyses of variance were performed to test for main effects of the instruments and age on vital capacity and derived phonation quotient. Pearson product moment correlation was performed to assess measurement reliability (parallel forms) between the instruments. Regression equations, scatterplots, and coefficients of determination were also calculated. Statistically significant differences were found in vital capacity measures for the digital spirometer compared with the windmill-type spirometer and PAS across age groups. Strong positive correlations were present between all three instruments for both vital capacity and derived phonation quotient measurements. Measurement precision for the digital spirometer was lower than the windmill spirometer compared with the PAS. However, all three instruments had strong measurement reliability. Additionally, age did not have an effect on the measurement across instruments. These results are consistent with previous literature reporting data from male speakers and support the use of low-tech options for measurement of basic aerodynamic variables associated with voice production. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
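
    Phonation quotient itself is a one-line derived measure, vital capacity divided by maximum phonation time. A small helper with the units as commonly reported (mL for vital capacity, seconds for phonation time, giving mL/s); the example values are illustrative only:

    ```python
    def phonation_quotient(vital_capacity_ml: float,
                           max_phonation_time_s: float) -> float:
        """Phonation quotient (mL/s) = vital capacity / max phonation time."""
        return vital_capacity_ml / max_phonation_time_s

    # Illustrative values: 3200 mL vital capacity, 18 s maximum phonation time.
    print(f"PQ = {phonation_quotient(3200, 18):.0f} mL/s")   # ~178 mL/s
    ```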

  19. The Functional Segregation and Integration Model: Mixture Model Representations of Consistent and Variable Group-Level Connectivity in fMRI

    DEFF Research Database (Denmark)

    Churchill, Nathan William; Madsen, Kristoffer Hougaard; Mørup, Morten

    2016-01-01

    The brain consists of specialized cortical regions that exchange information between each other, reflecting a combination of segregated (local) and integrated (distributed) processes that define brain function. Functional magnetic resonance imaging (fMRI) is widely used to characterize... flexibility: they only estimate segregated structure and do not model interregional functional connectivity, nor do they account for network variability across voxels or between subjects. To address these issues, this letter develops the functional segregation and integration model (FSIM). This extension... brain regions where network expression predicts subject age in the experimental data. Thus, the FSIM is effective at summarizing functional connectivity structure in group-level fMRI, with applications in modeling the relationships between network variability and behavioral/demographic variables.

  20. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    ... models. The GDC model was further extended to describe two-region (bimodal) soils and could describe and predict Dp/Do well both for different soil aggregate size fractions and for variably compacted volcanic ash soils. A possible use of the new GDC model is in engineering applications such as the design of highly compacted landfill site caps.

  1. Effects of thermal deformation on optical instruments for space application

    Science.gov (United States)

    Segato, E.; Da Deppo, V.; Debei, S.; Cremonese, G.

    2017-11-01

    Optical instruments for space missions work in a hostile environment; it is thus necessary to accurately study the effects of variations in ambient parameters on the equipment. Optical instruments are particularly sensitive to ambient conditions, especially temperature. This variable can cause dilatations and misalignments of the optical elements, and can also give rise to dangerous stresses in the optics. The resulting displacements and deformations degrade the quality of the sampled images. In this work a method for studying the effects of temperature variations on the performance of an imaging instrument is presented. The optics and their mountings are modeled and processed by a thermo-mechanical Finite Element Model (FEM) analysis; the output data, which describe the deformations of the optical element surfaces, are then processed by an ad hoc MATLAB routine: a non-linear least-squares optimization algorithm is adopted to determine the surface equations (plane, spherical, nth-order polynomial) which best fit the data. The resulting mathematical surface representations are then imported directly into ZEMAX for sequential raytracing analysis. The results are the variations of the Spot Diagrams, of the MTF curves and of the Diffraction Ensquared Energy due to the simulated thermal loads. This method has been successfully applied to the Stereo Camera for the BepiColombo mission, reproducing expected operative conditions. The results help to design and compare different optical housing systems for a feasible solution, and show that it is preferable to use kinematic constraints on prisms and lenses to minimize the variation of the optical performance of the Stereo Camera.
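
    The surface-fitting step lends itself to a short sketch: a nonlinear least-squares fit of a sphere to FEM-deformed node coordinates, in the spirit of (though not identical to) the MATLAB routine described above. Only the spherical case is shown; function and variable names are hypothetical.

```python
# Fit a best-fit sphere to deformed optical-surface nodes from an FEM run.
import numpy as np
from scipy.optimize import least_squares

def fit_sphere(points):
    """points: (N, 3) array of deformed surface node coordinates."""
    def residuals(p):
        cx, cy, cz, r = p
        return np.linalg.norm(points - [cx, cy, cz], axis=1) - r
    centroid = points.mean(axis=0)
    r0 = np.linalg.norm(points - centroid, axis=1).mean()
    sol = least_squares(residuals, x0=[*centroid, r0])
    return sol.x  # center (cx, cy, cz) and radius r, ready for the raytracer
```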

  2. Numerical tools for musical instruments acoustics: analysing nonlinear physical models using continuation of periodic solutions

    OpenAIRE

    Karkar , Sami; Vergez , Christophe; Cochelin , Bruno

    2012-01-01

    We propose a new approach based on numerical continuation and bifurcation analysis for the study of physical models of instruments that produce self-sustained oscillations. Numerical continuation consists in following how a given solution of a set of equations is modified when one or several parameters of these equations are allowed to vary. Several physical models (clarinet, saxophone, and violin) are formulated as nonlinear dynamical systems, whose periodic solution...

  3. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may significantly influence the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime in which environmental variables affect activity.
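
    A minimal sketch of the jittering device that underlies quantile count models (in the spirit of Machado and Santos Silva, 2005) is given below; it is a simplified stand-in for the study's method, with illustrative variable names.

```python
# Jittered quantile regression for counts: uniform noise breaks the ties so
# that standard linear quantile regression applies; coefficients are averaged
# over jittered replicates.
import numpy as np
import statsmodels.api as sm

def jittered_quantile_fit(counts, X, tau, n_jitter=50, seed=0):
    """counts: (n,) trap counts; X: (n, p) environmental covariates."""
    rng = np.random.default_rng(seed)
    Xc = sm.add_constant(X)
    coefs = []
    for _ in range(n_jitter):
        z = counts + rng.uniform(0.0, 1.0, size=len(counts))  # jitter the counts
        coefs.append(sm.QuantReg(z, Xc).fit(q=tau).params)
    return np.mean(coefs, axis=0)

# e.g. effect of rainfall and temperature near the upper tail of activity:
# beta = jittered_quantile_fit(captures, np.column_stack([rain, temp]), tau=0.9)
```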

  4. Assessment of the quality and variability of health information on chronic pain websites using the DISCERN instrument

    Directory of Open Access Journals (Sweden)

    Buckley Norman

    2010-10-01

    Background: The Internet is used increasingly by providers as a tool for disseminating pain-related health information and by patients as a resource about health conditions and treatment options. However, health information on the Internet remains unregulated and varies in quality, accuracy and readability. The objective of this study was to determine the quality of pain websites, and to explain variability in quality and readability between pain websites. Methods: Five key terms (pain, chronic pain, back pain, arthritis, and fibromyalgia) were entered into the Google, Yahoo and MSN search engines. Websites were assessed using the DISCERN instrument as a quality index. Grade-level readability ratings were assessed using the Flesch-Kincaid Readability Algorithm. Univariate (alpha = 0.20) and multivariable regression (alpha = 0.05) analyses were used to explain the variability in DISCERN scores and grade-level readability, using potential for commercial gain, health-related seals of approval, language(s) and multimedia features as independent variables. Results: A total of 300 websites were assessed; 21 were excluded in accordance with the exclusion criteria and 110 were duplicates, leaving 161 unique sites. About 6.8% (11/161) of the websites offered patients commercial products for their pain condition, 36.0% (58/161) had a health-related seal of approval, 75.8% (122/161) presented information in English only and 40.4% (65/161) offered an interactive multimedia experience. In assessing the quality of the unique websites, of a maximum score of 80, the overall average DISCERN score was 55.9 (13.6) and the readability (grade level) 10.9 (3.9). The multivariable regressions demonstrated that website seals of approval (P = 0.015) and potential for commercial gain (P = 0.189) were contributing factors to higher DISCERN scores, while seals of approval (P = 0.168) and interactive multimedia (P = 0.244) contributed to

  5. An observational and modeling study of the regional impacts of climate variability

    Science.gov (United States)

    Horton, Radley M.

    Climate variability has large impacts on humans and their agricultural systems. Farmers are at the center of this agricultural network, but it is often agricultural planners---regional planners, extension agents, commodity groups and cooperatives---that translate climate information for users. Global climate models (GCMs) are a leading tool for understanding and predicting climate and climate change. Armed with climate projections and forecasts, agricultural planners adapt their decision-making to optimize outcomes. This thesis explores what GCMs can, and cannot, tell us about climate variability and change at regional scales. The question is important, since high-quality regional climate projections could assist farmers and regional planners in key management decisions, contributing to better agricultural outcomes. To answer these questions, climate variability and its regional impacts are explored in observations and models for the current and future climate. The goals are to identify impacts of observed variability, assess model simulation of variability, and explore how climate variability and its impacts may change under enhanced greenhouse warming. Chapter One explores how well Goddard Institute for Space Studies (GISS) atmospheric models, forced by historical sea surface temperatures (SST), simulate climatology and large-scale features during the exceptionally strong 1997--1999 El Nino Southern Oscillation (ENSO) cycle. Reasonable performance in this 'proof of concept' test is considered a minimum requirement for further study of variability in models. All model versions produce appropriate local changes with ENSO, indicating that with correct ocean temperatures these versions are capable of simulating the large-scale effects of ENSO around the globe. A high vertical resolution model (VHR) provides the best simulation. Evidence is also presented that SST anomalies outside the tropical Pacific may play a key role in generating remote teleconnections even

  6. Modelling and control of variable speed wind turbines for power system studies

    DEFF Research Database (Denmark)

    Michalke, Gabriele; Hansen, Anca Daniela

    2010-01-01

    Modern wind turbines are predominantly variable speed wind turbines with a power electronic interface. Emphasis in this paper is therefore on the modelling and control issues of these wind turbine concepts and especially on their impact on the power system. The models and control are developed ... and implemented in the power system simulation tool DIgSILENT. Important issues like the fault ride-through and grid support capabilities of these wind turbine concepts are addressed. The paper reveals that advanced control of variable speed wind turbines can improve power system stability. Finally, it will be shown in the paper that wind parks consisting of variable speed wind turbines can help nearby connected fixed speed wind turbines to ride through grid faults. Copyright © 2009 John Wiley & Sons, Ltd.

  7. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing ... survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions ... cod in these areas, suggesting that key biotic and abiotic processes can be successfully incorporated into recruitment models.

  8. Thermal Modeling of the Mars Reconnaissance Orbiter's Solar Panel and Instruments during Aerobraking

    Science.gov (United States)

    Dec, John A.; Gasbarre, Joseph F.; Amundsen, Ruth M.

    2007-01-01

    The Mars Reconnaissance Orbiter (MRO) launched on August 12, 2005 and started aerobraking at Mars in March 2006. During the spacecraft's design phase, thermal models of the solar panels and instruments were developed to determine which components would be the most limiting thermally during aerobraking. Having determined the most limiting components, thermal limits in terms of heat rate were established. Advanced thermal modeling techniques were developed utilizing Thermal Desktop and Patran Thermal. Heat transfer coefficients were calculated using a Direct Simulation Monte Carlo technique. Analysis established that the solar panels were the most limiting components during the aerobraking phase of the mission.

  9. Development and Application of Econometric Models for Forecasting and Analysis of Monetary Policy Scenarios

    OpenAIRE

    Malugin, Vladimir; Demidenko , Mikhail; Kalechits, Dmitry; Miksjuk , Alexei; Tsukarev , Taras

    2009-01-01

    A system of econometric models designed for forecasting target monetary indicators as well as conducting monetary policy scenarios analysis is presented. The econometric models integrated in the system are represented in the error correction form and are interlinked by means of monetary policy instruments variables, common exogenous variables characterizing external shocks, and monetary policy target endogenous variables. Forecast accuracy estimates and monetary policy analysis results are pr...

  10. Variable slip wind generator modeling for real-time simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, R.; Brochu, J.; Turmel, G. [Hydro-Quebec, Varennes, PQ (Canada). IREQ

    2006-07-01

    A model of a wind turbine using a variable slip wound-rotor induction machine was presented. The model was created as part of a library of generic wind generator models intended for wind integration studies. The stator winding of the wind generator was connected directly to the grid and the rotor was driven by the turbine through a drive train. The variable rotor resistance was synthesized by an external resistor in parallel with a diode rectifier. A forced-commutated power electronic device (IGBT) was connected to the wound rotor by slip rings and brushes. Simulations were conducted in a Matlab/Simulink environment using SimPowerSystems blocks to model power system elements and Simulink blocks to model the turbine, control system and drive train. Detailed descriptions of the turbine, the drive train and the control system were provided. The model's implementation in the simulator was also described. A case study demonstrating the real-time simulation of a wind generator connected at the distribution level of a power system was presented. Results of the case study were then compared with results obtained from the SimPowerSystems off-line simulation. Results showed good agreement between the waveforms, demonstrating the conformity of the real-time and the off-line simulations. The capability of Hypersim for real-time simulation of wind turbines with power electronic converters in a distribution network was demonstrated. It was concluded that hardware-in-the-loop (HIL) simulation of wind turbine controllers for wind integration studies in power systems is now feasible. 5 refs., 1 tab., 6 figs.

  11. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  12. Instrumental intelligent test of food sensory quality as mimic of human panel test combining multiple cross-perception sensors and data fusion

    International Nuclear Information System (INIS)

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng

    2014-01-01

    Highlights: • To develop a novel instrumental intelligent test methodology for food sensory analysis. • A novel data fusion was used in instrumental intelligent test methodology. • Linear and nonlinear tools were comparatively used for modeling. • The instrumental test methodology can be imitative of human test behavior. - Abstract: Instrumental test of food quality using perception sensors instead of human panel test is attracting massive attention recently. A novel cross-perception multi-sensors data fusion imitating multiple mammal perception was proposed for the instrumental test in this work. First, three mimic sensors of electronic eye, electronic nose and electronic tongue were used in sequence for data acquisition of rice wine samples. Then all data from the three different sensors were preprocessed and merged. Next, three cross-perception variables i.e., color, aroma and taste, were constructed using principal components analysis (PCA) and multiple linear regression (MLR) which were used as the input of models. MLR, back-propagation artificial neural network (BPANN) and support vector machine (SVM) were comparatively used for modeling, and the instrumental test was achieved for the comprehensive quality of samples. Results showed the proposed cross-perception multi-sensors data fusion presented obvious superiority to the traditional data fusion methodologies, also achieved a high correlation coefficient (>90%) with the human panel test results. This work demonstrated that the instrumental test based on the cross-perception multi-sensors data fusion can actually mimic the human test behavior, therefore is of great significance to ensure the quality of products and decrease the loss of the manufacturers

  13. Instrumental intelligent test of food sensory quality as mimic of human panel test combining multiple cross-perception sensors and data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng, E-mail: qschen@ujs.edu.cn

    2014-09-02

    Highlights: • To develop a novel instrumental intelligent test methodology for food sensory analysis. • A novel data fusion was used in instrumental intelligent test methodology. • Linear and nonlinear tools were comparatively used for modeling. • The instrumental test methodology can be imitative of human test behavior. - Abstract: Instrumental test of food quality using perception sensors instead of human panel test is attracting massive attention recently. A novel cross-perception multi-sensors data fusion imitating multiple mammal perception was proposed for the instrumental test in this work. First, three mimic sensors of electronic eye, electronic nose and electronic tongue were used in sequence for data acquisition of rice wine samples. Then all data from the three different sensors were preprocessed and merged. Next, three cross-perception variables i.e., color, aroma and taste, were constructed using principal components analysis (PCA) and multiple linear regression (MLR) which were used as the input of models. MLR, back-propagation artificial neural network (BPANN) and support vector machine (SVM) were comparatively used for modeling, and the instrumental test was achieved for the comprehensive quality of samples. Results showed the proposed cross-perception multi-sensors data fusion presented obvious superiority to the traditional data fusion methodologies, also achieved a high correlation coefficient (>90%) with the human panel test results. This work demonstrated that the instrumental test based on the cross-perception multi-sensors data fusion can actually mimic the human test behavior, therefore is of great significance to ensure the quality of products and decrease the loss of the manufacturers.
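
    A compressed sketch of the fusion idea common to both records above: per-sensor PCA scores stand in for the cross-perception "color", "aroma" and "taste" variables, which a multiple linear regression maps to panel scores. The pipeline is a simplification, not the authors' exact preprocessing.

```python
# Cross-perception data fusion: PCA per sensor block, then MLR to panel scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fuse_and_fit(eye, nose, tongue, panel_scores, n_pc=1):
    """eye/nose/tongue: (n_samples, n_features) arrays from the three sensors."""
    blocks = [PCA(n_components=n_pc).fit_transform(block)   # one latent variable
              for block in (eye, nose, tongue)]             # per perception channel
    fused = np.hstack(blocks)                               # color | aroma | taste
    return LinearRegression().fit(fused, panel_scores)
```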

  14. Cost-effective design of economic instruments in nutrition policy

    Directory of Open Access Journals (Sweden)

    Smed Sinne

    2007-04-01

    This paper addresses the potential for using economic regulation, e.g. taxes or subsidies, as instruments to combat the increasing problems of inappropriate diets, which lead to health problems such as obesity, type 2 diabetes and cardiovascular diseases in most countries. Such policy measures may be considered as alternatives or supplements to other regulation instruments, including information campaigns, bans or enhancement of technological solutions to the problems of obesity or related diseases. Seven different food tax and subsidy instruments or combinations of instruments are analysed quantitatively. The analyses demonstrate that the average cost-effectiveness with regard to changing the intake of selected nutritional variables can be improved by 10-30 per cent if taxes/subsidies are targeted against these nutrients, compared with targeting selected food categories. Finally, the paper raises a range of issues which need to be investigated further before firm conclusions about the suitability of economic instruments in nutrition policy can be drawn.

  15. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  16. Modeling and fabrication of an RF MEMS variable capacitor with a fractal geometry

    KAUST Repository

    Elshurafa, Amro M.

    2013-08-16

    In this paper, we model, fabricate, and measure an electrostatically actuated MEMS variable capacitor that utilizes a fractal geometry and serpentine-like suspension arms. Explicitly, a variable capacitor that possesses a top suspended plate with a specific fractal geometry and also possesses a bottom fixed plate complementary in shape to the top plate has been fabricated in the PolyMUMPS process. An important benefit that was achieved from using the fractal geometry in designing the MEMS variable capacitor is increasing the tuning range of the variable capacitor beyond the typical ratio of 1.5. The modeling was carried out using the commercially available finite element software COMSOL to predict both the tuning range and pull-in voltage. Measurement results show that the tuning range is 2.5 at a maximum actuation voltage of 10V.
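
    For orientation, the classical parallel-plate pull-in estimate below shows which design quantities set the actuation limit; the fractal plate and serpentine springs of the device above will shift these numbers, and all parameter values are illustrative.

```python
# First-order textbook check, not the paper's COMSOL model: classical
# parallel-plate pull-in voltage V_pi = sqrt(8 k g0^3 / (27 eps0 A)).
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k_spring, gap, area):
    """k_spring in N/m, gap g0 in m, plate area A in m^2; returns volts."""
    return math.sqrt(8 * k_spring * gap**3 / (27 * EPS0 * area))

# e.g. k = 2 N/m, g0 = 2 um, A = (200 um)^2  ->  approx 3.7 V
print(pull_in_voltage(2.0, 2e-6, (200e-6) ** 2))
```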

  17. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful to

  18. Weak instruments and the first stage F-statistic in IV models with a nonscalar error covariance structure

    NARCIS (Netherlands)

    Bun, M.; de Haan, M.

    2010-01-01

    We analyze the usefulness of the first stage F-statistic for detecting weak instruments in the IV model with a nonscalar error covariance structure. In particular, we question the validity of the rule of thumb of a first stage F-statistic of 10 or higher for models with correlated errors.
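
    The diagnostic at issue can be computed in a few lines; the sketch below evaluates the homoskedastic first-stage F-statistic whose "F > 10" rule of thumb the paper questions for nonscalar error covariance. Names are illustrative.

```python
# First-stage F-statistic: joint test that all instrument slopes are zero in
# the regression of the endogenous regressor on instruments (and controls).
import numpy as np
import statsmodels.api as sm

def first_stage_F(x, Z, controls=None):
    """x: endogenous regressor (n,); Z: instruments (n, k); controls: (n, m)."""
    exog = Z if controls is None else np.column_stack([Z, controls])
    exog = sm.add_constant(exog)          # column order: const, Z, controls
    fit = sm.OLS(x, exog).fit()
    k = Z.shape[1]
    R = np.zeros((k, exog.shape[1]))
    R[:, 1:k + 1] = np.eye(k)             # restrict only the instrument slopes
    return float(np.asarray(fit.f_test(R).fvalue).squeeze())
```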

  19. An instrument dedicated for modelling of pulmonary radiotherapy

    International Nuclear Information System (INIS)

    Niezink, Anne G.H.; Dollekamp, Nienke J.; Elzinga, Harriet J.; Borger, Denise; Boer, Eduard J.H.; Ubbels, Jan F.; Woltman-van Iersel, Marleen; Leest, Annija H.D. van der; Beijert, Max; Groen, Harry J.M.; Kraan, Jan; Hiltermann, Thijo J.N.; Wekken, Anthonie J. van der; Putten, John W.G. van; Rutgers, Steven R.; Pieterman, Remge M.; Hosson, Sander M. de; Roenhorst, Anke W.J.; Langendijk, Johannes A.; Widder, Joachim

    2015-01-01

    Background and purpose: Radiotherapy plays a pivotal role in lung cancer treatment. Selection of patients for new (radio)therapeutic options aiming at improving outcomes requires reliable and validated prediction models. We present the implementation of a prospective platform for evaluation and development of lung radiotherapy (proPED-LUNG) as an instrument enabling multidimensional predictive modelling. Materials and methods: ProPED-LUNG was designed to comprise relevant baseline and follow-up data of patients receiving pulmonary radiotherapy with curative intent. Patient characteristics, diagnostic and staging information, treatment parameters including full dose-volume histograms, tumour control, survival, and toxicity are scored. Besides physician-rated data, a range of patient-rated data regarding symptoms and health-related quality-of-life are collected. Results: After 18 months of accrual, 315 patients had been included (accrual rate, 18 per month). Of the first hundred patients included, 70 received conformal (chemo)radiotherapy and 30 underwent stereotactic radiotherapy. Compliance at 3 and 6 months follow-up was 96-100% for patient-rated, and 81-94% for physician-rated assessments. For data collection, 0.4 FTE were allocated in a 183 FTE department (0.2%). Conclusions: ProPED-LUNG is feasible, with high compliance rates, and yields a large amount of high-quality prospective disease-related, treatment-related, patient- and physician-rated data which can be used to evaluate new developments in pulmonary radiotherapy.

  20. Analysis and modeling of wafer-level process variability in 28 nm FD-SOI using split C-V measurements

    Science.gov (United States)

    Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard

    2018-07-01

    This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.

  1. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology, which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
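
    The core computation behind such Markov reliability models is a matrix-exponential solution of the state equations; the toy generator below is invented for illustration and is not the feedwater-control-system model of the paper.

```python
# Transient state probabilities of a small Markov reliability model, p(t) = p0 expm(Q t).
import numpy as np
from scipy.linalg import expm

# Q[i, j] = transition rate from state i to state j; rows sum to zero.
lam1, lam2, mu = 1e-4, 5e-4, 1e-2   # illustrative rates, per hour
Q = np.array([[-lam1,         lam1,  0.0],    # working -> degraded
              [   mu, -(mu + lam2), lam2],    # degraded -> repaired or failed
              [  0.0,          0.0,  0.0]])   # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])                # start in the working state
for t in (24.0, 720.0, 8760.0):               # hours
    p = p0 @ expm(Q * t)
    print(f"t={t:7.0f} h  P(failed) = {p[-1]:.3e}")
```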

  2. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: the 5% scale measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. Acoustics: 200-2,000 Hz full scale equals 4,000-40,000 Hz model scale. Ignition transient: 0-100 Hz full scale equals 0-2,000 Hz model scale. Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.

  3. Correlation Analysis of Water Demand and Predictive Variables for Short-Term Forecasting Models

    Directory of Open Access Journals (Sweden)

    B. M. Brentan

    2017-01-01

    Operational and economic aspects of water distribution make water demand forecasting paramount for water distribution system (WDS) management. However, water demand introduces high levels of uncertainty in WDS hydraulic models. As a result, there is growing interest in developing accurate methodologies for water demand forecasting. Several mathematical models can serve this purpose. One crucial aspect is the use of suitable predictive variables. The most used predictive variables involve weather and social aspects. To improve knowledge of the interrelation between water demand and the various predictive variables, this study applies three algorithms: classical Principal Component Analysis (PCA) and the powerful machine learning algorithms Self-Organizing Maps (SOMs) and Random Forest (RF). We show that these last two algorithms help corroborate the results found by PCA, while they are able to unveil features hidden to PCA, owing to their ability to cope with nonlinearities. This paper presents a correlation study of three district metered areas (DMAs) from Franca, a Brazilian city, exploring weather and social variables to improve the knowledge of residential demand for water. For the three DMAs, temperature, relative humidity, and hour of the day appear to be the most important predictive variables for building an accurate regression model.
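
    A hedged sketch of the Random Forest step described above: rank candidate predictors of district demand by impurity-based importance. Column names are stand-ins for the study's weather and social variables.

```python
# Rank predictive variables for water demand with a Random Forest.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rank_predictors(df: pd.DataFrame, target: str = "demand") -> pd.Series:
    """df: one row per time step; columns are predictors plus the target."""
    X = df.drop(columns=[target])
    forest = RandomForestRegressor(n_estimators=500, random_state=0)
    forest.fit(X, df[target])
    return (pd.Series(forest.feature_importances_, index=X.columns)
              .sort_values(ascending=False))

# e.g. rank_predictors(dma_data[["temperature", "humidity", "hour", "demand"]])
```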

  4. A latent class distance association model for cross-classified data with a categorical response variable.

    Science.gov (United States)

    Vera, José Fernando; de Rooij, Mark; Heiser, Willem J

    2014-11-01

    In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.

  5. Holdup Measures on an SRNL Mossbauer Spectroscopy Instrument

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R.; Brown, T.; Salaymeh, S.

    2010-05-05

    Gamma-ray holdup measurements of a Mossbauer spectroscopy instrument are described and modeled. In the qualitative acquisitions obtained in a low-background area of Savannah River National Laboratory, only Am-241 and Np-237 activity were observed. The Am-241 was known to be the instrumental activation source, while the Np-237 is clearly observed as a source of contamination internal to the instrument. The two sources of activity are modeled separately in two acquisition configurations using two separate modeling tools. The results agree well, demonstrating a content of (1980 ± 150) μCi Am-241 and (110 ± 50) μCi of Np-237.

  6. Spatio-temporal Variability of Albedo and its Impact on Glacier Melt Modelling

    Science.gov (United States)

    Kinnard, C.; Mendoza, C.; Abermann, J.; Petlicki, M.; MacDonell, S.; Urrutia, R.

    2017-12-01

    Albedo is an important variable for the surface energy balance of glaciers, yet its representation within distributed glacier mass-balance models is often greatly simplified. Here we study the spatio-temporal evolution of albedo on Glacier Universidad, central Chile (34°S, 70°W), using time-lapse terrestrial photography, and investigate its effect on the shortwave radiation balance and modelled melt rates. A 12 megapixel digital single-lens reflex camera was set up overlooking the glacier and programmed to take three daily images of the glacier during a two-year period (2012-2014). One image was chosen for each day with no cloud shading on the glacier. The RAW images were projected onto a 10 m resolution digital elevation model (DEM) using the IMGRAFT software (Messerli and Grinsted, 2015). A six-parameter camera model was calibrated using a single image and a set of 17 ground control points (GCPs), yielding a georeferencing accuracy that accounts for possible camera movement over time. The reflectance values from the projected image were corrected for topographic and atmospheric influences using a parametric solar irradiation model, following a modified algorithm based on Corripio (2004), and then converted to albedo using reference albedo measurements from an on-glacier automatic weather station (AWS). The image-based albedo was found to compare well with independent albedo observations from a second AWS in the glacier accumulation area. Analysis of the albedo maps showed that the albedo is more spatially variable than the incoming solar radiation, making albedo the more important factor in the spatial variability of the energy balance. The incorporation of albedo maps within an enhanced temperature index melt model revealed that the spatio-temporal variability of albedo is an important factor for the calculation of glacier-wide meltwater fluxes.

  7. ASPECT OF LANGUAGE ON A QUALITATIVE ANALYSIS OF STUDENT’S EVALUATION INSTRUMENT

    Directory of Open Access Journals (Sweden)

    Ismanto Ismanto

    2016-11-01

    This article examines the characteristics of a good student evaluation instrument. At least two requirements must be met: the instrument must be valid and reliable. The validity of an instrument can be seen from its ability to measure what it is supposed to measure. Evidence of validity may be based on item content, the response process, internal structure, relationships with other variables, and the consequences of administering the instrument. Analysis of content, known as content validity, is a rational analysis of the domain to be measured to determine how well each item on the instrument represents the ability being measured. Content validity is assessed by submitting the blueprint and the items of the instrument to experts for quantitative and qualitative analysis.

  8. Separation of variables in anisotropic models and non-skew-symmetric elliptic r-matrix

    Science.gov (United States)

    Skrypnyk, Taras

    2017-05-01

    We solve a problem of separation of variables for the classical integrable hamiltonian systems possessing Lax matrices satisfying linear Poisson brackets with the non-skew-symmetric, non-dynamical elliptic so(3)⊗ so(3)-valued classical r-matrix. Using the corresponding Lax matrices, we present a general form of the "separating functions" B( u) and A( u) that generate the coordinates and the momenta of separation for the associated models. We consider several examples and perform the separation of variables for the classical anisotropic Euler's top, Steklov-Lyapunov model of the motion of anisotropic rigid body in the liquid, two-spin generalized Gaudin model and "spin" generalization of Steklov-Lyapunov model.

  9. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    Science.gov (United States)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models among 15 sites, the integrated weighted model was found to produce more accurate forecasts for the 7 selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
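
    For reference, the snippet below computes the Heidke skill score in its simplest 2x2 form; the verification above uses its multi-categorical generalization.

```python
# Heidke skill score from a 2x2 contingency table:
# a = hits, b = false alarms, c = misses, d = correct negatives.
def heidke_skill_score(a, b, c, d):
    n = a + b + c + d
    correct = (a + d) / n                                   # observed accuracy
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance accuracy
    return (correct - expected) / (1.0 - expected)

print(heidke_skill_score(50, 10, 15, 125))  # ~0.71 for this example table
```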

  10. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    Science.gov (United States)

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon and contamination by the same microorganism on cold smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between batches variability is relatively strong, whereas in the second a structure also exists but is less marked. © 2012 Society for Risk Analysis.
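
    A minimal sketch of the variance partition such hierarchical models formalize, written with PyMC (assumed available): samples vary around their batch mean with a within-batch spread, and batch means vary around a global mean with a between-batch spread. Priors and names are illustrative, not the authors' exact specification.

```python
# Hierarchical Bayesian partition of between- and within-batch variability.
import pymc as pm

def fit_batch_model(log_conc, batch_idx, n_batches):
    """log_conc: log10 concentration per sample; batch_idx: batch of each sample."""
    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)             # global mean
        sd_between = pm.HalfNormal("sd_between", sigma=5.0)  # batch-to-batch
        sd_within = pm.HalfNormal("sd_within", sigma=5.0)    # sample-to-sample
        batch_mu = pm.Normal("batch_mu", mu=mu, sigma=sd_between,
                             shape=n_batches)
        pm.Normal("obs", mu=batch_mu[batch_idx], sigma=sd_within,
                  observed=log_conc)
        return pm.sample(1000, tune=1000, chains=2, random_seed=0)

# The posterior ratio sd_within/sd_between is what drives sampling-plan results.
```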

  11. Treatment of thoracolumbar burst fractures with variable screw placement or Isola instrumentation and arthrodesis: case series and literature review.

    Science.gov (United States)

    Alvine, Gregory F; Swain, James M; Asher, Marc A; Burton, Douglas C

    2004-08-01

    The controversy of burst fracture surgical management is addressed in this retrospective case study and literature review. The series consisted of 40 consecutive patients, index included, with 41 fractures treated with stiff, limited segment transpedicular bone-anchored instrumentation and arthrodesis from 1987 through 1994. No major acute complications such as death, paralysis, or infection occurred. For the 30 fractures with pre- and postoperative computed tomography studies, spinal canal compromise was 61% and 32%, respectively. Neurologic function improved in 7 of 14 patients (50%) and did not worsen in any. The principal problem encountered was screw breakage, which occurred in 16 of the 41 (39%) instrumented fractures. As we have previously reported, transpedicular anterior bone graft augmentation significantly decreased variable screw placement (VSP) implant breakage. However, it did not prevent Isola implant breakage in two-motion segment constructs. Compared with VSP, Isola provided better sagittal plane realignment and constructs that have been found to be significantly stiffer. Unplanned reoperation was necessary in 9 of the 40 patients (23%). At 1- and 2-year follow-up, 95% and 79% of patients were available for study, and a satisfactory outcome was achieved in 84% and 79%, respectively. These satisfaction and reoperation rates are consistent with the literature of the time. Based on these observations and the loads to which implant constructs are exposed following posterior realignment and stabilization of burst fractures, we recommend that three- or four-motion segment constructs, rather than two motion, be used. To save valuable motion segments, planned construct shortening can be used. An alternative is sequential or staged anterior corpectomy and structural grafting.

  12. On the relevance of spectral features for instrument classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Sigurdsson, Sigurdur; Hansen, Lars Kai

    2007-01-01

    Automatic knowledge extraction from music signals is a key component for most music organization and music information retrieval systems. In this paper, we consider the problem of instrument modelling and instrument classification from the rough audio data. Existing systems for automatic instrument classification operate normally on a relatively large number of features, from which those related to the spectrum of the audio signal are particularly relevant. In this paper, we confront two different models of the spectral characterization of musical instruments. The first assumes a constant envelope...

  13. Variable sound speed in interacting dark energy models

    Science.gov (United States)

    Linton, Mark S.; Pourtsidou, Alkistis; Crittenden, Robert; Maartens, Roy

    2018-04-01

    We consider a self-consistent and physical approach to interacting dark energy models described by a Lagrangian, and identify a new class of models with variable dark energy sound speed. We show that if the interaction between dark energy in the form of quintessence and cold dark matter is purely momentum exchange this generally leads to a dark energy sound speed that deviates from unity. Choosing a specific sub-case, we study its phenomenology by investigating the effects of the interaction on the cosmic microwave background and linear matter power spectrum. We also perform a global fitting of cosmological parameters using CMB data, and compare our findings to ΛCDM.

  14. Design and validation of a standards-based science teacher efficacy instrument

    Science.gov (United States)

    Kerr, Patricia Reda

    National standards for K--12 science education address all aspects of science education, with their main emphasis on curriculum---both science subject matter and the process involved in doing science. Standards for science teacher education programs have been developing along a parallel plane, as is self-efficacy research involving classroom teachers. Generally, studies about efficacy have been dichotomous---basing the theoretical underpinnings on the work of either Rotter's Locus of Control theory or on Bandura's explanations of efficacy beliefs and outcome expectancy. This study brings all three threads together---K--12 science standards, teacher education standards, and efficacy beliefs---in an instrument designed to measure science teacher efficacy with items based on identified critical attributes of standards-based science teaching and learning. Based on Bandura's explanation of efficacy being task-specific and having outcome expectancy, a developmental, systematic progression from standards-based strategies and activities to tasks to critical attributes was used to craft items for a standards-based science teacher efficacy instrument. Demographic questions related to school characteristics, teacher characteristics, preservice background, science teaching experience, and post-certification professional development were included in the instrument. The instrument was completed by 102 middle level science teachers, with complete data for 87 teachers. A principal components analysis of the science teachers' responses to the instrument resulted in two components: Standards-Based Science Teacher Efficacy: Beliefs About Teaching (BAT, reliability = .92) and Standards-Based Science Teacher Efficacy: Beliefs About Student Achievement (BASA, reliability = .82). Variables that were characteristic of professional development activities, science content preparation, and school environment were identified as members of the sets of variables predicting the BAT and BASA

  15. Global modeling of land water and energy balances. Part III: Interannual variability

    Science.gov (United States)

    Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.

    2002-01-01

    The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.

  16. Design of and initial results from a Highly Instrumented Reactor for Atmospheric Chemistry (HIRAC

    Directory of Open Access Journals (Sweden)

    D. R. Glowacki

    2007-10-01

    The design of a Highly Instrumented Reactor for Atmospheric Chemistry (HIRAC) is described and initial results obtained from HIRAC are presented. The ability of HIRAC to perform in-situ laser-induced fluorescence detection of OH and HO2 radicals with the Fluorescence Assay by Gas Expansion (FAGE) technique establishes it as internationally unique for a chamber of its size and pressure/temperature-variable capabilities. In addition to the FAGE technique, HIRAC features a suite of analytical instrumentation, including: a multipass FTIR system; a conventional gas chromatography (GC) instrument and a GC instrument for formaldehyde detection; and NO/NO2, CO, O3, and H2O vapour analysers. Ray-tracing simulations and NO2 actinometry have been utilized to develop a detailed model of the radiation field within HIRAC. Comparisons between the analysers and the FTIR coupled to HIRAC have been performed, and HIRAC has also been used to investigate the pressure-dependent kinetics of the chlorine atom reaction with ethene and the reaction of O3 with t-2-butene. The results obtained are in good agreement with literature recommendations and Master Chemical Mechanism predictions. HIRAC thereby offers a highly instrumented platform with the potential for: (1) high-precision kinetics investigations over a range of atmospheric conditions; (2) detailed mechanism development, significantly enhanced by its capability for measuring radicals; and (3) field instrument intercomparison, calibration, development, and investigations of instrument response at a range of atmospheric conditions.

  17. Variable selection for modelling effects of eutrophication on stream and river ecosystems

    NARCIS (Netherlands)

    Nijboer, R.C.; Verdonschot, P.F.M.

    2004-01-01

    Models are needed for forecasting the effects of eutrophication on stream and river ecosystems. Most of the current models do not include differences in local stream characteristics and effects on the biota. To define the most important variables that should be used in a stream eutrophication model,

  18. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.

  19. Modelling Seasonal GWR of Daily PM2.5 with Proper Auxiliary Variables for the Yangtze River Delta

    Directory of Open Access Journals (Sweden)

    Man Jiang

    2017-04-01

    Over the past decades, regional haze episodes have frequently occurred in eastern China, especially in the Yangtze River Delta (YRD). Satellite-derived Aerosol Optical Depth (AOD) has been used to retrieve the spatial coverage of PM2.5 concentrations. To improve the retrieval accuracy of the daily AOD-PM2.5 model, various auxiliary variables like meteorological or geographical factors have been adopted into the Geographically Weighted Regression (GWR) model. However, these variables are often arbitrarily selected without deep consideration of their potentially varying temporal or spatial contributions to model performance. In this manuscript, we put forward an automatic procedure to select proper auxiliary variables from meteorological and geographical factors and obtain their optimal combinations to construct four seasonal GWR models. We employ two different schemes to comprehensively test the performance of our proposed GWR models: (1) comparison with other regular GWR models by varying the number of auxiliary variables; and (2) comparison with observed ground-level PM2.5 concentrations. The result shows that our GWR models of "AOD + 3" with three common meteorological variables generally perform better than all the other GWR models involved. Our models also show powerful prediction capabilities in PM2.5 concentrations with only slight overfitting. The determination coefficients R2 of our seasonal models are 0.8259 in spring, 0.7818 in summer, 0.8407 in autumn, and 0.7689 in winter. Also, the seasonal models in summer and autumn behave better than those in spring and winter. The comparison between seasonal and yearly models further validates the specific seasonal pattern of auxiliary variables of the GWR model in the YRD. We also stress the importance of key variables and propose a selection process in the AOD-PM2.5 model. Our work validates the significance of proper auxiliary variables in modelling the AOD-PM2.5 relationships and

  20. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

    Directory of Open Access Journals (Sweden)

    A Moameni

    2011-02-01

    In Iran, experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence undermine the efforts made by the researchers. This research was conducted at a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region of northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine whether this variation is large and can be managed at practical scales. The soils were sampled on a grid with nodes 55 m apart. In total, 100 composite soil samples were collected from the topsoil (0-30 cm) and were analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc and manganese. Descriptive statistics were applied to check data trends. Geostatistical analysis was applied for variography, model fitting and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow application of agricultural inputs at variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics
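
    The variography step can be illustrated with a hand-rolled empirical semivariogram, to which a model (e.g. spherical) would then be fitted for mapping; the bin edges below match the 55 m grid but are otherwise arbitrary.

```python
# Empirical semivariogram: average of 0.5*(z_i - z_j)^2 over distance bins.
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """coords: (N, 2) sample locations in m; values: (N,) soil property."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # each pair counted once
    d, g = d[iu], g[iu]
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        out.append(g[m].mean() if m.any() else np.nan)
    return np.array(out)

# e.g. lags of one to six grid spacings for the 55 m grid:
# gamma = empirical_semivariogram(xy, p_avail, np.arange(27.5, 385, 55.0))
```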

  1. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    Science.gov (United States)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
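
    The following is a hedged sketch of the paper's central idea: project CAD-derived surface points through a nominal and a perturbed cone-beam geometry and compare the resulting image coordinates. The simple pinhole geometry and all parameter values are assumptions for illustration, not the authors' implementation.

```python
# Compare projected image coordinates under aligned vs. misaligned geometry.
import numpy as np

def project(points, sdd, sod, det_offset=(0.0, 0.0)):
    """Pinhole projection of 3-D points onto a detector plane."""
    x, y, z = points.T
    mag = sdd / (sod + y)                    # local magnification per point
    u = x * mag + det_offset[0]
    v = z * mag + det_offset[1]
    return np.column_stack([u, v])

rng = np.random.default_rng(2)
pts = rng.normal(scale=5.0, size=(1000, 3))  # CAD surface points (mm), synthetic
aligned = project(pts, sdd=1000.0, sod=500.0)
misaligned = project(pts, sdd=1000.0, sod=500.0, det_offset=(0.1, 0.0))
shift = np.abs(aligned - misaligned)[:, 0].mean()
print(f"mean horizontal edge shift on detector: {shift:.4f} mm")
```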

  2. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.
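
    A toy illustration, not the RT3D numerics: 1-D upwind advection coupled to sequential first-order kinetics for two interacting species, of the kind UZF-RT3D handles in variably saturated columns.

```python
# Explicit upwind advection plus A -> B -> (sink) first-order reactions.
import numpy as np

nz, dt, dz, v = 100, 0.1, 0.1, 0.05          # cells, time step, spacing, velocity
k1, k2 = 0.02, 0.01                          # first-order rate constants
A = np.zeros(nz)
B = np.zeros(nz)
for step in range(500):
    A[0] = 1.0                               # constant-concentration inlet
    A[1:] -= v * dt / dz * (A[1:] - A[:-1])  # upwind advection (CFL = 0.05)
    B[1:] -= v * dt / dz * (B[1:] - B[:-1])
    dA = -k1 * A * dt                        # decay of A feeds B
    A += dA
    B += -dA - k2 * B * dt
print("A near inlet:", A[:5], "\nB near inlet:", B[:5])
```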

  3. Application of an expert system to instrumentation maintenance in a data acquisition system

    International Nuclear Information System (INIS)

    Pinastiko, W.S.

    1997-01-01

    An expert system is a branch of artificial intelligence: software for solving complicated problems whose solution requires experience and knowledge. This paper discusses the results of research on the design of an expert system to support instrumentation maintenance in a data acquisition system. Using the expert system, the system can perform health monitoring and automatic trouble tracing, and gives advice on faults. This instrumentation maintenance system is a tool with analytic and inference capabilities for diagnosing faults. It is a very useful tool for maintaining the quality of a data acquisition system, and the model can also be developed into specific applications such as a remote instrumentation management system

  4. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1,680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
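
    For orientation, a minimal percentile-bootstrap confidence interval for an indirect effect a·b, using observed rather than latent variables and synthetic data; the path values echo the α = β = .39 condition.

```python
# Percentile bootstrap CI for the indirect effect in a simple mediation model.
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)            # mediator, alpha = 0.39
y = 0.39 * m + rng.normal(size=n)            # outcome, beta = 0.39

def slope(X, y):
    """OLS coefficients for y ~ 1 + X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def indirect(x, m, y):
    a = slope(x, m)[1]                        # m ~ 1 + x
    b = slope(np.column_stack([m, x]), y)[1]  # y ~ 1 + m + x
    return a * b

boot = np.array([indirect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"a*b = {indirect(x, m, y):.3f}, 95% PC CI [{lo:.3f}, {hi:.3f}]")
```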

  5. Use of variability modes to evaluate AR4 climate models over the Euro-Atlantic region

    Energy Technology Data Exchange (ETDEWEB)

    Casado, M.J.; Pastor, M.A. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-01-15

    This paper analyzes the ability of the multi-model simulations from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) to simulate the main leading modes of variability over the Euro-Atlantic region in winter: the North-Atlantic Oscillation (NAO), the Scandinavian mode (SCAND), the East/Atlantic Oscillation (EA) and the East Atlantic/Western Russia mode (EA/WR). These modes of variability have been evaluated both spatially, by analyzing the intensity and location of their anomaly centres, as well as temporally, by focusing on the probability density functions and e-folding time scales. The choice of variability modes as a tool for climate model assessment can be justified by the fact that modes of variability determine local climatic conditions and their likely change may have important implications for future climate changes. It is found that all the models considered are able to simulate reasonably well these four variability modes, the SCAND being the mode which is best spatially simulated. From a temporal point of view the NAO and SCAND modes are the best simulated. UKMO-HadGEM1 and CGCM3.1(T63) are the models best at reproducing spatial characteristics, whereas CCSM3 and CGCM3.1(T63) are the best ones with regard to the temporal features. GISS-AOM is the model showing the worst performance, in terms of both spatial and temporal features. These results may bring new insight into the selection and use of specific models to simulate Euro-Atlantic climate, with some models being clearly more successful in simulating patterns of temporal and spatial variability than others. (orig.)
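
    One of the temporal diagnostics named here, the e-folding time scale, can be estimated as the lag at which a mode's autocorrelation first falls below 1/e; the AR(1) series below is a synthetic stand-in for a simulated NAO index.

```python
# Estimate the e-folding time of a mode's principal-component time series.
import numpy as np

rng = np.random.default_rng(4)
n, phi = 5000, 0.7
x = np.zeros(n)
for t in range(1, n):                         # synthetic daily NAO-like index
    x[t] = phi * x[t - 1] + rng.normal()

def efolding_time(x, max_lag=50):
    """First lag (in samples) where the autocorrelation drops below 1/e."""
    x = x - x.mean()
    var = np.dot(x, x)
    for lag in range(1, max_lag):
        r = np.dot(x[:-lag], x[lag:]) / var
        if r < 1.0 / np.e:
            return lag
    return max_lag

print("e-folding time ≈", efolding_time(x), "days (theory: -1/ln(0.7) ≈ 2.8)")
```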

  6. Latent variable models an introduction to factor, path, and structural equation analysis

    CERN Document Server

    Loehlin, John C

    2004-01-01

    This fourth edition introduces multiple-latent variable models by utilizing path diagrams to explain the underlying relationships in the models. The book is intended for advanced students and researchers in the areas of social, educational, clinical, ind

  7. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    Science.gov (United States)

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

    Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well, but the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential
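
    A schematic version of the predictor-set comparison, assuming a logistic presence/absence model and AUC as the accuracy measure; the predictor names and the synthetic response are fabricated for illustration.

```python
# Compare climate-only, edaphic-only, and combined predictor sets by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 4839                                         # matches the plot count
climate = rng.normal(size=(n, 2))                # e.g. temperature, precipitation
edaphic = rng.normal(size=(n, 2))                # e.g. soil pH, drainage
logit = 1.2 * climate[:, 0] + 0.8 * edaphic[:, 0]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit)) # synthetic presence/absence

for name, X in [("climate", climate), ("edaphic", edaphic),
                ("edaphic-climate", np.hstack([climate, edaphic]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    p = LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    print(f"{name:16s} AUC = {roc_auc_score(yte, p):.3f}")
```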

  8. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

    Science.gov (United States)

    Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

    2013-10-01

    The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and photosynthesis-irradiance response parameters (PmChl, αChl) were characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments were analyzed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.
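
    A hedged sketch of how the named model inputs (Chla, kd, PmChl, αChl) combine into depth-integrated productivity, assuming a Beer-Lambert light profile and an exponential-saturation P-E curve; all parameter values are invented.

```python
# Depth-integrated primary productivity from Chla, kd, and P-E parameters.
import numpy as np

chla  = 8.0     # mg Chl m^-3
kd    = 2.5     # m^-1, downwelling light attenuation
E0    = 1500.0  # surface irradiance, umol photons m^-2 s^-1
Pm    = 5.0     # mg C (mg Chl)^-1 h^-1, maximum photosynthetic rate
alpha = 0.03    # initial slope of the P-E curve

z = np.linspace(0.0, 3.0, 301)                     # depth grid (m), shallow estuary
Ez = E0 * np.exp(-kd * z)                          # Beer-Lambert light profile
Pz = chla * Pm * (1.0 - np.exp(-alpha * Ez / Pm))  # volumetric productivity
areal = np.trapz(Pz, z)                            # integrate over depth
print(f"areal productivity ≈ {areal:.1f} mg C m^-2 h^-1")
```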

  9. Quantitative assessment of selected policy instruments using the Western European MARKAL model. Phase III EU SAVE White and Green Project: Comparison of market-based instruments to promote energy efficiency

    International Nuclear Information System (INIS)

    Mundaca, Luis; Santi, Federico

    2004-01-01

    This report summarises the modelling exercise carried out to assess the implications of selected policy instruments using the energy model of Western Europe (WEU) generated by the Market Allocation (MARKAL) modelling tool. The chosen methodology was to use the WEU MARKAL model to analyse the response of this energy system to the following policy instruments: White Certificates, Green Certificates, and Carbon Dioxide (CO2) emissions trading. Results show that the order of magnitude of the effects of the analysed instruments depends on the target/cap that is applied. For the case of White Certificates, it can be observed that up to a certain level (i.e., around 15% of cumulated energy savings by 2020 compared to the base case) energy savings are obtained at negative costs. Major savings occur in the residential sector for all the applied targets. Results for CO2 emissions appear to be robust for the years 2015 and 2020, but the emission trends are less robust for the years 2005 and 2010. Policy-induced energy efficiency improvements for the WEU economy amount to around 6%, 9% and 15% for the low, medium and high target scenarios, respectively. For the case of Green Certificates, results show that the sustained penetration of renewable energy sources is dominated by wind and biomass. Judging by the autonomous fossil fuel intensity of the WEU economy, policy-induced energy efficiency improvements account for around 1%, 4% and 6% for each scenario, respectively. All the targets are technically feasible. For the case of CO2 emissions trading, because these results address only the power sector, they must be seen as complementary to other modelling work with wider industrial coverage. In our case, the more ambitious the cap, the lower the share of fossil fuels in electricity production becomes. The different trends for electricity production seem to be less robust. Compared to

  10. Genuine tripartite entangled states with a local hidden-variable model

    International Nuclear Information System (INIS)

    Toth, Geza; Acin, Antonio

    2006-01-01

    We present a family of three-qubit quantum states with a basic local hidden-variable model. Any von Neumann measurement can be described by a local model for these states. We show that some of these states are genuinely tripartite entangled and also distillable. The generalization to larger dimensions or a higher number of parties is also discussed. As a by-product, we present symmetric extensions of two-qubit Werner states

  11. Pesticide reducing instruments

    DEFF Research Database (Denmark)

    Jacobsen, Lars-Bo; Jensen, Jørgen Dejgård; Andersen, Martin

    2005-01-01

-mentioned models and tools. All three scenarios are constructed such that they result in the same welfare implication (measured by national consumption in the CGE model). The scenarios are: 1) pesticide taxes resulting in a 25 percent overall reduction; 2) use of unsprayed field margins, resulting in the same...... for improving bio-diversity and securing drinking water. That is, combining economic modeling with physical biological modeling and geological evaluation allows us to select unsprayed field margins as the most effective instrument. Sensitivity analysis conducted on bio-diversity suggests that this result

  12. SABER Observations of the OH Meinel Airglow Variability Near the Mesopause

    Science.gov (United States)

    Marsh, Daniel R.; Smith, Anne K.; Mlynczak, Martin G.

    2005-01-01

    The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument, one of four on board the TIMED satellite, observes the OH Meinel emission at 2.0 μm that peaks near the mesopause. The emission results from reactions between members of the oxygen and hydrogen chemical families that can be significantly affected by mesopause dynamics. In this study we compare SABER measurements of OH Meinel emission rates and temperatures with predictions from a 3-dimensional chemical dynamical model. In general, the model is capable of reproducing both the observed diurnal and seasonal OH Meinel emission variability. The results indicate that the diurnal tide has a large effect on the overall magnitude and temporal variation of the emission in low latitudes. This tidal variability is so dominant that the seasonal cycle in the nighttime emission depends very strongly on the local time of the analysis. At higher latitudes, the emission has an annual cycle that is due mainly to transport of oxygen by the seasonally reversing mean circulation.

  13. How comparable are size-resolved particle number concentrations from different instruments?

    Science.gov (United States)

    Hornsby, K. E.; Pryor, S. C.

    2012-12-01

    The need for comparability of particle size resolved measurements originates from multiple drivers including: (i) recent suggestions that air quality standards for particulate matter should migrate from being mass-based to incorporating number concentrations, a move that would necessarily be predicated on measurement comparability, which is absolutely critical to compliance determination; (ii) the need to quantify and diagnose causes of variability in nucleation and growth rates in nano-particle experiments conducted in different locations; and (iii) epidemiological research designed to identify key parameters in human health responses to fine particle exposure. Here we present results from a detailed controlled laboratory instrument inter-comparison experiment designed to investigate data comparability in the size range of 2.01-523.3 nm across a range of particle compositions, modal diameters and absolute concentrations. Particle size distributions were generated using a TSI model 3940 Aerosol Generation System (AGS) diluted using zero air, and sampled using four TSI Scanning Mobility Particle Sizer (SMPS) configurations and a TSI model 3091 Fast Mobility Particle Sizer (FMPS). The SMPS configurations used two Electrostatic Classifiers (EC) (model 3080) attached to either a Long DMA (LDMA) (model 3081) or a Nano DMA (NDMA) (model 3085), plumbed to either a TSI model 3025A butanol Condensation Particle Counter (CPC) or a TSI model 3788 Water CPC. All four systems were run using both high and low flow conditions, and were operated with both the internal diffusion loss and multiple charge corrections turned on. The particle compositions tested were sodium chloride, ammonium nitrate and olive oil diluted in ethanol. Particles of all three were generated at three peak concentration levels (spanning the range observed at our experimental site), and three modal particle diameters. Experimental conditions were maintained for a period of 20 minutes to ensure experimental

  14. Geostatistical Analysis of Mesoscale Spatial Variability and Error in SeaWiFS and MODIS/Aqua Global Ocean Color Data

    Science.gov (United States)

    Glover, David M.; Doney, Scott C.; Oestreich, William K.; Tullo, Alisdair W.

    2018-01-01

    Mesoscale (10-300 km, weeks to months) physical variability strongly modulates the structure and dynamics of planktonic marine ecosystems via both turbulent advection and environmental impacts upon biological rates. Using structure function analysis (geostatistics), we quantify the mesoscale biological signals within global 13 year SeaWiFS (1998-2010) and 8 year MODIS/Aqua (2003-2010) chlorophyll a ocean color data (Level-3, 9 km resolution). We present geographical distributions, seasonality, and interannual variability of key geostatistical parameters: unresolved variability or noise, resolved variability, and spatial range. Resolved variability is nearly identical for both instruments, indicating that geostatistical techniques isolate a robust measure of biophysical mesoscale variability largely independent of measurement platform. In contrast, unresolved variability in MODIS/Aqua is substantially lower than in SeaWiFS, especially in oligotrophic waters where previous analysis identified a problem for the SeaWiFS instrument likely due to sensor noise characteristics. Both records exhibit a statistically significant relationship between resolved mesoscale variability and the low-pass filtered chlorophyll field horizontal gradient magnitude, consistent with physical stirring acting on large-scale gradient as an important factor supporting observed mesoscale variability. Comparable horizontal length scales for variability are found from tracer-based scaling arguments and geostatistical decorrelation. Regional variations between these length scales may reflect scale dependence of biological mechanisms that also create variability directly at the mesoscale, for example, enhanced net phytoplankton growth in coastal and frontal upwelling and convective mixing regions. Global estimates of mesoscale biophysical variability provide an improved basis for evaluating higher resolution, coupled ecosystem-ocean general circulation models, and data assimilation.
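
    As a sketch of the parameter extraction described here, the snippet fits an exponential variogram model to synthetic structure-function estimates and reads off the nugget (unresolved variability), partial sill (resolved variability), and decorrelation range.

```python
# Fit gamma(h) = nugget + psill * (1 - exp(-3h/range)) to synthetic estimates.
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, nugget, psill, rng_):
    return nugget + psill * (1.0 - np.exp(-3.0 * h / rng_))

h = np.linspace(9, 300, 40)                        # lag distances (km)
truth = exp_variogram(h, nugget=0.02, psill=0.10, rng_=120.0)
gamma = truth + np.random.default_rng(6).normal(scale=0.004, size=h.size)

(nugget, psill, rng_), _ = curve_fit(exp_variogram, h, gamma,
                                     p0=[0.01, 0.05, 100.0])
print(f"unresolved (nugget) = {nugget:.3f}, "
      f"resolved (partial sill) = {psill:.3f}, range = {rng_:.0f} km")
```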

  15. Aversive pavlovian responses affect human instrumental motor performance.

    Science.gov (United States)

    Rigoli, Francesco; Pavone, Enea Francesco; Pezzulo, Giovanni

    2012-01-01

    In neuroscience and psychology, an influential perspective distinguishes between two kinds of behavioral control: instrumental (habitual and goal-directed) and Pavlovian. Understanding the instrumental-Pavlovian interaction is fundamental for the comprehension of decision-making. Animal studies (such as those using the negative auto-maintenance paradigm) have demonstrated that Pavlovian mechanisms can have maladaptive effects on instrumental performance. However, evidence for a similar effect in humans is scarce. In addition, the mechanisms modulating the impact of Pavlovian responses on instrumental performance are largely unknown, both in human and non-human animals. The present paper describes a behavioral experiment investigating the effects of Pavlovian conditioned responses on performance in humans, focusing on the aversive domain. Results showed that Pavlovian responses influenced human performance, and, similar to animal studies, could have maladaptive effects. In particular, Pavlovian responses either impaired or increased performance depending on modulator variables such as threat distance, task controllability, punishment history, amount of training, and explicit punishment expectancy. Overall, these findings help elucidating the computational mechanisms underlying the instrumental-Pavlovian interaction, which might be at the base of apparently irrational phenomena in economics, social behavior, and psychopathology.

  16. Aversive Pavlovian responses affect human instrumental motor performance

    Directory of Open Access Journals (Sweden)

    Francesco eRigoli

    2012-10-01

    Full Text Available In neuroscience and psychology, an influential perspective distinguishes between two kinds of behavioural control: instrumental (habitual and goal-directed) and Pavlovian. Understanding the instrumental-Pavlovian interaction is fundamental for the comprehension of decision-making. Animal studies (such as those using the negative auto-maintenance paradigm) have demonstrated that Pavlovian mechanisms can have maladaptive effects on instrumental performance. However, evidence for a similar effect in humans is scarce. In addition, the mechanisms modulating the impact of Pavlovian responses on instrumental performance are largely unknown, both in human and non-human animals. The present paper describes a behavioural experiment investigating the effects of Pavlovian conditioned responses on performance in humans, focusing on the aversive domain. Results showed that Pavlovian responses influenced human performance, and, similar to animal studies, could have maladaptive effects. In particular, Pavlovian responses either impaired or increased performance depending on modulator variables such as threat distance, task controllability, punishment history, amount of training, and explicit punishment expectancy. Overall, these findings help elucidating the computational mechanisms underlying the instrumental-Pavlovian interaction, which might be at the base of apparently irrational phenomena in economics, social behaviour, and psychopathology.

  17. Model instruments of effective segmentation of the fast food market

    OpenAIRE

    Mityaeva Tetyana L.

    2013-01-01

    The article presents the results of step-type optimisation calculations of the economic effectiveness of fast food promotion, with consideration of key parameters for assessing the efficiency of the marketing segmentation strategy. The article justifies the development of a mathematical model on the basis of 3D presentations and a three-dimensional system of management variables. Modern applied mathematical packages allow the formation not only of one-dimensional and two-dimensional arrays and analyse ...

  18. Empowerment variables for rehabilitation clients on perceived beliefs concerning work quality of life domains.

    Science.gov (United States)

    Tschopp, Molly K; Frain, Michael P; Bishop, Malachy

    2009-01-01

    This article describes and presents an initial analysis of variables generally associated with empowerment in relation to perceived beliefs concerning work-related quality of life domains for individuals with disabilities. The model examines the domains of importance, satisfaction, control and degree of interference of disability that an individual feels towards work. The internet-based study used results from 70 individuals with disabilities in varying aspects of work. The variables composing empowerment that correlated strongly with the work domains include self-advocacy, self-efficacy, perceived stigma, and family resiliency as measured through coping. Quality of life concerning work was measured through the DSC-C, a domain-specific QOL instrument.

  19. Instrument Response Modeling and Simulation for the GLAST Burst Monitor

    International Nuclear Information System (INIS)

    Kippen, R. M.; Hoover, A. S.; Wallace, M. S.; Pendleton, G. N.; Meegan, C. A.; Fishman, G. J.; Wilson-Hodge, C. A.; Kouveliotou, C.; Lichti, G. G.; Kienlin, A. von; Steinle, H.; Diehl, R.; Greiner, J.; Preece, R. D.; Connaughton, V.; Briggs, M. S.; Paciesas, W. S.; Bhat, P. N.

    2007-01-01

    The GLAST Burst Monitor (GBM) is designed to provide wide field of view observations of gamma-ray bursts and other fast transient sources in the energy range 10 keV to 30 MeV. The GBM is composed of several unshielded and uncollimated scintillation detectors (twelve NaI and two BGO) that are widely dispersed about the GLAST spacecraft. As a result, reconstructing source locations, energy spectra, and temporal properties from GBM data requires detailed knowledge of the detectors' response to both direct radiation as well as that scattered from the spacecraft and Earth's atmosphere. This full GBM instrument response will be captured in the form of a response function database that is derived from computer modeling and simulation. The simulation system is based on the GEANT4 Monte Carlo radiation transport simulation toolset, and is being extensively validated against calibrated experimental GBM data. We discuss the architecture of the GBM simulation and modeling system and describe how its products will be used for analysis of observed GBM data. Companion papers describe the status of validating the system.

  20. Modelling food-web mediated effects of hydrological variability and environmental flows.

    Science.gov (United States)

    Robson, Barbara J; Lester, Rebecca E; Baldwin, Darren S; Bond, Nicholas R; Drouart, Romain; Rolls, Robert J; Ryder, Darren S; Thompson, Ross M

    2017-11-01

    Environmental flows are designed to enhance aquatic ecosystems through a variety of mechanisms; however, to date most attention has been paid to the effects on habitat quality and life-history triggers, especially for fish and vegetation. The effects of environmental flows on food webs have so far received little attention, despite food-web thinking being fundamental to understanding of river ecosystems. Understanding environmental flows in a food-web context can help scientists and policy-makers better understand and manage outcomes of flow alteration and restoration. In this paper, we consider mechanisms by which flow variability can influence and alter food webs, and place these within a conceptual and numerical modelling framework. We also review the strengths and weaknesses of various approaches to modelling the effects of hydrological management on food webs. Although classic bioenergetic models such as Ecopath with Ecosim capture many of the key features required, other approaches, such as biogeochemical ecosystem modelling, end-to-end modelling, population dynamic models, individual-based models, graph theory models, and stock assessment models are also relevant. In many cases, a combination of approaches will be useful. We identify current challenges and new directions in modelling food-web responses to hydrological variability and environmental flow management. These include better integration of food-web and hydraulic models, taking physiologically-based approaches to food quality effects, and better representation of variations in space and time that may create ecosystem control points.

  1. A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable

    Directory of Open Access Journals (Sweden)

    Huidong Wang

    2018-04-01

    Full Text Available To solve multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variables, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert uncertain linguistic variables and subsequently provide a new projection model. Then, to describe the bi-directional projection method, the formative vectors of alternatives and ideal alternatives are defined. Furthermore, a comparative analysis with the projection model is conducted to show the superiority of the bi-directional projection method. Finally, an example of a graduate's job choice is given to demonstrate the effectiveness and feasibility of the proposed method.

  2. An Assessment of the Need for Standard Variable Names for Airborne Field Campaigns

    Science.gov (United States)

    Beach, A. L., III; Chen, G.; Northup, E. A.; Kusterer, J.; Quam, B. M.

    2017-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. An airborne field campaign can involve multiple aircraft and a variety of instruments. It is quite common to have different instruments/techniques measure the same parameter on one or more aircraft platforms. This creates a need to allow instrument Principal Investigators (PIs) to name their variables in a way that would distinguish them across various data sets. A lack of standardization of variable names presents a challenge for data search tools in enabling discovery of similar data across airborne studies, aircraft platforms, and instruments. This was also identified by data users as one of the top issues in data use. One effective approach for mitigating this problem is to enforce variable name standardization, which can effectively map the unique PI variable names to fixed standard names. In order to ensure consistency amongst the standard names, it will be necessary to choose them from a controlled list. However, no such list currently exists despite a number of previous efforts to establish a sufficient list of atmospheric variable names. The Atmospheric Composition Variable Standard Name Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to solicit research community feedback to create a list of standard names that are acceptable to data providers and data users. This presentation will discuss the challenges and recommendations of standard variable names in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest, improve interoperability, and discoverability to a broader user community.

  3. The role of updraft velocity in temporal variability of cloud hydrometeor number

    Science.gov (United States)

    Sullivan, Sylvia; Nenes, Athanasios; Lee, Dong Min; Oreopoulos, Lazaros

    2016-04-01

    tool of understanding for hydrometeor variability can be instrumental for understanding the source of differences between models used for aerosol-cloud-climate interaction studies.

  4. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    Science.gov (United States)

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on childhood malnutrition in one of the biggest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and malnutrition in children less than 5 years of age using the 2003 Demographic and Health Survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators, in which the 3 measurements of the malnutrition status of children are taken as indicators for the latent variable "nutritional status".

  5. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest Rfree values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates.

  6. Variable recruitment fluidic artificial muscles: modeling and experiments

    International Nuclear Information System (INIS)

    Bryant, Matthew; Meller, Michael A; Garcia, Ephrahim

    2014-01-01

    We investigate taking advantage of the lightweight, compliant nature of fluidic artificial muscles to create variable recruitment actuators in the form of artificial muscle bundles. Several actuator elements at different diameter scales are packaged to act as a single actuator device. The actuator elements of the bundle can be connected to the fluidic control circuit so that different groups of actuator elements, much like individual muscle fibers, can be activated independently depending on the required force output and motion. This novel actuation concept allows us to save energy by effectively impedance matching the active size of the actuators on the fly based on the instantaneous required load. This design also allows a single bundled actuator to operate in substantially different force regimes, which could be valuable for robots that need to perform a wide variety of tasks and interact safely with humans. This paper proposes, models and analyzes the actuation efficiency of this actuator concept. The analysis shows that variable recruitment operation can create an actuator that reduces throttling valve losses to operate more efficiently over a broader range of its force–strain operating space. We also present preliminary results of the design, fabrication and experimental characterization of three such bioinspired variable recruitment actuator prototypes. (paper)

  7. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research focuses on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. Candidate input variables for various lead times are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the required training time, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
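
    A minimal sketch of the RF-based selection idea with scikit-learn: rank candidate lagged wind-speed inputs by random-forest importance and keep the top subset; the feature construction is simplified relative to the paper, and the series is synthetic.

```python
# Rank lagged inputs of a synthetic wind speed series by RF importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n_lags, n = 12, 2000
wind = rng.gamma(2.0, 3.0, size=n + n_lags)       # synthetic wind speed (m/s)
X = np.column_stack([wind[i:n + i] for i in range(n_lags)])  # lagged inputs
y = 0.6 * X[:, -1] + 0.3 * X[:, -2] + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print("lags ranked by importance (11 = most recent):", ranked[:5])
```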

  8. Developing Learning Model Based on Local Culture and Instrument for Mathematical Higher Order Thinking Ability

    Science.gov (United States)

    Saragih, Sahat; Napitupulu, E. Elvis; Fauzi, Amin

    2017-01-01

    This research aims to develop a student-centered learning model based on local culture and an instrument for the mathematical higher-order thinking ability of junior high school students, within the frame of the 2013 Curriculum in North Sumatra, Indonesia. The subjects of the research are seventh graders, taken proportionally at random from three public…

  9. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Full Text Available Large complex thin-walled guide rails have complicated structures and non-uniform low rigidity. Traditional cutting simulations are time-consuming due to the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values for the machined surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is then proposed by analyzing the multi-direction coupled variable stiffness rules. Combined with the cutting force values in the three directions, the reasonableness of existing process parameters can be verified and optimized cutting parameters can be designed.

  10. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We will use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we will simplify state prediction. Because...... of the special structure of the problem, uncertainty is only in the B matrix (gain) of the state space model. Therefore, by taking advantage of this structure, we formulate a tractable minimax optimization problem to solve the robust model predictive control problem. A wind turbine is chosen as the case study and we...... choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore, the scheduling variable is known for the entire prediction horizon....
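
    A tiny sketch of the point about known scheduling variables: with future wind speeds ρ_k measured ahead of the turbine, the LPV state prediction is an exact recursion through known A(ρ_k) matrices, leaving only the B uncertainty to the minimax step; the matrices and numbers are invented.

```python
# Exact LPV state prediction with a known scheduling trajectory.
import numpy as np

def A(rho):                                    # scheduling-dependent system matrix
    return np.array([[0.9, 0.1 * rho],
                     [0.0, 0.8]])

B = np.array([[0.0], [0.5]])                   # nominal gain (uncertain in the paper)
rho_future = [6.0, 6.5, 7.2, 8.0]              # wind speeds measured upstream (m/s)
x = np.array([1.0, 0.0])                       # initial state
u = 0.1                                        # constant input for the sketch
for rho in rho_future:
    x = A(rho) @ x + B.ravel() * u             # A(rho) known, so prediction is exact
print("predicted state after the horizon:", x)
```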

  11. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
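
    For illustration, a pairs-bootstrap stabilization of Lasso selection with scikit-learn; the paper's vector bootstrap resamples differently, so this is a generic sketch, and the data are synthetic.

```python
# Selection frequency of each predictor across bootstrap refits of LassoCV.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(8)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=n)  # true signals: 0 and 3

n_boot = 50
counts = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                # pairs (case) bootstrap resample
    fit = LassoCV(cv=5).fit(X[idx], y[idx])    # K-fold CV inside each refit
    counts += (fit.coef_ != 0)
print("selection frequency per predictor:", counts / n_boot)
```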

  12. A diffusion decision model analysis of evidence variability in the lexical decision task.

    Science.gov (United States)

    Tillman, Gabriel; Osth, Adam F; van Ravenzwaaij, Don; Heathcote, Andrew

    2017-12-01

    The lexical-decision task is among the most commonly used paradigms in psycholinguistics. In both the signal-detection theory and Diffusion Decision Model (DDM; Ratcliff, Gomez, & McKoon, Psychological Review, 111, 159-182, 2004) frameworks, lexical decisions are based on a continuous source of word-likeness evidence for both words and non-words. The Retrieving Effectively from Memory model of Lexical-Decision (REM-LD; Wagenmakers et al., Cognitive Psychology, 48(3), 332-367, 2004) provides a comprehensive explanation of lexical-decision data and makes the prediction that word-likeness evidence is more variable for words than non-words and that higher frequency words are more variable than lower frequency words. To test these predictions, we analyzed five lexical-decision data sets with the DDM. For all data sets, drift-rate variability changed across word frequency and non-word conditions. For the most part, REM-LD's predictions about the ordering of evidence variability across stimuli in the lexical-decision task were confirmed.
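
    A hedged Euler-Maruyama sketch of a single DDM trial with trial-to-trial drift-rate variability (η), the quantity whose ordering across word and non-word conditions is tested here; all parameter values are illustrative.

```python
# Simulate diffusion trials where the drift rate varies from trial to trial.
import numpy as np

def ddm_trial(v_mean, eta, a=1.0, z=0.5, s=1.0, dt=0.001, rng=None):
    """One diffusion trial; returns (decision time, upper-boundary hit)."""
    rng = rng or np.random.default_rng()
    v = rng.normal(v_mean, eta)               # drift drawn once per trial
    x, t = z * a, 0.0                         # start between the boundaries
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x >= a

rng = np.random.default_rng(9)
trials = [ddm_trial(1.0, eta=0.8, rng=rng) for _ in range(500)]
rts = [t for t, _ in trials]
p_word = np.mean([hit for _, hit in trials])
print(f"mean decision time = {np.mean(rts):.3f} s, P('word') = {p_word:.2f}")
```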

  13. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy

    Directory of Open Access Journals (Sweden)

    Augusto C. M. Souza

    2018-04-01

    Full Text Available While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in a controlled variable temperature model, the temperature decreased with time with the initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.

  14. Kinetic Modeling of Corn Fermentation with S. cerevisiae Using a Variable Temperature Strategy.

    Science.gov (United States)

    Souza, Augusto C M; Mousaviraad, Mohammad; Mapoka, Kenneth O M; Rosentrater, Kurt A

    2018-04-24

    While fermentation is usually done at a fixed temperature, in this study, the effect of having a controlled variable temperature was analyzed. A nonlinear system was used to model batch ethanol fermentation, using corn as substrate and the yeast Saccharomyces cerevisiae, at five different fixed and controlled variable temperatures. The lower temperatures presented higher ethanol yields but took a longer time to reach equilibrium. Higher temperatures had higher initial growth rates, but the decay of yeast cells was faster compared to the lower temperatures. However, in a controlled variable temperature model, the temperature decreased with time with the initial value of 40 °C. When analyzing a time window of 60 h, the ethanol production increased 20% compared to the batch with the highest temperature; however, the yield was still 12% lower compared to the 20 °C batch. When the 24 h simulation was analyzed, the controlled model had a higher ethanol concentration compared to both fixed temperature batches.
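
    A schematic variable-temperature batch fermentation, assuming Monod growth with an Arrhenius-style temperature factor and a controlled decay of T(t) from 40 °C; the kinetic constants are invented, not the paper's fitted values.

```python
# Batch fermentation ODEs under a controlled decaying temperature profile.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, state):
    X, S, P = state                              # biomass, substrate, ethanol (g/L)
    T = 20.0 + 20.0 * np.exp(-t / 20.0)          # controlled decay from 40 degC
    k = np.exp(-4000.0 * (1.0 / (T + 273.15) - 1.0 / 303.15))  # Arrhenius factor
    mu = 0.25 * k * S / (S + 10.0) * X           # Monod-type growth rate
    return [mu, -mu / 0.1, mu * 4.0]             # yields: X/S = 0.1, P/X = 4

sol = solve_ivp(rates, (0.0, 60.0), [0.5, 150.0, 0.0])
X, S, P = sol.y[:, -1]
print(f"after 60 h: ethanol ≈ {P:.1f} g/L, residual sugar ≈ {S:.1f} g/L")
```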

  15. Modeling safety instrumented systems with MooN voting architectures addressing system reconfiguration for testing

    International Nuclear Information System (INIS)

    Torres-Echeverria, A.C.; Martorell, S.; Thompson, H.A.

    2011-01-01

    This paper addresses the modeling of the probability of dangerous failure on demand and the spurious trip rate of safety instrumented systems that include MooN voting redundancies in their architecture. MooN systems are a special case of k-out-of-n systems. The first part of the article is devoted to the development of a time-dependent probability of dangerous failure on demand model capable of handling MooN systems. The model can explicitly represent common cause failure and diagnostic coverage, as well as different test frequencies and strategies. It includes quantification of both detected and undetected failures, and puts emphasis on quantifying the contribution of common cause failure to the system probability of dangerous failure on demand as an additional component. In order to accommodate changes in testing strategies, special treatment is devoted to the analysis of system reconfiguration (including common cause failure) during the test of one of its components, which is then included in the model. Another model, for the spurious trip rate, is also analyzed and extended under the same methodology in order to give it similar capabilities. These two models are powerful, yet simple enough to be suitable for handling dependability measures in multi-objective optimization of both system design and test strategies for safety instrumented systems. The level of modeling detail permits compliance with the requirements of the standard IEC 61508. The two models are applied to brief case studies to demonstrate their effectiveness. The results demonstrate that the first model is adequate for quantifying the time-dependent PFD of MooN systems during different system states (i.e., full operation, test and repair) and different MooN configurations, whose values are averaged to obtain the PFDavg. It was also demonstrated that the second model is adequate for quantifying the STR, including spurious trips induced by internal component failure and
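
    A simplified numeric sketch of a MooN probability of failure on demand with a β-factor common cause term, time-averaged over the proof-test interval; unlike the full model it ignores detected failures, repair, and reconfiguration, and the rates are illustrative.

```python
# Time-averaged PFD for a MooN group with beta-factor common cause failure.
import numpy as np
from math import comb

def pfd_moon(m, n, lam_du, beta, ti, steps=10000):
    t = np.linspace(0.0, ti, steps)
    q = (1.0 - beta) * lam_du * t                 # per-channel undetected failure prob.
    k_fail = n - m + 1                            # failures that defeat MooN voting
    pfd_ind = sum(comb(n, k) * q**k * (1 - q)**(n - k)
                  for k in range(k_fail, n + 1))  # independent-failure term
    pfd = pfd_ind + beta * lam_du * t             # add common cause contribution
    return pfd.mean()                             # time average over [0, TI]

# 1oo2 voting, lambda_DU = 2e-6 /h, beta = 10%, yearly proof test
print(f"PFDavg ≈ {pfd_moon(1, 2, 2e-6, 0.1, 8760.0):.2e}")
```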

  16. OECD/CSNI specialist meeting on advanced instrumentation and measurements techniques: summary and conclusions

    International Nuclear Information System (INIS)

    1997-01-01

    This specialist meeting on Advanced Instrumentation and Measurements Techniques was held in Santa Barbara (USA) in 1997 and attracted some 70 participants across ten technical sessions and a round-table discussion session, with a total of 41 papers. It was intended to bring together international experts in multi-phase flow instrumentation, experiment and modeling to review the state of the art of two-phase flow instrumentation methods and to discuss the relation between modeling needs and instrumentation capabilities. The following topics were included: modeling needs and future directions for improved constitutive relations, the interfacial area transport equation, and multi-dimensional two-fluid model formulation; local instrumentation developments for void fraction, interfacial area, phase velocities, turbulence, entrainment, particle size, thermal non-equilibrium, shear stress, nucleation, condensation and boiling; global instrumentation developments for void fraction, mass flow, two-phase level, non-condensable concentration, flow regimes, low flow and break flow; and the relation between modeling needs and instrumentation capabilities, future directions for experiments focused on modeling needs, and directions for instrumentation developments.

  17. Short version of the “instrument for assessment of stress in nursing students” in the Brazilian reality

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Siqueira Costa

    2018-01-01

    Full Text Available ABSTRACT Goal: to validate a short version of the Instrument for Assessment of Stress in Nursing Students in the Brazilian reality. Method: methodological study conducted with 1047 nursing students from five Brazilian institutions, who answered the 30 items initially distributed across eight domains. Data were analyzed in the R statistical package using latent variable analysis, exploratory and confirmatory factor analyses, Cronbach's alpha, and item-total correlation. Results: the short version of the instrument has 19 items distributed across four domains: Environment, Professional Training, Theoretical Activities and Performance of Practical Activities. The confirmatory analysis showed absolute and parsimony fit for the proposed model, with satisfactory residual levels. Alpha values per factor ranged from 0.736 (Environment) to 0.842 (Performance of Practical Activities). Conclusion: the short version of the instrument has construct validity and reliability for application to Brazilian nursing undergraduates at any stage of the course.
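
    For reference, the per-domain reliability coefficient reported here (0.736 to 0.842) can be computed as Cronbach's alpha; the snippet below does so on a synthetic five-item response matrix.

```python
# Cronbach's alpha for a (respondents x items) score matrix.
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(10)
latent = rng.normal(size=(300, 1))                # shared "stress" factor
scores = latent + rng.normal(scale=0.8, size=(300, 5))  # five noisy items
print(f"alpha = {cronbach_alpha(scores):.3f}")
```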

  18. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    Science.gov (United States)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule. However, requirements for material parameters or S-N curve modifications restrict their practical application. Also, applications of most of these models under variable amplitude loading conditions have not been reported. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified with experimentally derived damage evolution curves for C45 and 16Mn and gives better agreement than previous models. The model-predicted fatigue lives also correlate better with experimental results than previous models, as shown in earlier published work by the authors. The proposed model is applied here to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives compared to the Miner's rule implementation given in Eurocode. This shows the importance of applying accurate fatigue damage models for welded joints.
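
    As the baseline the proposed model improves on, a Miner's-rule damage sum for variable amplitude load blocks under a generic S-N curve of the form N = C·S^-m; the constants and load spectrum are invented for illustration.

```python
# Linear (Miner) damage accumulation over variable amplitude load blocks.
blocks = [(80.0, 2.0e5),    # (stress range in MPa, applied cycles)
          (120.0, 5.0e4),
          (60.0, 8.0e5)]
C, m = 2.0e12, 3.0          # illustrative S-N curve constants: N = C * S**-m

damage = sum(n / (C * S**-m) for S, n in blocks)
print(f"Miner damage sum D = {damage:.3f} (failure predicted at D >= 1)")
```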

  19. FinFET centric variability-aware compact model extraction and generation technology supporting DTCO

    OpenAIRE

    Wang, Xingsheng; Cheng, Binjie; Reid, David; Pender, Andrew; Asenov, Plamen; Millar, Campbell; Asenov, Asen

    2015-01-01

    In this paper, we present a FinFET-focused variability-aware compact model (CM) extraction and generation technology supporting design-technology co-optimization. Silicon-on-insulator FinFETs from the 14 nm CMOS technology generation are used as testbed transistors to illustrate our approach. The TCAD simulations include long-range process-induced variability using a design-of-experiments approach and short-range purely statistical variability (mismatch). The CM extraction supports a hierarchical...

  20. Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models

    Science.gov (United States)

    Xu, L.

    1994-01-01

    A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil, were used to validate the model-simulated surface energy fluxes and surface temperature.
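
    A brief sketch of the variable infiltration capacity curve underlying VIC-2L: point infiltration capacity varies across the grid cell, so the saturated fraction and direct runoff grow smoothly with storage even before the whole cell saturates; the parameter values are illustrative.

```python
# Saturated fraction and direct runoff from the VIC infiltration curve.
b, i_m = 0.3, 300.0          # shape parameter, max point infiltration capacity (mm)

def saturated_fraction(i):
    """Fraction of the cell whose capacity is below the current level i."""
    return 1.0 - (1.0 - i / i_m) ** b

def direct_runoff(i0, precip):
    """Runoff (mm) from rain on a cell whose current infiltration level is i0."""
    i1 = min(i0 + precip, i_m)
    # change in cell-average storage implied by raising the level i0 -> i1
    dW = (i_m / (1 + b)) * ((1 - i0 / i_m) ** (1 + b) - (1 - i1 / i_m) ** (1 + b))
    return precip - dW       # rain not absorbed into storage runs off

print(f"saturated fraction at i = 100 mm: {saturated_fraction(100.0):.2f}")
print(f"runoff from 40 mm of rain: {direct_runoff(100.0, 40.0):.1f} mm")
```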