WorldWideScience

Sample records for instrumental variables methods

  1. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions.
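
    The paper's own Stata code is in its appendix; as a language-neutral sketch of the falsification idea, the following Python fragment (simulated data, hypothetical variable names) checks whether a candidate instrument is associated with a pre-exposure outcome it should not be able to affect:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 5000
      u = rng.normal(size=n)                         # unmeasured confounder
      y_pre = u + rng.normal(size=n)                 # pre-exposure (falsification) outcome

      z_valid = rng.binomial(1, 0.5, n)              # instrument independent of u
      z_bad = rng.binomial(1, 1 / (1 + np.exp(-u)))  # instrument contaminated by u

      for name, z in (("valid", z_valid), ("invalid", z_bad)):
          t, p = stats.ttest_ind(y_pre[z == 1], y_pre[z == 0])
          print(f"{name} instrument: t = {t:+.2f}, p = {p:.3f}")

    A valid instrument should show no association (large p); an instrument correlated with the confounder fails the test.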

  2. Instrumental variable methods in comparative safety and effectiveness research.

    Science.gov (United States)

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.

  3. Instrumental variable methods in comparative safety and effectiveness research†

    Science.gov (United States)

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968

  4. A review of instrumental variable estimators for Mendelian randomization.

    Science.gov (United States)

    Burgess, Stephen; Small, Dylan S; Thompson, Simon G

    2017-10-01

    Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
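
    A minimal numpy sketch of two of the estimators reviewed, the ratio (Wald) method and two-stage least squares, on simulated data (all values illustrative; naive second-stage standard errors would need the usual 2SLS correction, omitted here):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      g = rng.binomial(2, 0.3, (n, 3))          # three genetic variants (0/1/2)
      u = rng.normal(size=n)                    # unmeasured confounder
      x = g @ np.array([0.3, 0.2, 0.1]) + u + rng.normal(size=n)  # exposure
      y = 0.5 * x + u + rng.normal(size=n)      # outcome; true causal effect 0.5

      # Ratio (Wald) estimator with a single variant: beta = cov(Y,G) / cov(X,G)
      g1 = g[:, 0]
      beta_ratio = np.cov(y, g1)[0, 1] / np.cov(x, g1)[0, 1]

      # Two-stage least squares with all three variants (point estimate only)
      Z = np.column_stack([np.ones(n), g])
      xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]       # first stage
      X1 = np.column_stack([np.ones(n), xhat])
      beta_2sls = np.linalg.lstsq(X1, y, rcond=None)[0][1]  # second stage

      print(f"ratio: {beta_ratio:.3f}, 2SLS: {beta_2sls:.3f} (truth 0.5)")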

  5. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...

  6. On the Interpretation of Instrumental Variables in the Presence of Specification Errors

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2015-01-01

    The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous-equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.

  7. Power calculator for instrumental variable analysis in pharmacoepidemiology.

    Science.gov (United States)

    Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M

    2017-10-01

    Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived before being validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists.
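
    The authors' formula is parametrized specifically for pharmacoepidemiology and implemented in their online calculator and R/Stata packages; as a generic stand-in, not the paper's exact formula, a Wald-type power approximation for a single binary instrument, binary exposure, and continuous outcome might look like:

      import numpy as np
      from scipy.stats import norm

      def iv_power(n, p_z, gamma, beta, sigma, alpha=0.05):
          """Approximate two-sided power of a Wald/ratio IV estimate.

          n     : sample size
          p_z   : P(instrument = 1)
          gamma : first-stage effect (difference in exposure probability)
          beta  : causal effect of the exposure on a continuous outcome
          sigma : residual standard deviation of the outcome
          """
          se_itt = sigma / np.sqrt(n * p_z * (1 - p_z))  # se of the ITT contrast
          se_beta = se_itt / abs(gamma)                  # delta method: beta = ITT/gamma
          return norm.cdf(abs(beta) / se_beta - norm.ppf(1 - alpha / 2))

      print(f"power = {iv_power(n=10000, p_z=0.5, gamma=0.2, beta=0.1, sigma=1.0):.2f}")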

  8. Sensitivity analysis and power for instrumental variable studies.

    Science.gov (United States)

    Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S

    2018-03-31

    In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments.
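
    The basic Anderson-Rubin test that the sensitivity analysis extends can be sketched as follows for a single instrument (simulated data; the sensitivity-parameter extension itself is not shown):

      import numpy as np
      from scipy import stats

      def ar_test(y, x, z, beta0):
          """Anderson-Rubin test of H0: beta = beta0, single instrument.

          Regress y - beta0*x on z; under H0 and a valid instrument the
          slope is zero, regardless of instrument strength.
          """
          resid = y - beta0 * x
          zc = z - z.mean()
          slope = zc @ resid / (zc @ zc)
          e = resid - resid.mean() - slope * zc
          se = np.sqrt((e @ e) / (len(y) - 2) / (zc @ zc))
          t_stat = slope / se
          p = 2 * stats.t.sf(abs(t_stat), len(y) - 2)
          return t_stat, p

      rng = np.random.default_rng(2)
      n = 5000
      u = rng.normal(size=n)                  # unmeasured confounder
      z = rng.normal(size=n)
      x = 0.3 * z + u + rng.normal(size=n)
      y = 0.5 * x + u + rng.normal(size=n)
      print(ar_test(y, x, z, beta0=0.5))      # should not reject at the true value
      print(ar_test(y, x, z, beta0=0.0))      # should reject the null of no effect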

  9. Instrumental Variables in the Long Run

    DEFF Research Database (Denmark)

    Casey, Gregory; Klemp, Marc Patrick Brag

    2017-01-01

    In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression... We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have ... quantitative implications for the field of long-run economic growth.

  10. Causal null hypotheses of sustained treatment strategies: What can be tested with an instrumental variable?

    Science.gov (United States)

    Swanson, Sonja A; Labrecque, Jeremy; Hernán, Miguel A

    2018-05-02

    Sometimes instrumental variable methods are used to test whether a causal effect is null rather than to estimate the magnitude of a causal effect. However, when instrumental variable methods are applied to time-varying exposures, as in many Mendelian randomization studies, it is unclear what causal null hypothesis is tested. Here, we consider different versions of causal null hypotheses for time-varying exposures, show that the instrumental variable conditions alone are insufficient to test some of them, and describe additional assumptions that can be made to test a wider range of causal null hypotheses, including both sharp and average causal null hypotheses. Implications for interpretation and reporting of instrumental variable results are discussed.

  11. Instrumental variable estimation of treatment effects for duration outcomes

    NARCIS (Netherlands)

    G.E. Bijwaard (Govert)

    2007-01-01

    In this article we propose and implement an instrumental variable estimation procedure to obtain treatment effects on duration outcomes. The method can handle the typical complications that arise with duration data, such as time-varying treatment and censoring. The treatment effect we

  12. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.

  13. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  14. Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables

    Directory of Open Access Journals (Sweden)

    Jéssica Bassi da Silva

    2018-03-01

    The process of mucoadhesion has been studied using a wide variety of methods, which are influenced by instrumental variables and experiment design, making comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and gave different adhesion results depending on whether a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It was suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and able to evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.

  15. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Objective(s): Reliability measures precision or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by the comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
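
    For reference, the most commonly reported statistic, the two-way random-effects ICC(2,1) of Shrout and Fleiss, can be computed from its ANOVA mean squares in a few lines (hypothetical ratings data):

      import numpy as np

      def icc_2_1(ratings):
          """ICC(2,1): two-way random effects, absolute agreement, single rater.

          ratings : array of shape (n subjects, k raters).
          """
          n, k = ratings.shape
          grand = ratings.mean()
          ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
          sse = ((ratings - ratings.mean(axis=1, keepdims=True)
                          - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
          ms_e = sse / ((n - 1) * (k - 1))
          return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

      # Two instruments measuring the same five subjects
      r = np.array([[10.1, 10.3], [12.0, 12.2], [9.5, 9.4], [11.2, 11.5], [10.8, 10.9]])
      print(f"ICC(2,1) = {icc_2_1(r):.3f}")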

  16. Instrumental variables I: instrumental variables exploit natural variation in nonexperimental data to estimate causal relationships.

    Science.gov (United States)

    Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian

    2009-12-01

    The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.

  17. A selective review of the first 20 years of instrumental variables models in health-services research and medicine.

    Science.gov (United States)

    Cawley, John

    2015-01-01

    The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citations counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly-used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments have been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.

  18. The effect of patient satisfaction with pharmacist consultation on medication adherence: an instrumental variable approach

    Directory of Open Access Journals (Sweden)

    Gu NY

    2008-12-01

    There are limited studies quantifying the impact of patient satisfaction with pharmacist consultation on patient medication adherence. Objectives: The objective of this study is to evaluate the effect of patient satisfaction with pharmacist consultation services on medication adherence in a large managed care organization. Methods: We analyzed data from a patient satisfaction survey of 6,916 patients who had used pharmacist consultation services in Kaiser Permanente Southern California from 1993 to 1996. We compared treating patient satisfaction as exogenous, in a single-equation probit model, with a bivariate probit model where patient satisfaction was treated as endogenous. Different sets of instrumental variables were employed, including measures of patients' emotional well-being and patients' propensity to fill their prescriptions at a non-Kaiser Permanente (KP) pharmacy. The Smith-Blundell test was used to test whether patient satisfaction was endogenous. Over-identification tests were used to test the validity of the instrumental variables. The Staiger-Stock weak instrument test was used to evaluate the explanatory power of the instrumental variables. Results: All tests indicated that the instrumental variables method was valid and that the instrumental variables used have significant explanatory power. The single-equation probit model indicated that the effect of patient satisfaction with pharmacist consultation was significant (p<0.010). However, the bivariate probit models revealed that the marginal effect of pharmacist consultation on medication adherence was significantly greater than in the single-equation probit. The effect increased from 7% to 30% (p<0.010) after controlling for endogeneity bias. Conclusion: After appropriate adjustment for endogeneity bias, patients satisfied with their pharmacy services are substantially more likely to adhere to their medication. The results have important policy implications given the increasing focus

  19. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models.

    Science.gov (United States)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M

    2017-12-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent to a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study.

  20. LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions

    Directory of Open Access Journals (Sweden)

    Weihua An

    2016-07-01

    LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
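
    A minimal numpy sketch of the two steps, following Abadie (2003), with the instrument propensity taken as known rather than fitted by a probit as LARF does by default (simulated data, illustrative effect sizes):

      import numpy as np

      rng = np.random.default_rng(3)
      n = 20000
      covar = rng.normal(size=n)
      p_z = 1 / (1 + np.exp(-covar))       # P(Z=1|X): known here; LARF fits a probit
      z = rng.binomial(1, p_z)
      u = rng.normal(size=n)               # unmeasured confounder
      d = ((0.8 * z + u + rng.normal(size=n)) > 0).astype(float)  # binary treatment
      y = 1.0 * d + 0.5 * covar + u + rng.normal(size=n)          # true effect 1.0

      # Step 1: pseudo-weights (Abadie's kappa) from the inducement probability
      kappa = 1 - d * (1 - z) / (1 - p_z) - (1 - d) * z / p_z

      # Step 2: kappa-weighted least squares of y on (1, d, covar); weights can
      # be negative, so solve the weighted normal equations directly
      X = np.column_stack([np.ones(n), d, covar])
      beta = np.linalg.solve(X.T @ (kappa[:, None] * X), X.T @ (kappa * y))
      print(f"treatment coefficient for compliers: {beta[1]:.3f}")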

  1. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  2. Instrument Variables for Reducing Noise in Parallel MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2017-01-01

    Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image when the reduction factor increases, or even at a low reduction factor for some noisy datasets. Noise, initially generated by the scanner, propagates noise-related errors during the fitting and interpolation procedures of GRAPPA and distorts the final reconstructed image quality. The basic idea we propose to improve GRAPPA is to remove noise from a system-identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on an errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and designing a concrete method, instrument variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm can better remove the noise compared to conventional GRAPPA, as validated with both phantom and in vivo brain data.

  3. Econometrics in outcomes research: the use of instrumental variables.

    Science.gov (United States)

    Newhouse, J P; McClellan, M

    1998-01-01

    We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations when a controlled trial has not been or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.

  4. Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...

  5. Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

    Science.gov (United States)

    Staley, James R.

    2017-01-01

    Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167
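
    A rough sketch of the stratification idea: form strata from the IV-free exposure (here the first-stage coefficient is taken as known; in practice it is estimated), compute a ratio-type LACE in each stratum, and inspect how it varies across strata (simulated nonlinear effect):

      import numpy as np

      rng = np.random.default_rng(4)
      n = 50000
      g = rng.binomial(2, 0.3, n)                 # genetic instrument
      u = rng.normal(size=n)                      # unmeasured confounder
      x = 0.3 * g + u + rng.normal(size=n)        # exposure
      y = 0.2 * x ** 2 + u + rng.normal(size=n)   # nonlinear exposure-outcome effect

      # Stratify on the IV-free exposure (exposure minus the instrument's
      # contribution) rather than the exposure itself, to avoid collider bias
      x0 = x - 0.3 * g
      cuts = np.quantile(x0, [0.25, 0.5, 0.75])
      idx = np.digitize(x0, cuts)

      for j in range(4):
          m = idx == j
          lace = np.cov(y[m], g[m])[0, 1] / np.cov(x[m], g[m])[0, 1]
          print(f"stratum {j + 1}: LACE = {lace:+.2f}, mean exposure = {x[m].mean():+.2f}")

    The per-stratum LACE estimates rise with the mean exposure here, tracing the gradient of the quadratic relationship.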

  6. Instrumental variables estimation under a structural Cox model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn

    2017-01-01

    Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heurist...

  7. Bias and Bias Correction in Multi-Site Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard

    2013-01-01

    We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the instrumental variables literature as the exclusion restriction.

  8. The productivity of mental health care: an instrumental variable approach.

    Science.gov (United States)

    Lu, Mingshan

    1999-06-01

    BACKGROUND: Like many other medical technologies and treatments, there is a lack of reliable evidence on the treatment effectiveness of mental health care. Increasingly, data from non-experimental settings are being used to study the effect of treatment. However, as in a number of studies using non-experimental data, a simple regression of outcome on treatment shows a puzzling negative and significant impact of mental health care on the improvement of mental health status, even after including a large number of potential control variables. The central problem in interpreting evidence from real-world or non-experimental settings is, therefore, the potential "selection bias" problem in the observational data set. In other words, the choice/quantity of mental health care may be correlated with other variables, particularly unobserved variables, that influence outcome, and this may lead to a bias in the estimate of the effect of care in conventional models. AIMS OF THE STUDY: This paper addresses the issue of estimating treatment effects using an observational data set. The information in a mental health data set obtained from two waves of data in Puerto Rico is explored. The results using conventional models - in which the potential selection bias is not controlled - and those from instrumental variable (IV) models - which were proposed in this study to correct the contaminated estimation from conventional models - are compared. METHODS: Treatment effectiveness is estimated in a production function framework. Effectiveness is measured as the improvement in mental health status. To control for the potential selection bias problem, IV approaches are employed. The essence of the IV method is to use one or more instruments, which are observable factors that influence treatment but do not directly affect patient outcomes, to isolate the effect of treatment variation that is independent of unobserved patient characteristics. The data used in this study are the first (1992

  9. Invited Commentary: Using Financial Credits as Instrumental Variables for Estimating the Causal Relationship Between Income and Health.

    Science.gov (United States)

    Pega, Frank

    2016-05-01

    Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health.

  10. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    Science.gov (United States)

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  11. Instrumental methods of analysis, 7th edition

    International Nuclear Information System (INIS)

    Willard, H.H.; Merritt, L.L. Jr.; Dean, J.A.; Settle, F.A. Jr.

    1988-01-01

    The authors have prepared an organized and generally polished product. The book is fashioned to be used as a textbook for an undergraduate instrumental analysis course, a supporting textbook for graduate-level courses, and a general reference work on analytical instrumentation and techniques for professional chemists. Four major areas are emphasized: data collection and processing, spectroscopic instrumentation and methods, liquid and gas chromatographic methods, and electrochemical methods. Analytical instrumentation and methods have been updated, and a thorough citation of pertinent recent literature is included

  12. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    International Nuclear Information System (INIS)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-01-01

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  13. Instrumented Impact Testing: Influence of Machine Variables and Specimen Position

    Energy Technology Data Exchange (ETDEWEB)

    Lucon, E.; McCowan, C. N.; Santoyo, R. A.

    2008-09-15

    An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.

  14. The variability of piezoelectric measurements. Material and measurement method contributions

    International Nuclear Information System (INIS)

    Stewart, M.; Cain, M.

    2002-01-01

    The variability of piezoelectric materials measurements has been investigated in order to separate the contribution from intrinsic instrumental variability and the contribution from variability in the materials. The work has pinpointed several areas where weaknesses in the measurement methods result in high variability, and also shows that good correlation between piezoelectric parameters allows simpler measurement methods to be used. The Berlincourt method has been shown to be unreliable when testing thin discs; however, when testing thicker samples there is a good correlation between this and other methods. The high-field and low-field permittivity correlate well, so tolerances on low-field measurements would predict high-field performance. In trying to identify microstructural origins of samples that behave differently from others within a batch, no direct evidence was found to suggest that outliers originate from either differences in microstructure or crystallography. Some of the samples chosen as maximum outliers showed pin-holes, probably from electrical breakdown during poling, even though these defects would ordinarily be detrimental to piezoelectric output. (author)

  15. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications.

  16. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, P; Kleibergen, F

    2003-01-01

    We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in a

  17. Finite-sample instrumental variables Inference using an Asymptotically Pivotal Statistic

    NARCIS (Netherlands)

    Bekker, P.; Kleibergen, F.R.

    2001-01-01

    The paper considers the K-statistic, Kleibergen's (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  18. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    Science.gov (United States)

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effect in observational studies. Built on structural mean models, there has been considerale work recently developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
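
    The concordance idea with several variants can be sketched as follows: per-variant instrument-exposure and instrument-outcome coefficients should be proportional under causality. The inverse-variance weighting below is a generic scheme akin to the IVW estimator, not necessarily the authors' GLS estimator:

      import numpy as np

      rng = np.random.default_rng(5)
      n, k = 50000, 5
      G = rng.binomial(2, 0.3, (n, k))            # k genetic variants
      u = rng.normal(size=n)                      # unmeasured confounder
      x = G @ rng.uniform(0.1, 0.4, k) + u + rng.normal(size=n)
      y = 0.5 * x + u + rng.normal(size=n)        # true causal effect 0.5

      gam = np.zeros(k); Gam = np.zeros(k); se = np.zeros(k)
      for j in range(k):
          gj = G[:, j] - G[:, j].mean()
          gam[j] = gj @ x / (gj @ gj)             # variant-exposure association
          Gam[j] = gj @ y / (gj @ gj)             # variant-outcome association
          e = y - y.mean() - Gam[j] * gj
          se[j] = np.sqrt((e @ e) / (n - 2) / (gj @ gj))

      # Under causality Gam_j = beta * gam_j: weighted slope through the origin
      w = 1 / se ** 2
      beta = (w * gam * Gam).sum() / (w * gam ** 2).sum()
      print(f"concordance (IVW-type) estimate: {beta:.3f} (truth 0.5)")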

  19. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, Paul A.; Kleibergen, Frank

    2001-01-01

    The paper considers the K-statistic, Kleibergen's (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares,

  20. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    Hoogerheide, L.F.; Kaashoek, J.F.; van Dijk, H.K.

    2007-01-01

    Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior

  1. On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2005-01-01

    Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours

  2. Instrumental variables estimation of exposure effects on a time-to-event endpoint using structural cumulative survival models

    DEFF Research Database (Denmark)

    Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.

    2017-01-01

    The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...

  3. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    Directory of Open Access Journals (Sweden)

    Albertus Girik Allo

    2016-06-01

    Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. By applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study applies two data periods, namely 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of the financial sector in economic growth, and focuses on 67 countries. The second data set, 2000-2013, determines the role of institutions in the financial sector and economic growth by applying the 2SLS estimation method. We define institutional variables as a set of indicators: Control of Corruption, Political Stability and Absence of Violence, and Voice and Accountability; these indicators show a declining impact of FDI on economic growth.

  4. The Effect of Birth Weight on Academic Performance: Instrumental Variable Analysis.

    Science.gov (United States)

    Lin, Shi Lin; Leung, Gabriel Matthew; Schooling, C Mary

    2017-05-01

    Observationally, lower birth weight is usually associated with poorer academic performance; whether this association is causal or the result of confounding is unknown. To investigate this question, we obtained an effect estimate, which can have a causal interpretation under specific assumptions, of birth weight on educational attainment using instrumental variable analysis based on single nucleotide polymorphisms determining birth weight combined with results from the Social Science Genetic Association Consortium study of 126,559 Caucasians. We similarly obtained an estimate of the effect of birth weight on academic performance in 4,067 adolescents from Hong Kong's (Chinese) Children of 1997 birth cohort (1997-2016), using twin status as an instrumental variable. Birth weight was not associated with years of schooling (per 100-g increase in birth weight, -0.006 years, 95% confidence interval (CI): -0.02, 0.01) or college completion (odds ratio = 1.00, 95% CI: 0.96, 1.03). Birth weight was also unrelated to academic performance in adolescents (per 100-g increase in birth weight, -0.004 grade, 95% CI: -0.04, 0.04) using instrumental variable analysis, although conventional regression gave a small positive association (0.02 higher grade, 95% CI: 0.01, 0.03). Observed associations of birth weight with academic performance may not be causal, suggesting that interventions should focus on the contextual factors generating this correlation.

  5. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    Science.gov (United States)

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  6. Bias and Bias Correction in Multisite Instrumental Variables Analysis of Heterogeneous Mediator Effects

    Science.gov (United States)

    Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S.

    2014-01-01

    We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.

  7. Impact of instrumental response on observed ozonesonde profiles: First-order estimates and implications for measures of variability

    Science.gov (United States)

    Clifton, G. T.; Merrill, J. T.; Johnson, B. J.; Oltmans, S. J.

    2009-12-01

    Ozonesondes provide information on the ozone distribution up to the middle stratosphere. Ozone profiles often feature layers, with vertically discrete maxima and minima in the mixing ratio. Layers are especially common in the UT/LS regions and originate from wave breaking, shearing and other transport processes. ECC sondes, however, have a moderate response time to significant changes in ozone. A sonde can ascend over 350 meters before it responds fully to a step change in ozone. This results in an overestimate of the altitude assigned to layers and an underestimate of the underlying variability in the amount of ozone. An estimate of the response time is made for each instrument during the preparation for flight, but the profile data are typically not processed to account for the response. Here we present a method of categorizing the response time of ECC instruments and an analysis of a low-pass filter approximation to the effects on profile data. Exponential functions were fit to the step-up and step-down responses using laboratory data. The resulting response time estimates were consistent with results from standard procedures, with the up-step response time exceeding the down-step value somewhat. A single-pole Butterworth filter that approximates the instrumental effect was used with synthetic layered profiles to make first-order estimates of the impact of the finite response time. Using a layer analysis program previously applied to observed profiles we find that instrumental effects can attenuate ozone variability by 20-45% in individual layers, but that the vertical offset in layer altitudes is moderate, up to about 150 meters. We will present results obtained using this approach, coupled with data on the distribution of layer characteristics found using the layer analysis procedure on profiles from Narragansett, Rhode Island and other US sites to quantify the impact on overall variability estimates given ambient distributions of layer occurrence, thickness
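
    The single-pole low-pass approximation described above can be reproduced in a few lines (illustrative layer and response-time values, not the study's data):

      import numpy as np
      from scipy.signal import butter, lfilter

      # Synthetic layered ozone profile on a 10 m vertical grid
      z = np.arange(0.0, 20000.0, 10.0)                  # altitude (m)
      layer = 40 * np.exp(-0.5 * ((z - 12000) / 300) ** 2)
      profile = 50 + layer                               # mixing ratio (ppbv)

      ascent, tau = 5.0, 25.0          # ascent rate (m/s) and response time (s)
      fs = ascent / 10.0               # samples per second at this grid spacing
      fc = 1.0 / (2 * np.pi * tau)     # corner frequency of the response (Hz)
      b, a = butter(1, fc / (fs / 2))  # single-pole Butterworth, as in the abstract
      measured = lfilter(b, a, profile)

      shift = z[measured.argmax()] - z[profile.argmax()]
      atten = (measured.max() - 50) / (profile.max() - 50)
      print(f"layer shifted up {shift:.0f} m, amplitude ratio {atten:.2f}")

    The filtered profile shows both effects the abstract describes: the layer maximum is displaced upward and its amplitude is attenuated.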

  8. Important variables for parents' postnatal sense of security: evaluating a new Swedish instrument (the PPSS instrument).

    Science.gov (United States)

    Persson, Eva K; Dykes, Anna-Karin

    2009-08-01

    The aims were to evaluate dimensions of both parents' postnatal sense of security in the first week after childbirth, and to determine associations between the PPSS instrument and different sociodemographic and situational background variables. The study used an evaluative, cross-sectional design with 113 mothers and 99 fathers of children live born at term, from five hospitals in southern Sweden. Mothers and fathers had similar feelings concerning postnatal sense of security. Of the dimensions in the PPSS instrument, a sense of midwives'/nurses' empowering behaviour, a sense of one's own general well-being and a sense of the mother's well-being as experienced by the father were the most important dimensions for parents' experienced security. A sense of affinity within the family (for both parents) and a sense of manageable breast feeding (for mothers) were not significantly associated with their experienced security. A sense of participation during pregnancy and general anxiety were significantly associated background variables for postnatal sense of security for both parents. For the mothers, parity and a sense that the father was participating during pregnancy were also significantly associated. More focus on parents' participation during pregnancy, as well as on midwives'/nurses' empowering behaviour during the postnatal period, will be beneficial for both parents' postnatal sense of security.

  9. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR. Prof. Bender, Fordham University; Prof. Lawrence J. Berliner, University of Denver. Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both the instrumentation and the computational aspects of EPR, addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times; Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance; Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems; New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  10. The contextual effects of social capital on health: a cross-national instrumental variable analysis.

    Science.gov (United States)

    Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro

    2011-12-01

    Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across
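
    For readers new to the technique, the IV logic used in this record reduces to two regressions. The sketch below implements textbook two-stage least squares with plain numpy on simulated data; the instrument, effect sizes and variable names are invented for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated world: z is the instrument (e.g. a country-level factor),
# u is an unobserved confounder that biases naive OLS.
z = rng.normal(size=n)
u = rng.normal(size=n)
trust = 0.6 * z + 0.8 * u + rng.normal(size=n)        # exposure
health = 0.5 * trust - 1.0 * u + rng.normal(size=n)   # true effect = 0.5

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

b_ols = ols(trust, health)[1]        # biased by the confounder u

# Stage 1: regress the exposure on the instrument, keep fitted values.
a0, a1 = ols(z, trust)
trust_hat = a0 + a1 * z
# Stage 2: regress the outcome on the fitted exposure.
b_iv = ols(trust_hat, health)[1]

print(f"OLS:  {b_ols:.3f}  (biased)")
print(f"2SLS: {b_iv:.3f}  (near the true 0.5)")
```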

  11. Radioactive standards and calibration methods for contamination monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1997-06-01

    Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, along with radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suited to the purpose of the monitoring but also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, reference activities of suitable quality need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. This paper describes the concepts of calibration for contamination monitoring instruments, reference sources, methods for determining reference quantities, and practical calibration methods for contamination monitoring instruments, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)

  12. Fasting Glucose and the Risk of Depressive Symptoms: Instrumental-Variable Regression in the Cardiovascular Risk in Young Finns Study.

    Science.gov (United States)

    Wesołowska, Karolina; Elovainio, Marko; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Pitkänen, Niina; Lipsanen, Jari; Tukiainen, Janne; Lyytikäinen, Leo-Pekka; Lehtimäki, Terho; Juonala, Markus; Raitakari, Olli; Keltikangas-Järvinen, Liisa

    2017-12-01

    Type 2 diabetes (T2D) has been associated with depressive symptoms, but the causal direction of this association and the underlying mechanisms, such as increased glucose levels, remain unclear. We used instrumental-variable regression with a genetic instrument (Mendelian randomization) to examine a causal role of increased glucose concentrations in the development of depressive symptoms. Data were from the population-based Cardiovascular Risk in Young Finns Study (n = 1217). Depressive symptoms were assessed in 2012 using a modified Beck Depression Inventory (BDI-I). Fasting glucose was measured concurrently with depressive symptoms. A genetic risk score for fasting glucose (based on 35 single nucleotide polymorphisms) was used as an instrumental variable for glucose. Glucose was not associated with depressive symptoms in the standard linear regression (B = -0.04, 95% CI [-0.12, 0.04], p = .34), but the instrumental-variable regression showed an inverse association between glucose and depressive symptoms (B = -0.43, 95% CI [-0.79, -0.07], p = .020). The difference between the estimates of standard linear regression and instrumental-variable regression was significant (p = .026). Conclusion: our results suggest that the association between T2D and depressive symptoms is unlikely to be caused by increased glucose concentrations. It seems possible that T2D might be linked to depressive symptoms due to low glucose levels.

  13. Application of Instrumented Charpy Method in Characterisation of Materials

    Directory of Open Access Journals (Sweden)

    Željko Alar

    2015-07-01

    Testing of absorbed impact energy according to the Charpy method is carried out to determine the behaviour of a material under impact load. The instrumented Charpy method yields the force-displacement curve throughout the entire test; that curve can be related to the force-displacement curve obtained by the static tensile test. The purpose of this study was to compare the forces obtained by the static tensile test with the forces obtained by the instrumented Charpy method. The experimental part of the work comprises testing of the mechanical properties of S275J0 steel by the static tensile test and the impact test on an instrumented Charpy pendulum.

  14. Intercomparison of two comparative reactivity method instruments in the Mediterranean basin during summer 2013

    Science.gov (United States)

    Zannoni, N.; Dusanter, S.; Gros, V.; Sarda Esteve, R.; Michoud, V.; Sinha, V.; Locoge, N.; Bonsang, B.

    2015-09-01

    The hydroxyl radical (OH) plays a key role in the atmosphere, as it initiates most of the oxidation processes of volatile organic compounds (VOCs) and can ultimately lead to the formation of ozone and secondary organic aerosols (SOAs). There are still uncertainties associated with the OH budget assessed using current models of atmospheric chemistry, and direct measurements of OH sources and sinks have proved to be valuable tools to improve our understanding of OH chemistry. The total first-order loss rate of OH, or total OH reactivity, can be directly measured using three different methods: the total OH loss rate measurement, the laser-induced pump-and-probe technique and the comparative reactivity method. Observations of total OH reactivity are usually coupled with individual measurements of reactive compounds in the gas phase, which are used to calculate the OH reactivity. Studies using the three methods have highlighted that a significant fraction of OH reactivity is often not explained by individually measured reactive compounds and could be associated with unmeasured or unknown chemical species. Therefore, accurate and reproducible measurements of OH reactivity are required. The comparative reactivity method (CRM) has been demonstrated to be an advantageous technique with an extensive range of applications, and for this reason it has been adopted by several research groups since its development. However, this method also requires careful corrections to derive ambient OH reactivity. Herein we present an intercomparison exercise of two CRM instruments, CRM-LSCE (Laboratoire des Sciences du Climat et de l'Environnement) and CRM-MD (Mines Douai), conducted during July 2013 at the Mediterranean site of Ersa, Cape Corsica, France. The intercomparison exercise included tests to assess the corrections needed by the two instruments to process the raw data sets, as well as OH reactivity observations. The observations were divided into three parts: 2 days of plant
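
    The quantity being compared here is simple to compute once individual species are measured: total OH reactivity is the sum of each species' OH rate constant times its number concentration. Below is a toy calculation of the "missing" reactivity fraction, with invented mixing ratios and typical literature rate constants; none of the numbers are from this study.

```python
# Total OH reactivity: R_OH = sum_i k_i * [X_i]   (units: s^-1)
AIR = 2.46e19  # molecules cm^-3 at 1 atm, 298 K

species = {
    # name: (k_OH in cm^3 molecule^-1 s^-1, mixing ratio in ppbv) -- invented
    "CO":       (2.3e-13, 120.0),
    "CH4":      (6.4e-15, 1900.0),
    "isoprene": (1.0e-10, 1.5),
    "NO2":      (1.1e-11, 2.0),
}

calculated = sum(k * vmr * 1e-9 * AIR for k, vmr in species.values())
measured_total = 12.0  # s^-1, e.g. from a CRM instrument (invented)

print(f"calculated reactivity: {calculated:.1f} s^-1")
print(f"missing fraction: {100 * (1 - calculated / measured_total):.0f}%")
```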

  15. Decoupling Solar Variability and Instrument Trends Using the Multiple Same-Irradiance-Level (MuSIL) Analysis Technique

    Science.gov (United States)

    Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin

    2018-05-01

    The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainties for the instrument degradation corrections, which need fairly large corrections relative to the amount of solar cycle variability at some wavelengths. The primary objective of this investigation has been to separate out solar cycle variability and any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere, Mesosphere, Ionosphere, Energetic, and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to provide long-term trends in an SSI record, and the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. This technique, which was validated with the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated with the comparison of the new solar cycle variability results from different solar cycles.

  16. Variable aperture-based ptychographical iterative engine method.

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a variety of scientific research fields.

  17. 8 years of Solar Spectral Irradiance Variability Observed from the ISS with the SOLAR/SOLSPEC Instrument

    Science.gov (United States)

    Damé, Luc; Bolsée, David; Meftah, Mustapha; Irbah, Abdenour; Hauchecorne, Alain; Bekki, Slimane; Pereira, Nuno; Cessateur, Gaël; Marchand, Marion; et al.

    2016-10-01

    Accurate measurements of Solar Spectral Irradiance (SSI) are of primary importance for a better understanding of solar physics and of the impact of solar variability on climate (via Earth's atmospheric photochemistry). The acquisition of a top-of-atmosphere reference solar spectrum and of its temporal and spectral variability during the unusual solar cycle 24 is of prime interest for these studies. These measurements have been performed since April 2008 with the SOLSPEC spectro-radiometer, covering the far ultraviolet to the infrared (166 nm to 3088 nm). This instrument, developed under a fruitful LATMOS/BIRA-IASB collaboration, is part of the Solar Monitoring Observatory (SOLAR) payload, externally mounted on the Columbus module of the International Space Station (ISS). The SOLAR mission, now 8 years in duration, will cover almost the entire solar cycle 24. We present here the in-flight operations and performance of the SOLSPEC instrument, including the engineering corrections, calibrations and improved procedures for aging corrections. Accordingly, an SSI reference spectrum from the UV to the NIR will be presented, together with its variability in the UV, as measured by SOLAR/SOLSPEC over 8 years. Uncertainties on these measurements and comparisons with other instruments will be briefly discussed.

  18. Density dependence and climate effects in Rocky Mountain elk: an application of regression with instrumental variables for population time series with sampling error.

    Science.gov (United States)

    Creel, Scott; Creel, Michael

    2009-11-01

    1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results

  19. Application of Instrumented Charpy Method in Characterisation of Materials

    OpenAIRE

    Alar, Željko; Mandić, Davor; Dugorepec, Andrija; Sakoman, Matija

    2015-01-01

    Testing of absorbed impact energy according to the Charpy method is carried out to determine the behaviour of a material under the impact load. Instrumented Charpy method allows getting the force displacement curve through the entire test, That curve can be related to force-displacement curve which is obtained by the static tensile test. The purpose of this study was to compare the results of forces obtained by the static tensile test with the forces obtained by the instrumented Charpy method...

  20. Job demands and job strain as risk factors for employee wellbeing in elderly care: an instrumental-variables analysis.

    Science.gov (United States)

    Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Jokela, Markus; Aalto, Anna-Mari; Pekkarinen, Laura; Noro, Anja; Finne-Soveri, Harriet; Kivimäki, Mika; Sinervo, Timo

    2015-02-01

    The association between psychosocial work environment and employee wellbeing has repeatedly been shown. However, as environmental evaluations have typically been self-reported, the observed associations may be attributable to reporting bias. Applying instrumental-variable regression, we used staffing level (the ratio of staff to residents) as an unconfounded instrument for self-reported job demands and job strain to predict various indicators of wellbeing (perceived stress, psychological distress and sleeping problems) among 1525 registered nurses, practical nurses and nursing assistants working in elderly care wards. In ordinary regression, higher self-reported job demands and job strain were associated with increased risk of perceived stress, psychological distress and sleeping problems. The effect estimates for the associations of these psychosocial factors with perceived stress and psychological distress were greater, but less precisely estimated, in an instrumental-variables analysis which took into account only the variation in self-reported job demands and job strain that was explained by staffing level. No association between psychosocial factors and sleeping problems was observed with the instrumental-variable analysis. These results support a causal interpretation of high self-reported job demands and job strain being risk factors for employee wellbeing. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  1. Focus on variability : New tools to study intra-individual variability in developmental data

    NARCIS (Netherlands)

    van Geert, P; van Dijk, M

    2002-01-01

    In accordance with dynamic systems theory, we assume that variability is an important developmental phenomenon. However, the standard methodological toolkit of the developmental psychologist offers few instruments for the study of variability. In this article we will present several new methods that

  2. Pre-validation methods for developing a patient reported outcome instrument

    Directory of Open Access Journals (Sweden)

    Castillo Mayret M

    2011-08-01

    Background: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing PRO instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology. Methods: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: (1) generation of a pool of items; (2) item de-duplication (three phases); (3) item reduction (two phases); (4) assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and (5) qualitative exploration of the target population's views of the new instrument and the items it contains. Results: The AGQ 'item pool' contained 725 items. The three de-duplication phases resulted in reductions of 91, 225 and 48 items respectively. The two item reduction phases discarded 70 items and 208 items respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in the removal of a further 15 items and refinement of the wording of others. The resultant draft AGQ contained 68 items. Conclusions: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped

  3. Variable threshold method for ECG R-peak detection.

    Science.gov (United States)

    Kew, Hsein-Ping; Jeong, Do-Un

    2011-10-01

    In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimise the inconvenience of wearing it. The ECG signal is detected using a potential-measurement instrument system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communications unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and R-peak detection in the ECG is especially important. R-peak detection generally uses a fixed threshold value; this produces detection errors when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage comprising differentiation and the Hilbert transform is used as the signal conditioning algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than the fixed-threshold method. R-peak detection on the MIT-BIH databases and long-term real-time ECG recordings is performed in this work in order to evaluate the method's performance.
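
    A sketch of the kind of pipeline this abstract describes: differentiation, a Hilbert-transform envelope, then a threshold that adapts to the local signal level. The window length and the 0.6 scaling factor are arbitrary choices for illustration, not the authors' tuned values.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def detect_r_peaks(ecg, fs, win_s=2.0, k=0.6):
    """R-peak detection with a variable (locally adaptive) threshold."""
    # Preprocessing: differentiate to emphasise the steep QRS slopes,
    # then take the envelope of the analytic signal (Hilbert transform).
    diff = np.diff(ecg, prepend=ecg[0])
    env = np.abs(hilbert(diff))

    # Variable threshold: a fraction k of the rolling maximum, so the
    # threshold follows baseline drift and amplitude changes.
    win = int(win_s * fs)
    pad = np.pad(env, win // 2, mode="edge")
    local_max = np.array([pad[i:i + win].max() for i in range(env.size)])

    # Candidate peaks at least 250 ms apart; keep those above threshold.
    peaks, _ = find_peaks(env, distance=int(0.25 * fs))
    return peaks[env[peaks] > k * local_max[peaks]]

# Synthetic test: 1 Hz spikes riding on a wandering baseline.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander
ecg[::fs] += 1.0                          # crude "R peaks" once per second
print(detect_r_peaks(ecg, fs))            # sample indices of detected peaks
```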

  4. Instrumental variables estimates of peer effects in social networks.

    Science.gov (United States)

    An, Weihua

    2015-03-01

    Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, etc. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use the IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violation of exogeneity of the IVs. I show that measurement error in smoking can lead to both under- and imprecise estimations of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Advanced Measuring (Instrumentation) Methods for Nuclear Installations: A Review

    Directory of Open Access Journals (Sweden)

    Wang Qiu-kuan

    2012-01-01

    Nuclear technology has been widely used around the world. Research on measurement in nuclear installations involves many aspects, such as nuclear reactors, the nuclear fuel cycle, safety and security, nuclear accident after-action analysis, and environmental applications. In recent decades, many advanced measuring devices and techniques have been widely applied in nuclear installations. This paper introduces the development of measuring (instrumentation) methods for nuclear installations and the applications of these instruments and methods.

  6. A Geometrical Method for Sound-Hole Size and Location Enhancement in Lute Family Musical Instruments: The Golden Method

    Directory of Open Access Journals (Sweden)

    Soheil Jafari

    2017-11-01

    This paper presents a new analytical approach, the Golden Method, to enhance sound-hole size and location in musical instruments of the lute family in order to obtain better sound damping characteristics, based on the concept of the golden ratio and the instrument geometry. The main objective of the paper is to increase the capability of lute family musical instruments to sustain a note at a certain level for a certain time, enhancing the instruments' orchestral characteristics. For this purpose, the geometry-based analytical method is first described in detail, step by step. A new musical instrument is then developed and tested to confirm the ability of the Golden Method to optimise the acoustical characteristics of musical instruments from a damping point of view by designing a modified sound-hole. Finally, the newly developed instrument is tested, and the obtained results are compared with those of two well-known instruments to confirm the effectiveness of the proposed method. The experimental results show that the suggested method is able to increase the sound damping time by at least 2.4% without affecting the frequency response function and other acoustic characteristics of the instrument. This methodology could serve as a first step in future studies on the design, optimisation and evaluation of musical instruments of the lute family (e.g., lute, oud, barbat, mandolin, setar, etc.).

  7. Repairing method of color TV with measuring instrument

    International Nuclear Information System (INIS)

    1996-01-01

    This book concentrates on methods of repairing color TVs with measuring instruments. It deals with the selection and types of measuring instruments for servicing; the application and basic techniques of the oscilloscope and synchroscope; the components of a TV and waveform reading; essential test skills for service technicians; servicing techniques using the electronic voltmeter; servicing techniques using the sweep generator and marker generator; the dot-bar generator and supporting skills for color TV; and the color bar generator and the application technology of color circuits.

  8. Evaluating disease management programme effectiveness: an introduction to instrumental variables.

    Science.gov (United States)

    Linden, Ariel; Adams, John L

    2006-04-01

    This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.

  9. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: bias in DNA extraction methods; variation in DNA extraction methods. Definition: the variability in extraction methods is defined as differences in the quality and quantity of DNA observed using various extraction protocols, leading to differences in the outcome of microbial community composition

  10. The use of a combination of instrumental methods to assess change in sensory crispness during storage of a "Honeycrisp" apple breeding family.

    Science.gov (United States)

    Chang, Hsueh-Yuan; Vickers, Zata M; Tong, Cindy B S

    2018-04-01

    Loss of crispness in apple fruit during storage reduces the fruit's fresh sensation and consumer acceptance. Apple varieties that maintain crispness thus have higher potential for longer-term consumer appeal. To phenotype crispness efficiently, several instrumental methods have been tested, but variable results were obtained when different apple varieties were assayed. To extend these studies, we assessed the extent to which instrumental measurements correlate with and predict sensory crispness, with a focus on crispness maintenance. We used an apple breeding family derived from a cross between "Honeycrisp" and "MN1764," which segregates for crispness maintenance. Three types of instrumental measurements (puncture, snapping, and mechanical-acoustic tests) and sensory evaluation were performed on fruit at harvest and after 8 weeks of cold storage. Overall, 20 genotypes from the family and the 2 parents were characterized by 19 force and acoustic measures. In general, crispness was more related to force than to acoustic measures. Force linear distance and maximum force, as measured by the mechanical-acoustic test, were best correlated with sensory crispness and change in crispness, respectively. The correlations varied by apple genotype. The best multiple linear regression model to predict the change in sensory crispness between harvest and storage of fruit of this breeding family incorporated both force and acoustic measures. This work compared the abilities of instrumental tests to predict sensory crispness maintenance of apple fruit. The use of an instrumental method that is highly correlated with sensory crispness evaluation can enhance the efficiency and reduce the cost of measuring crispness for breeding purposes. This study showed that sensory crispness and the change in crispness after storage of an apple breeding family were reliably predicted with a combination of instrumental measurements and multiple-variable analyses. The strategy potentially can be applied to other
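
    The prediction step in this record is a multiple linear regression of the change in sensory crispness on instrumental force and acoustic measures. A minimal sketch with invented measurements follows; the variable names and effect sizes are illustrative, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 22  # e.g. 20 offspring genotypes + 2 parents

# Invented instrumental measures (per genotype) and an invented "true" model.
force_linear_dist = rng.normal(50.0, 10.0, n)   # mechanical-acoustic test
max_force = rng.normal(30.0, 5.0, n)
acoustic_events = rng.normal(15.0, 4.0, n)
delta_crisp = (0.04 * force_linear_dist + 0.08 * max_force
               + 0.05 * acoustic_events + rng.normal(0.0, 0.3, n))

# Multiple linear regression of the sensory change on the instrument measures.
X = np.column_stack([np.ones(n), force_linear_dist, max_force, acoustic_events])
coef = np.linalg.lstsq(X, delta_crisp, rcond=None)[0]

pred = X @ coef
ss_res = np.sum((delta_crisp - pred) ** 2)
ss_tot = np.sum((delta_crisp - delta_crisp.mean()) ** 2)
print(f"coefficients {np.round(coef, 3)}, R^2 = {1 - ss_res / ss_tot:.2f}")
```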

  11. Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable

    OpenAIRE

    Albertus Girik Allo

    2016-01-01

    Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. Applying institutions as an instrumental variable for Foreign Direct Investment (FDI), we find that the quality of institutions significantly influences economic growth. This study uses two data periods, 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of fin...

  12. Method and apparatus for continuous fluid leak monitoring and detection in analytical instruments and instrument systems

    Science.gov (United States)

    Weitz, Karl K [Pasco, WA]; Moore, Ronald J [West Richland, WA]

    2010-07-13

    A method and device are disclosed that provide for detection of fluid leaks in analytical instruments and instrument systems. The leak detection device includes a collection tube, a fluid absorbing material, and a circuit that electrically couples to an indicator device. When assembled, the leak detection device detects and monitors for fluid leaks, providing a preselected response in conjunction with the indicator device when contacted by a fluid.

  13. College quality and hourly wages: evidence from the self-revelation model, sibling models and instrumental variables.

    Science.gov (United States)

    Borgen, Nicolai T

    2014-11-01

    This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find increasing returns to college quality over the course of students' work careers, with positive returns only later in students' work careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Instrumentation and quantitative methods of evaluation

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1991-01-01

    This report summarizes goals and accomplishments of the research program entitled Instrumentation and Quantitative Methods of Evaluation, during the period January 15, 1989 through July 15, 1991. This program is very closely integrated with the radiopharmaceutical program entitled Quantitative Studies in Radiopharmaceutical Science. Together, they constitute the PROGRAM OF NUCLEAR MEDICINE AND QUANTITATIVE IMAGING RESEARCH within The Franklin McLean Memorial Research Institute (FMI). The program addresses problems involving the basic science and technology that underlie the physical and conceptual tools of radiotracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. 234 refs., 11 figs., 2 tabs

  15. Social interactions and college enrollment: A combined school fixed effects/instrumental variables approach.

    Science.gov (United States)

    Fletcher, Jason M

    2015-07-01

    This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A multi-criteria evaluation method for climate change mitigation policy instruments

    International Nuclear Information System (INIS)

    Konidari, Popi; Mavrakis, Dimitrios

    2007-01-01

    This paper presents an integrated multi-criteria analysis method for the quantitative evaluation of climate change mitigation policy instruments. The method consists of: (i) a set of criteria, supported by sub-criteria, which together describe the complex framework within which these instruments are selected by policy makers and implemented; (ii) an Analytic Hierarchy Process (AHP) for defining weight coefficients for the criteria and sub-criteria according to the preferences of three stakeholder groups; and (iii) a Multi-Attribute Utility Theory (MAUT)/Simple Multi-Attribute Ranking Technique (SMART) process for assigning grades to each instrument that is evaluated for its performance under a specific sub-criterion. Arguments for the selected combination of these standard methods and definitions of the criteria/sub-criteria are given. Consistency and robustness tests are performed. The functionality of the proposed method is tested by assessing the aggregate performance of the EU emission trading scheme in Denmark, Germany, Greece, Italy, the Netherlands, Portugal, Sweden and the United Kingdom. Conclusions are discussed.
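
    The AHP step of such a method derives criterion weights from a pairwise-comparison matrix via its principal eigenvector, with the standard consistency ratio as the consistency test mentioned above. A sketch for three hypothetical criteria; the comparison values are invented, not the paper's.

```python
import numpy as np

# Pairwise comparison matrix on the Saaty 1-9 scale (invented preferences):
# criterion 1 moderately preferred to 2, strongly preferred to 3, etc.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalised criterion weights

# Consistency check: CR < 0.1 is the conventional acceptance threshold.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
ri = 0.58                                # Saaty's random index for n = 3
print(f"weights = {np.round(weights, 3)}, CR = {ci / ri:.3f}")
```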

  17. Instrumentation and measurement method for the ATLAS test facility

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Byong Jo; Chu, In Chul; Eu, Dong Jin; Kang, Kyong Ho; Kim, Yeon Sik; Song, Chul Hwa; Baek, Won Pil

    2007-03-15

    An integral effect test loop for pressurized water reactors (PWRs), ATLAS, was constructed by the thermal-hydraulic safety research division of KAERI. The ATLAS facility was designed with a length scale of 1/2 and an area scale of 1/144 relative to the reference plant, APR1400, a Korean evolutionary-type nuclear reactor. A total of 1300 instruments are installed in the ATLAS test facility. In this report, the instrumentation of the ATLAS test facility and the related measurement methods are introduced.

  18. Analytical chromatography. Methods, instrumentation and applications

    International Nuclear Information System (INIS)

    Yashin, Ya I; Yashin, A Ya

    2006-01-01

    The state-of-the-art and the prospects in the development of main methods of analytical chromatography, viz., gas, high performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography and also in the development of new sorbents, columns and chromatographic instruments are outlined. The use of chromatography in the environmental control, biology, medicine, pharmaceutics, and also for monitoring the quality of foodstuffs and products of chemical, petrochemical and gas industries, etc. is considered.

  19. A Mixed Methods Portrait of Urban Instrumental Music Teaching

    Science.gov (United States)

    Fitzpatrick, Kate R.

    2011-01-01

    The purpose of this mixed methods study was to learn about the ways that instrumental music teachers in Chicago navigated the urban landscape. The design of the study most closely resembles Creswell and Plano Clark's (2007) two-part Triangulation Convergence Mixed Methods Design, with the addition of an initial exploratory focus group component.

  20. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    Science.gov (United States)

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is the self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
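
    One compact way to combine the two approaches, under assumptions, is to within-transform (demean) the panel to absorb person fixed effects and then apply 2SLS to the demeaned data with an entitlement as instrument. The numpy sketch below does this on simulated data; all names and effect sizes are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_waves = 500, 13
person = np.repeat(np.arange(n_people), n_waves)

alpha = np.repeat(rng.normal(size=n_people), n_waves)      # person fixed effects
entitle = rng.binomial(1, 0.5, person.size).astype(float)  # instrument
u = rng.normal(size=person.size)                           # time-varying confounder

job_q = 0.8 * entitle + 0.5 * u + alpha + rng.normal(size=person.size)
mhi5 = 1.3 * job_q - 1.0 * u + 2.0 * alpha + rng.normal(size=person.size)

def demean(v):
    """Within transformation: subtract each person's mean (absorbs alpha)."""
    return v - (np.bincount(person, v) / n_waves)[person]

yq, xq, zq = demean(mhi5), demean(job_q), demean(entitle)

# 2SLS on the demeaned data (no intercept needed after demeaning):
x_hat = zq * (zq @ xq) / (zq @ zq)     # stage 1: exposure projected on instrument
beta = (x_hat @ yq) / (x_hat @ x_hat)  # stage 2: outcome on fitted exposure
print(f"FE-IV estimate: {beta:.2f}  (true effect 1.3)")
```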

  1. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    Science.gov (United States)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not previously been conducted. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicate significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, mainly reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and on response process, and evidence based on the consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize

  2. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  3. Design and Implementation of Data Collection Instruments for Neonatology Research

    Directory of Open Access Journals (Sweden)

    Monica G. HĂŞMĂŞANU

    2014-12-01

    Aim: The aim of our research was to design and implement data collection instruments for use in an observational prospective clinical study with follow-up, conducted on newborns with intrauterine growth restriction. Methods: The structure of the data collection forms (paper-based and electronic) was first identified, and for each variable the type best suited to the research aim was established. The coding for categorical variables was also decided, as were the units of measurement for quantitative variables. In line with good practice, a set of confounding factors (e.g. gender, date of birth) was also identified and integrated into the data collection instruments. Data-entry validation rules were implemented for each variable to reduce data input errors when the electronic data collection instrument was created. Results: Two data collection instruments were successfully developed and implemented: a paper-based form and an electronic data collection instrument. The developed forms included demographics, neonatal complications (e.g. hypoglycemia, hypocalcemia), biochemical data at birth and follow-up, immunological data, as well as basal and follow-up echocardiographic data. Data-entry validation criteria were implemented in the electronic data collection instrument to assure validity and precision when paper-based data are transcribed into electronic form. Furthermore, to assure subjects' confidentiality, careful attention was given to HIPAA identifiers when the electronic data collection instrument was developed. Conclusion: Data collection instruments were successfully developed and implemented as an a priori step in clinical research, assisting data collection and management in an observational prospective study with follow-up visits.
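
    Data-entry validation rules of the kind described can be expressed as small per-field predicate checks. A hypothetical sketch follows; the field names and ranges are invented for illustration and are not the study's actual form.

```python
# Hypothetical field-level validation rules for a neonatal data collection form.
RULES = {
    "gestational_age_weeks": lambda v: 37 <= v <= 42,    # term births only
    "birth_weight_kg":       lambda v: 0.3 <= v <= 6.0,  # plausible range
    "sex":                   lambda v: v in {"M", "F"},  # categorical coding
}

def validate_record(record):
    """Return the field names whose values violate the entry rules."""
    return [field for field, ok in RULES.items()
            if field not in record or not ok(record[field])]

print(validate_record({"gestational_age_weeks": 38,
                       "birth_weight_kg": 1.9,   # IUGR newborn, still valid entry
                       "sex": "M"}))             # -> []
```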

  4. Probabilistic Power Flow Method Considering Continuous and Discrete Variables

    Directory of Open Access Journals (Sweden)

    Xuexia Zhang

    2017-04-01

    This paper proposes a probabilistic power flow (PPF) method considering continuous and discrete variables (continuous and discrete power flow, CDPF) for power systems. The proposed method—based on the cumulant method (CM) and multiple deterministic power flow (MDPF) calculations—can deal with continuous variables such as wind power generation (WPG) and loads, and discrete variables such as fuel cell generation (FCG). In this paper, continuous variables follow a normal distribution (loads) or a non-normal distribution (WPG), and discrete variables follow a binomial distribution (FCG). Through testing on IEEE 14-bus and IEEE 118-bus power systems, the proposed method (CDPF) has better accuracy compared with the CM, and higher efficiency compared with the Monte Carlo simulation method (MCSM).
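
    The Monte Carlo baseline that the proposed method is compared against is easy to sketch: draw the continuous (normal load, non-normal wind) and discrete (binomial fuel-cell) injections, solve a deterministic flow per draw, and summarise the output distribution. Below is a toy two-bus DC example with invented numbers, not the paper's test systems.

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 20000

# Bus-2 injections (MW): continuous load and wind, discrete fuel-cell units.
load = rng.normal(100.0, 10.0, n_draws)          # normally distributed load
wind = 30.0 * rng.weibull(2.0, n_draws)          # non-normal wind generation
fcells = 10.0 * rng.binomial(5, 0.7, n_draws)    # 5 x 10 MW units, binomial

injection = wind + fcells - load                 # net injection at bus 2

# Deterministic DC power flow per draw (bus 1 = slack, one line of reactance x).
x, base_mva = 0.1, 100.0
theta2 = x * injection / base_mva                # bus-2 voltage angle, rad
flow_12 = (0.0 - theta2) / x * base_mva          # MW sent from bus 1 to bus 2

print(f"flow mean {flow_12.mean():.1f} MW, std {flow_12.std():.1f} MW, "
      f"P(flow > 80 MW) = {(flow_12 > 80).mean():.3f}")
```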

  5. Association of Body Mass Index with Depression, Anxiety and Suicide-An Instrumental Variable Analysis of the HUNT Study.

    Directory of Open Access Journals (Sweden)

    Johan Håkon Bjørngaard

    While a high body mass index is associated with an increased risk of depression and anxiety, cumulative evidence indicates that it is a protective factor for suicide. The associations from conventional observational studies of body mass index with mental health outcomes are likely to be influenced by reverse causality or confounding by ill-health. In the present study, we investigated the associations between offspring body mass index and parental anxiety, depression and suicide in order to avoid problems with reverse causality and confounding by ill-health. We used data from 32,457 mother-offspring and 27,753 father-offspring pairs from the Norwegian HUNT study. Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale, and suicide deaths were taken from national registers. Associations between offspring and own body mass index and symptoms of anxiety and depression and suicide mortality were estimated using logistic and Cox regression. Causal effect estimates were obtained with a two-sample instrumental variable approach using offspring body mass index as an instrument for parental body mass index. Both own and offspring body mass index were positively associated with depression, while the results did not indicate any substantial association between body mass index and anxiety. Although precision was low, suicide mortality was inversely associated with own body mass index, and the results from the analysis using offspring body mass index supported these results. Adjusted odds ratios per standard deviation of body mass index from the instrumental variable analysis were 1.22 (95% CI: 1.05, 1.43) for depression and 1.10 (95% CI: 0.95, 1.27) for anxiety, and the instrumental variable estimated hazard ratio for suicide was 0.69 (95% CI: 0.30, 1.63). The present study's results indicate that suicide mortality is inversely associated with body mass index. We also found support for a positive association between body mass index and depression, but not

  6. Performance evaluation methods and instrumentation for mine ventilation fans

    Institute of Scientific and Technical Information of China (English)

    LI Man; WANG Xue-rong

    2009-01-01

    Ventilation fans are one of the most important pieces of equipment in coal mines. Their performance plays an important role in the safety of staff and production. Given the actual requirements of coal mine production, we instituted a research project on methods of measuring key performance parameters such as wind pressure, air volume and power. On this basis, a virtual instrument for evaluating mine ventilation fan performance was developed using a USB interface. The practical performance and analytical results of our experiments show that the proposed instrumentation is feasible, reliable and effective for mine ventilation performance evaluation.

  7. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and performs consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and show that, for the adaptive lasso, normalising the predictors after the introduction of adaptive weights makes its performance identical to that of the lasso.
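
    The reweighting idea behind the adaptive lasso is short enough to sketch: obtain an initial estimate (OLS here), scale each predictor by |beta_j|^gamma so that strong predictors are penalised less, run an ordinary lasso, and map the coefficients back. This assumes scikit-learn and simulated data; it illustrates the general technique, not this paper's specific algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[0, 3, 7]] = [3.0, 1.5, 2.0]          # sparse truth
y = X @ beta_true + rng.normal(size=n)

# Step 1: initial estimate (OLS) supplies the adaptive weights.
beta_init = np.linalg.lstsq(X, y, rcond=None)[0]
w = np.abs(beta_init) ** 1.0                    # gamma = 1

# Step 2: ordinary lasso on predictors rescaled by the weights, so the
# penalty on coefficient j is effectively lambda / |beta_init_j|.
lasso = Lasso(alpha=0.05).fit(X * w, y)

# Step 3: map the coefficients back to the original predictor scale.
beta_adaptive = lasso.coef_ * w
print(np.round(beta_adaptive, 2))               # noise coefficients shrink to 0
```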

  8. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for the prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that use only a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the high-dimensional physiological and clinical history of the patient. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but they do not depend on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, under causal and statistical assumptions they can be causally interpreted as the expected outcome under time-specific clinical interventions: the change in the mean of the outcome if each individual experiences a specified change in the variable (keeping the other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or the LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance in the area under the curve (AUC) of a receiver operating characteristic (ROC) curve. Thus, given that (1) our VIM

  9. A method and instruments to identify the torque, the power and the efficiency of an internal combustion engine of a wheeled vehicle

    Science.gov (United States)

    Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.

    2018-01-01

    In this paper, we propose a new method and instruments to identify the torque, the power, and the efficiency of internal combustion engines in transient conditions. This method, in contrast to the commonly used non-demounting methods based on inertia and strain gauge dynamometers, allows the main performance parameters of internal combustion engines to be monitored in transient conditions without the inaccuracy caused by torque losses in the transfer to the driving wheels, where existing methods measure the torque. In addition, the proposed method is simple to implement and does not use strain-measurement instruments, which cannot capture varying parameter values at a high measurement rate and therefore cannot supply the actual parameters needed when engineering wheeled vehicles. The use of this method can thus greatly improve measurement accuracy and reduce the cost and labour of testing internal combustion engines. The results of experiments showed the applicability of the proposed method for identifying the performance parameters of internal combustion engines. We also determined the most suitable transmission ratio to use with the proposed method.
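
    The underlying inertia-based relations are T = J·dω/dt and P = T·ω, so a measured speed trace plus an assumed effective moment of inertia are enough to recover transient torque and power. A toy sketch with invented numbers (the inertia value and the speed trace are not the authors' measurements):

```python
import numpy as np

J = 1.8  # kg*m^2, assumed effective moment of inertia at the crankshaft

# Angular speed samples during a wide-open-throttle run (rad/s), invented:
t = np.linspace(0.0, 2.0, 21)                   # s, 10 Hz sampling
omega = 150.0 + 120.0 * t - 15.0 * t**2         # slowing ramp-up

alpha = np.gradient(omega, t)                   # angular acceleration, rad/s^2
torque = J * alpha                              # N*m, transient engine torque
power = torque * omega                          # W, instantaneous power

i = np.argmax(power)
print(f"peak power {power[i]/1000:.1f} kW at {omega[i]*60/(2*np.pi):.0f} rpm")
```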

  10. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    Science.gov (United States)

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
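The three pooling strategies the record compares reduce a frame-by-feature activation matrix to one clip-level vector. A minimal sketch on synthetic activations:

```python
# A sketch of the pooling strategies compared in the record, applied to a
# synthetic matrix of sparse feature activations (frames x dictionary atoms).
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((120, 64))   # 120 frames, 64 learned features

max_pool = activations.max(axis=0)    # commonly used max-pooling
avg_pool = activations.mean(axis=0)   # average-pooling
std_pool = activations.std(axis=0)    # standard-deviation pooling (best overall in the record)

# Each pooled vector is a fixed-length clip-level feature for the classifier.
print(max_pool.shape, avg_pool.shape, std_pool.shape)  # (64,) each
```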

  11. Turbidity threshold sampling: Methods and instrumentation

    Science.gov (United States)

    Rand Eads; Jack Lewis

    2001-01-01

    Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

  12. 26 CFR 1.1275-5 - Variable rate debt instruments.

    Science.gov (United States)

    2010-04-01

    ... nonpublicly traded property. A debt instrument (other than a tax-exempt obligation) that would otherwise... variations in the cost of newly borrowed funds in the currency in which the debt instrument is denominated... on the yield of actively traded personal property (within the meaning of section 1092(d)(1)). (ii...

  13. Calibration method based on direct radioactivity measurement for radioactive gas monitoring instruments

    International Nuclear Information System (INIS)

    Yoshida, Makoto; Ohi, Yoshihiro; Chida, Tohru; Wu, Youyang.

    1993-01-01

A calibration method for radioactive gas monitoring instruments was studied. In the method, gaseous radioactivity standards were provided on the basis of direct radioactivity measurement by the diffusion-in long proportional counter (DLPC) method. The radioactivity concentration of the gas mixture through a monitoring instrument was determined by sampling a known volume of the gas mixture into the proportional counter used for the DLPC method. Since oxygen in the gas mixture decreased the counting efficiency in a proportional counter, its influence on calibration was experimentally estimated; it was not serious and could easily be corrected. By the present method, the relation between radioactivity concentration and ionization current was determined for a gas-flow ionization chamber with a 1.5 l effective volume. It showed good agreement with the results of other works. (author)

  14. A Workshop on Methods for Neutron Scattering Instrument Design. Introduction and Summary

    International Nuclear Information System (INIS)

    Hjelm, Rex P.

    1996-09-01

The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop on ''Methods for Neutron Scattering Instrument Design'', held September 23-25 at the E.O. Lawrence Berkeley National Laboratory. These proceedings are a collection of a portion of the invited and contributed presentations.

  15. Evaluation of surface characteristics of rotary nickel-titanium instruments produced by different manufacturing methods.

    Science.gov (United States)

    Inan, U; Gurel, M

    2017-02-01

Instrument fracture is a serious concern in endodontic practice. The aim of this study was to investigate the surface quality of new and used rotary nickel-titanium (NiTi) instruments manufactured by the traditional grinding process and by twisting methods. A total of 16 instruments from two rotary NiTi systems were used in this study: 8 Twisted Files (TF) (SybronEndo, Orange, CA, USA) and 8 Mtwo (VDW, Munich, Germany) instruments. New and used instruments in the 4 experimental groups were evaluated using atomic force microscopy (AFM); new and used instruments were analyzed at 3 points along a 3-mm section at the tip of the instrument. Quantitative measurements of the topographical deviations were recorded. The data were statistically analyzed with paired samples t-test and independent samples t-test. Mean root mean square (RMS) values for new and used TF 25.06 files were 10.70 ± 2.80 nm and 21.58 ± 6.42 nm, respectively, and the difference between them was statistically significant (P < 0.05). Instruments produced by the twisting method (TF 25.06) had better surface quality than instruments produced by the traditional grinding process (Mtwo 25.06 files).

  16. Instrumentation to Measure the Capacitance of Biosensors by Sinusoidal Wave Method

    Directory of Open Access Journals (Sweden)

    Pavan Kumar KATHUROJU

    2009-09-01

Full Text Available Microcontroller-based instrumentation to measure the capacitance of biosensors is developed. It is based on a frequency-domain technique with a sinusoidal wave input. Changes in the capacitance of the biosensor caused by the analyte-specific reaction are calculated from the current flowing through the sample. A dedicated 8-bit microcontroller (AT89C52) and its associated peripherals are employed for the hardware, and application-specific software is developed in 'C' language. The paper describes the methodology and instrumentation details along with a specific application to glucose sensing. The measurements are conducted with a glucose oxidase based capacitance biosensor, and the obtained results are compared with the conventional method of sugar measurement using UV-Visible spectroscopy (phenol-sulphuric acid assay method). Measurement accuracy of the instrument is found to be ±5%. Experiments are conducted on the glucose sensor with different bias voltages. It is found that for bias voltages from 0.5 to 0.7 V, the measurements are adequate for this application.
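For a purely capacitive sample driven by a sine wave, capacitance follows directly from the measured current amplitude. A sketch of that frequency-domain estimate, with illustrative values (the record does not give its exact computation):

```python
# Frequency-domain capacitance estimate implied by the record: for a purely
# capacitive sample driven at frequency f with sine amplitude V,
# |I| = 2*pi*f*C*|V|, so C = |I| / (2*pi*f*|V|). Numbers are illustrative.
import math

f = 1000.0        # excitation frequency, Hz
v_amp = 0.1       # sine amplitude, V
i_amp = 6.3e-6    # measured current amplitude, A

capacitance = i_amp / (2.0 * math.pi * f * v_amp)
print(f"estimated capacitance: {capacitance * 1e9:.1f} nF")
```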

  17. Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.

    Science.gov (United States)

    Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert

    2016-08-01

We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). Many jobs contain individual lifts that vary from lift to lift due to the task requirements, and the NIOSH Lifting Equation is not suitable in its present form to analyze such variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves sampling the lifting tasks performed by a worker over a shift, calculating the Frequency Independent Lift Index (FILI) for each sampled lift, and aggregating the FILI values into six categories; the Composite Lift Index (CLI) equation is then used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically apply the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
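The record names the CLI equation without reproducing it. A sketch of the aggregation step follows, assuming the standard NIOSH CLI recursion over FILI categories; the frequency-multiplier function `fm` is a hypothetical monotone stand-in, not the published NIOSH table.

```python
# VLI aggregation sketch using the NIOSH Composite Lift Index (CLI) recursion
# over FILI categories, ordered by decreasing severity. `fm` is a hypothetical
# frequency-multiplier stand-in for the NIOSH lookup table.
def fm(freq_per_min: float) -> float:
    # hypothetical frequency multiplier: 1.0 at rest, decaying with frequency
    return max(0.05, 1.0 - 0.05 * freq_per_min)

def vli(categories):
    # categories: list of (FILI, lifts-per-minute), one entry per category
    tasks = sorted(categories, key=lambda t: t[0], reverse=True)
    cli = tasks[0][0] / fm(tasks[0][1])      # STLI of the most severe category
    cum = tasks[0][1]
    for fili, freq in tasks[1:]:
        prev = cum
        cum += freq
        # each added category contributes its FILI weighted by the change in
        # the inverse frequency multiplier at the cumulative frequency
        cli += fili * (1.0 / fm(cum) - 1.0 / fm(prev))
    return cli

print(round(vli([(1.2, 2.0), (0.9, 3.0), (0.6, 1.0)]), 2))
```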

  18. A method for the deliberate and deliberative selection of policy instrument mixes for climate change adaptation

    Directory of Open Access Journals (Sweden)

    Heleen L. P. Mees

    2014-06-01

    Full Text Available Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments starting from a generic assessment and ending with a specific assessment of policy instrument mixes for the stimulation of a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use the method.

  19. Nuclear medicine and imaging research (instrumentation and quantitative methods of evaluation)

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.; Chen, C.T.

    1992-07-01

This document is the annual progress report for the project entitled ''Instrumentation and Quantitative Methods of Evaluation.'' Progress is reported in separate sections, individually abstracted and indexed for the database. Subject areas reported include theoretical studies of imaging systems and methods, hardware developments, quantitative methods of evaluation, and knowledge transfer: education in quantitative nuclear medicine imaging

  20. Breastfeeding and the risk of childhood asthma: A two-stage instrumental variable analysis to address endogeneity.

    Science.gov (United States)

    Sharma, Nivita D

    2017-09-01

Several explanations for the inconsistent results on the effects of breastfeeding on childhood asthma have been suggested. The purpose of this study was to investigate one unexplored explanation: the presence of a potential endogenous relationship between breastfeeding and childhood asthma. Endogeneity exists when an explanatory variable is correlated with the error term for reasons such as selection bias, reverse causality, and unmeasured confounders. Unadjusted endogeneity will bias the effect of breastfeeding on childhood asthma. To investigate potential endogeneity, a cross-sectional study of breastfeeding practices and incidence of childhood asthma in 87 pediatric patients in Georgia, USA, was conducted using generalized linear modeling and a two-stage instrumental variable analysis. First, the relationship between breastfeeding and childhood asthma was analyzed without considering endogeneity. Second, tests for the presence of endogeneity were performed and, having detected endogeneity between breastfeeding and childhood asthma, a two-stage instrumental variable analysis was carried out. The first stage of this analysis estimated the duration of breastfeeding and the second stage estimated the risk of childhood asthma. When endogeneity was not taken into account, duration of breastfeeding was found to significantly increase the risk of childhood asthma (relative risk ratio [RR] = 2.020, 95% confidence interval [CI]: [1.143-3.570]). After adjusting for endogeneity, duration of breastfeeding significantly reduced the risk of childhood asthma (RR = 0.003, 95% CI: [0.000-0.240]). The findings suggest that researchers should consider evaluating how the presence of endogeneity could affect the relationship between duration of breastfeeding and the risk of childhood asthma. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.
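A generic two-stage sketch of this kind of endogeneity correction follows, using two-stage residual inclusion on synthetic data; this is an illustration of the idea, not the authors' exact estimator, and statsmodels is an assumed dependency.

```python
# Generic two-stage residual-inclusion sketch of the endogeneity correction
# the record describes, on synthetic data.
# Stage 1: regress the endogenous exposure on the instrument.
# Stage 2: include the stage-1 residual in the binary-outcome model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unmeasured confounder
duration = 1.0 * z + u + rng.normal(size=n)   # endogenous exposure
logit = -0.5 * duration + 1.5 * u             # true protective effect of exposure
asthma = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 1: OLS of exposure on the instrument
stage1 = sm.OLS(duration, sm.add_constant(z)).fit()

# Stage 2: logit of outcome on exposure plus the stage-1 residual
x2 = sm.add_constant(np.column_stack([duration, stage1.resid]))
stage2 = sm.Logit(asthma, x2).fit(disp=False)
print(stage2.params)  # second entry: endogeneity-corrected effect of duration
```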

  1. Extending the frontiers of mass spectrometric instrumentation and methods

    Energy Technology Data Exchange (ETDEWEB)

    Schieffer, Gregg Martin [Iowa State Univ., Ames, IA (United States)

    2010-01-01

The focus of this dissertation is two-fold: developing novel analysis methods using mass spectrometry, and the implementation and characterization of novel ion mobility mass spectrometry instrumentation. The novel instrumentation combines an ion trap for ion/ion reactions with an ion mobility cell; the long-term goal of this instrumentation is to use ion/ion reactions to probe the structure of gas-phase biomolecule ions. The three-ion-source ion trap - ion mobility - qTOF mass spectrometer (IT-IM-TOF MS) instrument is described. The analysis of the degradation products in coal (Chapter 2) and the imaging of plant metabolites (Appendix III) fall under the methods development category. These projects use existing commercial instrumentation (JEOL AccuTOF MS and Thermo Finnigan LCQ IT, respectively) for the mass analysis of the degraded coal products and the plant metabolites. The coal degradation paper discusses the use of the DART ion source for fast and easy sample analysis. The sample preparation consisted of a simple 50-fold dilution of the soluble coal products in water and placing the liquid in front of the heated gas stream. This is the first time the DART ion source has been used for the analysis of coal. Steven Raders, under the guidance of John Verkade, conceived the coal degradation projects; Raders performed the coal degradation reactions, worked up the products, and provided them for analysis. Gregg Schieffer developed the method and wrote the paper demonstrating the use of the DART ion source for fast and easy sample analysis. The plant metabolite imaging project extends the use of colloidal graphite as a sample coating for atmospheric pressure LDI. DC Perdian and Schieffer worked closely together on this project: Perdian focused on building the LDI setup, whereas Schieffer focused on the MSn analysis of the metabolites. Both took the data featured in the paper. Perdian was the primary writer of the paper and used it as a

2. Method for measuring α contamination levels on pipe surfaces with the FJ-2207 measuring instrument

    International Nuclear Information System (INIS)

    Wang Jiangong

    2010-01-01

Measuring the α contamination level on pipe surfaces is a frequently encountered task in dose-detection work. Because the pipe surface is curved while the measuring probe is flat, accurate measurement is difficult. In this paper, a method for measuring α contamination levels on pipe surfaces with the FJ-2207-type surface contamination meter was studied. The use of the FJ-2207 measuring instrument for detecting α contamination levels on pipe surfaces is introduced. The differences in measured α radioactivity between the same source placed on curved and flat surfaces were studied, from which a correction factor for direct surface measurement with this instrument was obtained; correction factors for direct measurement of commonly used pipes of specifications 32-216 are given. The method is convenient and the test results are reliable, serving as a reference for accurate measurement of α contamination levels on pipe surfaces. (authors)

  3. Analytical techniques for instrument design - matrix methods

    International Nuclear Information System (INIS)

    Robinson, R.A.

    1997-01-01

We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ) and 2 dummy variables), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question
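One operation from the toolbox above, integrating a Gaussian resolution function over a subset of variables, reduces to a Schur complement on the precision matrix. A small sketch under that standard identity (the 6x6 matrix here is random, not a real Cooper-Nathans matrix):

```python
# Integrating a Gaussian over a subset of variables: for a precision matrix M
# partitioned into kept (a) and integrated-out (b) blocks, the marginal
# precision is the Schur complement M_aa - M_ab M_bb^{-1} M_ba.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
M = A @ A.T + 6 * np.eye(6)    # symmetric positive-definite 6x6 stand-in matrix

keep = [0, 1, 2, 3]            # e.g. the (ΔE, ΔQ) variables
out = [4, 5]                   # dummy variables integrated away

M_aa = M[np.ix_(keep, keep)]
M_ab = M[np.ix_(keep, out)]
M_bb = M[np.ix_(out, out)]
M_marginal = M_aa - M_ab @ np.linalg.solve(M_bb, M_ab.T)

# Cross-check: marginalizing the covariance is just dropping rows/columns.
cov = np.linalg.inv(M)
assert np.allclose(np.linalg.inv(M_marginal), cov[np.ix_(keep, keep)])
```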

  4. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    International Nuclear Information System (INIS)

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with the popular DVR methods, such as the Sinc-DVR, but also retains its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  5. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems, in combination with the finite element method (FE-DVR), due to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with the popular DVR methods, such as the Sinc-DVR, but also retains its advantages in treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential, and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.
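The grid underlying an LDVR basis is the Gauss-Lobatto grid. A short sketch of its construction on [-1, 1] follows; the optimised phase and mapping of these records are problem-specific and not reproduced here.

```python
# Gauss-Lobatto grid underlying an LDVR basis: the points on [-1, 1] are the
# two endpoints plus the roots of P'_{N-1}, the derivative of the Legendre
# polynomial of degree N-1.
import numpy as np
from numpy.polynomial import legendre

def lobatto_points(n: int) -> np.ndarray:
    c = np.zeros(n)
    c[-1] = 1.0                               # coefficients of P_{n-1}
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], interior, [1.0]))

print(lobatto_points(5))   # 5-point Lobatto grid: [-1, -0.6547, 0, 0.6547, 1]
```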

  6. New methods of magnet-based instrumentation for NOTES.

    Science.gov (United States)

    Magdeburg, Richard; Hauth, Daniel; Kaehler, Georg

    2013-12-01

Laparoscopic surgery has displaced open surgery as the standard of care for many clinical conditions. NOTES has been described as the next surgical frontier, with the objective of incision-free abdominal surgery. The principal challenge of NOTES procedures is the loss of triangulation and instrument rigidity, which is one of the fundamental concepts of laparoscopic surgery. Overcoming these problems necessitates the development of new instrumentation. Material and methods: We aimed to assess the use of a very simple combination of internal and external magnets that might allow the vigorous multiaxial traction/counter-traction required in NOTES procedures. The magnet retraction system consisted of an external magnetic assembly and either small internal magnets attached by endoscopic clips to the designated tissue (magnet-clip approach) or an endoscopic grasping forceps in a magnetic deflector roll (magnet-trocar approach). We compared both methods regarding precision, time and efficacy by performing transgastric partial uterus resections, with better results for the magnet-trocar approach. This proof-of-principle animal study showed that the combination of external and internal magnets generates sufficient coupling forces at clinically relevant abdominal wall thicknesses, making them suitable for use and evaluation in NOTES procedures, and provides the vigorous multiaxial traction/counter-traction required by the lack of additional abdominal trocars.

  7. Effect of corruption on healthcare satisfaction in post-soviet nations: A cross-country instrumental variable analysis of twelve countries.

    Science.gov (United States)

    Habibov, Nazim

    2016-03-01

There is a lack of consensus about the effect of corruption on healthcare satisfaction in transitional countries. Interpreting the burgeoning literature on this topic has proven difficult due to reverse causality and omitted variable bias. In this study, the effect of corruption on healthcare satisfaction is investigated in a set of 12 Post-Socialist countries using instrumental variable regression on the sample of the 2010 Life in Transition survey (N = 8655). The results indicate that experiencing corruption significantly reduces healthcare satisfaction. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  8. Method to deterministically study photonic nanostructures in different experimental instruments

    NARCIS (Netherlands)

    Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the

  9. 30 CFR 75.1719-3 - Methods of measurement; light measuring instruments.

    Science.gov (United States)

    2010-07-01

    ... being measured and a sufficient distance from the surface to allow the light sensing element in the... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Methods of measurement; light measuring... § 75.1719-3 Methods of measurement; light measuring instruments. (a) Compliance with § 75.1719-1(d...

  10. Climate Informed Economic Instruments to Enhance Urban Water Supply Resilience to Hydroclimatological Variability and Change

    Science.gov (United States)

    Brown, C.; Carriquiry, M.; Souza Filho, F. A.

    2006-12-01

    Hydroclimatological variability presents acute challenges to urban water supply providers. The impact is often most severe in developing nations where hydrologic and climate variability can be very high, water demand is unmet and increasing, and the financial resources to mitigate the social effects of that variability are limited. Furthermore, existing urban water systems face a reduced solution space, constrained by competing and conflicting interests, such as irrigation demand, recreation and hydropower production, and new (relative to system design) demands to satisfy environmental flow requirements. These constraints magnify the impacts of hydroclimatic variability and increase the vulnerability of urban areas to climate change. The high economic and social costs of structural responses to hydrologic variability, such as groundwater utilization and the construction or expansion of dams, create a need for innovative alternatives. Advances in hydrologic and climate forecasting, and the increasing sophistication and acceptance of incentive-based mechanisms for achieving economically efficient water allocation offer potential for improving the resilience of existing water systems to the challenge of variable supply. This presentation will explore the performance of a system of climate informed economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on water-sensitive stakeholders. The system is comprised of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Contract and insurance parameters are linked to forecasts and the evolution of seasonal precipitation and streamflow and designed for financial and political viability. A simulation of system performance is presented based on ongoing work in Metro Manila, Philippines. The

  11. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

Full Text Available Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional methods for measuring TCF thickness are the single- and double-wire methods, which have several problems, such as personnel safety, being easily affected by operators, and poor repeatability. To solve these problems, we designed and built an instrumentation system and present a novel method to measure TCF thickness. The instrumentation was composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale invariant feature transform (SIFT), and an image gray gradient detection method. Using the present instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method worked well at steel plant sites, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, or even replace them.
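A sketch of the SIFT step in such a pipeline, using OpenCV; the image files and the matching thresholds are hypothetical stand-ins, and the real instrument obtains physical scale through its monocular range measurement rather than this snippet.

```python
# SIFT keypoint matching sketch for locating a reference pattern (e.g. the
# measurement bar) in a tundish camera frame. File names are hypothetical.
import cv2

ref = cv2.imread("bar_reference.png", cv2.IMREAD_GRAYSCALE)    # hypothetical
scene = cv2.imread("tundish_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Ratio-test matching to keep only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable keypoint matches")
```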

  12. An application of the variable-r method to subpopulation growth rates in a 19th century agricultural population

    Directory of Open Access Journals (Sweden)

    Corey Sparks

    2009-07-01

    Full Text Available This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in their reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.

  13. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

Full Text Available Abstract Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
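The recommended analysis is readily run with SciPy. A sketch on illustrative event-count data (the confidence-interval accessor assumes a recent SciPy):

```python
# Welch t test (unequal variances) with its confidence interval, the analysis
# the record recommends for two independent discrete numerical variables.
# Event-count data below are illustrative.
import numpy as np
from scipy import stats

group_a = np.array([0, 1, 2, 1, 0, 3, 2, 1, 0, 2])
group_b = np.array([1, 2, 3, 2, 4, 3, 2, 3, 1, 2])

res = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch adjustment
ci = res.confidence_interval(confidence_level=0.95)       # needs SciPy >= 1.11
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}, "
      f"95% CI for mean difference: [{ci.low:.2f}, {ci.high:.2f}]")
```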

  14. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
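A minimal sketch of one estimator the record compares, 1:1 nearest-neighbour propensity-score matching, on synthetic data with a known treatment effect; this illustrates the general technique, not any specific study in the record.

```python
# 1:1 nearest-neighbour propensity-score matching sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 3))                                # observed covariates
treated = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
y = 1.0 + 0.5 * treated + X[:, 0] + rng.normal(size=n)     # true effect = 0.5

# Estimate propensity scores, then match each treated unit to the control
# with the nearest score.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))

att = (y[t_idx] - y[c_idx[match.ravel()]]).mean()
print(f"matched estimate of the treatment effect: {att:.2f}")  # near 0.5
```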

  15. Instrumentation and quantitative methods of evaluation. Progress report, January 15-September 14, 1986

    International Nuclear Information System (INIS)

    Beck, R.N.

    1986-09-01

    This document reports progress under grant entitled ''Instrumentation and Quantitative Methods of Evaluation.'' Individual reports are presented on projects entitled the physical aspects of radionuclide imaging, image reconstruction and quantitative evaluation, PET-related instrumentation for improved quantitation, improvements in the FMI cyclotron for increased utilization, and methodology for quantitative evaluation of diagnostic performance

16. A method based on a separation of variables in magnetohydrodynamics (MHD)

    Energy Technology Data Exchange (ETDEWEB)

    Cessenat, M.; Genta, P.

    1996-12-31

We use a method based on a separation of variables for solving a system of first order partial differential equations, in a very simple modelling of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ, and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a non linear partial differential equation on Σ. Thus we generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors).

  17. Fatigue resistance of engine-driven rotary nickel-titanium instruments produced by new manufacturing methods.

    Science.gov (United States)

    Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca

    2008-08-01

The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured by using new processes. This was evaluated by comparing instruments produced using the twisted method (TF; SybronEndo, Orange, CA) and those using the M-wire alloy (GTX; Dentsply Tulsa-Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-20 GT series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.

  18. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1984-09-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  19. Problems with radiological surveillance instrumentation

    International Nuclear Information System (INIS)

    Swinth, K.L.; Tanner, J.E.; Fleming, D.M.

    1985-01-01

    Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements

  20. Instrumental neutron activation analysis as a routine method for rock analysis

    International Nuclear Information System (INIS)

    Rosenberg, R.J.

    1977-06-01

Instrumental neutron activation methods for the analysis of geological samples have been developed. Special emphasis has been laid on the improvement of sensitivity and accuracy in order to maximize the quality of the analyses. Furthermore, the procedures have been automated as far as possible in order to minimize the cost of the analysis. A short review of the basic literature is given, followed by a description of the principles of the method. All aspects concerning the sensitivity are discussed thoroughly in view of the analyst's possibility of influencing them. Experimentally determined detection limits for Na, Al, K, Ca, Sc, Cr, Ti, V, Mn, Fe, Ni, Co, Rb, Zr, Sb, Cs, Ba, La, Ce, Nd, Sm, Eu, Gd, Tb, Dy, Yb, Lu, Hf, Ta, Th and U are given. The errors of the method are discussed, followed by actions taken to avoid them. The most significant error was caused by flux deviation, but this was avoided by building a rotating sample holder for rotating the samples during irradiation. A scheme for the INAA of 32 elements is proposed. The method has been automated as far as possible, and an automatic γ-spectrometer and a computer program for the automatic calculation of the results are described. Furthermore, a completely automated uranium analyzer based on delayed neutron counting is described. The methods are discussed in view of their applicability to rock analysis. It is stated that the sensitivity varies considerably from element to element; instrumental activation analysis is an excellent method for the analysis of some specific elements like lanthanides, thorium and uranium, but less so for many other elements. The accuracy is good, varying from 2% to 10% for most elements. Instrumental activation analysis is a rather expensive method for most elements; there are, however, a few exceptions. The most important of these is uranium. The analysis of uranium by delayed neutron counting is an inexpensive means for the analysis of the large numbers of samples needed for

  1. Instrumental variable analysis

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine; Jager, Kitty J.

    2013-01-01

    The main advantage of the randomized controlled trial (RCT) is the random assignment of treatment that prevents selection by prognosis. Nevertheless, only few RCTs can be performed given their high cost and the difficulties in conducting such studies. Therefore, several analytical methods for

  2. Two methods for studying the X-ray variability

    NARCIS (Netherlands)

    Yan, Shu-Ping; Ji, Li; Méndez, Mariano; Wang, Na; Liu, Siming; Li, Xiang-Dong

    2016-01-01

Aperiodic X-ray variability and quasi-periodic oscillations (QPOs) are important tools for studying the structure of the accretion flow of X-ray binaries. However, the origin of the complex X-ray variability of X-ray binaries remains unsolved. We proposed two methods for studying the X-ray

  3. Method of decontaminating radioactive-contaminated instruments

    Energy Technology Data Exchange (ETDEWEB)

    Urata, M; Fujii, M; Kitaguchi, H

    1982-03-29

Purpose: To enable safe processing of liquid wastes by recovering radioactive metal ions remaining in the electrolytes after the decontamination procedure, thereby decreasing the radioactivity. Method: In a decontamination tank containing electrolytes consisting of diluted hydrochloric acid and diluted sulfuric acid, a radioactively contaminated instrument connected to an anode and a collector electrode made of stainless steel connected to a cathode are provided. Upon applying electrical current, the portion of the mother material to be decontaminated is polished electrolytically into metal ions, which are deposited as metal on the collector electrode. After completion of the decontamination, an ultrasonic wave generator is operated to strip and remove the oxide films. Thereafter, the anode is replaced with a carbon electrode and electrical current is supplied continuously, whereby the remaining metal ions are deposited and recovered as metal on the collector electrode.

  4. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without showing any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...

  5. Confirming theoretical pay constructs of a variable pay scheme

    Directory of Open Access Journals (Sweden)

    Sibangilizwe Ncube

    2013-05-01

    Full Text Available Orientation: Return on the investment in variable pay programmes remains controversial because their cost versus contribution cannot be empirically justified. Research purpose: This study validates the findings of the model developed by De Swardt on the factors related to successful variable pay programmes. Motivation for the study: Many organisations blindly implement variable pay programmes without any means to assess the impact these programmes have on the company’s performance. This study was necessary to validate the findings of an existing instrument that validates the contribution of variable pay schemes. Research design, approach and method: The study was conducted using quantitative research. A total of 300 completed questionnaires from a non-purposive sample of 3000 participants in schemes across all South African industries were returned and analysed. Main findings: Using exploratory and confirmatory factor analysis, it was found that the validation instrument developed by De Swardt is still largely valid in evaluating variable pay schemes. The differences between the study and the model were reported. Practical/managerial implications: The study confirmed the robustness of an existing model that enables practitioners to empirically validate the use of variable pay plans. This model assists in the design and implementation of variable pay programmes that meet critical success factors. Contribution/value-add: The study contributed to the development of a measurement instrument that will assess whether a variable pay plan contributes to an organisation’s success.

  6. Instrumental variable analysis as a complementary analysis in studies of adverse effects : venous thromboembolism and second-generation versus third-generation oral contraceptives

    NARCIS (Netherlands)

    Boef, Anna G C; Souverein, Patrick C|info:eu-repo/dai/nl/243074948; Vandenbroucke, Jan P; van Hylckama Vlieg, Astrid; de Boer, Anthonius|info:eu-repo/dai/nl/075097346; le Cessie, Saskia; Dekkers, Olaf M

    2016-01-01

    PURPOSE: A potentially useful role for instrumental variable (IV) analysis may be as a complementary analysis to assess the presence of confounding when studying adverse drug effects. There has been discussion on whether the observed increased risk of venous thromboembolism (VTE) for

  7. Process instrumentation for nuclear power station

    International Nuclear Information System (INIS)

    Yanai, Katsuya; Shinohara, Katsuhiko

    1978-01-01

Nuclear power stations are large-scale compound systems composed of many process systems. Accordingly, for safe and highly reliable operation of the plants, it is necessary to grasp the conditions of the respective processes exactly and control the operation correctly. For this purpose, the process instrumentation performs the important function of monitoring plant operation. Hitachi Ltd. has long exerted ceaseless efforts to establish the basic technology for process instrumentation in nuclear power stations, to develop and improve hardware of high reliability, and to establish a quality control system. The features of process instrumentation in nuclear power stations include the enormous quantity of measurements, the diversity of measured variables, the remote measurement and monitoring method, and the need to ensure high reliability. The hardware must also withstand earthquakes, loss-of-coolant accidents, radiation, leaks and fires. The Hitachi Unitrol Sigma Series is a measurement system which is suitable for general process instrumentation in nuclear power stations and sufficiently satisfies the basic requirements described above. It has various features suited to nuclear energy systems, such as high reliability through the use of ICs, calculation and transmission methods that consider signal linkage, a loop controller system, and small size. The HIACS-1000 Series is a high-reliability analog controller for water control. (Kako, I.)

  8. Minimally invasive instrumentation without fusion during posterior thoracic corpectomies: a comparison of percutaneously instrumented nonfused segments with open instrumented fused segments.

    Science.gov (United States)

    Lau, Darryl; Chou, Dean

    2017-07-01

OBJECTIVE During the mini-open posterior corpectomy, percutaneous instrumentation without fusion is performed above and below the corpectomy level. In this study, the authors' goal was to compare the perioperative and long-term implant failure rates of patients who underwent nonfused percutaneous instrumentation with those of patients who underwent traditional open instrumented fusion. METHODS Adult patients who underwent posterior thoracic corpectomies with cage reconstruction between 2009 and 2014 were identified. Patients who underwent mini-open corpectomy had percutaneous instrumentation without fusion, and patients who underwent open corpectomy had instrumented fusion above and below the corpectomy site. The authors compared perioperative outcomes and rates of implant failure requiring reoperation between the open (fused) and mini-open (unfused) groups. RESULTS A total of 75 patients were identified, and 53 patients (32 open and 21 mini-open) were available for follow-up. The mean patient age was 52.8 years, and 56.6% of patients were male. There were no significant differences in baseline variables between the 2 groups. The overall perioperative complication rate was 15.1%, and there was no significant difference between the open and mini-open groups (18.8% vs 9.5%; p = 0.359). The mean hospital stay was 10.5 days; the open group required a significantly longer stay than the mini-open group (12.8 vs 7.1 days). Implant failure rates did not differ significantly between the open and mini-open groups at 6 months (3.1% vs 0.0%, p = 0.413), 1 year (10.7% vs 6.2%, p = 0.620), and 2 years (18.2% vs 8.3%, p = 0.438). The overall mean follow-up was 29.2 months. CONCLUSIONS These findings suggest that percutaneous instrumentation without fusion in mini-open transpedicular corpectomies offers implant failure and reoperation rates similar to those of open instrumented fusion as far out as 2 years of follow-up.

  9. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, we can analyze the test task scheduling to prevent deadlocks or resource conflicts. Finally, this paper analyzes the feasibility of the method.
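A minimal sketch of the Petri-net machinery such modeling rests on: places hold tokens (resources and task states), and a transition may fire only when every input place is marked, which is how resource conflicts and deadlocks surface. The scheduling model below (two tasks sharing one instrument) is a hypothetical illustration, not the paper's model.

```python
# Minimal Petri net: places hold tokens, transitions fire only when every
# input place is marked, exposing resource conflicts in task scheduling.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} blocked: possible conflict")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical model: two test tasks competing for one shared instrument.
net = PetriNet({"instrument_free": 1, "task_a_ready": 1, "task_b_ready": 1})
net.add_transition("start_a", ["task_a_ready", "instrument_free"], ["task_a_running"])
net.add_transition("start_b", ["task_b_ready", "instrument_free"], ["task_b_running"])
net.fire("start_a")
print(net.enabled("start_b"))   # False: the shared instrument is held by task A
```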

  10. Method of decontaminating radioactive-contaminated instruments

    International Nuclear Information System (INIS)

    Urata, Megumu; Fujii, Masaaki; Kitaguchi, Hiroshi.

    1982-01-01

    Purpose: To enable safety processing of liquid wastes by recovering radioactive metal ions remaining in the electrolytes after the decontamination procedure thereby decreasing the radioactivity. Method: In a decontamination tank containing electrolytes consisting of diluted hydrochloric acid and diluted sulfuric acid, are provided a radioactive contaminated instrument connected to an anode and a collector electrode made of stainless steel connected to a cathode respectively. Upon applying electrical current, the portion of the mother material to be decontaminated is polished electrolytically into metal ions and they are deposited as metal on the collection electrode. After completion of the decontamination, an ultrasonic wave generator is operated to strip and remove the oxide films. Thereafter, the anode is replaced with the carbon electrode and electrical current is supplied continuously, whereby the remaining metal ions are deposited and recovered as the metal on the collection electrode. (Yoshino, Y.)

  11. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low-rank separated representation of the solution for SSP problems in a systematic enrichment manner; no iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variant, the variable-separation by penalty method, which avoids further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
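In generic notation (not necessarily the authors' exact formulation), the object such a method constructs is a low-rank separated representation of the stochastic solution:

```latex
% Generic low-rank separated representation targeted by a VS method; the rank
% N grows by one deterministic mode u_k(x) and one stochastic coefficient
% zeta_k(xi) per enrichment step.
\[
  u(x,\xi) \;\approx\; u_N(x,\xi) \;=\; \sum_{k=1}^{N} \zeta_k(\xi)\, u_k(x)
\]
```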

  12. Method of case hardening depth testing by using multifunctional ultrasonic testing instrument

    International Nuclear Information System (INIS)

    Salchak, Y A; Sednev, D A; Ardashkin, I B; Kroening, M

    2015-01-01

The paper describes the usability of ultrasonic case-hardening depth control using a standard ultrasonic inspection instrument. An ultrasonic method of measuring the depth of the hardened layer is proposed. Experimental series with the specified multifunctional ultrasonic equipment are performed. The obtained results are compared with the results of a reference method of analysis. (paper)

  13. An Instrumental Variable Probit (IVP Analysis on Depressed Mood in Korea: The Impact of Gender Differences and Other Socio-Economic Factors

    Directory of Open Access Journals (Sweden)

    Lara Gitto

    2015-08-01

Full Text Available Background: Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity; here, depressed mood due to health factors that might in turn be caused by depression) might bias the results, so the present study proposes an empirical approach based on instrumental variables to deal with this problem. Methods: Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751; 43% male and 57% female), aged from 19 to 75 years, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status and socio

  14. Field astrobiology research instruments and methods in moon-mars analogue site.

    NARCIS (Netherlands)

    Foing, B.H.; Stoker, C.; Zavaleta, J.; Ehrenfreund, P.; Sarrazin, P.; Blake, D.; Page, J.; Pletser, V.; Hendrikse, J.; Oliveira Lebre Direito, M.S.; Kotler, M.; Martins, Z.; Orzechowska, G.; Thiel, C.S.; Clarke, J.; Gross, J.; Wendt, L.; Borst, A.; Peters, S.; Wilhelm, M.-B.; Davies, G.R.; EuroGeoMars 2009 Team, ILEWG

    2011-01-01

    We describe the field demonstration of astrobiology instruments and research methods conducted in and from the Mars Desert Research Station (MDRS) in Utah during the EuroGeoMars campaign 2009 coordinated by ILEWG, ESA/ESTEC and NASA Ames, with the contribution of academic partners. We discuss the

  15. Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development

    Science.gov (United States)

    Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei

    2014-01-01

    Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements:
      - compatibility with LC/MS (free of detergents, etc.)
      - high protein integrity (minimal level of protein degradation and non-biological PTMs)
      - compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling
      - lot-to-lot reproducibility
    Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed:
      - intact protein extracts, with primary use for sample preparation method development and optimization
      - pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring

  16. Variable identification in group method of data handling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)

    2011-07-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network - ANN methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to determine the best set of variables to be used to train an ANN, resulting in the best estimate of the monitored variables. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System are not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)
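
    As a concrete illustration of the building block the abstract refers to, the sketch below fits the two-input Kolmogorov-Gabor quadratic node for every pair of candidate inputs and ranks the pairs by validation error; the surviving pairs point to the most informative input variables, which is the idea behind tracing variable paths. This is a simplified, hypothetical rendition in Python, not the IPEN system.

      # One GMDH layer, sketched: fit y = a0 + a1*x1 + a2*x2 + a3*x1*x2
      # + a4*x1^2 + a5*x2^2 for each input pair; rank pairs by validation MSE.
      import itertools
      import numpy as np

      def node_features(x1, x2):
          return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

      def rank_pairs(X_tr, y_tr, X_va, y_va):
          scores = []
          for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
              coef, *_ = np.linalg.lstsq(node_features(X_tr[:, i], X_tr[:, j]),
                                         y_tr, rcond=None)
              pred = node_features(X_va[:, i], X_va[:, j]) @ coef
              scores.append(((i, j), float(np.mean((y_va - pred) ** 2))))
          return sorted(scores, key=lambda s: s[1])   # best variable pairs first

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 4))
      y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=120)
      print(rank_pairs(X[:80], y[:80], X[80:], y[80:])[0])  # pair (0, 1) should win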

  17. Variable identification in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2011-01-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network - ANN methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to determine the best set of variables to be used to train an ANN, resulting in the best estimate of the monitored variables. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the actual input variables used in the Monitoring and Diagnosis System are not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  18. Instrumental variable estimation of the causal effect of plasma 25-hydroxy-vitamin D on colorectal cancer risk: a Mendelian randomization analysis.

    Directory of Open Access Journals (Sweden)

    Evropi Theodoratou

    Vitamin D deficiency has been associated with several common diseases, including cancer, and is being investigated as a possible risk factor for these conditions. We reported the striking prevalence of vitamin D deficiency in Scotland. Previous epidemiological studies have reported an association between low dietary vitamin D and colorectal cancer (CRC). Using a case-control study design, we tested the association between plasma 25-hydroxy-vitamin D (25-OHD) and CRC (2,001 cases, 2,237 controls). To determine whether plasma 25-OHD levels are causally linked to CRC risk, we applied the control function instrumental variable (IV) method of the Mendelian randomization (MR) approach using four single nucleotide polymorphisms (rs2282679, rs12785878, rs10741657, rs6013897) previously shown to be associated with plasma 25-OHD. Low plasma 25-OHD levels were associated with CRC risk in the crude model (odds ratio (OR): 0.76, 95% confidence interval (CI): 0.71, 0.81, p: 1.4×10^(-14)) and after adjusting for age, sex and other confounding factors. Using an allele score that combined all four SNPs as the IV, the estimated causal effect was OR 1.16 (95% CI 0.60, 2.23), whilst it was 0.94 (95% CI 0.46, 1.91) and 0.93 (0.53, 1.63) when using an upstream (rs12785878, rs10741657) and a downstream (rs2282679, rs6013897) allele score, respectively. 25-OHD levels were inversely associated with CRC risk, in agreement with recent meta-analyses. The fact that this finding was not replicated when the MR approach was employed might be due to weak instruments, giving low power to demonstrate an effect (<0.35). The prevalence and degree of vitamin D deficiency amongst individuals living in northerly latitudes is of considerable importance because of its relationship to disease. To elucidate the effect of vitamin D on CRC risk, additional large studies of vitamin D and CRC risk are required and/or the application of alternative methods that are less sensitive to weak instruments.
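
    The allele-score IV idea generalizes beyond this study; as a minimal illustration, the sketch below runs a hand-rolled two-stage least squares with a weighted allele score as the instrument on simulated data. It is a linear sketch with made-up weights and effect sizes, not the control-function probit analysis of the paper.

      # Hand-rolled 2SLS with an allele score instrument (simulated data).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 4000
      g = rng.binomial(2, 0.3, size=(n, 4))          # four SNPs coded 0/1/2
      score = g @ np.array([0.3, 0.25, 0.2, 0.15])   # weighted allele score
      u = rng.normal(size=n)                         # confounder
      vitd = 0.5 * score + u + rng.normal(size=n)    # exposure (25-OHD level)
      y = -0.3 * vitd + u + rng.normal(size=n)       # continuous stand-in outcome

      # Stage 1: predicted exposure from the instrument.
      Z = np.column_stack([np.ones(n), score])
      vitd_hat = Z @ np.linalg.lstsq(Z, vitd, rcond=None)[0]

      # Stage 2: regress the outcome on the predicted exposure.
      X = np.column_stack([np.ones(n), vitd_hat])
      print(np.linalg.lstsq(X, y, rcond=None)[0][1])  # IV estimate, close to -0.3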

  19. A method for retrieving endodontic or atypical nonendodontic separated instruments from the root canal: a report of two cases.

    Science.gov (United States)

    Monteiro, Jardel Camilo do Carmo; Kuga, Milton Carlos; Dantas, Andrea Abi Rached; Jordão-Basso, Keren Cristina Fagundes; Keine, Katia Cristina; Ruchaya, Prashant Jay; Faria, Gisele; Leonardo, Renato de Toledo

    2014-11-01

    This clinical report presents a new method for retrieving separated instruments from the root canal with minimally invasive procedures. The presence of a separated instrument in the root canal may compromise the prognosis of endodontic treatment. There are several recommended methods to retrieve separated instruments, but some are difficult to apply in clinical practice. This study describes two cases of separated instrument removal from the root canal using a prepared stainless-steel needle associated with a K-file. Case 1 presented a gutta-percha condenser fractured within the mandibular second premolar; it separated during incorrect placement of calcium hydroxide intracanal medication. Case 2 presented a sewing needle, which the patient had used to remove food debris from the root canal, fractured within the upper central incisor. After cervical preparation, the fractured instruments were fitted inside a prepared needle and then an endodontic instrument (#25 K-file) was adapted with a clockwise turning motion between the needle inner wall and the fragment. An endodontic or atypical non-endodontic separated instrument may thus be easily pulled out of the root canal using a single, low-cost device. Most methods for retrieving separated instruments from the root canal are difficult and destructive procedures; the present report describes a simple method to solve this problem.

  20. Intrajudge and Interjudge Reliability of the Stuttering Severity Instrument-Fourth Edition.

    Science.gov (United States)

    Davidow, Jason H; Scott, Kathleen A

    2017-11-08

    The Stuttering Severity Instrument (SSI) is a tool used to measure the severity of stuttering. Previous versions of the instrument have known limitations (e.g., Lewis, 1995). The present study examined the intra- and interjudge reliability of the newest version, the Stuttering Severity Instrument-Fourth Edition (SSI-4) (Riley, 2009). Twelve judges who were trained on the SSI-4 protocol participated. Judges collected SSI-4 data while viewing 4 videos of adults who stutter at Time 1 and, 4 weeks later, at Time 2. Data were analyzed for intra- and interjudge reliability of the SSI-4 subscores (for Frequency, Duration, and Physical Concomitants), total score, and final severity rating. Intra- and interjudge reliability across the subscores and total score concurred with the manual's reported reliability when reliability was calculated using the methods described in the manual. New calculations of judge agreement produced values that differed from those in the manual for the 3 subscores, total score, and final severity rating, and provided data absent from the manual. Clinicians and researchers who use the SSI-4 should carefully consider the limitations of the instrument. Investigation into the multitasking demands of the instrument may provide information on whether separating the collection of data for specific variables will improve intra- and interjudge reliability of those variables.
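
    Judge agreement of this kind is commonly summarised with an intraclass correlation coefficient. The sketch below implements the two-way random-effects, absolute-agreement, single-rater ICC(2,1) from scratch; the demo matrix is made up, and the study's own agreement statistics may have been computed differently.

      # Minimal ICC(2,1) for an n-targets x k-judges score matrix
      # (two-way random effects, absolute agreement, single rater).
      import numpy as np

      def icc_2_1(scores):
          n, k = scores.shape
          grand = scores.mean()
          row_means = scores.mean(axis=1)            # per target
          col_means = scores.mean(axis=0)            # per judge
          msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
          msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
          sse = np.sum((scores - row_means[:, None]
                        - col_means[None, :] + grand) ** 2)
          mse = sse / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Illustrative only: total scores of 4 speakers rated by 3 judges.
      demo = np.array([[28, 30, 29], [14, 15, 13], [35, 33, 36], [22, 20, 21]])
      print(round(icc_2_1(demo), 3))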

  1. Unexpected but Most Welcome: Mixed Methods for the Validation and Revision of the Participatory Evaluation Measurement Instrument

    Science.gov (United States)

    Daigneault, Pierre-Marc; Jacob, Steve

    2014-01-01

    Although combining methods is nothing new, more contributions about why and how to mix methods for validation purposes are needed. This article presents a case of validating the inferences drawn from the Participatory Evaluation Measurement Instrument, an instrument that purports to measure stakeholder participation in evaluation. Although the…

  2. Design and operation of dust measuring instrumentation based on the beta-radiation method

    International Nuclear Information System (INIS)

    Lilienfeld, P.

    1975-01-01

    The theory, instrument design aspects and applications of beta-radiation attenuation for the measurement of the mass concentration of airborne particulates are reviewed. Applicable methods of particle collection, beta sensing configurations, source (63Ni, 14C, 147Pm, 85Kr) and detector design criteria, electronic signal processing, digital control and instrument programming techniques are treated. Advantages, limitations and error sources of beta-attenuation instrumentation are analyzed. Applications to industrial dust measurements, source testing, ambient monitoring, and particle size analysis are the major areas of practical utilization of this technique, and its inherent capability for automated and unattended operation provides compatibility with process control synchronization and alarm, telemetry, and incorporation into pollution monitoring network sensing stations. (orig.)
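
    The measurement principle behind such instruments is a simple exponential attenuation law, I = I0·exp(-μm); a minimal sketch of the resulting mass-loading and concentration calculation follows, with an illustrative (not calibrated) mass attenuation coefficient.

      # Beta-attenuation dust measurement: I = I0 * exp(-mu * m), so the
      # collected areal mass is m = ln(I0 / I) / mu.  mu is illustrative.
      import math

      def areal_mass_mg_cm2(i0_counts, i_counts, mu_cm2_mg=0.25):
          return math.log(i0_counts / i_counts) / mu_cm2_mg

      def concentration_ug_m3(mass_mg_cm2, spot_area_cm2, volume_m3):
          # mg/cm2 * cm2 = mg; * 1000 = ug; / sampled air volume in m3
          return mass_mg_cm2 * spot_area_cm2 * 1000.0 / volume_m3

      m = areal_mass_mg_cm2(i0_counts=120000, i_counts=114000)
      print(concentration_ug_m3(m, spot_area_cm2=1.0, volume_m3=1.0))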

  3. A new method for the radiation representation of musical instruments in auralizations

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Otondo, Felipe

    2005-01-01

    A new method for the representation of sound sources that vary their directivity in time in auralizations is introduced. A recording method with multi-channel anechoic recordings is proposed in connection with the use of a multiple virtual source reproduction system in auralizations. Listening ex...... to be significant. Further applications of the method are considered for ensembles within room auralizations as well as in the field of studio recording techniques for large instruments....

  4. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  5. Evaluation of two disinfection/sterilization methods on silicon rubber-based composite finishing instruments.

    Science.gov (United States)

    Lacerda, Vánia A; Pereira, Leandro O; Hirata JUNIOR, Raphael; Perez, Cesar R

    2015-12-01

    To evaluate the effectiveness of disinfection/sterilization methods and their effects on the polishing capacity, micromorphology, and composition of two different composite finishing and polishing instruments. Two brands of finishing and polishing instruments (Jiffy and Optimize) were analyzed. For the antimicrobial test, 60 points (30 of each brand) were used for polishing composite restorations and submitted to three different disinfection/sterilization treatments: none (control), autoclaving, and immersion in peracetic acid for 60 minutes. In vitro tests were performed to evaluate the polishing performance on resin composite disks (Amelogen) using a 3D scanner (Talyscan) and to evaluate the effects on the points' surface composition (XRF) and micromorphology (SEM) after completing a polishing and sterilizing routine five times. Both sterilization/disinfection methods were effective against cultivable oral organisms, and no deleterious modification of the point surfaces was observed.

  6. Assessing learning outcomes in middle-division classical mechanics: The Colorado Classical Mechanics and Math Methods Instrument

    Science.gov (United States)

    Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.

    2017-06-01

    Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.

  7. New complex variable meshless method for advection—diffusion problems

    International Nuclear Information System (INIS)

    Wang Jian-Fei; Cheng Yu-Min

    2013-01-01

    In this paper, an improved complex variable meshless method (ICVMM) for two-dimensional advection—diffusion problems is developed based on improved complex variable moving least-square (ICVMLS) approximation. The equivalent functional of two-dimensional advection—diffusion problems is formed, the variational method is used to obtain the equation system, and the penalty method is employed to impose the essential boundary conditions. The difference method for two-point boundary value problems is used to obtain the discrete equations. Then the corresponding formulas of the ICVMM for advection—diffusion problems are presented. Two numerical examples with different node distributions are used to validate and investigate the accuracy and efficiency of the new method. It is shown that the ICVMM is very effective for advection—diffusion problems, and has good convergence, accuracy, and computational efficiency.

  8. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has optimal recognition precision and working speed; the average reading accuracy reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  9. Methods and instrumentation for quantitative microchip capillary electrophoresis

    NARCIS (Netherlands)

    Revermann, T.

    2007-01-01

    The development of novel instrumentation and analytical methodology for quantitative microchip capillary electrophoresis (MCE) is described in this thesis. Demanding only small quantities of reagents and samples, microfluidic instrumentation is highly advantageous. Fast separations at high voltages

  10. Assessing learning outcomes in middle-division classical mechanics: The Colorado Classical Mechanics and Math Methods Instrument

    Directory of Open Access Journals (Sweden)

    Marcos D. Caballero

    2017-04-01

    Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.

  11. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Background: Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods: Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results: Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions: The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.
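
    To make concrete what the surveyed "stepwise" category typically involves, the sketch below implements a minimal forward selection driven by AIC on simulated data; the surveyed papers mostly used significance-based routines in standard packages, and the review's point is precisely that such data-driven selection can be problematic.

      # Minimal forward stepwise selection by AIC (the kind of data-driven
      # routine the survey found to be common, and problematic).
      import numpy as np
      import statsmodels.api as sm

      def forward_stepwise(X, y, names):
          chosen, remaining = [], list(range(X.shape[1]))
          best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic  # intercept only
          while remaining:
              aics = [(sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit().aic, j)
                      for j in remaining]
              aic, j = min(aics)
              if aic >= best_aic:        # no candidate improves the fit
                  break
              best_aic = aic
              chosen.append(j)
              remaining.remove(j)
          return [names[j] for j in chosen]

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 5))
      y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(size=200)
      print(forward_stepwise(X, y, ["x1", "x2", "x3", "x4", "x5"]))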

  12. Reconstructing pre-instrumental streamflow in Eastern Australia using a water balance approach

    Science.gov (United States)

    Tozer, C. R.; Kiem, A. S.; Vance, T. R.; Roberts, J. L.; Curran, M. A. J.; Moy, A. D.

    2018-03-01

    Streamflow reconstructions based on paleoclimate proxies provide much longer records than the short instrumental-period records on which water resource management plans are currently based. In Australia there is a lack of in-situ high-resolution paleoclimate proxy records, but remote proxies with teleconnections to Australian climate have utility in producing streamflow reconstructions. Here we investigate, via a case study for a catchment in eastern Australia, the novel use of an Antarctic ice-core based rainfall reconstruction within a Budyko framework to reconstruct ∼1000 years of annual streamflow. The resulting streamflow reconstruction captures interannual to decadal variability in the instrumental streamflow, validating both the use of the ice-core rainfall proxy record and the Budyko-framework method. In the pre-instrumental era the streamflow reconstruction shows longer wet and dry epochs, as well as periods of streamflow variability higher than observed in the instrumental era. Importantly, for both the instrumental record and pre-instrumental reconstructions, the wet (dry) epochs in the rainfall record are shorter (longer) in the streamflow record, and this non-linearity must be considered when inferring hydroclimatic risk or historical water availability directly from rainfall proxy records alone. These insights provide a better understanding of present infrastructure vulnerability in the context of past climate variability for eastern Australia. The streamflow reconstruction presented here also provides a better understanding of the range of hydroclimatic variability possible, and therefore represents a more realistic baseline on which to quantify the potential impacts of anthropogenic climate change on water security.
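
    The Budyko framework referred to here maps an aridity index (PET/P) to the fraction of rainfall lost to evaporation, leaving streamflow as the residual. The sketch below uses the classical Budyko (1974) curve as an illustrative choice; the paper's exact parameterisation may differ.

      # Budyko-type annual water balance sketch: Q = P - E, with E/P given
      # by the classical Budyko (1974) curve (an illustrative choice).
      import numpy as np

      def budyko_streamflow(p_mm, pet_mm):
          p, pet = np.asarray(p_mm, float), np.asarray(pet_mm, float)
          phi = pet / p                               # aridity index
          evap_ratio = np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))
          return p * (1.0 - evap_ratio)               # runoff/streamflow depth

      # E.g., feed a ~1000-year reconstructed annual rainfall series plus a
      # PET estimate to obtain an annual streamflow reconstruction.
      print(budyko_streamflow([800.0, 600.0], [1200.0, 1400.0]))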

  13. Development of Flexible Software Process Lines with Variability Operations

    DEFF Research Database (Denmark)

    Dohrmann, Patrick; Schramm, Joachim; Kuhrmann, Marco

    2016-01-01

    the development of flexible software process lines. Method: We conducted a longitudinal study in which we studied 5 variants of the V-Modell XT process line for 2 years. Results: Our results show the variability operation instrument to be feasible in practice. We analyzed 616 operation exemplars addressing various

  14. Using method triangulation to validate a new instrument (CPWQ-com) assessing cancer patients' satisfaction with communication

    DEFF Research Database (Denmark)

    Ross, Lone; Lundstrøm, Louise Hyldborg; Petersen, Morten Aagaard

    2012-01-01

    Patients' perceptions of care, including the communication with health care staff, are recognized as an important aspect of the quality of cancer care. Using mixed methods, we developed and validated a short instrument assessing this communication.

  15. New highly sensitive method of simultaneous instrumental neutron activation determination of 12 microelements in vine

    International Nuclear Information System (INIS)

    Shoniya, N.I.

    1977-01-01

    The main principles and methods of simultaneous multi-element instrumental neutron activation determination of microelements in vine seeds are presented. The method permits quantitative evaluation for every single seed. It is shown that instrumental neutron activation analysis with a high-resolution semiconductor spectrometer and a minicomputer permits serial determination of 12 microelements in individual vine seeds of different varieties. This method will make it possible to determine deficient or excess content of biologically important microelements in soils, plants, fruit and genetic material (seeds), and thus to establish optimum plant growing conditions through the application of microelement fertilizers as supplementary nutrients.

  16. A Comparison of Methods to Test Mediation and Other Intervening Variable Effects

    Science.gov (United States)

    MacKinnon, David P.; Lockwood, Chondra M.; Hoffman, Jeanne M.; West, Stephen G.; Sheets, Virgil

    2010-01-01

    A Monte Carlo study compared 14 methods to test the statistical significance of the intervening variable effect. An intervening variable (mediator) transmits the effect of an independent variable to a dependent variable. The commonly used R. M. Baron and D. A. Kenny (1986) approach has low statistical power. Two methods based on the distribution of the product and 2 difference-in-coefficients methods have the most accurate Type I error rates and greatest statistical power except in 1 important case in which Type I error rates are too high. The best balance of Type I error and statistical power across all cases is the test of the joint significance of the two effects comprising the intervening variable effect. PMID:11928892
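
    Two of the compared tests are easy to state concretely: the Sobel (product-of-coefficients) test and the joint-significance test that the study found to offer the best overall balance. The sketch below runs both on simulated data; variable names and effect sizes are made up.

      # Sobel test and joint-significance test for a mediated effect
      # (simulated data; a sketch of the tests, not the Monte Carlo study).
      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      rng = np.random.default_rng(3)
      n = 300
      x = rng.normal(size=n)
      m = 0.4 * x + rng.normal(size=n)          # mediator
      y = 0.3 * m + rng.normal(size=n)          # outcome

      fit_a = sm.OLS(m, sm.add_constant(x)).fit()                        # a path
      fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # b path
      a, se_a = fit_a.params[1], fit_a.bse[1]
      b, se_b = fit_b.params[2], fit_b.bse[2]

      z_sobel = (a * b) / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
      p_sobel = 2 * (1 - stats.norm.cdf(abs(z_sobel)))
      joint = (fit_a.pvalues[1] < 0.05) and (fit_b.pvalues[2] < 0.05)
      print(f"Sobel z={z_sobel:.2f} (p={p_sobel:.4f}); joint significance: {joint}")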

  17. Optimization of instrumental neutron activation analysis method by means of 2k experimental design technique aiming the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
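
    The arithmetic of a 2^k design is compact enough to sketch: with factors coded to ±1, each main or interaction effect is the difference between the mean response at the high and low settings. The example below uses the three varied factors named in the abstract but entirely made-up response values.

      # 2^3 factorial sketch: main and interaction effects for three coded
      # factors (decay time, counting time, detector distance).  The
      # response values are placeholders, not measured data.
      import itertools
      import numpy as np

      levels = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 coded runs
      y = np.array([98.2, 99.1, 97.8, 99.5, 98.0, 99.3, 97.9, 99.6]) # responses

      names = ["decay", "count", "dist"]
      for k in range(3):
          effect = np.mean(y[levels[:, k] == 1]) - np.mean(y[levels[:, k] == -1])
          print(f"main effect {names[k]}: {effect:+.3f}")
      for i, j in itertools.combinations(range(3), 2):
          sign = levels[:, i] * levels[:, j]
          effect = np.mean(y[sign == 1]) - np.mean(y[sign == -1])
          print(f"interaction {names[i]}x{names[j]}: {effect:+.3f}")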

  18. A Method of Separation Assurance for Instrument Flight Procedures at Non-Radar Airports

    Science.gov (United States)

    Conway, Sheila R.; Consiglio, Maria

    2002-01-01

    A method to provide automated air traffic separation assurance services during approach to or departure from a non-radar, non-towered airport environment is described. The method is constrained by provision of these services without radical changes or ambitious investments in current ground-based technologies. The proposed procedures are designed to grant access to a large number of airfields that currently have no or very limited access under Instrument Flight Rules (IFR), thus increasing mobility with minimal infrastructure investment. This paper primarily addresses a low-cost option for airport and instrument approach infrastructure, but is designed to be an architecture from which a more efficient, albeit more complex, system may be developed. A functional description of the capabilities in the current NAS infrastructure is provided. Automated terminal operations and procedures are introduced. Rules of engagement and the operations are defined. Results of preliminary simulation testing are presented. Finally, application of the method to more terminal-like operations, and major research areas, including necessary piloted studies, are discussed.

  19. Partial differential equations with variable exponents variational methods and qualitative analysis

    CERN Document Server

    Radulescu, Vicentiu D

    2015-01-01

    Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive methods.

  20. Extensions of von Neumann's method for generating random variables

    International Nuclear Information System (INIS)

    Monahan, J.F.

    1979-01-01

    Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions.
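
    Von Neumann's exponential generator itself is short enough to state in full. The sketch below follows the classical comparison-run formulation: draw uniforms while they keep descending, accept the first uniform when the run length is odd, and add one to the integer part for each rejected round.

      # Von Neumann's comparison method for Exp(1) variates.
      import random

      def vn_exponential(rng=random):
          integer_part = 0
          while True:
              u1 = rng.random()
              prev, run = u1, 1
              while True:
                  nxt = rng.random()
                  if nxt >= prev:          # the descending run ends here
                      break
                  prev, run = nxt, run + 1
              if run % 2 == 1:             # odd run length: accept u1
                  return integer_part + u1
              integer_part += 1            # even: reject, carry one unit

      sample = [vn_exponential() for _ in range(100000)]
      print(sum(sample) / len(sample))     # should be close to 1.0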

  1. Instrumental analysis, second edition

    International Nuclear Information System (INIS)

    Christian, G.D.; O'Reilly, J.E.

    1988-01-01

    The second edition of Instrumental Analysis is a survey of the major instrument-based methods of chemical analysis. It appears to be aimed at undergraduates but would be equally useful in a graduate course. The volume explores all of the classical quantitative methods and contains sections on techniques that usually are not included in a semester course in instrumentation (such as electron spectroscopy and the kinetic methods). Adequate coverage of all of the methods contained in this book would require several semesters of focused study. The 25 chapters were written by different authors, yet the style throughout the book is more uniform than in the earlier edition. With the exception of a two-chapter course in analog and digital circuits, the book purports to de-emphasize instrumentation, focusing more on the theory behind the methods and the application of the methods to analytical problems. However, a detailed analysis of the instruments used in each method is by no means absent. The book has the flavor of a user's guide to analysis.

  2. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    Science.gov (United States)

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  3. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  4. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  5. A new method for the assessment of the surface topography of NiTi rotary instruments.

    Science.gov (United States)

    Ferreira, F; Barbosa, I; Scelza, P; Russano, D; Neff, J; Montagnana, M; Zaccaro Scelza, M

    2017-09-01

    To describe a new method for the assessment of nanoscale alterations in the surface topography of nickel-titanium endodontic instruments using a high-resolution optical method, and to verify the accuracy of the technique. Noncontact three-dimensional optical profilometry was used to evaluate defects on a size 25, .08 taper reciprocating instrument (WaveOne®), which was subjected to a cyclic fatigue test in a simulated root canal in a clear resin block. For the investigation, an original procedure was established for the analysis of similar areas located 3 mm from the tip of the instrument before and after canal preparation, to enable the repeatability and reproducibility of the measurements with precision. All observations and analyses were made in areas measuring 210 × 210 μm provided by the software of the equipment. The three-dimensional high-resolution image analysis showed clear alterations in the surface topography of the examined cutting blade and flute of the instrument, before and after use, with the presence of surface irregularities such as deformations, debris, grooves, cracks, steps and microcavities. Optical profilometry provided accurate qualitative nanoscale evaluation of similar surfaces before and after the fatigue test. The stability and repeatability of the technique enable a more comprehensive understanding of the effects of wear on the surface of endodontic instruments. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  6. An ergonomics based design research method for the arrangement of helicopter flight instrument panels.

    Science.gov (United States)

    Alppay, Cem; Bayazit, Nigan

    2015-11-01

    In this paper, we study the arrangement of displays in flight instrument panels of multi-purpose civil helicopters following a user-centered design method based on ergonomics principles. Our methodology can also be described as a user-interface arrangement methodology based on user opinions and preferences. This study can be outlined as gathering user-centered data using two different research methods and then analyzing and integrating the collected data to come up with an optimal instrument panel design. An interview with helicopter pilots formed the first step of our research. In that interview, pilots were asked to provide a quantitative evaluation of basic interface arrangement principles. In the second phase of the research, a paper prototyping study was conducted with same pilots. The final phase of the study entailed synthesizing the findings from interviews and observational studies to formulate an optimal flight instrument arrangement methodology. The primary results that we present in our paper are the methodology that we developed and three new interface arrangement concepts, namely relationship of inseparability, integrated value and locational value. An optimum instrument panel arrangement is also proposed by the researchers. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. Standard test method for verifying the alignment of X-Ray diffraction instrumentation for residual stress measurement

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the preparation and use of a flat stress-free test specimen for the purpose of checking the systematic error caused by instrument misalignment or sample positioning in X-ray diffraction residual stress measurement, or both. 1.2 This test method is applicable to apparatus intended for X-ray diffraction macroscopic residual stress measurement in polycrystalline samples employing measurement of a diffraction peak position in the high-back reflection region, and in which the θ, 2θ, and ψ rotation axes can be made to coincide (see Fig. 1). 1.3 This test method describes the use of iron powder which has been investigated in round-robin studies for the purpose of verifying the alignment of instrumentation intended for stress measurement in ferritic or martensitic steels. To verify instrument alignment prior to stress measurement in other metallic alloys and ceramics, powder having the same or lower diffraction angle as the material to be measured should be prepared in similar fashion...

  8. Precision of GNSS instruments by static method comparing in real time

    Directory of Open Access Journals (Sweden)

    Slavomír Labant

    2009-09-01

    Full Text Available Tablet paper describes comparison of measuring accuracy two apparatus from the firm Leica. One of them recieve signals onlyfrom GPS satelites and another instrument is working with GPS and also with GLONASS satelites. Measuring is carry out by RTK staticmethod with 2 minutes observations. Measurement processing is separated to X, Y (position and h (heigh. Adjustment of directobservations is used as a adjusting method.

  9. Instruments and methods of scintillation spectra processing for radiation control tasks

    International Nuclear Information System (INIS)

    Antropov, S.Yu.; Ermilov, A.P.; Ermilov, S.A.; Komarov, N.A.; Krokhin, I.I.

    2005-01-01

    For a gamma-spectrometer, the response function can be calculated on the basis of sensitivity data, energy resolution and the shape of the Compton part of the spectrum. On a scintillation gamma-spectrometer with a 63x63 mm NaI(Tl) crystal, the method allows mixtures of 5-10 components to be resolved, and on the beta-spectrometer mixtures of 2-3 components. The approach is realized in the 'Progress' program-instrumental complexes.

  10. Hybrid Instruments and the Indirect Credit Method - Does it work?

    OpenAIRE

    Wiedermann-Ondrej, Nadine

    2007-01-01

    This paper analyses the possibility of double non-taxation of hybrid instruments in cross border transactions where the country of the investor has implemented the indirect credit method for mitigation or elimination of double taxation. From an isolated perspective a double non-taxation cannot be obtained because typically no taxes are paid in the foreign country due to the classification as debt and therefore even in the case of a classification as a dividend in the country of the investor n...

  11. Development of a localized probabilistic sensitivity method to determine random variable regional importance

    International Nuclear Information System (INIS)

    Millwater, Harry; Singh, Gulshan; Cortina, Miguel

    2012-01-01

    There are many methods to identify the important variable out of a set of random variables, i.e., “inter-variable” importance; however, to date there are no comparable methods to identify the “region” of importance within a random variable, i.e., “intra-variable” importance. Knowledge of the critical region of an input random variable (tail, near-tail, and central region) can provide valuable information towards characterizing, understanding, and improving a model through additional modeling or testing. As a result, an intra-variable probabilistic sensitivity method was developed and demonstrated for independent random variables that computes the partial derivative of a probabilistic response with respect to a localized perturbation in the CDF values of each random variable. These sensitivities are then normalized in absolute value with respect to the largest sensitivity within a distribution to indicate the region of importance. The methodology is implemented using the Score Function kernel-based method such that existing samples can be used to compute sensitivities for negligible cost. Numerical examples demonstrate the accuracy of the method through comparisons with finite difference and numerical integration quadrature estimates. - Highlights: ► Probabilistic sensitivity methodology. ► Determines the “region” of importance within random variables such as left tail, near tail, center, right tail, etc. ► Uses the Score Function approach to reuse the samples, hence, negligible cost. ► No restrictions on the random variable types or limit states.
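
    The sample-reuse idea behind the Score Function method can be illustrated for a normal input: the derivative of a probabilistic response with respect to the mean is the expectation of the response times the score, estimated from the same Monte Carlo samples. The sketch below is a generic illustration of that identity, not the paper's localized CDF-perturbation estimator.

      # Score Function sketch: d/d(mu) E[g(X)] = E[g(X) * (X - mu) / sigma^2]
      # for X ~ N(mu, sigma^2), estimated by reusing one set of samples.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      mu, sigma = 1.0, 0.5
      x = rng.normal(mu, sigma, size=200000)

      g = (x > 1.5).astype(float)          # response: a failure indicator
      score = (x - mu) / sigma**2          # d log f(x) / d mu for the normal
      sens = np.mean(g * score)            # sensitivity of P(X > 1.5) wrt mu

      # Analytic check: d/d(mu) P(X > 1.5) equals the density at 1.5.
      print(sens, norm.pdf(1.5, mu, sigma))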

  12. Optimization of instrumental neutron activation analysis method by means of 2{sup k} experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)

  13. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    Science.gov (United States)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 were analyzed jointly using both methods. For all periods, the temperature-related variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the meteorological variables, using the meteorological variables as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the meteorological variables. For 1999, 2001 and 2002, one of the meteorological variables was weakly but predominantly influenced by the ozone concentrations. For the year 2000, however, the model did not indicate a predominant influence of ozone concentrations on the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.

  14. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
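
    The transport loop described above has the familiar Monte Carlo shape: sample, project, tally. As a generic illustration (a toy slab-transmission problem, not an MCLIB simulation), the sketch below samples exponential path lengths and tallies the neutrons that cross a slab, which can be checked against the analytic answer.

      # Toy Monte Carlo: fraction of neutrons crossing a slab of given
      # thickness without interacting, given a mean free path.
      import math
      import random

      def transmission(thickness_cm, mean_free_path_cm, n_neutrons=100000):
          hits = 0
          for _ in range(n_neutrons):
              # Sample an exponential free-path length.
              path = -mean_free_path_cm * math.log(1.0 - random.random())
              if path > thickness_cm:       # crosses without interacting
                  hits += 1
          return hits / n_neutrons          # the tallied estimate

      est = transmission(2.0, 1.0)
      print(est, math.exp(-2.0))            # MC tally vs. analytic e^(-d/lambda)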

  15. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.

  16. Industrial instrumentation principles and design

    CERN Document Server

    Padmanabhan, Tattamangalam R

    2000-01-01

    Pneumatic, hydraulic and allied instrumentation schemes have given way to electronic schemes in recent years thanks to the rapid strides in electronics and allied areas. Principles, design and applications of such state-of-the-art instrumentation schemes form the subject matter of this book. Through representative examples, the basic building blocks of instrumentation schemes are identified and each of these building blocks discussed in terms of its design and interface characteristics. The common generic schemes synthesized with such building blocks are dealt with subsequently. This forms the scope of Part I. The focus in Part II is on application. Displacement and allied instrumentation, force and allied instrumentation and process instrumentation in terms of temperature, flow, pressure level and other common process variables are dealt with separately and exhaustively. Despite the diversity in the sensor principles and characteristics and the variety in the applications and their environments, it is possib...

  17. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas - A review

    NARCIS (Netherlands)

    Cristiano, E.; ten Veldhuis, J.A.E.; van de Giesen, N.C.

    2017-01-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological

  18. Variable discrete ordinates method for radiation transfer in plane-parallel semi-transparent media with variable refractive index

    Science.gov (United States)

    Sarvari, S. M. Hosseini

    2017-09-01

    The traditional form of the discrete ordinates method is applied to solve the radiative transfer equation in plane-parallel semi-transparent media with variable refractive index, using variable discrete ordinate directions and the concept of refracted radiative intensity. The refractive index is taken as constant in each control volume, such that the direction cosines of radiative rays remain invariant within each control volume; the directions of the discrete ordinates are then changed locally on passing from one control volume to the next, according to Snell's law of refraction. The results are compared with previous studies in this field. Despite its simplicity, the variable discrete ordinates method shows good accuracy in solving the radiative transfer equation in semi-transparent media with an arbitrary distribution of refractive index.
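
    The local direction update at the heart of the method is just Snell's law applied to a direction cosine. The sketch below shows such an update for an ordinate crossing an interface between control volumes with refractive indices n1 and n2; the function name and values are illustrative.

      # Snell's-law update of a discrete ordinate crossing an interface
      # between control volumes with refractive indices n1 and n2.
      # mu is the direction cosine relative to the interface normal.
      import math

      def refract_mu(mu, n1, n2):
          sin1_sq = 1.0 - mu * mu
          sin2_sq = (n1 / n2) ** 2 * sin1_sq      # n1*sin(t1) = n2*sin(t2)
          if sin2_sq > 1.0:
              return None                          # total internal reflection
          return math.copysign(math.sqrt(1.0 - sin2_sq), mu)

      print(refract_mu(0.8, 1.0, 1.5))   # the ordinate bends toward the normal
      print(refract_mu(0.2, 1.5, 1.0))   # grazing ray: totally reflected (None)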

  19. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    International Nuclear Information System (INIS)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren

    2010-01-01

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography. Conclusion: We recommend that angles always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20° and 30°.

  20. Does the Early Bird Catch the Worm? Instrumental Variable Estimates of Educational Effects of Age of School Entry in Germany

    OpenAIRE

    Puhani, Patrick A.; Weber, Andrea M.

    2006-01-01

    We estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained based on instrumental variable estimation exploiting the exogenous variation in month of birth. We find robust and significant positive effects on educational outcomes for pupils who enter school at seven instead of six years of age: Test scores at the end of primary school incr...

  1. Assessment of the quality and variability of health information on chronic pain websites using the DISCERN instrument

    Directory of Open Access Journals (Sweden)

    Buckley Norman

    2010-10-01

    Background: The Internet is used increasingly by providers as a tool for disseminating pain-related health information and by patients as a resource about health conditions and treatment options. However, health information on the Internet remains unregulated and varies in quality, accuracy and readability. The objective of this study was to determine the quality of pain websites and to explain the variability in quality and readability between pain websites. Methods: Five key terms (pain, chronic pain, back pain, arthritis, and fibromyalgia) were entered into the Google, Yahoo and MSN search engines. Websites were assessed using the DISCERN instrument as a quality index. Grade-level readability ratings were assessed using the Flesch-Kincaid readability algorithm. Univariate (using alpha = 0.20) and multivariable regression (using alpha = 0.05) analyses were used to explain the variability in DISCERN scores and grade-level readability, using potential for commercial gain, health-related seals of approval, language(s) and multimedia features as independent variables. Results: A total of 300 websites were assessed; 21 were excluded in accordance with the exclusion criteria and 110 were duplicate websites, leaving 161 unique sites. About 6.8% (11/161) of the websites offered patients commercial products for their pain condition, 36.0% (58/161) had a health-related seal of approval, 75.8% (122/161) presented information in English only and 40.4% (65/161) offered an interactive multimedia experience. In assessing the quality of the unique websites, of a maximum score of 80, the overall average DISCERN score was 55.9 (13.6) and readability (grade level) was 10.9 (3.9). The multivariable regressions demonstrated that website seals of approval (P = 0.015) and potential for commercial gain (P = 0.189) were contributing factors to higher DISCERN scores, while seals of approval (P = 0.168) and interactive multimedia (P = 0.244) contributed to

  2. Developments in FT-ICR MS instrumentation, ionization techniques, and data interpretation methods for petroleomics.

    Science.gov (United States)

    Cho, Yunju; Ahmed, Arif; Islam, Annana; Kim, Sunghwan

    2015-01-01

    Because of the increasing importance of heavy and unconventional crude oil as an energy source, there is a growing need for petroleomics: the pursuit of more complete and detailed knowledge of the chemical compositions of crude oil. Crude oil has an extremely complex nature; hence, techniques with ultra-high resolving capabilities, such as Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), are necessary. FT-ICR MS has been successfully applied to the study of heavy and unconventional crude oils such as bitumen and shale oil. However, the analysis of crude oil with FT-ICR MS is not trivial, and it has pushed analysis to the limits of instrumental and methodological capabilities. For example, high-resolution mass spectra of crude oils may contain over 100,000 peaks that require interpretation. To visualize large data sets more effectively, data processing methods such as Kendrick mass defect analysis and statistical analyses have been developed. The successful application of FT-ICR MS to the study of crude oil has been critically dependent on key developments in FT-ICR MS instrumentation and data processing methods. This review offers an introduction to the basic principles, FT-ICR MS instrumentation development, ionization techniques, and data interpretation methods for petroleomics and is intended for readers having no prior experience in this field of study. © 2014 Wiley Periodicals, Inc.
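
    One of the data-processing steps mentioned, Kendrick mass defect (KMD) analysis, reduces to a two-line formula. A minimal sketch follows; sign conventions for KMD vary between groups, and the peak masses below are invented for illustration.

```python
# CH2-based Kendrick mass defect for grouping homologous series in FT-ICR MS
# peak lists. Here KMD = nominal Kendrick mass - Kendrick mass.
CH2_NOMINAL, CH2_EXACT = 14.00000, 14.01565

def kendrick_mass_defect(mz: float) -> float:
    kendrick_mass = mz * CH2_NOMINAL / CH2_EXACT  # rescale so a CH2 unit weighs exactly 14
    return round(kendrick_mass) - kendrick_mass

# Members of one alkylation (CH2) series share nearly the same KMD:
for mz in (400.3934, 414.4090, 428.4247):  # hypothetical peaks spaced 14.01565 apart
    print(f"m/z {mz:9.4f} -> KMD {kendrick_mass_defect(mz):+.4f}")
```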

  3. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    Science.gov (United States)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied and the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings, notably LULC changes and greenhouse gas emissions, continues to grow.
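
    The difference between the two averaging methods is easy to reproduce on a synthetic, asymmetric diurnal cycle; the toy sketch below is illustrative only and uses no station data.

```python
# Toy illustration: for an asymmetric diurnal temperature curve,
# (Tmax + Tmin)/2 is not the mean of the 24 hourly values.
import numpy as np

hours = np.arange(24)
# Skewed diurnal cycle: fast morning warming, slow evening cooling (synthetic).
temps = 15 + 8 * np.sin(np.pi * ((hours - 5) % 24) / 19) ** 2

twice_daily = (temps.max() + temps.min()) / 2.0
hourly = temps.mean()
print(f"twice-daily average: {twice_daily:.2f} C")
print(f"hourly average:      {hourly:.2f} C")
print(f"difference:          {twice_daily - hourly:+.2f} C")
```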

  4. Instrumentation for environmental monitoring: biomedical

    International Nuclear Information System (INIS)

    1979-05-01

    An update is presented to Volume four of the six-volume series devoted to a survey of instruments useful for measurements in biomedicine related to environmental research and monitoring. Results of the survey are given as descriptions of the physical and operating characteristics of available instruments, critical comparisons among instrumentation methods, and recommendations of promising methodology and development of new instrumentation. Methods of detection and analysis of gaseous organic pollutants and metals, including Ni and As, are presented. Instrument techniques and notes are included on atomic spectrometry and UV and visible absorption instrumentation.

  5. USAGE OF PICTOGRAMS TO INTRODUCE MUSICAL INSTRUMENTS TO EDUCABLE MENTALLY RETARDED CHILDREN AS AN ALTERNATIVE METHOD

    Directory of Open Access Journals (Sweden)

    Gunsu YILMA

    2014-01-01

    Full Text Available The purpose of this research is to examine the ability of educable mentally retarded children to perceive musical instruments with the support of visual elements. The research was conducted individually with each child in a special education and rehabilitation centre. The research problem is the level of musical instrument perception that mildly mentally retarded children can reach with visual support. Defining pictograms through music is introduced as an alternative method: the study investigates how educable mentally retarded children perceive pictograms associated with musical instruments. The aim is thus to introduce musical instruments to educable mentally retarded children by pictograms combined with music. The research follows a qualitative approach: data were obtained with a recorder, transcribed into texts, and analyzed using the content analysis method.

  6. Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method.

    Science.gov (United States)

    Zhao, Zijian; Voros, Sandrine; Weng, Ying; Chang, Faliang; Li, Ruijian

    2017-12-01

    Worldwide propagation of minimally invasive surgeries (MIS) is hindered by their drawback of indirect observation and manipulation, while the monitoring of surgical instruments moving within the operated body, which surgeons require, remains a challenging problem. Tracking of surgical instruments by vision-based methods is quite attractive, due to its flexible implementation via software-based control with no need to modify instruments or the surgical workflow. A MIS instrument is conventionally split into shaft and end-effector portions, and a 2D/3D tracking-by-detection framework is proposed which performs shaft tracking followed by end-effector tracking. The former portion is described by line features via the RANSAC scheme, while the latter is depicted by special image features based on deep learning through a well-trained convolutional neural network. The method is verified in 2D and 3D formulations through experiments on ex-vivo video sequences, and qualitative validation is obtained on in-vivo video sequences. The proposed method provides robust and accurate tracking, which is confirmed by the experimental results: its 3D performance on ex-vivo video sequences exceeds that of the available state-of-the-art methods. Moreover, the experiments on in-vivo sequences demonstrate that the proposed method can tackle the difficult condition of tracking with unknown camera parameters. Further refinements of the method will address occlusion and multi-instrument MIS applications.
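
    As a rough illustration of the shaft-detection step, the self-contained sketch below fits a dominant line to noisy 2D points with a minimal RANSAC loop; it is a generic stand-in, not the authors' implementation.

```python
# Generic RANSAC line-fit sketch standing in for the shaft-detection step:
# fit a dominant line to candidate edge points and keep the inliers.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 100, 200)
shaft = np.c_[t, 0.5 * t + 10] + rng.normal(0, 1.0, (200, 2))  # noisy shaft edge
points = np.vstack([shaft, rng.uniform(0, 100, (60, 2))])      # plus clutter

def ransac_line(pts, n_iter=500, tol=2.0):
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

inliers = ransac_line(points)
print(f"{inliers.sum()} of {len(points)} points kept as shaft-line inliers")
```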

  7. Treating pre-instrumental data as "missing" data: using a tree-ring-based paleoclimate record and imputations to reconstruct streamflow in the Missouri River Basin

    Science.gov (United States)

    Ho, M. W.; Lall, U.; Cook, E. R.

    2015-12-01

    Advances in paleoclimatology in the past few decades have provided opportunities to expand the temporal perspective of hydrological and climatological variability across the world. The North American region is particularly fortunate in this respect, as a relatively dense network of high-resolution paleoclimate proxy records has been assembled. One such network is the annually resolved Living Blended Drought Atlas (LBDA): a paleoclimate reconstruction of the Palmer Drought Severity Index (PDSI) that covers North America on a 0.5° × 0.5° grid based on tree-ring chronologies. However, the use of the LBDA to assess North American streamflow variability requires a model by which streamflow may be reconstructed. Paleoclimate reconstructions have typically used models that first seek to quantify the relationship between the paleoclimate variable and the environmental variable of interest before extrapolating the relationship back in time. In contrast, the pre-instrumental streamflow is here considered as "missing" data. A method of imputing the "missing" streamflow data, prior to the instrumental record, is applied through multiple imputation using chained equations for streamflow in the Missouri River Basin. In this method, the distribution of the instrumental streamflow and the LBDA is used to estimate sets of plausible values for the "missing" streamflow data, resulting in a ~600 year-long streamflow reconstruction. Past research into external climate forcings, oceanic-atmospheric variability and its teleconnections, and assessments of rare multi-centennial instrumental records demonstrate that large temporal oscillations in hydrological conditions are unlikely to be captured in most instrumental records. The reconstruction of multi-centennial records of streamflow will enable comprehensive assessments of current and future water resource infrastructure and operations under the existing scope of natural climate variability.
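
    The "streamflow as missing data" idea can be sketched with an off-the-shelf chained-equations imputer. This is a conceptual stand-in using synthetic data and scikit-learn's IterativeImputer, not the authors' exact MICE setup.

```python
# Conceptual sketch: treat pre-instrumental streamflow as missing and impute
# it from a proxy (stand-in for LBDA PDSI) via chained equations.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n_years = 600
pdsi = rng.normal(0, 2, n_years)                     # stand-in for the proxy record
flow = 100 + 15 * pdsi + rng.normal(0, 5, n_years)   # synthetic streamflow
flow[:500] = np.nan                                  # "pre-instrumental" years

data = np.c_[pdsi, flow]
# sample_posterior=True draws plausible values, so repeated runs give the
# multiple imputations needed to carry uncertainty into the reconstruction.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(data)[:, 1]
    for s in range(5)
]
print(np.std(imputations, axis=0)[:3])  # spread across imputations, early years
```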

  8. On-line scheme for parameter estimation of nonlinear lithium ion battery equivalent circuit models using the simplified refined instrumental variable method for a modified Wiener continuous-time model

    International Nuclear Information System (INIS)

    Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James

    2017-01-01

    Highlights: •Off-line estimation approach for continuous-time domain for non-invertible function. •Model reformulated to multi-input-single-output; nonlinearity described by sigmoid. •Method directly estimates parameters of nonlinear ECM from the measured-data. •Iterative on-line technique leads to smoother convergence. •The model is validated off-line and on-line using NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs) where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time-domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, in this paper, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration where the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator such as the well-established least squares (LS) algorithm. In this paper, the recursive least square (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
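
    Once the model is written in linear-in-the-parameters form, the on-line update stage can be any recursive estimator. Below is a bare-bones recursive least squares (RLS) sketch on a synthetic ARX system, not the paper's Wiener-model formulation.

```python
# Standard RLS with exponential forgetting on y = phi' theta + noise.
# System, regressor, and data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([0.95, 0.05])        # e.g. AR and input coefficients

theta = np.zeros(2)
P = np.eye(2) * 1e3                        # large initial covariance
lam = 0.995                                # forgetting factor

y_prev, u = 0.0, rng.standard_normal(500)
for k in range(1, 500):
    phi = np.array([y_prev, u[k - 1]])
    y = theta_true @ phi + 0.01 * rng.standard_normal()
    # RLS update equations.
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    y_prev = y

print("estimated parameters:", theta)      # should approach theta_true
```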

  9. Resistance Torque Based Variable Duty-Cycle Control Method for a Stage II Compressor

    Science.gov (United States)

    Zhong, Meipeng; Zheng, Shuiying

    2017-07-01

    The resistance torque of a piston stage II compressor fluctuates strongly within each rotational period, and this can negatively influence the working performance of the compressor. To restrain these fluctuations, a variable duty-cycle control method based on the resistance torque is proposed. A dynamic model of a stage II compressor is set up, and the resistance torque and other characteristic parameters are acquired as the control targets. Then, a variable duty-cycle control method is applied to track the resistance torque, thereby improving the working performance of the compressor. Simulated results show that the compressor, driven by the proposed method, requires lower current, while the rotating speed and the output torque remain comparable to those under traditional variable-frequency control methods. A variable duty-cycle control system was developed, and the experimental results prove that the proposed method can help reduce the specific power, input power, and working noise of the compressor by 0.97 kW·m⁻³·min⁻¹, 0.09 kW, and 3.10 dB, respectively, under the same conditions of a discharge pressure of 2.00 MPa and a discharge volume of 0.095 m³/min. The proposed variable duty-cycle control method tracks the resistance torque dynamically and improves the working performance of a stage II compressor. It can be applied to other compressors and can provide theoretical guidance for them.

  10. Improvement of the variable storage coefficient method with water surface gradient as a variable

    Science.gov (United States)

    The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...

  11. THE REGULATION OF MONEY CIRCULATION ON THE BASIS OF USING METHODS AND INSTRUMENTS OF MONETARY POLICY

    OpenAIRE

    S. Mishchenko; S. Naumenkova

    2013-01-01

    The article investigates the instruments and mechanisms for safeguarding money market stability on the basis of implementing an optimal monetary policy regime. It identifies the main directions for applying monetary policy methods and instruments to maintain money market stability, and examines the influence of the transmission mechanism on ensuring sound money circulation.

  12. Emission quantification using the tracer gas dispersion method: The influence of instrument, tracer gas species and source simulation

    DEFF Research Database (Denmark)

    Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker

    2018-01-01

    The tracer gas dispersion method (TDM) is a remote sensing method used for quantifying fugitive emissions by relying on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas, with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio), resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument...
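
    The core TDM arithmetic is a simple scaling of the known tracer release rate; a back-of-envelope sketch with invented numbers follows, assuming acetylene as the tracer gas.

```python
# TDM sketch: scale the known tracer release rate by the ratio of
# plume-integrated concentrations, corrected for molar mass.
M_CH4, M_C2H2 = 16.04, 26.04            # molar masses, g/mol

tracer_release = 0.50                    # tracer release rate at the source, kg/h
plume_ch4 = 8.0                          # integrated CH4 plume signal above background
plume_tracer = 2.0                       # integrated tracer plume signal (same units)

ch4_emission = tracer_release * (plume_ch4 / plume_tracer) * (M_CH4 / M_C2H2)
print(f"estimated CH4 emission: {ch4_emission:.2f} kg/h")   # ~1.23 kg/h
```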

  13. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model's parameters, we suggest using the expectation-maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of the asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of background states at the prediction stage.
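
    The expectation-maximization step for a discrete latent state can be illustrated with a two-component Gaussian mixture over a single feature; this schematic stand-in is not the paper's full model.

```python
# Schematic EM for a discrete latent "background state": E-step computes
# posterior state probabilities, M-step re-estimates state parameters.
import numpy as np

rng = np.random.default_rng(3)
x = np.r_[rng.normal(-1, 0.5, 300), rng.normal(1.5, 0.8, 200)]  # synthetic feature

pi, mu, sigma = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibilities (posterior probability of each latent state).
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixing weights, means, and variances.
    n_k = resp.sum(axis=0)
    pi = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print("state priors:", pi, "state means:", mu)
```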

  14. Validation parameters of instrumental method for determination of total bacterial count in milk

    Directory of Open Access Journals (Sweden)

    Nataša Mikulec

    2004-10-01

    Full Text Available The method of flow cytometry, as a rapid, instrumental and routine microbiological method, is used for determination of the total bacterial count in milk. The results of flow cytometry are expressed as individual bacterial cell counts. Problems regarding the interpretation of total bacterial count results can be avoided by transforming the results of the flow cytometry method onto the scale of the reference method (HRN ISO 6610:2001). The method of flow cytometry, like any analytical method, requires validation and verification according to the HRN EN ISO/IEC 17025:2000 standard. This paper describes the validation parameters: accuracy, precision, specificity, range, robustness and measurement uncertainty for the flow cytometry method.

  15. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    Energy Technology Data Exchange (ETDEWEB)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld (Orthopedic Research Unit, Univ. Hospital of Aarhus, Aarhus (Denmark)), e-mail: a_troelsen@hotmail.com; Roemer, Lone (Dept. of Radiology, Univ. Hospital of Aarhus, Aarhus (Denmark)); Kring, Soeren (Dept. of Orthopedic Surgery, Aabenraa Hospital, Aabenraa (Denmark))

    2010-03-15

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when they were assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.

  16. Organizational learning, pilot test of Likert-type instruments

    Directory of Open Access Journals (Sweden)

    Manuel Alfonso Garzón Castrillón

    2010-09-01

    Full Text Available This paper presents the results obtained in the pilot test of instruments created to meet the specific objective of designing and validating instruments for studying the capacity for organizational learning. The Likert measurement scale was used because it allowed establishing the pertinence of each dimension as a variable in the context of organizational learning. A one-way analysis of variance (ANOVA) was performed with the SPSS statistical package. A total of 138 variables were simplified into 3 factors and 40 statements.

  17. Instrumentation

    International Nuclear Information System (INIS)

    Muehllehner, G.; Colsher, J.G.

    1982-01-01

    This chapter reviews the parameters which are important to positron-imaging instruments. It summarizes the options which various groups have explored in designing tomographs and the methods which have been developed to overcome some of the limitations inherent in the technique as well as in present instruments. The chapter is not presented as a defense of positron imaging versus single-photon or other imaging modalities, nor does it contain a description of various existing instruments; rather, it stresses their common properties and problems. Design parameters which are considered are resolution, sampling requirements, sensitivity, methods of eliminating scattered radiation, random coincidences and attenuation. The implementation of these parameters is considered, with special reference to sampling, choice of detector material, detector ring diameter and shielding, and variations in point spread function. Quantitation problems discussed are normalization and corrections for attenuation and random coincidences. Present developments mentioned are noise reduction through time-of-flight-assisted tomography and signal-to-noise improvements through high intrinsic resolution. Extensive bibliography. (U.K.)

  18. Nonparametric instrumental regression with non-convex constraints

    International Nuclear Information System (INIS)

    Grasmair, M; Scherzer, O; Vanhems, A

    2013-01-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)

  19. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
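
    A stylized version of the unconstrained core of such an estimator: expand the unknown function in a basis, project the moment conditions onto an instrument basis, and stabilize the ill-posed inversion with a Tikhonov (ridge) penalty. The sketch below omits the paper's shape constraints and uses synthetic data.

```python
# Sieve/Tikhonov sketch of instrumental nonparametric regression:
# y = g(x) + u with E[u | w] = 0, g expanded in a polynomial basis.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
w = rng.uniform(-2, 2, n)                   # instrument
x = 0.8 * w + 0.6 * rng.standard_normal(n)  # endogenous regressor
u = 0.5 * (x - 0.8 * w) + 0.1 * rng.standard_normal(n)  # correlated with x, not w
y = np.sin(x) + u                           # true g(x) = sin(x)

def basis(v, deg):
    return np.vander(v, deg + 1, increasing=True)

B, C = basis(x, 6), basis(w, 6)             # regressor and instrument bases

P = C @ np.linalg.solve(C.T @ C, C.T)       # projection onto instrument space
alpha = 1e-3                                # Tikhonov regularization parameter
beta = np.linalg.solve(B.T @ P @ B + alpha * np.eye(B.shape[1]), B.T @ P @ y)

grid = np.linspace(-2, 2, 5)
print(np.c_[grid, basis(grid, 6) @ beta, np.sin(grid)])  # estimate vs truth
```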

  20. Work-nonwork interference: Preliminary results on the psychometric properties of a new instrument

    Directory of Open Access Journals (Sweden)

    Eileen Koekemoer

    2010-11-01

    Research purpose: The objectives of this study were to investigate the internal validity (construct, discriminant and convergent validity), reliability and external validity (relationships with theoretically relevant variables, including job characteristics, home characteristics, burnout, ill health and life satisfaction) of the instrument. Motivation for the study: Work-family interaction is a key topic receiving significant research attention. In order to facilitate comparison across work-family studies, the use of psychometrically sound instruments is of great importance. Research design, approach and method: A cross-sectional survey design was used for the target population of married employees with children working at a tertiary institution in the North West province (n = 366). In addition to the new instrument, job characteristics, home characteristics, burnout, ill health and life satisfaction were measured. Main findings: The results provided evidence for construct, discriminant and convergent validity, reliability and significant relations with external variables. Practical/managerial implications: The new instrument can be used by researchers and managers, as a test under development, to investigate the interference between work and different nonwork roles (i.e. parental role, spousal role, work role, domestic role) and specific relations with antecedents (e.g. job/home characteristics) and well-being (e.g. burnout, ill health and life satisfaction). Contribution/value-add: This study provides preliminary information on the psychometric properties of a new instrument that measures the interference between work and nonwork.

  1. Development of a quality instrument for assessing the spontaneous reports of ADR/ADE using Delphi method in China.

    Science.gov (United States)

    Chen, Lixun; Jiang, Ling; Shen, Aizong; Wei, Wei

    2016-09-01

    The frequently low quality of submitted spontaneous reports is of increasing concern; to our knowledge, no validated instrument exists for assessing case reports' quality comprehensively enough. This work was conducted to develop such a quality instrument for assessing spontaneous reports of adverse drug reactions (ADR)/adverse drug events (ADE) in China. Initial evaluation indicators were generated using systematic and literature data analysis. Final indicators and their weights were identified using the Delphi method. The final quality instrument was developed by adopting the synthetic scoring method. A consensus was reached after four rounds of Delphi survey. The developed quality instrument consists of 6 first-rank indicators, 18 second-rank indicators, and 115 third-rank indicators, and each indicator has been weighted. It evaluates the quality of spontaneous reports of ADR/ADE comprehensively and quantitatively on six parameters: authenticity, duplication, regulatory, completeness, vigilance level, and reporting time frame. The developed instrument was tested and showed good reliability and validity, and it can be used to comprehensively and quantitatively assess submitted spontaneous reports of ADR/ADE in China.

  2. THE REGULATION OF MONEY CIRCULATION ON THE BASIS OF USING METHODS AND INSTRUMENTS OF MONETARY POLICY

    Directory of Open Access Journals (Sweden)

    S. Mishchenko

    2013-05-01

    Full Text Available The article investigates the instruments and mechanisms for safeguarding money market stability on the basis of implementing an optimal monetary policy regime. It identifies the main directions for applying monetary policy methods and instruments to maintain money market stability, and examines the influence of the transmission mechanism on ensuring sound money circulation.

  3. Authentication method for safeguards instruments securing data transmission

    International Nuclear Information System (INIS)

    Richter, B.; Stein, G.; Neumann, G.; Gartner, K.J.

    1986-01-01

    Because of the worldwide increase in nuclear fuel cycle activities, the need arises to reduce inspection effort by increasing the inspection efficiency per facility. Therefore, more and more advanced safeguards instruments will be designed for automatic operation. In addition, sensing and recording devices may be well separated from each other within the facility, with a cable as the data transmission medium. The basic problem is the authenticity of the transmitted information: it has to be ensured that no potential adversary is able to falsify the transmitted safeguards data, i.e., that the data transmission is secured. At present, it is predominantly containment/surveillance (C/S) devices that are designed for automatic and remote interrogation. Authentication will also become a major issue in other areas of safeguards instrumentation, in particular where the facility operator may offer his process instrumentation to be used also for safeguards purposes. In this paper, possibilities for solving the problem of authentication are analysed.
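
    A standard building block for securing such a data link is a keyed message authentication code. The generic HMAC sketch below illustrates the idea; it is not the specific scheme analysed in the paper.

```python
# Keyed message authentication: the receiver can detect any falsification of
# the transmitted payload without the shared secret being exposed on the wire.
import hmac, hashlib, os

key = os.urandom(32)                       # shared secret between sensor and recorder

def send(payload: bytes) -> tuple[bytes, bytes]:
    return payload, hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

msg, tag = send(b"seal 42: intact; count=17021")
assert verify(msg, tag)
assert not verify(b"seal 42: intact; count=99999", tag)  # tampering is detected
```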

  4. A Novel Flood Forecasting Method Based on Initial State Variable Correction

    Directory of Open Access Journals (Sweden)

    Kuang Li

    2017-12-01

    Full Text Available This paper analyzes the influence of initial state variables on the accuracy of flood forecasting with conceptual hydrological models and proposes a novel flood forecasting method based on the correction of initial state variables. The new method is abbreviated as ISVC (Initial State Variable Correction). The ISVC takes the residual between the measured and forecasted flows during the initial period of the flood event as the objective function, and it uses a particle swarm optimization algorithm to correct the initial state variables, which are then used to drive the flood forecasting model. Historical flood events from 11 watersheds in southern China are forecasted and verified, and important issues concerning the application of the ISVC are then discussed. The study results show that the ISVC is effective and applicable in flood forecasting tasks, and that it can significantly improve flood forecasting accuracy in most cases.
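
    The ISVC loop can be illustrated on a toy linear-reservoir model: a minimal particle swarm searches for the initial storage that best reproduces the flows observed early in the event. Everything below is an invented stand-in for a real conceptual model.

```python
# Toy ISVC sketch: correct the initial storage of a linear reservoir by
# minimizing the residual over the first few observed flows with a tiny PSO.
import numpy as np

rng = np.random.default_rng(5)
K, rain = 0.9, rng.uniform(0, 5, 20)       # recession constant, rainfall input

def simulate(s0, n=8):
    s, flows = s0, []
    for p in rain[:n]:
        s = K * s + p
        flows.append((1 - K) * s)          # outflow of the linear reservoir
    return np.array(flows)

observed = simulate(50.0)                  # "true" initial storage is 50

pos = rng.uniform(0, 100, 30); vel = np.zeros(30)
pbest = pos.copy()
pcost = np.array([np.sum((simulate(p) - observed) ** 2) for p in pos])
for _ in range(60):
    g = pbest[pcost.argmin()]              # swarm-best initial state so far
    vel = 0.7 * vel + 1.5 * rng.random(30) * (pbest - pos) + 1.5 * rng.random(30) * (g - pos)
    pos += vel
    cost = np.array([np.sum((simulate(p) - observed) ** 2) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]

print("corrected initial storage:", pbest[pcost.argmin()])  # close to 50
```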

  5. Proceedings of a workshop on methods for neutron scattering instrumentation design

    International Nuclear Information System (INIS)

    Hjelm, R.P.

    1997-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop. The international gathering of about 50 participants representing 15 national facilities, universities and corporations featured oral presentations, posters, discussions and demonstrations. Participants looked at a number of issues concerning neutron scattering instruments and the tools used in instrument design. Objectives included: (1) determining the needs of the neutron scattering community in instrument design computer code and information sharing to aid future instrument development, (2) providing for a means of training scientists in neutron scattering and neutron instrument techniques, (3) facilitating the involvement of other scientists in determining the characteristics of new instruments that meet future scientific objectives, and (4) fostering international cooperation in meeting these needs. The scope of the meeting included: (1) a review of x-ray scattering instrument design tools, (2) a look at the present status of neutron scattering instrument design tools and models of neutron optical elements, and (3) discussions of the present and future needs of the neutron scattering community. Selected papers were abstracted separately for inclusion to the Energy Science and Technology Database

  6. Proceedings of a workshop on methods for neutron scattering instrumentation design

    Energy Technology Data Exchange (ETDEWEB)

    Hjelm, R.P. [ed.] [Los Alamos National Lab., NM (United States)

    1997-09-01

    The future of neutron and x-ray scattering instrument development and international cooperation was the focus of the workshop. The international gathering of about 50 participants representing 15 national facilities, universities and corporations featured oral presentations, posters, discussions and demonstrations. Participants looked at a number of issues concerning neutron scattering instruments and the tools used in instrument design. Objectives included: (1) determining the needs of the neutron scattering community in instrument design computer code and information sharing to aid future instrument development, (2) providing for a means of training scientists in neutron scattering and neutron instrument techniques, (3) facilitating the involvement of other scientists in determining the characteristics of new instruments that meet future scientific objectives, and (4) fostering international cooperation in meeting these needs. The scope of the meeting included: (1) a review of x-ray scattering instrument design tools, (2) a look at the present status of neutron scattering instrument design tools and models of neutron optical elements, and (3) discussions of the present and future needs of the neutron scattering community. Selected papers were abstracted separately for inclusion to the Energy Science and Technology Database.

  7. Rare earths analysis of rock samples by instrumental neutron activation analysis, internal standard method

    International Nuclear Information System (INIS)

    Silachyov, I.

    2016-01-01

    The application of instrumental neutron activation analysis for the determination of long-lived rare earth elements (REE) in rock samples is considered in this work. Two different methods are statistically compared: the well established external standard method carried out using standard reference materials, and the internal standard method (ISM), using Fe, determined through X-ray fluorescence analysis, as an element-comparator. The ISM proved to be the more precise method for a wide range of REE contents and can be recommended for routine practice. (author)

  8. Contribution to the discussion of P.M. Fayers and David J. Hand: Causal variables, indicator variables and measurement scales: an example from quality of life

    DEFF Research Database (Denmark)

    Keiding, Niels

    2002-01-01

    Causal variables; Clinimetric scales; Composite scales; Construct validity; Measurement scales; Multi-item scales; Quality-of-life instruments

  9. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
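
    To give a flavour of one benchmarked family (interval PLS), the sketch below scores contiguous wavelength windows by cross-validated PLS error on synthetic spectra; it is illustrative and not one of the benchmarked implementations.

```python
# Interval-PLS-style window search: keep the wavelength interval whose
# cross-validated PLS model predicts the property best. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 200))             # 80 spectra x 200 "wavelengths"
y = X[:, 60:80].sum(axis=1) + 0.1 * rng.standard_normal(80)  # informative band

best = None
for start in range(0, 200, 20):            # non-overlapping 20-channel intervals
    score = cross_val_score(
        PLSRegression(n_components=3), X[:, start:start + 20], y,
        cv=5, scoring="neg_mean_squared_error",
    ).mean()
    if best is None or score > best[1]:
        best = (start, score)

print(f"best interval starts at channel {best[0]}")  # expect 60
```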

  10. Construction of the descriptive system for the assessment of quality of life AQoL-6D utility instrument

    Directory of Open Access Journals (Sweden)

    Richardson Jeffrey RJ

    2012-04-01

    Full Text Available Abstract Background Multi-attribute utility (MAU) instruments are used to include health-related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL-6D) MAU instrument. Methods The AQoL program introduced the use of psychometric methods in the construction of health related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed Exploratory Factor Analysis and Structural Equation Modelling (SEM) to meet these dual requirements. Results and Discussion The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. Conclusions The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  11. A method for automating calibration and records management for instrumentation and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, J.M. Jr.; Rushton, R.O.; Burns, R.E. Jr. [Atlan-Tech, Inc., Roswell, GA (United States)

    1993-12-31

    Current industry requirements are becoming more stringent on quality assurance records and documentation for calibration of instruments and dosimetry. A novel method is presented here that will allow a progressive automation scheme to be used in pursuit of that goal. This concept is based on computer-controlled irradiators that can act as stand-alone devices or be interfaced to other components via a computer local area network. In this way, complete systems can be built with modules to create a records management system to meet the needs of small laboratories or large multi-building calibration groups. Different database engines or formats can be used simply by replacing a module. Modules for temperature and pressure monitoring or shipping and receiving can be added, as well as equipment modules for direct IEEE-488 interface to electrometers and other instrumentation.

  12. A method for automating calibration and records management for instrumentation and dosimetry

    International Nuclear Information System (INIS)

    O'Brien, J.M. Jr.; Rushton, R.O.; Burns, R.E. Jr.

    1993-01-01

    Current industry requirements are becoming more stringent on quality assurance records and documentation for calibration of instruments and dosimetry. A novel method is presented here that will allow a progressive automation scheme to be used in pursuit of that goal. This concept is based on computer-controlled irradiators that can act as stand-alone devices or be interfaced to other components via a computer local area network. In this way, complete systems can be built with modules to create a records management system to meet the needs of small laboratories or large multi-building calibration groups. Different database engines or formats can be used simply by replacing a module. Modules for temperature and pressure monitoring or shipping and receiving can be added, as well as equipment modules for direct IEEE-488 interface to electrometers and other instrumentation

  13. A stochastic Galerkin method for the Euler equations with Roe variable transformation

    KAUST Repository

    Pettersson, Per; Iaccarino, Gianluca; Nordström, Jan

    2014-01-01

    The Euler equations subject to uncertainty in the initial and boundary conditions are investigated via the stochastic Galerkin approach. We present a new fully intrusive method based on a variable transformation of the continuous equations. Roe variables are employed to get quadratic dependence in the flux function and a well-defined Roe average matrix that can be determined without matrix inversion. In previous formulations based on generalized polynomial chaos expansion of the physical variables, the need to introduce stochastic expansions of inverse quantities, or square roots of stochastic quantities of interest, adds to the number of possible different ways to approximate the original stochastic problem. We present a method where the square roots occur in the choice of variables, resulting in an unambiguous problem formulation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, the Roe formulation is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. For certain stochastic basis functions, the proposed method can be made more effective and well-conditioned. This leads to increased robustness for both choices of variables. We use a multi-wavelet basis that can be chosen to include a large number of resolution levels to handle more extreme cases (e.g. strong discontinuities) in a robust way. For smooth cases, the order of the polynomial representation can be increased for increased accuracy. © 2013 Elsevier Inc.

  14. Validation of the Organizational Culture Assessment Instrument

    Science.gov (United States)

    Heritage, Brody; Pollock, Clare; Roberts, Lynne

    2014-01-01

    Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged. PMID:24667839

  15. Validation of the organizational culture assessment instrument.

    Directory of Open Access Journals (Sweden)

    Brody Heritage

    Full Text Available Organizational culture is a commonly studied area in industrial/organizational psychology due to its important role in workplace behaviour, cognitions, and outcomes. Jung et al.'s [1] review of the psychometric properties of organizational culture measurement instruments noted many instruments have limited validation data despite frequent use in both theoretical and applied situations. The Organizational Culture Assessment Instrument (OCAI) has had conflicting data regarding its psychometric properties, particularly regarding its factor structure. Our study examined the factor structure and criterion validity of the OCAI using robust analysis methods on data gathered from 328 (females = 226, males = 102) Australian employees. Confirmatory factor analysis supported a four factor structure of the OCAI for both ideal and current organizational culture perspectives. Current organizational culture data demonstrated expected reciprocally-opposed relationships between three of the four OCAI factors and the outcome variable of job satisfaction but ideal culture data did not, thus indicating possible weak criterion validity when the OCAI is used to assess ideal culture. Based on the mixed evidence regarding the measure's properties, further examination of the factor structure and broad validity of the measure is encouraged.

  16. Experimental innovations in surface science a guide to practical laboratory methods and instruments

    CERN Document Server

    Yates, John T

    2015-01-01

    This book is a new edition of a classic text on experimental methods and instruments in surface science. It offers practical insight useful to chemists, physicists, and materials scientists working in experimental surface science. This enlarged second edition contains almost 300 descriptions of experimental methods. The more than 50 active areas with individual scientific and measurement concepts and activities relevant to each area are presented in this book. The key areas covered are: Vacuum System Technology, Mechanical Fabrication Techniques, Measurement Methods, Thermal Control, Delivery of Adsorbates to Surfaces, UHV Windows, Surface Preparation Methods, High Area Solids, Safety. The book is written for researchers and graduate students.

  17. The functional variable method for solving the fractional Korteweg ...

    Indian Academy of Sciences (India)

    The physical and engineering processes have been modelled by means of fractional ... very important role in various fields such as economics, chemistry, notably control theory. ... In §3, the functional variable method is applied for finding exact ...

  18. A sizing method for stand-alone PV installations with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica Para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada, Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    The practical applicability of the considerations made in a previous paper to characterize energy balances in stand-alone photovoltaic systems (SAPV) is presented. Given that energy balances were characterized based on monthly estimations, the method is appropriate for sizing installations with variable monthly demands and variable monthly panel tilt (for seasonal estimations). The method presented is original in that it is the only method proposed for this type of demand. The method is based on the rational utilization of daily solar radiation distribution functions. When exact mathematical expressions are not available, approximate empirical expressions can be used. The more precise the statistical characterization of the solar radiation on the receiver module, the more precise the sizing method, given that the characterization will solely depend on the distribution function of the daily global irradiation on the tilted surface, H_gβi. This method, like previous ones, uses the concept of loss of load probability (LLP) as a parameter to characterize system design and includes information on the standard deviation of this parameter (σ_LLP) as well as two new parameters: annual number of system failures (f) and the standard deviation of the annual number of system failures (σ_f). This paper therefore provides an analytical method for evaluating and sizing stand-alone PV systems with variable monthly demand and panel inclination. The sizing method has also been applied in a practical manner. (author)

  19. A fast collocation method for a variable-coefficient nonlocal diffusion model

    Science.gov (United States)

    Wang, Che; Wang, Hong

    2017-02-01

    We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
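
    The kind of structure such fast methods exploit can be shown in miniature: for a constant-coefficient kernel on a uniform grid the stiffness matrix is Toeplitz, and a Toeplitz matrix-vector product costs O(N log N) via circulant embedding and the FFT. The paper's contribution is handling variable coefficients; the sketch below shows only the constant-coefficient mechanism.

```python
# Toeplitz matvec in O(N log N): embed the Toeplitz matrix in a 2N x 2N
# circulant matrix, whose matvec is a pointwise product in Fourier space.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    n = len(x)
    c = np.r_[col, 0.0, row[1:][::-1]]     # first column of the circulant embedding
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n)]))
    return y[:n].real

n = 1024
col = 1.0 / (1.0 + np.arange(n)) ** 2      # sample decaying nonlocal kernel
x = np.random.default_rng(7).random(n)

# Agreement with the dense O(N^2) product (symmetric case: row = col):
print(np.allclose(toeplitz(col, col) @ x, toeplitz_matvec(col, col, x)))  # True
```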

  20. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then performs variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study uses five imputation methods to handle missing values rather than simply deleting them. Second, key variables are identified via factor analysis and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the benchmark models. In addition, the experiments show that the proposed variable selection can help the forecasting methods used here improve their forecasting capability.
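
    A condensed sketch of the pipeline (impute, select variables, fit a Random Forest) follows, with placeholder data rather than the Shimen Reservoir dataset.

```python
# Impute -> select variables -> Random Forest, as a three-step pipeline.
# Feature count, missingness rate, and data are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 12))                       # daily atmospheric features
X[rng.random(X.shape) < 0.05] = np.nan               # 5% missing values
y = 2 * np.nan_to_num(X[:, 0]) - np.nan_to_num(X[:, 3]) + rng.normal(0, 0.5, 500)

model = make_pipeline(
    SimpleImputer(strategy="median"),                # step 1: imputation
    SelectKBest(f_regression, k=5),                  # step 2: variable selection
    RandomForestRegressor(n_estimators=200, random_state=0),  # step 3: forecast
)
model.fit(X[:400], y[:400])
print("holdout R^2:", round(model.score(X[400:], y[400:]), 3))
```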

  1. Introduction to meteorological measurements and data handling for solar energy applications. Task IV. Development of an insolation handbook and instrument package

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-01-01

    The following are covered: the Sun and its radiation, solar radiation and atmospheric interaction, solar radiation measurement methods, spectral irradiance measurements of natural sources, the measurement of infrared radiation, the measurement of circumsolar radiation, some empirical properties of solar radiation and related parameters, duration of sunshine, and meteorological variables related to solar energy. Included in appendices are manufacturers and distributors of solar radiation measuring instruments and an approximate method for quality control of solar radiation instruments. (MHR)

  2. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  3. A generalized fractional sub-equation method for fractional differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Tang, Bo; He, Yinnian; Wei, Leilei; Zhang, Xindong

    2012-01-01

    In this Letter, a generalized fractional sub-equation method is proposed for solving fractional differential equations with variable coefficients. Being concise and straightforward, this method is applied to the space–time fractional Gardner equation with variable coefficients. As a result, many exact solutions are obtained including hyperbolic function solutions, trigonometric function solutions and rational solutions. It is shown that the considered method provides a very effective, convenient and powerful mathematical tool for solving many other fractional differential equations in mathematical physics. -- Highlights: ► Study of fractional differential equations with variable coefficients plays a role in applied physical sciences. ► It is shown that the proposed algorithm is effective for solving fractional differential equations with variable coefficients. ► The obtained solutions may give insight into many considerable physical processes.

  4. The functional variable method for finding exact solutions of some ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative to obtain exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...

  5. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    Controllers that use PID parameters require a good tuning method to improve control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. They are implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE has reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
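
    For reference, the classical baseline used in the comparison, Ziegler-Nichols ultimate-cycle tuning, is a table lookup once the ultimate gain Ku and oscillation period Tu are known; the values below are invented example inputs.

```python
# Classic Ziegler-Nichols PID rules from the ultimate-cycle experiment.
Ku, Tu = 8.0, 2.5                       # ultimate gain, ultimate period (s), invented

Kp = 0.6 * Ku                           # proportional gain
Ti = Tu / 2.0                           # integral time
Td = Tu / 8.0                           # derivative time

Ki, Kd = Kp / Ti, Kp * Td               # equivalent parallel-form gains
print(f"Kp={Kp:.2f}, Ki={Ki:.2f} 1/s, Kd={Kd:.2f} s")
```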

  6. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
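
    As a rough illustration of the test-mask idea, the sketch below adds a mask variable to an application function; all names and error types are hypothetical, and the patented mechanism may differ in detail.

        # Bit flags of the (hypothetical) test mask variable
        ERR_NONE, ERR_TIMEOUT, ERR_BAD_CRC = 0x0, 0x1, 0x2
        test_mask = ERR_NONE          # normal operation: no error injected

        def read_sensor(raw_value):
            """Application code instrumented with the test mask variable."""
            if test_mask & ERR_TIMEOUT:
                raise TimeoutError("injected timeout")   # injected error path
            if test_mask & ERR_BAD_CRC:
                return raw_value ^ 0xFF                  # injected corruption
            return raw_value                             # normal path

        # During testing, the harness sets the mask and monitors the response:
        test_mask = ERR_TIMEOUT
        try:
            read_sensor(42)
        except TimeoutError:
            print("application handled the injected timeout")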

  7. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been almost no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have larger effects on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and hence leads to huge computational costs. This cost is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. A smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half
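
    The computational idea, replacing the hereditary integral with a memory variable obeying a first-order ODE, can be illustrated for the 1-DOF standard linear solid. This is only a sketch of the general technique with hypothetical parameters, not the authors' code:

        import numpy as np

        def sls_stress(strain, dt, G0, Ginf, tau):
            """Stress in a standard linear solid via one memory variable q:
            sigma = Ginf*eps + q,  dq/dt = -q/tau + (G0 - Ginf)*deps/dt.
            No hereditary integral, so no storage of the strain history.
            Assumes a fully relaxed initial state (q = 0)."""
            a = np.exp(-dt / tau)
            q = 0.0
            sigma = np.empty(len(strain))
            sigma[0] = Ginf * strain[0]
            for n in range(1, len(strain)):
                rate = (strain[n] - strain[n - 1]) / dt
                # exact update for a piecewise-constant strain rate per step
                q = q * a + (G0 - Ginf) * rate * tau * (1.0 - a)
                sigma[n] = Ginf * strain[n] + q
            return sigma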

  8. A review of modern instrumental methods of elemental analysis of petroleum related material. Part 2

    International Nuclear Information System (INIS)

    Nadkarni, R.A.

    1991-01-01

    In this paper a review is presented of the state of the art in elemental analysis of petroleum-related materials (crude oil, gasoline, additives, and lubricants) using modern instrumental analysis techniques. The major instrumental techniques used for elemental analysis of petroleum products include atomic absorption spectrometry (both with flame and with graphite furnace atomizer), inductively coupled plasma atomic emission spectrometry, ion chromatography, microelemental methods, neutron activation, spark source mass spectrometry, and x-ray fluorescence. Each of these techniques is compared for its advantages, disadvantages, and typical applications in the petroleum field

  9. Unit-specific calibration of Actigraph accelerometers in a mechanical setup - is it worth the effort? The effect on random output variation caused by technical inter-instrument variability in the laboratory and in the field

    DEFF Research Database (Denmark)

    Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L

    2008-01-01

    BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefor...

  10. The potential of soft computing methods in NPP instrumentation and control

    International Nuclear Information System (INIS)

    Hampel, R.; Chaker, N.; Kaestner, W.; Traichel, A.; Wagenknecht, M.; Gocht, U.

    2002-01-01

    The methods of signal processing by soft computing include the application of fuzzy logic, synthetic neural networks, and evolutionary algorithms. The article contains an outline of the objectives and results of the application of fuzzy logic and methods of synthetic neural networks in nuclear measurement and control. The special requirements to be met by the software in safety-related areas with respect to reliability, evaluation, and validation are described. Possible uses may be in off-line applications in modeling, simulation, and reliability analysis as well as in on-line applications (real-time systems) for instrumentation and control. Safety-related aspects of signal processing are described and analyzed for the fuzzy logic and synthetic neural network concepts. Applications are covered in selected examples. (orig.)

  11. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to handle missing values rather than simply deleting them. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection helps the five forecasting methods used here improve their forecasting capability.
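
    A compact sklearn sketch of the pipeline's three steps, using mean imputation and Random Forest importances as stand-ins for the paper's five imputation methods and its factor-analysis-based selection (data and column meanings are invented for illustration):

        import numpy as np
        from sklearn.impute import SimpleImputer
        from sklearn.ensemble import RandomForestRegressor

        # X: daily atmospheric/reservoir features; y: water level to forecast
        X = np.array([[1013.0, 22.1, np.nan], [1008.5, 24.3, 5.2],
                      [1011.2, 23.0, 3.1],   [1009.9, np.nan, 4.4]])
        y = np.array([244.1, 243.8, 243.9, 243.6])

        X_filled = SimpleImputer(strategy="mean").fit_transform(X)   # step 1
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_filled, y)

        keep = np.argsort(rf.feature_importances_)[::-1][:2]         # step 2
        rf_final = RandomForestRegressor(n_estimators=200, random_state=0)
        rf_final.fit(X_filled[:, keep], y)                           # step 3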

  12. The added value of time-variable microgravimetry to the understanding of how volcanoes work

    Science.gov (United States)

    Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel

    2017-01-01

    During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.

  13. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  14. Instrumental neutron activation analysis - a routine method

    International Nuclear Information System (INIS)

    Bruin, M. de.

    1983-01-01

    This thesis describes the way in which at IRI instrumental neutron activation analysis (INAA) has been developed into an automated system for routine analysis. The basis of this work are 20 publications describing the development of INAA since 1968. (Auth.)

  15. Precision and accuracy of blood glucose measurements using three different instruments.

    Science.gov (United States)

    Nowotny, B; Nowotny, P J; Strassburger, K; Roden, M

    2012-02-01

    Assessment of insulin sensitivity by dynamic metabolic tests such as the hyperinsulinemic euglycemic clamp critically relies on reproducible and fast measurement of blood glucose concentrations. Although various instruments have been developed over the last decades, little is known about their accuracy and comparability. We therefore compared the best new instrument with the former gold-standard instruments for measuring glucose concentrations in metabolic tests. Fasting blood samples of 15 diabetic and 10 healthy subjects were collected into sodium-fluoride tubes, spiked with glucose (0, 2.8, 6.9 and 11.1 mmol/l) and measured either as whole blood (range 3.3-26.3 mmol/l) or, following centrifugation, as plasma (range 3.9-32.0 mmol/l). Plasma samples were analyzed with the YSI-2300 STAT plus (YSI), the EKF Biosen C-Line (EKF) and the reference method, the Beckman Glucose Analyzer-II (BMG); whole blood samples were analyzed with EKF instruments, with the YSI as reference method. The average deviation of the EKF from the reference, BMG, was 3.0 ± 3.5% without any concentration-dependent variability. Glucose measurements by YSI were in good agreement with those by BMG (plasma) and EKF (plasma and whole blood) up to concentrations of 13.13 mmol/l (0.5 ± 3.7%), but the deviation increased to -6.2 ± 3.8% at higher concentrations. Precision (n = 6) was ±2.2% (YSI), ±3.9% (EKF) and ±5.2% (BMG). The EKF instrument is comparable regarding accuracy and precision to the reference method BMG and can be used in metabolic tests, while the YSI showed a systematic shift at higher glucose concentrations. Based on these results, we decided to replace the BMG with the EKF instrument in metabolic tests. © 2012 The Authors. Diabetic Medicine © 2012 Diabetes UK.

  16. Instruments for Water Quality Monitoring

    Science.gov (United States)

    Ballinger, Dwight G.

    1972-01-01

    Presents information regarding available instruments for industries and agencies who must monitor numerous aquatic parameters. Charts denote examples of parameters sampled, testing methods, range and accuracy of test methods, cost analysis, and reliability of instruments. (BL)

  17. The relationship between venture capital investment and macro economic variables via statistical computation method

    Science.gov (United States)

    Aygunes, Gunes

    2017-07-01

    The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literature relates venture capitalists' quality to countries' venture capital investments. The aim of this paper is to characterize the relationship between venture capital investment and macroeconomic variables via a statistical computation method. We investigate the countries and macroeconomic variables and, using the statistical computation method, derive correlations between venture capital investments and macroeconomic variables. According to the logistic regression model (logit regression or logit model), the macroeconomic variables are correlated with each other in three groups. Venture capitalists regard these correlations as an indicator. Finally, we give the correlation matrix of our results.
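
    On synthetic data, the correlation-then-logit workflow looks roughly as follows; the variables and coefficients are invented, since the paper's data are not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({"gdp_growth": rng.normal(2.0, 1.0, n),
                           "interest_rate": rng.normal(2.0, 1.0, n)})
        # binary indicator of high VC investment, generated from the macro variables
        p = 1.0 / (1.0 + np.exp(-(0.8 * df["gdp_growth"] - 0.5 * df["interest_rate"])))
        df["high_vc"] = rng.binomial(1, p)

        print(df.corr())   # correlation matrix of macro variables and VC indicator

        X = sm.add_constant(df[["gdp_growth", "interest_rate"]])
        print(sm.Logit(df["high_vc"], X).fit(disp=0).summary())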

  18. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors.

    Science.gov (United States)

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-04-16

    Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, as the likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity; here, depressed mood may be due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach based on instrumental variables to deal with this problem. Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of whom 43% were male and 57% female), aged from 19 to 75 years, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education
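
    The endogeneity logic can be sketched with a two-stage control-function (two-stage residual inclusion) probit, a common simple analogue of IV probit; the instrument, variable names and data below are synthetic, and the paper's exact IVP estimator may differ.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 1000
        z = rng.normal(size=n)                   # instrument (illustrative)
        u = rng.normal(size=n)                   # unobserved confounder
        work = 0.8 * z + u + rng.normal(size=n)  # endogenous regressor
        y = (0.5 * work - u + rng.normal(size=n) > 0).astype(int)  # depressed mood

        # Stage 1: regress the endogenous variable on the instrument
        stage1 = sm.OLS(work, sm.add_constant(z)).fit()

        # Stage 2: probit with the first-stage residual as a control function
        X2 = sm.add_constant(np.column_stack([work, stage1.resid]))
        print(sm.Probit(y, X2).fit(disp=0).summary())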

  19. Construction of the descriptive system for the Assessment of Quality of Life AQoL-6D utility instrument.

    Science.gov (United States)

    Richardson, Jeffrey R J; Peacock, Stuart J; Hawthorne, Graeme; Iezzi, Angelo; Elsworth, Gerald; Day, Neil A

    2012-04-17

    Multi attribute utility (MAU) instruments are used to include the health related quality of life (HRQoL) in economic evaluations of health programs. Comparative studies suggest different MAU instruments measure related but different constructs. The objective of this paper is to describe the methods employed to achieve content validity in the descriptive system of the Assessment of Quality of Life (AQoL)-6D, MAU instrument. The AQoL program introduced the use of psychometric methods in the construction of health related MAU instruments. To develop the AQoL-6D we selected 112 items from previous research, focus groups and expert judgment and administered them to 316 members of the public and 302 hospital patients. The search for content validity across a broad spectrum of health states required both formative and reflective modelling. We employed Exploratory Factor Analysis and Structural Equation Modelling (SEM) to meet these dual requirements. The resulting instrument employs 20 items in a multi-tier descriptive system. Latent dimension variables achieve sensitive descriptions of 6 dimensions which, in turn, combine to form a single latent QoL variable. Diagnostic statistics from the SEM analysis are exceptionally good and confirm the hypothesised structure of the model. The AQoL-6D descriptive system has good psychometric properties. They imply that the instrument has achieved construct validity and provides a sensitive description of HRQoL. This means that it may be used with confidence for measuring health related quality of life and that it is a suitable basis for modelling utilities for inclusion in the economic evaluation of health programs.

  20. Wind resource in metropolitan France: assessment methods, variability and trends

    International Nuclear Information System (INIS)

    Jourdier, Benedicte

    2015-01-01

    France has one of the largest wind potentials in Europe, yet it is far from being fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming at predicting the future electricity production. Any over-estimation in the assessment process jeopardizes the project's profitability. This has been the case in recent years, when wind farm managers have noticed that they produced less than expected. The under-production problem leads to questioning both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. The first part investigates the errors linked to the assessment methods, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and so that it is better taken into account in the future. (author)
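
    The Weibull modelling step questioned in the thesis is, in its basic form, a two-parameter fit to measured wind speeds; a minimal scipy sketch, with synthetic data standing in for mast measurements:

        import numpy as np
        from scipy import stats

        # one year of hourly wind speeds (m/s), synthetic stand-in
        wind = stats.weibull_min.rvs(c=2.0, scale=7.5, size=8760, random_state=42)

        # two-parameter Weibull fit with the location fixed at zero
        shape, loc, scale = stats.weibull_min.fit(wind, floc=0)
        mean_speed = stats.weibull_min.mean(shape, loc=loc, scale=scale)
        print(f"k = {shape:.2f}, A = {scale:.2f} m/s, mean = {mean_speed:.2f} m/s")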

  1. Predicting College Women's Career Plans: Instrumentality, Work, and Family

    Science.gov (United States)

    Savela, Alexandra E.; O'Brien, Karen M.

    2016-01-01

    This study examined how college women's instrumentality and expectations about combining work and family predicted early career development variables. Specifically, 177 undergraduate women completed measures of instrumentality (i.e., traits such as ambition, assertiveness, and risk taking), willingness to compromise career for family, anticipated…

  2. Impact of Uniform Methods on Interlaboratory Antibody Titration Variability: Antibody Titration and Uniform Methods.

    Science.gov (United States)

    Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H

    2017-01-01

    Substantial variability between different antibody titration methods prompted the development and introduction of uniform methods in 2008. The objective was to determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing. Proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by their standard method: gel or tube by uniform or other methods at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during the 5-year period. The 3 most frequent (median) methods performed for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the larger reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG), and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests showed significant titer variability reduction. Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.

  3. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    International Nuclear Information System (INIS)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A.; Rijn, Rick R. van; Henneman, Onno D.F.; Heijmans, Jarom; Reitsma, Johannes B.

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, ''the Leech method'', for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two of the scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  4. Detailed characterizations of a Comparative Reactivity Method (CRM) instrument: experiments vs. modelling

    Science.gov (United States)

    Michoud, V.; Hansen, R. F.; Locoge, N.; Stevens, P. S.; Dusanter, S.

    2015-04-01

    The hydroxyl radical (OH) is an important oxidant in the daytime troposphere that controls the lifetime of most trace gases, whose oxidation leads to the formation of harmful secondary pollutants such as ozone (O3) and Secondary Organic Aerosols (SOA). In spite of the importance of OH, uncertainties remain concerning its atmospheric budget, and integrated measurements of the total sink of OH can help reduce these uncertainties. In this context, several methods have been developed to measure the first-order loss rate of ambient OH, called total OH reactivity. Among these techniques, the Comparative Reactivity Method (CRM) is promising and has already been widely used in the field and in atmospheric simulation chambers. This technique relies on monitoring competitive OH reactions between a reference molecule (pyrrole) and compounds present in ambient air inside a sampling reactor. However, artefacts and interferences exist for this method and a thorough characterization of the CRM technique is needed. In this study, we present a detailed characterization of a CRM instrument, assessing the corrections that need to be applied to ambient measurements. The main corrections are, in the order of their integration in the data processing: (1) a correction for a change in relative humidity between zero air and ambient air, (2) a correction for the formation of spurious OH when artificially produced HO2 reacts with NO in the sampling reactor, and (3) a correction for a deviation from pseudo first-order kinetics. The dependences of these artefacts on various measurable parameters, such as the pyrrole-to-OH ratio or the bimolecular reaction rate constants of ambient trace gases with OH, are also studied. From these dependences, parameterizations are proposed to correct the OH reactivity measurements for the abovementioned artefacts. A comparison of experimental and simulation results is then discussed. The simulations were performed using a 0-D box model including either (1) a

  5. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    Within the framework of cadastral beach evaluation, a volumetric method for an index of natural variability is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting of both the common beach contour and specific areas for each time.

  6. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
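
    Variable importance in projection (VIP) scores are a standard way to rank predictors from a fitted PLSR model; a sketch with sklearn on synthetic data (the soil dataset itself is not reproduced, and the common VIP > 1 cut-off is a convention, not the authors' stated rule):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def vip_scores(pls):
            """VIP for each predictor of a fitted PLSRegression."""
            W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
            p, a = W.shape
            ssy = np.array([(T[:, k] @ T[:, k]) * (Q[:, k] @ Q[:, k])
                            for k in range(a)])   # y-variance explained per component
            wnorm = W / np.linalg.norm(W, axis=0)
            return np.sqrt(p * ((wnorm ** 2) @ ssy) / ssy.sum())

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 12))             # 30 samples, 12 soil variables
        y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=30)  # "yield"

        pls = PLSRegression(n_components=3).fit(X, y)
        print("selected:", np.where(vip_scores(pls) > 1.0)[0])  # VIP > 1 rule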

  7. Microprocessor-based, on-line decision aid for resolving conflicting nuclear reactor instrumentation

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1981-01-01

    We describe one design for a microprocessor-based, on-line decision aid for identifying and resolving false, conflicting, or misleading instrument indications resulting from certain systems interactions in a pressurized water reactor. The system processes sensor signals from groups of instruments that track together under nominal transient and certain accident conditions, and alarms when they do not track together. We examine multiple-casualty systems interactions and formulate a trial grouping of variables that track together under specified conditions. A two-of-three redundancy check of key variables provides an alarm and an indication of conflicting information when one signal suddenly tracks in opposition due to a multiple casualty, instrument failure, and/or locally abnormal conditions. Since a two-of-three vote of variables in conflict is inconclusive evidence, the system is not designed to provide tripping or corrective action, but improves the operator/instrument interface by providing additional and partially digested information
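
    A minimal sketch of the two-of-three consistency check (tolerances and channel names are illustrative, not from the paper):

        def check_group(a, b, c, tol):
            """Two-of-three check for variables that should track together;
            returns the suspect channel, or None, or 'inconclusive'."""
            ab, ac, bc = abs(a - b) <= tol, abs(a - c) <= tol, abs(b - c) <= tol
            if ab and ac and bc:
                return None                # all three track together
            if bc and not ab and not ac:
                return "A"                 # A tracks in opposition
            if ac and not ab and not bc:
                return "B"
            if ab and not ac and not bc:
                return "C"
            return "inconclusive"          # alarm/indication only, no trip

        print(check_group(10.0, 10.1, 13.7, tol=0.5))   # -> 'C'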

  8. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research, where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of, e.g., pressures, temperatures, pressure differences and single-phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However, some special instrumentation, such as online concentration measurement for boric acid in the water phase or for non-condensibles in the steam atmosphere, as well as flow visualization techniques, were further developed and successfully applied during recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in the local instrumentation technology for two-phase flow by the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermohydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.

  9. KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method

    International Nuclear Information System (INIS)

    Westley, G.W.

    1975-01-01

    1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user
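
    The task KEELE addresses, minimizing a nonlinear function under linear equality and inequality constraints, is what scipy's SLSQP solver handles today; the sketch below shows the problem class only, not KEELE's own variable metric algorithm.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):   # Rosenbrock, a standard nonlinear test function
            return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

        cons = [{"type": "eq",   "fun": lambda x: x[0] + x[1] - 1.5},  # equality
                {"type": "ineq", "fun": lambda x: 2.0 - x[0]}]         # inequality

        res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=cons)
        print(res.x, res.fun)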

  10. Propulsion and launching analysis of variable-mass rockets by analytical methods

    OpenAIRE

    D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh

    2013-01-01

    In this study, applications of some analytical methods to the nonlinear equation for the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), the homotopy perturbation method (HPM) and the least square method (LSM) were applied and their results are compared with a numerical solution. Excellent agreement between the analytical methods and the numerical one is observed in the results, and this reveals that the analytical methods are effective and convenient. Also a paramet...

  11. A method of estimating GPS instrumental biases with a convolution algorithm

    Science.gov (United States)

    Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua

    2018-03-01

    This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, unknown DCB parameters were arranged into a set of equations with GPS observations in a day unit by assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method is examined by applying it to 361 days in 2014 using the observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was cross-compared with the DCBs estimated by the mesh method and the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.
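
    Stripped of the TEC modelling, the daily DCB estimation reduces to a least-squares problem with a datum constraint; a toy version with invented sizes and noise, not the GEONET processing:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sat, n_rcv = 4, 3
        dcb_sat = rng.normal(0, 3, n_sat)
        dcb_sat -= dcb_sat.mean()            # zero-mean satellite datum (ns)
        dcb_rcv = rng.normal(0, 3, n_rcv)

        # after the smooth TEC term is removed, each observation is
        # dcb_sat[s] + dcb_rcv[r] + noise
        pairs = [(s, r) for s in range(n_sat) for r in range(n_rcv)]
        A = np.zeros((len(pairs) + 1, n_sat + n_rcv))
        b = np.zeros(len(pairs) + 1)
        for k, (s, r) in enumerate(pairs):
            A[k, s] = A[k, n_sat + r] = 1.0
            b[k] = dcb_sat[s] + dcb_rcv[r] + rng.normal(0, 0.05)
        A[-1, :n_sat] = 1.0                  # constraint: satellite DCBs sum to 0

        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.round(est[:n_sat] - dcb_sat, 3))   # recovered to ~0.05 ns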

  12. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Bertram, Anna

    2018-01-01

    Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...

  13. Modern Instrumental Methods in Forensic Toxicology*

    Science.gov (United States)

    Smith, Michael L.; Vorce, Shawn P.; Holler, Justin M.; Shimomura, Eric; Magluilo, Joe; Jacobs, Aaron J.; Huestis, Marilyn A.

    2009-01-01

    This article reviews modern analytical instrumentation in forensic toxicology for identification and quantification of drugs and toxins in biological fluids and tissues. A brief description of the theory and inherent strengths and limitations of each methodology is included. The focus is on new technologies that address current analytical limitations. A goal of this review is to encourage innovations to improve our technological capabilities and to encourage use of these analytical techniques in forensic toxicology practice. PMID:17579968

  14. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, in recovering both the blurring operator and the true image, makes the problem really difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
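
    The core of variable projection, eliminating the linear variables by an inner least-squares solve so that the outer Gauss-Newton-type iteration sees only the nonlinear ones, fits in a few lines; the exponential model below is a generic stand-in for the blur/image separation, without the paper's constraints and regularization.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        t = np.linspace(0, 5, 50)
        y = 2.5 * np.exp(-t / 1.3) + rng.normal(0, 0.02, t.size)  # c=2.5, tau=1.3

        def projected_residual(theta):
            tau = theta[0]
            phi = np.exp(-t / tau)[:, None]        # basis for the linear part
            c, *_ = np.linalg.lstsq(phi, y, rcond=None)
            return y - phi @ c                     # residual after projection

        res = least_squares(projected_residual, x0=[0.5], bounds=(1e-3, np.inf))
        print("tau =", res.x[0])                   # ~1.3; c recovered by lstsq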

  15. A damage detection method for instrumented civil structures using prerecorded Green’s functions and cross-correlation

    OpenAIRE

    Heckman, Vanessa; Kohler, Monica; Heaton, Thomas

    2011-01-01

    Automated damage detection methods have application to instrumented structures that are susceptible to types of damage that are difficult or costly to detect. The presented method has application to the detection of brittle fracture of welded beam-column connections in steel moment-resisting frames (MRFs), where locations of potential structural damage are known a priori. The method makes use of a prerecorded catalog of Green’s function templates and a cross-correlation method ...
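
    The detection step amounts to sliding a stored template over the recorded response and thresholding the normalized cross-correlation; a minimal sketch (the threshold value is illustrative):

        import numpy as np

        def detect(record, template, threshold=0.8):
            """Flag windows whose normalized cross-correlation with a
            prerecorded Green's-function template exceeds the threshold."""
            m = len(template)
            tpl = (template - template.mean()) / template.std()
            hits = []
            for i in range(len(record) - m + 1):
                w = record[i:i + m]
                if w.std() == 0:
                    continue
                cc = np.dot((w - w.mean()) / w.std(), tpl) / m   # in [-1, 1]
                if cc > threshold:
                    hits.append((i, cc))
            return hits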

  16. Pancreatitis Quality of Life Instrument: Development of a new instrument

    Directory of Open Access Journals (Sweden)

    Wahid Wassef

    2014-02-01

    Objectives: The goal of this project was to develop the first disease-specific instrument for the evaluation of quality of life in chronic pancreatitis. Methods: Focus groups and interview sessions were conducted with chronic pancreatitis patients to identify items felt to impact quality of life, which were subsequently formatted into a paper-and-pencil instrument. This instrument was used to conduct an online survey by an expert panel of pancreatologists to evaluate its content validity. Finally, the modified instrument was presented to patients during precognitive testing interviews to evaluate its clarity and appropriateness. Results: In total, 10 patients were enrolled in the focus groups and interview sessions, where they identified 50 items. Once redundant items were removed, the 40 remaining items were made into a paper-and-pencil instrument referred to as the Pancreatitis Quality of Life Instrument. Through the processes of content validation and precognitive testing, the number of items in the instrument was reduced to 24. Conclusions: This marks the development of the first disease-specific instrument to evaluate quality of life in chronic pancreatitis. It includes unique features not found in generic instruments (economic factors, stigma, and spiritual factors). Although this marks a giant step forward, psychometric evaluation is still needed prior to its clinical use.

  17. Reactor instrumentation and control

    International Nuclear Information System (INIS)

    Wach, D.; Beraha, D.

    1980-01-01

    The methods for measuring radiation are briefly reviewed. The instrumentation for neutron flux measurement is classified into out-of-core and in-core instrumentation. The out-of-core instrumentation monitors the operational range from the subcritical reactor to full power. This large range is covered by several measurement channels which derive their signals from counter tubes and ionization chambers. The in-core instrumentation provides more detailed information on the power distribution in the core. The self-powered neutron detectors and the aeroball system in PWR reactors are discussed. Temperature and pressure measurement devices are briefly discussed. The different methods for leak detection are described. To conclude the plant instrumentation part, some new monitoring systems and analysis methods are presented: early failure detection methods by noise analysis, acoustic monitoring and vibration monitoring. The presentation of the control starts from a qualitative assessment of the reactor dynamics. The chosen control strategy leads to the definition of the part-load diagram, which provides the set-points for the different control systems. The tasks and the functions of these control systems are described. In addition to the control, a number of limiting systems are employed to keep the reactor in a safe operating region. Finally, an outlook is given on future developments in control, concerning mainly the increased application of process computers. (orig./RW)

  18. Essays on Neural Network Sampling Methods and Instrumental Variables

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart)

    2006-01-01

    In recent decades, complex models have been derived for all kinds of economic processes, such as the growth of Gross Domestic Product (GDP). In some cases, these models require advanced methods to compute probabilities, for example the probability of an approaching

  19. A Contribution To The Development And Analysis Of A Combined Current-Voltage Instrument Transformer By Using Modern CAD Methods

    International Nuclear Information System (INIS)

    Chundeva-Blajer, Marija M.

    2004-01-01

    The principal aim and task of the thesis is the analysis and development of a 20 kV combined current-voltage instrument transformer (CCVIT) using modern CAD techniques. The CCVIT is a complex electromagnetic system comprising four windings and two magnetic cores in one insulation housing for the simultaneous transformation of high voltages and currents to signal values measurable by standard instruments. Analytical design methods can be applied to simple electromagnetic configurations, which is not the case with the CCVIT: there is mutual electromagnetic influence between the voltage measurement core (VMC) and the current measurement core (CMC). After the analytical CCVIT design had been done, exact determination of its metrological characteristics was accomplished using the numerical finite element method implemented in the FEM-3D program package. The FEM-3D calculation is made in 19 cross-sectional layers along the z-axis of the CCVIT three-dimensional domain. By FEM-3D application, the three-dimensional CCVIT magnetic field distribution is derived. This is the basis for the calculation of the initial metrological characteristics of the CCVIT (the VMC is accuracy class 3 and the CMC is accuracy class 1). By using a stochastic optimization technique based on a genetic algorithm, the CCVIT optimal design is achieved. The objective function is the minimum of the metrological parameters (VMC voltage error and CMC current error). There are 11 independent input variables in the optimization process by which the optimal project is derived. The optimal project was then adapted for realization of a prototype, yielding the optimized project. A full comparative analysis of the metrological and electromagnetic characteristics of the three projects is accomplished. By application of the program package MATLAB/SIMULINK, the CCVIT transient phenomena are analyzed for different regimes in the three design projects. In the Instrument Transformer Factory of EMO A. D.-Ohrid a CCVIT

  20. Recursive form of general limited memory variable metric methods

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2013-01-01

    Roč. 49, č. 2 (2013), s. 224-235 ISSN 0023-5954 Institutional support: RVO:67985807 Keywords : unconstrained optimization * large scale optimization * limited memory methods * variable metric updates * recursive matrix formulation * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://dml.cz/handle/10338.dmlcz/143365

  1. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, ''the Leech method'', for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiograph twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two of the scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  2. A method based on a separation of variables in magnetohydrodynamics (MHD)

    International Nuclear Information System (INIS)

    Cessenat, M.; Genta, P.

    1996-01-01

    We use a method based on a separation of variables for solving a system of first-order partial differential equations, in a very simple model of MHD. The method consists in introducing three unknown variables φ1, φ2, φ3 in addition to the time variable τ and then searching for a solution which is separated with respect to φ1 and τ only. This is allowed by a very simple relation, called a 'metric separation equation', which governs the type of solutions with respect to time. The families of solutions for the system of equations thus obtained correspond to a radial evolution of the fluid. Solving the MHD equations is then reduced to finding the transverse component H_Σ of the magnetic field on the unit sphere Σ by solving a nonlinear partial differential equation on Σ. We thus generalize ideas due to Courant-Friedrichs and to Sedov on dimensional analysis and self-similar solutions. (authors)

  3. An effective method for finding special solutions of nonlinear differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Qin Maochang; Fan Guihong

    2008-01-01

    There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method
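
    For reference, a Bernoulli equation is the first-order ODE (assuming the standard definition; the Letter's particular correspondence is not reproduced here)

        y' + p(x)\,y = q(x)\,y^{n}, \qquad n \neq 0, 1,

    which the substitution v = y^{1-n} turns into the linear equation

        v' + (1 - n)\,p(x)\,v = (1 - n)\,q(x),

    so any equation whose special solutions can be mapped onto such a Bernoulli equation inherits its closed-form solutions.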

  4. Improved methods for signal processing in measurements of mercury by Tekran® 2537A and 2537B instruments

    Science.gov (United States)

    Ambrose, Jesse L.

    2017-12-01

    Atmospheric Hg measurements are commonly carried out using Tekran® Instruments Corporation's model 2537 Hg vapor analyzers, which employ gold amalgamation preconcentration sampling and detection by thermal desorption (TD) and atomic fluorescence spectrometry (AFS). A generally overlooked and poorly characterized source of analytical uncertainty in those measurements is the method by which the raw Hg atomic fluorescence (AF) signal is processed. Here I describe new software-based methods for processing the raw signal from the Tekran® 2537 instruments, and I evaluate the performances of those methods together with the standard Tekran® internal signal processing method. For test datasets from two Tekran® instruments (one 2537A and one 2537B), I estimate that signal processing uncertainties in Hg loadings determined with the Tekran® method are within ±[1 % + 1.2 pg] and ±[6 % + 0.21 pg], respectively. I demonstrate that the Tekran® method can produce significant low biases (≥ 5 %) not only at low Hg sample loadings (< 5 pg) but also at tropospheric background concentrations of gaseous elemental mercury (GEM) and total mercury (THg) (˜ 1 to 2 ng m-3) under typical operating conditions (sample loadings of 5-10 pg). Signal processing uncertainties associated with the Tekran® method can therefore represent a significant unaccounted for addition to the overall ˜ 10 to 15 % uncertainty previously estimated for Tekran®-based GEM and THg measurements. Signal processing bias can also add significantly to uncertainties in Tekran®-based gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurements, which often derive from Hg sample loadings < 5 pg. In comparison, estimated signal processing uncertainties associated with the new methods described herein are low, ranging from within ±0.053 pg, when the Hg thermal desorption peaks are defined manually, to within ±[2 % + 0.080 pg] when peak definition is automated. Mercury limits of detection (LODs
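
    Manual peak definition of the kind evaluated here reduces, in essence, to baseline subtraction and numerical integration over a user-chosen window; a generic sketch, not the author's software:

        import numpy as np

        def peak_area(t, signal, i0, i1):
            """Integrate one thermal-desorption AF peak over t[i0:i1] after
            subtracting a straight-line baseline through the endpoints."""
            seg_t, seg = t[i0:i1], signal[i0:i1]
            base = np.interp(seg_t, [seg_t[0], seg_t[-1]], [seg[0], seg[-1]])
            return np.trapz(seg - base, seg_t)   # signal units x seconds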

  5. Fuzzy associative memories for instrument fault detection

    International Nuclear Information System (INIS)

    Heger, A.S.

    1996-01-01

    A fuzzy logic instrument fault detection scheme is developed for systems having two or three redundant sensors. In the fuzzy logic approach, the deviation between each signal pairing is computed and classified into three fuzzy sets. A rule base is created allowing the human perception of the situation to be represented mathematically. Fuzzy associative memories are then applied. Finally, a defuzzification scheme is used to find the centroid location, and hence the signal status. Real-time analyses are carried out to evaluate the instantaneous signal status as well as the long-term results for the sensor set. Instantaneous signal validation results are used to compute a best estimate for the measured state variable. The long-term sensor validation method uses a frequency fuzzy variable to determine the signal condition over a specific period. To corroborate the methodology, synthetic data representing various anomalies are analyzed with both the fuzzy logic technique and the parity space approach. (Author)
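
    A toy version of the scheme, with triangular membership functions over a signal-pair deviation and centroid defuzzification (set boundaries and status values are invented):

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] with peak at b."""
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def validate(s1, s2, scale=1.0):
            """Classify a pairing deviation into three fuzzy sets and
            defuzzify to a status in [0, 1] (1 = signals agree)."""
            d = abs(s1 - s2) / scale
            w = np.array([tri(d, -0.5, 0.0, 0.5),    # small deviation
                          tri(d, 0.25, 0.75, 1.25),  # medium
                          tri(d, 1.0, 1.5, 2.0)])    # large
            status = np.array([1.0, 0.5, 0.0])
            return float(w @ status / w.sum()) if w.sum() > 0 else 0.0

        print(validate(10.0, 10.1), validate(10.0, 10.9))   # -> 1.0 0.5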

  6. Process variables consistency at Atucha I NPP

    International Nuclear Information System (INIS)

    Arostegui, E.; Aparicio, M.; Herzovich, P.; Wenzel, J.; Urrutia, G.

    1996-01-01

    A method to evaluate the performance of the different systems has been developed and is still under assessment. In order to perform this job, a process computer upgraded in 1992 was used. In this sense, and taking into account that the resolution and stability of the instrumentation are higher than its accuracy, process data were corrected by software. In this way, much time otherwise spent on recalibration was avoided, as were human errors. Besides, this method allowed a better record of instrumentation performance and also an early detection of instrument failures. On the other hand, process modelization, mainly heat and material balances, has also been used to check that sensors, transducers, analog-to-digital converters and computer software are working properly. Some of these process equations have been introduced into the computer codes, so in some cases it is possible to have an ''on line'' analysis of process variables and process instrumentation behaviour. Examples of process analysis are: heat exchangers, i.e. the power calculated using shell-side temperatures is compared with the tube-side values; turbine performance is compared with condenser water temperature; power measured on the secondary side (one-minute average measurements, optimized in order to eliminate process noise) is compared with power obtained from primary-side data; the calibration of temperatures has been made by direct measurement of redundant sensors, which has been shown to be the best method; pressure and differential pressure transducers are cross-checked in service when possible. In the present paper, details of the examples mentioned above and of other ones are given and discussed. (author). 2 refs, 1 fig., 1 tab

  7. Process variables consistency at Atucha I NPP

    Energy Technology Data Exchange (ETDEWEB)

    Arostegui, E; Aparicio, M; Herzovich, P; Wenzel, J [Central Nuclear Atucha I, Nucleoelectrica S.A., Lima, Buenos Aires (Argentina); Urrutia, G [Comision Nacional de Energia Atomica, Buenos Aires (Argentina)

    1997-12-31

    A method to evaluate the performance of the different systems has been developed and is still under assessment. In order to perform this job, a process computer upgraded in 1992 was used. In this sense, and taking into account that the resolution and stability of the instrumentation are higher than its accuracy, process data were corrected by software. In this way, much time otherwise spent on recalibration was avoided, as were human errors. Besides, this method allowed a better record of instrumentation performance and also an early detection of instrument failures. On the other hand, process modelization, mainly heat and material balances, has also been used to check that sensors, transducers, analog-to-digital converters and computer software are working properly. Some of these process equations have been introduced into the computer codes, so in some cases it is possible to have an ``on line`` analysis of process variables and process instrumentation behaviour. Examples of process analysis are: heat exchangers, i.e. the power calculated using shell-side temperatures is compared with the tube-side values; turbine performance is compared with condenser water temperature; power measured on the secondary side (one-minute average measurements, optimized in order to eliminate process noise) is compared with power obtained from primary-side data; the calibration of temperatures has been made by direct measurement of redundant sensors, which has been shown to be the best method; pressure and differential pressure transducers are cross-checked in service when possible. In the present paper, details of the examples mentioned above and of other ones are given and discussed. (author). 2 refs, 1 fig., 1 tab.
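
    The heat-balance cross-checks described above are simple in form; a schematic example with invented plant numbers, not Atucha I data:

        def hx_power_mw(m_dot, cp, t_in, t_out):
            """Q = m*cp*dT in MW (m_dot in kg/s, cp in kJ/kg.K, T in degC)."""
            return m_dot * cp * (t_out - t_in) / 1000.0

        # power from shell-side vs tube-side temperature differences
        q_shell = hx_power_mw(950.0, 4.2, 265.0, 297.0)   # heated side
        q_tube = hx_power_mw(940.0, 4.3, 298.0, 266.0)    # cooling side (< 0)
        if abs(q_shell + q_tube) > 0.02 * abs(q_shell):
            print("balance mismatch > 2%: check sensors or calibration")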

  8. Instrument comparison for Aerosolized Titanium Dioxide

    Science.gov (United States)

    Ranpara, Anand

    Recent toxicological studies have shown that the surface area of ultrafine particles (UFP, i.e., particles with diameters less than 0.1 micrometer) has a stronger correlation with adverse health effects than does the mass of these particles. Ultrafine titanium dioxide (TiO2) particles are widely used in industry, and their use is associated with adverse health outcomes, such as microvascular dysfunction and pulmonary damage. The primary aim of this experimental study was to compare a variety of laboratory and industrial hygiene (IH) field study instruments all measuring the same aerosolized TiO2. The study also observed intra-instrument variability between measurements made by two apparently identical devices of the same type of instrument placed side-by-side. The types of instruments studied were (1) DustTrak(TM) DRX, (2) Personal Data RAMs(TM) (PDR), (3) GRIMM, (4) Diffusion charger (DC) and (5) Scanning Mobility Particle Sizer (SMPS). Two devices of each of the four IH field study instrument types were used to measure six levels of mass concentration of fine and ultrafine TiO2 aerosols in controlled chamber tests. Metrics evaluated included real-time mass, active surface area and number/geometric surface area distributions, and off-line gravimetric mass and morphology on filters. DustTrak(TM) DRXs and PDRs were used for mass concentration measurements. DCs were used for active surface area concentration measurements. GRIMMs were used for number concentration measurements. The SMPS was used for inter-instrument comparisons of surface area and number concentrations. The results indicated that the two apparently identical devices of each of the DRX and PDR were not statistically different from each other for all trials of both powder sizes (p < 5%). The mean difference between mass concentrations measured by the two DustTrak DRX devices was smaller than that between the two PDR devices. DustTrak DRX measurements were closer to the reference method, gravimetric mass concentration

  9. A Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for the Surface of Mars: An Instrument for the Planetary Science Community

    Science.gov (United States)

    Edmunson, J.; Gaskin, J. A.; Danilatos, G.; Doloboff, I. J.; Effinger, M. R.; Harvey, R. P.; Jerman, G. A.; Klein-Schoder, R.; Mackie, W.; Magera, B.

    2016-01-01

    The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Science (ROSES), will build upon previous miniaturized SEM designs for lunar and International Space Station (ISS) applications and recent advancements in variable pressure SEMs to design and build a SEM to complete analyses of samples on the surface of Mars using the atmosphere as an imaging medium. By the end of the PICASSO work, a prototype of the primary proof-of-concept components (i.e., the electron gun, focusing optics and scanning system) will be assembled, and preliminary testing in a Mars analog chamber at the Jet Propulsion Laboratory will be completed to partially fulfill Technology Readiness Level 5 requirements for those components. The team plans to have Secondary Electron Imaging (SEI), Backscattered Electron (BSE) detection, and Energy Dispersive Spectroscopy (EDS) capabilities through the MVP-SEM.

  10. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    Due to the high number of variables in metabolomics data sets (both raw data and data after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related in an integrated approach. In this study, different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subsets of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when...

  11. Low-Dimensional Feature Representation for Instrument Identification

    Science.gov (United States)

    Ihara, Mizuki; Maeda, Shin-Ichi; Ikeda, Kazushi; Ishii, Shin

    For monophonic music instrument identification, various feature extraction and selection methods have been proposed. One of the issues in instrument identification is that the same spectrum is not always observed even for the same instrument, due to differences in recording conditions. It is therefore important to find non-redundant, instrument-specific features that retain the information essential for high-quality instrument identification, so that they can be applied to various instrumental music analyses. As such a dimensionality reduction method, the authors propose the use of linear projection methods: local Fisher discriminant analysis (LFDA) and LFDA combined with principal component analysis (PCA). After experimentally clarifying that raw power spectra are actually good features for instrument classification, the authors reduced the feature dimensionality by LFDA or by PCA followed by LFDA (PCA-LFDA). The reduced features achieved reasonably high identification performance, comparable to or higher than that of the raw power spectra and that achieved in other existing studies. These results demonstrate that LFDA and PCA-LFDA can successfully extract low-dimensional instrument features that retain the characteristic information of the instruments.
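
    A minimal sketch of this kind of projection-based dimensionality reduction, assuming scikit-learn and substituting standard Fisher (linear) discriminant analysis for the paper's local FDA, which is not available in scikit-learn; the synthetic "spectra" and all sizes are illustrative only.

        # Sketch: PCA followed by (plain) Fisher discriminant analysis on fake "spectra".
        # The LFDA used in the paper would require a dedicated implementation.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        n_per_class, n_bins, n_classes = 50, 256, 3  # hypothetical sizes

        # Fake power spectra: each "instrument" has a different spectral envelope.
        X = np.vstack([rng.gamma(2.0, 1.0, (n_per_class, n_bins)) +
                       np.sin(np.linspace(0, (c + 1) * np.pi, n_bins)) ** 2
                       for c in range(n_classes)])
        y = np.repeat(np.arange(n_classes), n_per_class)

        # PCA first removes redundant directions, then FDA finds discriminative ones.
        X_pca = PCA(n_components=20).fit_transform(X)
        lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
        X_low = lda.fit_transform(X_pca, y)

        print("reduced shape:", X_low.shape)           # (150, 2)
        print("training accuracy:", lda.score(X_pca, y))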

  12. Do radio frequencies of medical instruments common in the operating room interfere with near-infrared spectroscopy signals?

    Science.gov (United States)

    Shadgan, Babak; Molavi, Behnam; Reid, W. D.; Dumont, Guy; Macnab, Andrew J.

    2010-02-01

    Background: Medical and diagnostic applications of near infrared spectroscopy (NIRS) are increasing, especially in operating rooms (OR). Since NIRS is an optical technique, radio frequency (RF) interference from other instruments is unlikely to affect the raw optical data; however, NIRS data processing and signal output could be affected. Methods: We investigated the potential of three common OR instruments, an electrical cautery, an orthopaedic drill and an imaging system, to generate electromagnetic interference (EMI) that could influence NIRS signals. The time of onset and duration of every operation of each device was recorded during surgery. To remove the effects of slowly changing physiological variables, we first used a lowpass filter and then selected two windows of variable length around the moment of device onset. For each event, the variances (energy) and means of the signals in the two windows were compared. Results: Twenty patients were studied during ankle surgery. Analysis shows no statistically significant difference in the means and variances of the NIRS signals (p < 0.01) during operation of any of the three devices for all surgeries. Conclusion: This method confirms that the instruments evaluated caused no significant interference. NIRS can potentially be used without EMI in clinical environments such as the OR.
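
    A rough sketch of the windowed comparison described above, assuming NumPy/SciPy; the cutoff frequency, sampling rate, window lengths, and significance test are illustrative assumptions rather than the authors' exact choices.

        # Sketch: filter out slow drift from a NIRS-like signal, then compare mean
        # and variance in windows before and after a device-onset time.
        import numpy as np
        from scipy import signal, stats

        fs = 10.0                      # assumed sampling rate (Hz)
        t = np.arange(0, 120, 1 / fs)  # two minutes of fake data
        rng = np.random.default_rng(1)
        nirs = 0.5 * np.sin(2 * np.pi * 0.02 * t) + rng.normal(0, 0.05, t.size)

        # Remove slowly changing physiological components by subtracting a
        # low-pass filtered copy (assumed 0.5 Hz cutoff), keeping the residual.
        b, a = signal.butter(4, 0.5 / (fs / 2), btype="low")
        residual = nirs - signal.filtfilt(b, a, nirs)

        onset = int(60 * fs)           # hypothetical device switch-on at t = 60 s
        w = int(10 * fs)               # 10 s windows on either side
        before, after = residual[onset - w:onset], residual[onset:onset + w]

        t_stat, p_mean = stats.ttest_ind(before, after, equal_var=False)
        var_ratio = np.var(after, ddof=1) / np.var(before, ddof=1)
        print(f"mean shift p = {p_mean:.3f}, variance ratio = {var_ratio:.2f}")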

  13. Comparing surgical trays with redundant instruments with trays with reduced instruments: a cost analysis.

    Science.gov (United States)

    John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W

    2016-01-01

    When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays containing redundant instruments with that of trays with reduced instruments ("reduced trays"). We performed a cost analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources, including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and the time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume and a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
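
    The arithmetic behind such a tray cost model is simple; the sketch below, with entirely hypothetical wages, times, and instrument counts (not the paper's inputs), shows one way the annual cost difference could be computed.

        # Sketch of a per-tray direct-cost model (all numbers hypothetical).
        def annual_tray_cost(instruments_per_tray: int, trays_per_year: int,
                             pack_s_per_instrument: float = 9.0,
                             decon_s_per_instrument: float = 24.0,
                             wage_per_hour: float = 25.0,
                             depreciation_per_use: float = 0.05) -> float:
            labour_s = instruments_per_tray * (pack_s_per_instrument
                                               + decon_s_per_instrument)
            per_tray = (labour_s / 3600.0 * wage_per_hour
                        + instruments_per_tray * depreciation_per_use)
            return per_tray * trays_per_year

        redundant = annual_tray_cost(instruments_per_tray=40, trays_per_year=500)
        reduced = annual_tray_cost(instruments_per_tray=16, trays_per_year=500)
        print(f"annual saving: ${redundant - reduced:,.0f}")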

  14. A critical appraisal of instruments to measure outcomes of interprofessional education.

    Science.gov (United States)

    Oates, Matthew; Davidson, Megan

    2015-04-01

    Interprofessional education (IPE) is believed to prepare health professional graduates for successful collaborative practice. A range of instruments have been developed to measure the outcomes of IPE. An understanding of the psychometric properties of these instruments is important if they are to be used to measure the effectiveness of IPE. This review set out to identify instruments available to measure outcomes of IPE and collaborative practice in pre-qualification health professional students and to critically appraise the psychometric properties of validity, responsiveness and reliability against contemporary standards for instrument design. Instruments were selected from a pool of extant instruments and subjected to critical appraisal to determine whether they satisfied inclusion criteria. The qualitative and psychometric attributes of the included instruments were appraised using a checklist developed for this review. Nine instruments were critically appraised, including the widely adopted Readiness for Interprofessional Learning Scale (RIPLS) and the Interdisciplinary Education Perception Scale (IEPS). Validity evidence for instruments was predominantly based on test content and internal structure. Ceiling effects and lack of scale width contribute to the inability of some instruments to detect change in variables of interest. Limited reliability data were reported for two instruments. Scale development and scoring protocols were generally reported by instrument developers, but the inconsistent application of scoring protocols for some instruments was apparent. A number of instruments have been developed to measure outcomes of IPE in pre-qualification health professional students. Based on reported validity evidence and reliability data, the psychometric integrity of these instruments is limited. The theoretical test construction paradigm on which instruments have been developed may be contributing to the failure of some instruments to detect change in

  15. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas – a review

    OpenAIRE

    E. Cristiano; M.-C. ten Veldhuis; N. van de Giesen

    2017-01-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological res...

  16. Comparison of methods and instruments for 222Rn/220Rn progeny measurement

    International Nuclear Information System (INIS)

    Liu Yanyang; Shang Bing; Wu Yunyun; Zhou Qingzhi

    2012-01-01

    In this paper, comparisons were made among three measurement methods (grab, continuous and integrating measurement) and among different instruments, in a radon/thoron mixed chamber. Taking the optimized five-segment method as the comparison criterion, for the equilibrium-equivalent concentration of 222Rn the results of the BWLM and the 24 h integrating detectors are 31% and 29% higher than the criterion, while the result of the WLx is 20% lower; for 220Rn progeny, the results of the Fiji-142, Kf-602D, BWLM and 24 h integrating detector are 86%, 18%, 28% and 36% higher than the criterion, respectively, except that of the WLx, which is 5% lower. The differences shown require further research. (authors)

  17. Instrumentation and control systems for monitoring and data acquisition for thermal recovery process

    Energy Technology Data Exchange (ETDEWEB)

    Aparicio, J.; Hernandez, E.; Perozo, H. [PDVSA Intevep, S.A. (Venezuela)

    2011-07-01

    Thermal recovery methods are often applied to enhance oil recovery in heavy oil reservoirs; one of their challenges is to control the displacement of the thermal front. Methods are thus implemented to obtain data on the temperatures in the wells at any given time and to monitor other variables so that the behaviour of the thermal front can be predicted. The aim of this paper is to present a new control and instrumentation scheme to measure all of these variables. Software was created using LabVIEW, a graphical programming language, and PostgreSQL, a database management system. Using this software, sensors can be added or removed at any time; trends can be immediately visualized; and the quality of the information is ensured, since there is no human intervention in data collection or processing. This paper presented software which improves the monitoring of all the variables affecting the behaviour of the thermal front.

  18. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
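
    As a hedged illustration of iterative predictor weighting around a PLS model (not the authors' exact mIPW-PLS, whose FSC step and weighting rule are specific to the paper), the following scikit-learn sketch repeatedly fits a PLS regression, scores variables by their absolute coefficients, and discards the weakest ones; all thresholds and sizes are invented.

        # Sketch: crude iterative predictor weighting with PLS (assumed parameters).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        n_samples, n_vars = 60, 400                 # hypothetical LIBS-like sizes
        X = rng.normal(size=(n_samples, n_vars))
        y = X[:, :5] @ np.array([3., -2., 1.5, 1., 0.5]) \
            + rng.normal(0, 0.1, n_samples)

        keep = np.arange(n_vars)
        for _ in range(6):                          # a few pruning iterations
            pls = PLSRegression(n_components=3).fit(X[:, keep], y)
            importance = np.abs(pls.coef_).ravel()
            # Drop the least important half of the remaining variables.
            order = np.argsort(importance)[::-1]
            keep = keep[order[: max(5, keep.size // 2)]]

        print("selected variables:", np.sort(keep))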

  19. Electrical Bioimpedance-Controlled Surgical Instrumentation.

    Science.gov (United States)

    Brendle, Christian; Rein, Benjamin; Niesche, Annegret; Korff, Alexander; Radermacher, Klaus; Misgeld, Berno; Leonhardt, Steffen

    2015-10-01

    A bioimpedance-controlled concept for bone cement milling during revision total hip replacement is presented. Normally, the surgeon removes bone cement manually using a hammer and chisel. However, this procedure is relatively rough, and unintended harm may occur to tissue at any time. The proposed bioimpedance-controlled surgical instrumentation improves this process because, for example, most risks associated with bone cement removal are avoided. The electrical bioimpedance measurements enable online process control by using the milling head as both a cutting tool and a measurement electrode at the same time. Furthermore, a novel integrated surgical milling tool is introduced, which allows acquisition of electrical bioimpedance data for online control; these data are used as a process variable. Process identification is based on finite element method simulation and on experimental studies with a rapid control prototyping system. The control loop design includes the identified process model, the characterization of the noise as normally distributed, and the filtering necessary for sufficient accuracy (±0.5 mm). Also, in a comparative study, noise suppression is investigated in silico with a moving average filter and a Kalman filter. Finally, performance analysis shows that the bioimpedance-controlled surgical instrumentation may also perform effectively at a higher feed rate (e.g., 5 mm/s).
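
    The moving-average versus Kalman comparison mentioned above can be sketched as below; this is a generic scalar random-walk Kalman filter with invented noise variances and an invented signal, not the authors' identified process model.

        # Sketch: moving-average vs. scalar Kalman filtering of a noisy signal.
        import numpy as np

        rng = np.random.default_rng(3)
        true_depth = np.linspace(0.0, 10.0, 500)            # hypothetical mm scale
        meas = true_depth + rng.normal(0.0, 0.8, true_depth.size)

        # Moving average (window length chosen arbitrarily).
        win = 25
        ma = np.convolve(meas, np.ones(win) / win, mode="same")

        # Scalar Kalman filter with a random-walk state model (q, r assumed).
        q, r = 1e-3, 0.8 ** 2
        x_hat, p = meas[0], 1.0
        kf = np.empty_like(meas)
        for i, z in enumerate(meas):
            p += q                                 # predict
            k = p / (p + r)                        # Kalman gain
            x_hat += k * (z - x_hat)               # update
            p *= (1.0 - k)
            kf[i] = x_hat

        for name, est in [("moving average", ma), ("Kalman", kf)]:
            rmse = np.sqrt(np.mean((est - true_depth) ** 2))
            print(f"{name:>14s} RMSE: {rmse:.3f} mm")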

  20. Second-order particle-in-cell (PIC) computational method in the one-dimensional variable Eulerian mesh system

    International Nuclear Information System (INIS)

    Pyun, J.J.

    1981-01-01

    As part of an effort to incorporate the variable Eulerian mesh into the second-order PIC computational method, a truncation error analysis was performed to calculate the second-order error terms for the variable Eulerian mesh system. The results show that the maximum mesh size increment/decrement is limited to α(Δr_i)², where Δr_i is the non-dimensional mesh size of the i-th cell and α is a constant of order one. The numerical solutions of Burgers' equation by the second-order PIC method in the variable Eulerian mesh system were compared with its exact solution. It was found that the second-order accuracy of the PIC method was maintained under the above condition. Additional problems were analyzed using the second-order PIC method in both variable and uniform Eulerian mesh systems. The results indicate that the second-order PIC method in the variable Eulerian mesh system can provide substantial computational time savings with no loss in accuracy.
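
    A small sketch of what the mesh-size constraint means in practice, on a purely illustrative grid: successive cell widths are allowed to grow by at most α(Δr_i)² per cell.

        # Sketch: grow a 1-D variable mesh respecting |dr[i+1] - dr[i]| <= alpha * dr[i]**2.
        alpha = 1.0          # constant of order one (assumed)
        dr = [0.05]          # first non-dimensional cell width (assumed)
        r_edges = [0.0, dr[0]]

        while r_edges[-1] < 1.0:                     # fill the unit interval
            growth = alpha * dr[-1] ** 2             # maximum allowed increment
            dr.append(dr[-1] + growth)               # stretch as fast as permitted
            r_edges.append(r_edges[-1] + dr[-1])

        print(f"{len(dr)} cells; widths range {dr[0]:.4f} to {dr[-1]:.4f}")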

  1. Effect of freezing method and frozen storage duration on instrumental quality of lamb throughout display.

    Science.gov (United States)

    Muela, E; Sañudo, C; Campo, M M; Medel, I; Beltrán, J A

    2010-04-01

    This study evaluated the effect of freezing method (FM) (air blast freezer, freezing tunnel, or nitrogen chamber) and frozen storage duration (FSD) (1, 3, or 6 months) on instrumental measurements of the quality of thawed lamb, aged for a total of 72 h, throughout a 10-day display period, compared with the quality of fresh meat. pH, colour, lipid oxidation, and thawing and cooking losses in the Longissimus thoracis et lumborum muscle were determined following standard methods. FM affected yellowness; FSD affected redness and thawing losses; and both affected oxidation (which increased as freezing rate decreased and/or as storage duration increased). When compared with fresh meat, the main differences appeared in oxidation (where a significant interaction between treatment (3 FM x 3 FSD + fresh meat) and display duration was detected) and in total losses (thaw + cook losses). Oxidation was lower in fresh meat, but values were not significantly different from those of meat stored frozen for 1 month. Fresh meat had smaller total losses than thawed meat, but losses were not significantly different from meat frozen in the freezing tunnel and stored frozen for 1 month. Display duration had a greater effect on instrumental quality parameters than did FM or FSD. pH, b*, and oxidation increased, and L* and a* decreased, with an increasing number of days on display. In conclusion, neither freezing method nor frozen storage for up to 6 months extensively influenced the properties of lamb when instrumental measurements of quality were made on meat displayed for 1 day after thawing. The small deterioration shown in this study should not give consumers concern about frozen meat.

  2. Improved retrieval of cloud base heights from ceilometer using a non-standard instrument method

    Science.gov (United States)

    Wang, Yang; Zhao, Chuanfeng; Dong, Zipeng; Li, Zhanqing; Hu, Shuzhen; Chen, Tianmeng; Tao, Fa; Wang, Yuzhao

    2018-04-01

    Cloud-base height (CBH) is a basic cloud parameter but has not been measured accurately, especially under polluted conditions, due to interference from aerosols. Taking advantage of a comprehensive field experiment in northern China in which a variety of advanced cloud-probing instruments were operated, different methods of detecting CBH are assessed. The Micro-Pulse Lidar (MPL) and the Vaisala ceilometer (CL51) provided two types of backscatter profiles. The latter has been employed widely as a standard means of measuring CBH, using the manufacturer's operational algorithm to generate standard CBH products (CL51 MAN), whose quality is rigorously assessed here in comparison with a research algorithm that we developed, named the value distribution equalization (VDE) algorithm. It was applied to the lidar backscatter profiles from both instruments. The VDE algorithm is found to produce more accurate estimates of CBH for both instruments and can cope well with heavy aerosol loading. By contrast, CL51 MAN overestimates CBH by 400 m and misses many low-level clouds under such conditions. These findings are important given that the CL51 has been adopted operationally by many meteorological stations in China.

  3. Chaos synchronization using single variable feedback based on backstepping method

    International Nuclear Information System (INIS)

    Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang

    2004-01-01

    In recent years, the backstepping method has been developed in the field of nonlinear control, for example in controller design, observer design and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. The method has several advantages for synchronizing chaotic systems: (a) the synchronization error is exponentially convergent; (b) only one variable of the master system is needed; and (c) it presents a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Rössler system demonstrate that the method is very effective.

  4. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
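
    Since this record turns on instrumental variable estimation, a minimal two-stage least squares (2SLS) sketch may help; this is the textbook estimator on simulated data, not the authors' mixed measurement-error likelihood or Monte Carlo EM algorithm.

        # Sketch: IV estimation with a simulated instrument (all data fake).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 5000
        u = rng.normal(size=n)                  # unobserved confounder
        z = rng.normal(size=n)                  # instrument: affects x, not y directly
        x_true = 1.0 + 0.8 * z + u              # true exposure (dose)
        x_obs = x_true + rng.normal(0, 0.5, n)  # classical measurement error
        y = 2.0 + 1.5 * x_true - 2.0 * u + rng.normal(0, 0.3, n)

        # Naive OLS of y on x_obs is biased by confounding and measurement error.
        beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

        # IV (Wald / 2SLS with one instrument): cov(z, y) / cov(z, x).
        beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]

        print(f"true effect 1.5 | OLS {beta_ols:.2f} | IV {beta_iv:.2f}")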

  5. Instrumentation for the follow-up of severe accidents

    International Nuclear Information System (INIS)

    Munoz Sanchez, A.; Nino Perote, R.

    2000-01-01

    During severe accidents, it is foreseeable that the instrumentation installed in a plant will be subjected to conditions more hostile than those for which it was designed and qualified. Moreover, new, specific instrumentation is required to monitor variables which have not been considered until now, and to control systems which lessen the consequences of severe accidents. Both the existing instrumentation used to monitor critical functions under design-basis-accident conditions and the additional instrumentation which provides the information necessary to control and mitigate the consequences of severe accidents have to be designed to withstand such conditions, especially in terms of measurement range, functional characteristics and qualification to withstand the pressure and temperature loads resulting from steam explosion, hydrogen combustion/explosion and high levels of radiation over long periods of time. (Author)

  6. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption of hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence, based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into five grades: no, weak, mid, strong, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients at each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
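
    For an AR(1) model the idea can be sketched directly: the correlation between the series and its dependence component φ·x_{t-1} equals |φ|, which a short Monte Carlo check confirms; the grading thresholds below are invented purely to mirror the five-class idea, not taken from the paper.

        # Sketch: correlation between an AR(1) series and its dependence component.
        import numpy as np

        def ar1(phi: float, n: int, rng) -> np.ndarray:
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal()
            return x

        rng = np.random.default_rng(5)
        for phi in (0.1, 0.3, 0.6, 0.9):
            x = ar1(phi, 20000, rng)
            dependence = phi * x[:-1]            # dependence component of x[1:]
            r = np.corrcoef(x[1:], dependence)[0, 1]
            # Invented grading thresholds (0.2-wide bands), for illustration only.
            grade = ["no", "weak", "mid", "strong", "drastic"][
                int(min(abs(r), 0.99) // 0.2)]
            print(f"phi = {phi:.1f}: r = {r:.2f} -> {grade} variability")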

  7. Combining within and between instrument information to estimate precision

    International Nuclear Information System (INIS)

    Jost, J.W.; Devary, J.L.; Ward, J.E.

    1980-01-01

    When two instruments, both having replicated measurements, are used to measure the same set of items, between-instrument information may be used to augment the within-instrument precision estimate. A method is presented which combines the within- and between-instrument information to obtain an unbiased and minimum-variance estimate of instrument precision. The method does not assume that the instruments have equal precision.
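
    One plausible reading of such a combination, sketched with fake data: within-instrument precision is estimated from replicate spread, a second estimate comes from per-item disagreement between the two instruments, and the two are pooled. The equal-precision simplification, the pooling weights, and all data are assumptions, not the paper's derivation.

        # Sketch: combine within- and between-instrument information (illustrative).
        import numpy as np

        rng = np.random.default_rng(6)
        n_items, n_reps = 30, 2
        truth = rng.normal(100, 5, n_items)
        sigma = 1.2                              # common true precision (assumed)
        a = truth[:, None] + rng.normal(0, sigma, (n_items, n_reps))  # instrument A
        b = truth[:, None] + rng.normal(0, sigma, (n_items, n_reps))  # instrument B

        # Within-instrument estimate: pooled replicate variance.
        var_within = 0.5 * (a.var(axis=1, ddof=1).mean()
                            + b.var(axis=1, ddof=1).mean())

        # Between-instrument estimate: the per-item mean difference has variance
        # 2 * sigma^2 / n_reps when both instruments share precision sigma.
        d = a.mean(axis=1) - b.mean(axis=1)
        var_between = d.var(ddof=1) * n_reps / 2.0

        # Simple equal-weight pooling of the two estimates (assumed weights).
        var_combined = 0.5 * var_within + 0.5 * var_between
        print(f"true {sigma**2:.2f} | within {var_within:.2f} | "
              f"between {var_between:.2f} | combined {var_combined:.2f}")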

  8. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, defining the applicability domain, and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.

  9. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual-information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  10. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual-information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
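
    A loose sketch of mutual-information-based selection with a collinearity check, using scikit-learn's mutual_info_regression; the skip-if-correlated rule below is a simplified stand-in for the paper's MIMRCV replacement procedure, and every threshold and data set is invented.

        # Sketch: rank descriptors by mutual information, skipping collinear ones.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(7)
        n, p = 200, 30
        X = rng.normal(size=(n, p))
        X[:, 5] = X[:, 0] + rng.normal(0, 0.05, n)      # deliberately collinear pair
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

        mi = mutual_info_regression(X, y, random_state=0)
        corr = np.abs(np.corrcoef(X, rowvar=False))

        selected = []
        for j in np.argsort(mi)[::-1]:                  # best MI first
            # Skip a candidate highly correlated (assumed |r| > 0.8) with a pick.
            if all(corr[j, k] <= 0.8 for k in selected):
                selected.append(int(j))
            if len(selected) == 5:                      # assumed subset size
                break

        print("selected descriptors:", selected)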

  11. Quantification and variability in colonic volume with a novel magnetic resonance imaging method

    DEFF Research Database (Denmark)

    Nilsson, M; Sandberg, Thomas Holm; Poulsen, Jakob Lykke

    2015-01-01

    Background: Segmental distribution of colorectal volume is relevant in a number of diseases, but clinical and experimental use demands robust reliability and validity. Using a novel semi-automatic magnetic resonance imaging-based technique, the aims of this study were to describe: (i) the inter-individual and intra-individual variability of segmental colorectal volumes between two observations in healthy subjects and (ii) the change in segmental colorectal volume distribution before and after defecation. Methods: The inter-individual and intra-individual variability of four colorectal volumes (cecum ...) ... (p = 0.02). Conclusions & Inferences: Imaging of segmental colorectal volume, morphology, and fecal accumulation is advantageous over conventional methods in its low variability, high spatial resolution, and absence of contrast-enhancing agents and irradiation. Hence, the method is suitable...

  12. Improved method for solving the neutron transport problem by discretization of space and energy variables

    International Nuclear Information System (INIS)

    Bosevski, T.

    1971-01-01

    The polynomial interpolation of neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. The solutions of this system are the required flux values at the chosen values of the space and energy variables. The proposed improved method for solving the neutron transport problem, including its mathematical formalism, is simple and efficient, since the number of input data needed is decreased in the treatment of both the spatial and energy variables. The mathematical method based on this approach gives more stable solutions with a significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared with the results of other very precise calculations. The proposed method was better in terms of convergence rate, computing time and required computer memory. The discretization of variables enabled direct comparison of theoretical and experimental results.

  13. Importance of Intrinsic and Instrumental Value of Education in Pakistan

    Science.gov (United States)

    Kumar, Mahendar

    2017-01-01

    Normally, the effectiveness of any object or thing is judged by two values: intrinsic and instrumental. To compare the intrinsic value of education with its instrumental value, this study used the following variables: getting knowledge for its own sake, getting knowledge for social status, getting knowledge for job or business endeavor and getting…

  14. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract. Background: Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to human complex diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting an overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A number of SNP-SNP pairs far larger than the sample size, the so-called large-p-small-n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results: We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a subsequent variable selection procedure in the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions: Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
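
    Sure independence screening reduces, in its simplest form, to ranking predictors by marginal association and keeping the top few; the sketch below applies that idea to pairwise interaction columns with a simulated binary trait, and is far simpler than the dummy-coding strategies of the paper.

        # Sketch: marginal screening of pairwise interaction terms (toy data).
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(8)
        n, p = 500, 40                     # individuals, SNPs (toy sizes)
        snps = rng.integers(0, 3, (n, p))  # 0/1/2 genotype coding

        # Trait driven by one interacting pair plus noise (logistic model).
        logit = 0.9 * (snps[:, 3] * snps[:, 17]) - 1.5
        y = rng.random(n) < 1 / (1 + np.exp(-logit))

        # Score every pair by |corr(interaction column, trait)|; keep the top 5.
        scores = []
        for i, j in combinations(range(p), 2):
            z = snps[:, i] * snps[:, j]
            if z.std() > 0:
                scores.append((abs(np.corrcoef(z, y)[0, 1]), (i, j)))

        for score, pair in sorted(scores, reverse=True)[:5]:
            print(f"pair {pair}: |r| = {score:.3f}")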

  15. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    Energy Technology Data Exchange (ETDEWEB)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Harrington, K.; Rostem, K.; Watts, D. J. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Chuss, D. T. [Department of Physics, Villanova University, 800 E Lancaster, Villanova, PA 19085 (United States); Wollack, E. J.; Fixsen, D. J.; Moseley, S. H.; Switzer, E. R., E-mail: Nathan.J.Miller@nasa.gov [Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  16. Use of a variable tracer infusion method to determine glucose turnover in humans

    International Nuclear Information System (INIS)

    Molina, J.M.; Baron, A.D.; Edelman, S.V.; Brechtel, G.; Wallace, P.; Olefsky, J.M.

    1990-01-01

    The single-compartment pool fraction model, when used with the hyperinsulinemic glucose clamp technique to measure rates of glucose turnover, sometimes underestimates true rates of glucose appearance (Ra), resulting in negative values for hepatic glucose output (HGO). We focused our attention on isotope discrimination and model error as possible explanations for this underestimation. We found no difference in [3-3H]glucose specific activity between samples obtained simultaneously from the femoral artery and vein (2,400 +/- 455 vs. 2,454 +/- 522 dpm/mg) in 6 men during a hyperinsulinemic euglycemic clamp study in which insulin was infused at 40 mU·m⁻²·min⁻¹ for 3 h; therefore, isotope discrimination did not occur. We compared the ability of a constant (0.6 microCi/min) vs. a variable tracer infusion method (tracer added to the glucose infusate) to measure non-steady-state Ra during hyperinsulinemic clamp studies. Plasma specific activity fell during the constant tracer infusion studies but did not change from baseline during the variable tracer infusion studies. By maintaining a constant plasma specific activity, the variable tracer infusion method eliminates uncertainty about changes in glucose pool size. This overcomes modeling error and more accurately measures non-steady-state Ra (P less than 0.001 by analysis of variance vs. the constant infusion method). In conclusion, the underestimation of Ra determined isotopically during hyperinsulinemic clamp studies is largely due to modeling error that can be overcome by use of the variable tracer infusion method. This method allows more accurate determination of Ra and HGO under non-steady-state conditions.

  17. A method to forecast quantitative variables relating to nuclear public acceptance

    International Nuclear Information System (INIS)

    Ohnishi, T.

    1992-01-01

    A methodology is proposed for forecasting the future trend of quantitative variables profoundly related to the public acceptance (PA) of nuclear energy. The social environment influencing PA is first modeled by breaking it down into a finite number of fundamental elements; the interactive formulae between the quantitative variables which are attributed to and characterize each element are then determined using the actual past values of the variables. Inputting the estimated values of the exogenous variables into these formulae, the forecast values of the endogenous variables can finally be obtained. Using this method, the problem of nuclear PA in Japan is treated as an example, where the context is considered to comprise a public sector and the general social environment and socio-psychology. The public sector is broken down into three elements: the general public, the inhabitants living around nuclear facilities and the activists of anti-nuclear movements, whereas the social environment and socio-psychological factors are broken down into several elements, such as news media and psychological factors. Twenty-seven endogenous and seven exogenous variables are introduced to quantify these elements. After quantitatively formulating the interactive features between them and extrapolating the exogenous variables into the future, estimates are made of the growth or attenuation of the endogenous variables, such as the pro- and anti-nuclear fractions in public opinion polls and the frequency of occurrence of anti-nuclear movements. (author)

  18. A new hydraulic regulation method on district heating system with distributed variable-speed pumps

    International Nuclear Information System (INIS)

    Wang, Hai; Wang, Haiying; Zhu, Tong

    2017-01-01

    Highlights: • A hydraulic regulation method was presented for district heating with distributed variable-speed pumps. • Information and automation technologies were utilized to support the proposed method. • A new hydraulic model was developed for distributed variable-speed pumps. • A new optimization model was developed based on a genetic algorithm. • Two scenarios of a multi-source looped system were illustrated to validate the method. - Abstract: Compared with the hydraulic configuration based on a conventional central circulating pump, a district heating system with a distributed variable-speed-pump configuration can often save 30–50% of the power consumed by circulating pumps with frequency inverters. However, hydraulic regulation of a distributed variable-speed-pump configuration can be far more complicated, since all distributed pumps need to be adjusted to their designated flow rates. Especially in a multi-source looped heating network, where the distributed pumps have strongly coupled and severely non-linear hydraulic connections with each other, it is rather difficult to maintain hydraulic balance during regulation. In this paper, with the help of advanced automation and information technologies, a new hydraulic regulation method is proposed to achieve on-site hydraulic balance for district heating systems with distributed variable-speed-pump configurations. The proposed method comprises a new hydraulic model, developed to suit the distributed variable-speed-pump configuration, and a calibration model with a genetic algorithm. By carrying out the proposed method step by step, the flow rates of all distributed pumps can be progressively adjusted to their designated values. A hypothetical district heating system with 2 heat sources and 10 substations is taken as a case study to illustrate the feasibility of the proposed method. Two scenarios were investigated respectively. In Scenario I, the

  19. Field estimation of soil water content. A practical guide to methods, instrumentation and sensor technology

    International Nuclear Information System (INIS)

    2008-01-01

    During a period of five years, an international group of soil water instrumentation experts were contracted by the International Atomic Energy Agency to carry out a range of comparative assessments of soil water sensing methods under laboratory and field conditions. The detailed results of those studies are published elsewhere. Most of the devices examined worked well some of the time, but most also performed poorly in some circumstances. The group was also aware that the choice of a water measurement technology is often made for economic, convenience and other reasons, and that there was a need to be able to obtain the best results from any device used. The choice of a technology is sometimes not made by the ultimate user, or even if it is, the main constraint may be financial rather than technical. Thus, this guide is presented in a way that allows the user to obtain the best performance from any instrument, while also providing guidance as to which instruments perform best under given circumstances. That said, this expert group of the IAEA reached several important conclusions: (1) the field calibrated neutron moisture meter (NMM) remains the most accurate and precise method for soil profile water content determination in the field, and is the only indirect method capable of providing accurate soil water balance data for studies of crop water use, water use efficiency, irrigation efficiency and irrigation water use efficiency, with a minimum number of access tubes; (2) those electromagnetic sensors known as capacitance sensors exhibit much more variability in the field than either the NMM or direct soil water measurements, and they are not recommended for soil water balance studies for this reason (impractically large numbers of access tubes and sensors are required) and because they are rendered inaccurate by changes in soil bulk electrical conductivity (including temperature effects) that often occur in irrigated soils, particularly those containing

  20. Trace elements in cigarette tobacco by a method of instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Noordin Ibrahim

    1986-01-01

    A total of ten cigarette brands were investigated to determine the trace-element concentrations in tobacco, so as to assess their role in the induction of smoking-related diseases. Instrumental neutron activation analysis was employed because of its high sensitivity, speed and ability to analyse a sample for a wide spectrum of elements simultaneously. A total of 18 elements were detected, the majority of which are toxic elements. Full results and conclusions will be reported in a forthcoming paper. (A.J.)

  1. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

    1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of nonlinear differentiable functions f: R^n → R with n-dimensional domains. OCOPTR requires the user to provide only function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally conditioned updating method. Linear equality constraints are permitted, subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routines are applied are assumed to be differentiable. The routines also require n²/2 + O(n) storage locations, where n is the problem dimension.
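
    The distinction between the two routines (derivative-free versus user-supplied gradients) maps directly onto modern quasi-Newton usage, sketched here with SciPy's BFGS rather than Davidon's 1975 optimally conditioned update, on an arbitrary test function.

        # Sketch: quasi-Newton minimization with and without a supplied gradient.
        import numpy as np
        from scipy.optimize import minimize

        def rosen(x):                      # classic test function (illustrative)
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def rosen_grad(x):                 # analytic gradient, as DRVOCR would need
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2)
                             - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        x0 = np.array([-1.2, 1.0])
        # OCOPTR-style: derivatives estimated internally by finite differences.
        res_fd = minimize(rosen, x0, method="BFGS")
        # DRVOCR-style: the user supplies the gradient.
        res_an = minimize(rosen, x0, method="BFGS", jac=rosen_grad)

        print("finite differences:", res_fd.x, res_fd.nfev, "function calls")
        print("analytic gradient :", res_an.x, res_an.nfev, "function calls")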

  2. Remote and Virtual Instrumentation Platform for Distance Learning

    Directory of Open Access Journals (Sweden)

    Tom Eppes

    2010-08-01

    Full Text Available This article presents distance learning using the National Instruments ELVIS II and shows how Multisim can be combined with ELVIS II for distance learning. National Instruments' ELVIS II is a new version that can easily be used for e-learning. It features 12 of the most commonly used instruments in engineering and science laboratories, including an oscilloscope, a function generator, a variable power supply, and an isolated digital multimeter, in a low-cost and easy-to-use platform, and it integrates fully with Multisim software for SPICE simulation, which simplifies the teaching of circuit design. As NI ELVIS II is based on LabVIEW, designers can easily customize the 12 instruments or create their own using the provided instrument source code.

  3. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    International Nuclear Information System (INIS)

    Woracek, R.; Hofmann, T.; Bulat, M.; Sales, M.; Habicht, K.; Andersen, K.; Strobl, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  4. The test beamline of the European Spallation Source – Instrumentation development and wavelength frame multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Woracek, R., E-mail: robin.woracek@esss.se [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Hofmann, T.; Bulat, M. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Sales, M. [Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark); Habicht, K. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Andersen, K. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Strobl, M. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark)

    2016-12-11

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.
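
    For time-of-flight instruments like this, flight time and wavelength are linked by λ = (h/m_n)·t/L; the sketch below converts flight time to wavelength and a frame-dependent resolution, using the TBL's published 1.6-10 Å band but otherwise invented numbers (the flight path and pulse width are assumptions, not TBL parameters).

        # Sketch: neutron time-of-flight to wavelength conversion (illustrative).
        H_OVER_MN = 3.956e-7   # h / m_neutron in m^2/s: lambda[m] = H_OVER_MN * t / L

        def wavelength_angstrom(tof_s: float, flight_path_m: float) -> float:
            return H_OVER_MN * tof_s / flight_path_m * 1e10

        L = 30.0               # assumed source-to-detector distance in metres
        for tof_ms in (12.0, 40.0, 76.0):
            lam = wavelength_angstrom(tof_ms * 1e-3, L)
            print(f"t = {tof_ms:5.1f} ms -> lambda = {lam:5.2f} A")

        # Wavelength resolution from an assumed effective burst-time width dt;
        # since lambda is proportional to t, dlambda/lambda = dt/t.
        dt = 0.2e-3            # 0.2 ms effective pulse width (assumed)
        print(f"dlambda/lambda = {dt / 40e-3:.1%} at "
              f"{wavelength_angstrom(40e-3, L):.2f} A")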

  5. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
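
    A standard way to combine independent instrument error sources into a single uncertainty estimate is the root-sum-square rule, u = sqrt(sum of u_i²); the sketch below applies it to invented error components, not figures from the report.

        # Sketch: root-sum-square combination of independent uncertainty components.
        import math

        # Hypothetical 1-sigma error components for a measurement, in percent.
        components = {
            "sensor calibration": 0.6,
            "A/D quantization": 0.2,
            "model/correlation": 1.0,
            "installation effects": 0.5,
        }

        u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
        for name, u in components.items():
            print(f"{name:>22s}: {u:.2f} %")
        print(f"{'combined (RSS)':>22s}: {u_combined:.2f} %")
        print(f"{'expanded (k = 2)':>22s}: {2 * u_combined:.2f} %")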

  6. Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing

    Directory of Open Access Journals (Sweden)

    Soo Ji Kim

    2017-09-01

    Full Text Available Deficits in the bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person's repetitive movements to auditory rhythmic cueing during instrument playing has been documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants' cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided, using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and across the tempi of cueing for each type of tapping task. Correlations of these timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization-error-related parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing time for bimanual coordination due to aging. Significant correlations were observed between variability of

  7. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available Interval neutrosophic uncertain linguistic variables can easily express indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method with more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision-making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties of interval neutrosophic uncertain linguistic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing-deviation method, and the closeness coefficient to the ideal solution is computed for each alternative. Finally, an illustrative example is given to demonstrate the decision-making steps and the effectiveness of the proposed method.
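
    Classical TOPSIS with crisp numbers, which the paper extends to interval neutrosophic uncertain linguistic values, can be sketched in a few lines; the decision matrix and weights below are invented.

        # Sketch: classical (crisp) TOPSIS ranking; the paper's extension replaces
        # these numbers with interval neutrosophic uncertain linguistic variables.
        import numpy as np

        X = np.array([[7.0, 9.0, 8.0],     # alternative 1 (hypothetical scores)
                      [8.0, 7.0, 9.0],     # alternative 2
                      [9.0, 6.0, 7.0]])    # alternative 3
        w = np.array([0.4, 0.35, 0.25])    # assumed weights, all benefit criteria

        # 1. Vector-normalize and weight the decision matrix.
        V = w * X / np.linalg.norm(X, axis=0)
        # 2. Ideal and anti-ideal solutions (all criteria treated as benefits).
        v_pos, v_neg = V.max(axis=0), V.min(axis=0)
        # 3. Distances to both, then the closeness coefficient.
        d_pos = np.linalg.norm(V - v_pos, axis=1)
        d_neg = np.linalg.norm(V - v_neg, axis=1)
        closeness = d_neg / (d_pos + d_neg)

        for i in np.argsort(closeness)[::-1]:
            print(f"alternative {i + 1}: closeness = {closeness[i]:.3f}")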

  8. Determinants of The Application of Macro Prudential Instruments

    Directory of Open Access Journals (Sweden)

    Zakaria Firano

    2017-09-01

    Full Text Available The use of macro prudential instruments today gives rise to a major debate within the walls of central banks and other authorities in charge of financial stability. Contrary to micro prudential instruments, whose effects remain limited, macro prudential instruments are different in nature and can affect the stability of the financial system. By influencing the financial cycle and the financial structure of financial institutions, the use of such instruments should be conducted with great vigilance as well as macroeconomic and financial expertise. But the experiences of central banks in this area are sketchy, and only some emerging countries have experience using these types of instruments in different ways. This paper presents an analysis of the instruments of macro prudential policy and attempts to empirically demonstrate that these instruments should be used only in specific economic and financial situations. Indeed, the results obtained using bivariate panel modeling confirm that these instruments are more effective when used to mitigate the euphoria of financial and economic cycles. In this sense, the output gap, describing the economic cycle, and the Z-score are the intermediate variables for the activation of capital instruments. Moreover, the liquidity ratio and changes in bank profitability are the two early warning indicators for the activation of liquidity instruments.

  9. Optical Methods and Instrumentation in Brain Imaging and Therapy

    CERN Document Server

    2013-01-01

    This book provides a comprehensive up-to-date review of optical approaches used in brain imaging and therapy. It covers a variety of imaging techniques including diffuse optical imaging, laser speckle imaging, photoacoustic imaging and optical coherence tomography. A number of laser-based therapeutic approaches are reviewed, including photodynamic therapy, fluorescence guided resection and photothermal therapy. Fundamental principles and instrumentation are discussed for each imaging and therapeutic technique. The book represents the first publication dedicated solely to optical diagnostics and therapeutics in the brain; it provides a comprehensive review of the principles of each imaging/therapeutic modality, reviews the latest advances in instrumentation for optical diagnostics in the brain, and discusses new optical-based therapeutic approaches for brain diseases.

  10. Evaluation of Rock Powdering Methods to Obtain Fine-grained Samples for CHEMIN, a Combined XRD/XRF Instrument

    Science.gov (United States)

    Chipera, S. J.; Vaniman, D. T.; Bish, D. L.; Sarrazin, P.; Feldman, S.; Blake, D. F.; Bearman, G.; Bar-Cohen, Y.

    2004-01-01

    A miniature XRD/XRF (X-ray diffraction / X-ray fluorescence) instrument, CHEMIN, is currently being developed for definitive mineralogic analysis of soils and rocks on Mars. One of the technical issues that must be addressed to enable remote XRD analysis is how best to obtain a representative sample powder for analysis. For powder XRD analyses, it is beneficial to have a fine-grained sample to reduce preferred orientation effects and to provide a statistically significant number of crystallites to the X-ray beam. Although a two-dimensional detector as used in the CHEMIN instrument will produce good results even with poorly prepared powder, the quality of the data will improve and the time required for data collection will be reduced if the sample is fine-grained and randomly oriented. A variety of methods have been proposed for XRD sample preparation. Chipera et al. presented grain size distributions and XRD results from powders generated with an Ultrasonic/Sonic Driller/Corer (USDC) currently being developed at JPL. The USDC was shown to be an effective instrument for sampling rock to produce powder suitable for XRD. In this paper, we compare powders prepared using the USDC, a miniaturized rock crusher developed at JPL, and a rotary tungsten carbide bit against powders obtained from a laboratory bench-scale Retsch mill, which provides benchmark mineralogical data. These comparisons will allow assessment of the suitability of these methods for analysis by an XRD/XRF instrument such as CHEMIN.

  11. Preference-based disease-specific health-related quality of life instrument for glaucoma: a mixed methods study protocol

    Science.gov (United States)

    Muratov, Sergei; Podbielski, Dominik W; Jack, Susan M; Ahmed, Iqbal Ike K; Mitchell, Levine A H; Baltaziak, Monika; Xie, Feng

    2016-01-01

    Introduction A primary objective of healthcare services is to improve patients' health and health-related quality of life (HRQoL). Glaucoma, which affects a substantial proportion of the world population, has a significant detrimental impact on HRQoL. Although there are a number of glaucoma-specific questionnaires to measure HRQoL, none is preference-based, which prevents them from being used in health economic evaluations. The proposed study aims to develop a preference-based instrument that is capable of capturing the important effects of glaucoma and its treatments on HRQoL and that is scored on the basis of patients' preferences. Methods A sequential, exploratory mixed methods design will be used to guide the development and evaluation of the HRQoL instrument. The study consists of several stages to be implemented sequentially: item identification, item selection, validation and valuation. The instrument items will be identified and selected through a literature review and the conduct of a qualitative study. Validation will be conducted to establish the psychometric properties of the instrument, followed by a valuation exercise to derive utility scores for the health states described. Ethics and dissemination This study has been approved by the Trillium Health Partners Research Ethics Board (ID number 753). All personal information will be de-identified, with the identification code and the rest of the study data kept in a secure location. Only qualified and study-related personnel will be allowed to access the data. The results of the study will be distributed widely through peer-reviewed journals, conferences and internal meetings. PMID:28186941

  12. The control variable method: a fully implicit numerical method for solving conservation equations for unsteady multidimensional fluid flow

    International Nuclear Information System (INIS)

    Le Coq, G.; Boudsocq, G.; Raymond, P.

    1983-03-01

    The Control Variable Method is extended to multidimensional fluid flow transient computations. In this paper the basic principles of the method are given. The method uses a fully implicit space discretization and is based on the decomposition of the momentum flux tensor into scalar, vectorial, and tensorial terms. Finally, computations of viscosity-driven and buoyancy-driven flow in a cavity are presented.

  13. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS, a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
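
    As a toy illustration of ranking predictors by marginal posterior probability, the sketch below approximates posterior model weights by BIC over an exhaustive enumeration of small models and sums the weights of the models containing each predictor. This is a stand-in for the SSVS Gibbs sampler actually used in the paper, and all data are simulated.

        import itertools
        import numpy as np

        def marginal_inclusion_probs(X, y):
            # Enumerate all predictor subsets (feasible for small p), score each
            # model by BIC, convert scores to approximate posterior weights, and
            # sum the weights of the models that contain each predictor.
            n, p = X.shape
            bics, subsets = [], []
            for k in range(p + 1):
                for s in itertools.combinations(range(p), k):
                    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
                    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                    rss = ((y - Z @ beta) ** 2).sum()
                    bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
                    subsets.append(s)
            w = np.exp(-0.5 * (np.array(bics) - min(bics)))
            w /= w.sum()                      # approximate posterior model weights
            return np.array([w[[j in s for s in subsets]].sum() for j in range(p)])

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=200)  # markers 0 and 3 linked
        print(marginal_inclusion_probs(X, y).round(2))   # ranks markers 0 and 3 highest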

  14. Digital study of nuclear reactor instrument

    International Nuclear Information System (INIS)

    Lv Gongxiang; Yang Zhijun

    2006-01-01

    The paper introduces the design method of a digital nuclear reactor instrument developed by the authors based on the AT89C52 single-chip microcomputer. The instrument system's hardware structure and software framework are also given. The instrument applies the DDC112 chip, which is responsible for measuring low currents. In designing the instrument system, anti-interference measures in software, and especially in hardware, were considered carefully. (authors)

  15. Improved flux calculations for viscous incompressible flow by the variable penalty method

    International Nuclear Information System (INIS)

    Kheshgi, H.; Luskin, M.

    1985-01-01

    The Navier-Stokes system for viscous, incompressible flow is considered, in which the continuity equation is replaced by a perturbed continuity equation. The introduction of this approximation allows the pressure variable to be eliminated to obtain a system of equations for the approximate velocity. The penalty approximation is often applied to numerical discretizations since it reduces the size and bandwidth of the system of equations. Attention is given to error estimates and to two numerical experiments that illustrate the error estimates considered. It is found that the variable penalty method provides an accurate solution for a much wider range of epsilon than the classical penalty method. 8 references

  16. MODERN INSTRUMENTAL METHODS TO CONTROL THE SEED QUALITY IN ROOT VEGETABLES

    Directory of Open Access Journals (Sweden)

    F. B. Musaev

    2017-01-01

    Full Text Available The standard methods of analysis do not meet all modern requirements for determining seed quality; in particular, they cannot reveal inner defects that are very important for assessing seed viability. The capabilities of a new instrumental method to analyze the seed quality of root vegetables are considered in this article. The method of micro-focus radiography is distinguished from other existing methods by greater sensitivity, rapidity, and ease of use. By visualizing the inner seed structure, it allows the degree of development of the endosperm and embryo, the presence of inner damage and infection, and infestation and damage caused by pests to be determined well before seed germination. The use of micro-focus radiography makes it possible to detect differences in seed quality for traits such as monogermity and self-fertilization that are economically valuable for breeding programs in red beet. With the aid of the method, the level of seed development, damage, and inner defects in carrot and parsnip can be revealed. In X-ray projection, seeds of inbred radish lines differed significantly from variety population ones in the underdevelopment of their inner structure. An advantage of the method is that seeds remain undamaged after the quality analysis and can either be used for further examination with other methods or be sown, which is quite important for breeders handling small quantities of material or collection accessions. The results of radiography analyses can be saved and archived, which makes it possible to track seed quality over time; these data can also be used in possible arbitration cases.

  17. Using traditional methods and indigenous technologies for coping with climate variability

    NARCIS (Netherlands)

    Stigter, C.J.; Zheng Dawei,; Onyewotu, L.O.Z.; Mei Xurong,

    2005-01-01

    In agrometeorology and management of meteorology related natural resources, many traditional methods and indigenous technologies are still in use or being revived for managing low external inputs sustainable agriculture (LEISA) under conditions of climate variability. This paper starts with the

  18. Variable scaling method and Stark effect in hydrogen atom

    International Nuclear Information System (INIS)

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and the variable scaling method has then been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  19. Collective variables method in relativistic theory

    International Nuclear Information System (INIS)

    Shurgaya, A.V.

    1983-01-01

    Classical theory of an N-component field is considered. The method of collective variables, accurately accounting for the conservation laws that follow from invariance under the homogeneous Lorentz group, is developed within the framework of generalized Hamiltonian dynamics. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group contain no interaction-dependent components and their action on the field function is purely geometrical. The interaction is completely contained in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field function, and their canonically conjugate momenta. In this way the conservation laws that follow from the requirement of Lorentz invariance are accurately taken into account.

  20. Pacman dysplasia: a lethal skeletal dysplasia with variable radiographic features

    Energy Technology Data Exchange (ETDEWEB)

    Miller, S.F. [Dept. of Radiology, Children's Hospital of the King's Daughters, Norfolk (United States); Proud, V.K. [Dept. of Genetics, Children's Hospital of the King's Daughters, Norfolk (United States); Werner, A.L. [Dept. of Pathology, Children's Hospital of the King's Daughters, Norfolk (United States); Field, F.M.; Wilcox, W.F.; Lachman, R.S.; Rimoin, D.L. [International Skeletal Dysplasia Registry, Cedars-Sinai Medical Center, Los Angeles (United States)

    2003-04-01

    Background: Punctate or stippled cartilaginous calcifications are associated with many conditions, including chromosomal, infectious, endocrine, and teratogenic etiologies. Some of these conditions are clinically mild, while others are lethal. Accurate diagnosis can prove instrumental in clinical management and in genetic counseling. Objective: To describe the diagnostic radiographic features seen in Pacman dysplasia, a distinct autosomal recessive, lethal skeletal dysplasia. Materials and methods: We present the fourth reported case of Pacman dysplasia and compare the findings seen in our patient with those in the three previously described patients. Results: Invariable and variable radiographic findings were seen in all four cases of histologically proven Pacman dysplasia. Conclusion: Pacman dysplasia presents both constant and variable diagnostic radiographic features. (orig.)

  1. Investigation of load reduction for a variable speed, variable pitch, and variable coning wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Pierce, K. [Univ. of Utah, Salt Lake City, UT (United States)

    1997-12-31

    A two bladed, variable speed and variable pitch wind turbine was modeled using ADAMS® to evaluate load reduction abilities of a variable coning configuration as compared to a teetered rotor, and also to evaluate control methods. The basic dynamic behavior of the variable coning turbine was investigated and compared to the teetered rotor under constant wind conditions as well as turbulent wind conditions. Results indicate the variable coning rotor has larger flap oscillation amplitudes and much lower root flap bending moments than the teetered rotor. Three methods of control were evaluated for turbulent wind simulations. These were a standard IPD control method, a generalized predictive control method, and a bias estimate control method. Each control method was evaluated for both the variable coning configuration and the teetered configuration. The ability of the different control methods to maintain the rotor speed near the desired set point is evaluated from the RMS error of rotor speed. The activity of the control system is evaluated from cycles per second of the blade pitch angle. All three of the methods were found to produce similar results for the variable coning rotor and the teetered rotor, as well as similar results to each other.

  2. A design method of compensators for multi-variable control system with PID controllers 'CHARLY'

    International Nuclear Information System (INIS)

    Fujiwara, Toshitaka; Yamada, Katsumi

    1985-01-01

    A systematic design method for compensators in a multi-variable control system having conventional PID controllers in its loops is presented in this paper. The method is able to: determine the main manipulating variable corresponding to each controlled variable by a sensitivity analysis in the frequency domain; tune the PID controllers sufficiently to realize adequate control actions using a search technique for minimum values of cost functionals; design compensators that improve the control performance; and simulate the total system to confirm the designed compensators. In the compensator design phase, the state-variable feedback gain is obtained by means of optimal regulator theory for the composite system of plant and PID controllers. Transfer-function-type compensators, whose configurations were previously given, are then designed to approximate the frequency responses of the above-mentioned state feedback system. An example is illustrated for convenience. (author)

  3. A novel variable baseline visibility detection system and its measurement method

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan

    2017-10-01

    As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to contamination of the optical system as well as sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this, a novel measurement system was designed based upon multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver, applying a weighted least squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
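
    A minimal sketch of the signal-processing step, assuming the simplified Bouguer-Lambert model T = exp(-σL) at each baseline and hypothetical transmittance readings: the weighted least squares fit recovers the extinction coefficient, from which visibility follows via the Koschmieder relation.

        import numpy as np

        # Hypothetical transmittance readings at several receiver baselines.
        L = np.array([10., 20., 30., 40., 50.])              # baselines (m)
        T = np.array([0.92, 0.85, 0.78, 0.72, 0.66])         # measured transmittance
        w = 1.0 / np.array([0.02, 0.02, 0.03, 0.04, 0.05])**2  # 1/variance weights

        # Model: ln T = -sigma * L. Weighted least squares for extinction sigma.
        A = L.reshape(-1, 1)
        b = -np.log(T)
        W = np.diag(w)
        sigma = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)[0]

        visibility = 3.912 / sigma   # Koschmieder relation, 5% contrast threshold
        print(f"extinction = {sigma:.4f} 1/m, visibility ~ {visibility:.0f} m")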

  4. Feedback control of acoustic musical instruments: collocated control using physical analogs.

    Science.gov (United States)

    Berdahl, Edgar; Smith, Julius O; Niemeyer, Günter

    2012-01-01

    Traditionally, the average professional musician has owned numerous acoustic musical instruments, many of them having distinctive acoustic qualities. However, a modern musician could prefer to have a single musical instrument whose acoustics are programmable by feedback control, where acoustic variables are estimated from sensor measurements in real time and then fed back in order to influence the controlled variables. In this paper, theory is presented that describes stable feedback control of an acoustic musical instrument. The presentation should be accessible to members of the musical acoustics community who may have limited or no experience with feedback control. First, the only control strategy guaranteed to be stable subject to any musical instrument mobility is described: the sensors and actuators must be collocated, and the controller must emulate a physical analog system. Next, the most fundamental feedback controllers and the corresponding physical analog systems are presented. The effects that these controllers have on acoustic musical instruments are described. Finally, practical design challenges are discussed. A proof explains why changing the resonance frequency of a musical resonance requires much more control power than changing the decay time of the resonance. © 2012 Acoustical Society of America.
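
    The trade-off mentioned in the proof can be made concrete with a rough simulation: one instrument resonance modeled as a mass-spring-damper under collocated feedback, where velocity feedback emulates a virtual damper (changing the decay time) and position feedback emulates a virtual spring (changing the resonance frequency). All modal parameters are hypothetical, and the accumulated control power is only indicative.

        import numpy as np

        # One resonance of an instrument as a mass-spring-damper (SI units).
        m, c, k = 0.01, 0.05, 4000.0          # hypothetical modal parameters
        dt, T = 1e-5, 1.0
        t = np.arange(0, T, dt)

        def simulate(kv=0.0, kp=0.0):
            # Collocated control emulating a physical analog: a virtual damper
            # (gain kv) and a virtual spring (gain kp) at the sensor/actuator.
            x, v, effort = 1e-3, 0.0, 0.0
            for _ in t:
                f = -kv * v - kp * x          # feedback force
                a = (f - c * v - k * x) / m
                v += a * dt
                x += v * dt
                effort += abs(f * v) * dt     # accumulated control power
            return effort

        print("halve decay time   :", simulate(kv=c))   # doubles total damping
        print("raise frequency 41%:", simulate(kp=k))   # doubles total stiffness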

  5. Health insurance and the demand for medical care: Instrumental variable estimates using health insurer claims data.

    Science.gov (United States)

    Dunn, Abe

    2016-07-01

    This paper takes a different approach to estimating demand for medical care that uses the negotiated prices between insurers and providers as an instrument. The instrument is viewed as a textbook "cost shifting" instrument that impacts plan offerings, but is unobserved by consumers. The paper finds a price elasticity of demand of around -0.20, matching the elasticity found in the RAND Health Insurance Experiment. The paper also studies within-market variation in demand for prescription drugs and other medical care services and obtains comparable price elasticity estimates. Published by Elsevier B.V.

  6. A novel single-step, multipoint calibration method for instrumented Lab-on-Chip systems

    DEFF Research Database (Denmark)

    Pfreundt, Andrea; Patou, François; Zulfiqar, Azeem

    2014-01-01

    … for instrument-based PoC blood biomarker analysis systems. Motivated by the complexity of associating high-accuracy biosensing using silicon nanowire field effect transistors with ease of use for the PoC system user, we propose a novel one-step, multipoint calibration method for LoC-based systems. Our approach … specifically addresses the important interfaces between a novel microfluidic unit to integrate the sensor array and a mobile-device hardware accessory. A multi-point calibration curve is obtained by generating a defined set of reference concentrations from a single input. By consecutively splitting the flow

  7. Instruments to assess integrated care

    DEFF Research Database (Denmark)

    Lyngsø, Anne Marie; Godtfredsen, Nina Skavlan; Høst, Dorte

    2014-01-01

    INTRODUCTION: Although several measurement instruments have been developed to measure the level of integrated health care delivery, no standardised, validated instrument exists covering all aspects of integrated care. The purpose of this review is to identify the instruments concerning how to measure the level of integration across health-care sectors and to assess and evaluate the organisational elements within the instruments identified. METHODS: An extensive, systematic literature review in PubMed, CINAHL, PsycINFO, Cochrane Library, and Web of Science for the years 1980-2011. Selected … was prevalent. It is uncertain whether development of a single 'all-inclusive' model for assessing integrated care is desirable. We emphasise the continuing need for validated instruments embedded in theoretical contexts.

  8. Instrumental variable estimation in a survival context

    DEFF Research Database (Denmark)

    Tchetgen Tchetgen, Eric J; Walter, Stefan; Vansteelandt, Stijn

    2015-01-01

    The instrumental variable (IV) approach is very well developed in the context of linear regression and also for certain generalized linear models with a nonlinear link function. However, IV methods are not as well developed for regression analysis with a censored survival outcome. In this article, we develop the IV approach for regression analysis in a survival context, primarily under an additive hazards model, for which we describe 2 simple methods for estimating causal effects. The first method is a straightforward 2-stage regression approach analogous to 2-stage least squares commonly used for IV analysis in linear regression. In this approach, the fitted value from a first-stage regression of the exposure on the IV is entered in place of the exposure in the second-stage hazard model to recover a valid estimate of the treatment effect of interest. The second method is a so-called control function approach, which entails adding
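
    A minimal sketch of the 2-stage idea on simulated data, using ordinary least squares throughout; in the survival setting of the paper, the second stage would instead fit an additive hazards model to the censored event times, with the first-stage fitted value as a covariate.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000
        z = rng.binomial(1, 0.5, n)                  # instrument
        u = rng.normal(size=n)                       # unmeasured confounder
        x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # exposure, confounded by u
        y = 1.0 * x + 0.5 * u + rng.normal(size=n)   # outcome; true effect = 1.0

        # Stage 1: regress the exposure on the instrument, keep fitted values.
        Z1 = np.column_stack([np.ones(n), z])
        xhat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]

        # Stage 2: enter the fitted value in place of the exposure. With a
        # censored survival outcome this OLS stage would be replaced by an
        # additive hazards fit with xhat as a covariate.
        Z2 = np.column_stack([np.ones(n), xhat])
        naive = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
        twostage = np.linalg.lstsq(Z2, y, rcond=None)[0][1]
        print("naive OLS:", naive, "| two-stage:", twostage)  # two-stage is near 1.0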

  9. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A.

    2008-12-17

    This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  10. A comparison of nickel-titanium rotary instruments manufactured using different methods and cross-sectional areas: ability to resist cyclic fatigue.

    Science.gov (United States)

    Oh, So-Ram; Chang, Seok-Woo; Lee, Yoon; Gu, Yu; Son, Won-Jun; Lee, Woocheol; Baek, Seung-Ho; Bae, Kwang-Shik; Choi, Gi-Woon; Lim, Sang-Min; Kum, Kee-Yeon

    2010-04-01

    This study examined the effect of the manufacturing methods (ground, electropolished, and twisted) and the cross-sectional area (CSA) of nickel-titanium (NiTi) rotary instruments on their cyclic fatigue resistance. A total of 80 NiTi rotary instruments (ISO 25/.06 taper) from 4 brands (K3, ProFile, RaCe, and TF) were rotated in a simulated root canal with pecking motion until fracture. The number of cycles to failure (NCF) was calculated. The CSA at 3 mm from the tip of new instruments of each brand was calculated. The correlation between the CSA and NCF was evaluated. All fractured surfaces were analyzed using a scanning electron microscope to determine the fracture mode. The TF instruments were the most resistant to fatigue failure. The resistance to cyclic failure increased with decreasing CSA. All fractured surfaces showed the coexistence of ductile and brittle properties. The CSA had a significant effect on the fatigue resistance of NiTi rotary instruments. Copyright 2010 Mosby, Inc. All rights reserved.

  11. High-Level Disinfection of Otorhinolaryngology Clinical Instruments: An Evaluation of the Efficacy and Cost-effectiveness of Instrument Storage.

    Science.gov (United States)

    Yalamanchi, Pratyusha; Yu, Jason; Chandler, Laura; Mirza, Natasha

    2018-01-01

    Objectives Despite increasing interest in individual instrument storage, the risk of bacterial cross-contamination of otorhinolaryngology clinic instruments has not been assessed. This study is the first to determine the clinical efficacy and cost-effectiveness of standard high-level disinfection and clinic instrument storage. Methods To assess for cross-contamination, surveillance cultures of otorhinolaryngology clinic instruments subject to standard high-level disinfection and storage were obtained at the start and end of the outpatient clinical workday. The rate of microorganism recovery was compared with cultures of instruments stored in individual peel packs and control cultures of contaminated instruments. Based on historical clinic data, the direct allocation method of cost accounting was used to determine the aggregate raw material cost and additional labor hours required to process and restock peel-packed instruments. Results Among 150 cultures of standard high-level disinfected and co-located clinic instruments, 3 positive bacterial cultures occurred; 100% of control cultures were positive for bacterial species (P < …). Cost accounting placed the aggregate cost of individual semicritical instrument storage at $97,852.50 per year. Discussion With in vitro inoculation of >200 otorhinolaryngology clinic instruments, this study demonstrates that standard high-level disinfection and storage are as efficacious as more time-consuming and expensive individual instrument storage protocols, such as peel packing, with regard to bacterial contamination. Implications for Practice Standard high-level disinfection and storage are as effective as labor-intensive and costly individual instrument storage protocols.

  12. Method to deterministically study photonic nanostructures in different experimental instruments.

    Science.gov (United States)

    Husken, B H; Woldering, L A; Blum, C; Vos, W L

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.

  13. Robotic-surgical instrument wrist pose estimation.

    Science.gov (United States)

    Fabel, Stephan; Baek, Kyungim; Berkelman, Peter

    2010-01-01

    The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
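
    A sketch of the estimation step using OpenCV's P3P solver followed by Levenberg-Marquardt refinement (solvePnPRefineLM requires OpenCV >= 4.1); the marker geometry, camera matrix, and the pose used to generate the image points are all hypothetical.

        import numpy as np
        import cv2

        # Hypothetical 3D marker points on the instrument wrist (object frame, m).
        obj = np.array([[0, 0, 0], [0.01, 0, 0],
                        [0, 0.01, 0], [0, 0, 0.01]], np.float32)
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)
        dist = np.zeros(5, np.float32)        # assume an undistorted endoscope image

        # Generate consistent image points from a known (hypothetical) pose.
        rv_true = np.array([[0.1], [0.2], [0.3]], np.float32)
        tv_true = np.array([[0.0], [0.0], [0.2]], np.float32)
        img, _ = cv2.projectPoints(obj, rv_true, tv_true, K, dist)
        img = img.reshape(-1, 2).astype(np.float32)

        # P3P needs exactly 4 correspondences; refine with Levenberg-Marquardt.
        ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist, flags=cv2.SOLVEPNP_P3P)
        rvec, tvec = cv2.solvePnPRefineLM(obj, img, K, dist, rvec, tvec)
        R, _ = cv2.Rodrigues(rvec)   # end-effector rotation in the camera frame
        print(R, tvec.ravel())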

  14. Comparative Study of Three Rotary Instruments for root canal Preparation using Computed Tomography

    International Nuclear Information System (INIS)

    Mohamed, A.M.E.

    2015-01-01

    Cleaning and shaping the root canal is a key to success in root canal treatment. This includes the removal of organic substrate from the root canal system by chemomechanical methods, and the shaping of the root canal system into a continuously tapered preparation. This should be done while maintaining the original path of the root canal. Although instruments for root canal preparation have been progressively developed and optimized, complete mechanical debridement of the root canal system is rarely achievable. One of the main reasons is the geometrical dissymmetry between the root canal and the preparation instruments. Rotary instruments, regardless of their type and form, produce a preparation with a round outline if they are used in a simple linear filing motion, which in most cases does not coincide with the outline of the root canal. Root canal preparation in narrow, curved canals is a challenge even for experienced endodontists. Shaping of curved canals became more effective after the introduction of nickel-titanium (Ni-Ti) endodontic instruments. Despite the advantages of Ni-Ti rotary instruments, intracanal fracture is the most common procedural accident that occurs with these instruments during clinical use. It is a common experience among clinicians that Ni-Ti rotary instruments may undergo unexpected fracture without any visible warning, such as a previous permanent defect or deformation. ProTaper Ni-Ti instruments were introduced with a unique design of variable taper within one instrument and continuously changing helical angles. ProTaper rotary instruments are claimed to generate lower torque values during their use because of their modified non-radial-landed cross-section, which increases the cutting efficiency and reduces contact areas. On the other hand, the variable taper within one instrument is believed to reduce the 'taper lock' effect (torsional failure) in comparison with similarly tapered instruments. Nevertheless, ProTaper

  15. Risk assessment of groundwater level variability using variable Kriging methods

    Science.gov (United States)

    Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2015-04-01

    Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third methodology calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems in groundwater resources availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal, and in combination with the Kriging methodologies it provides the most accurate cross-validation estimates. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit regarding a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
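
    A minimal ordinary kriging sketch with an exponential variogram (the Matérn model with smoothness 0.5); the well locations, heads, and variogram parameters are hypothetical, and the Box-Cox, Residual, and Indicator Kriging refinements of the paper are omitted.

        import numpy as np

        def ordinary_kriging(xy, z, xy0, sill=1.0, rng_=500.0, nug=0.0):
            # Exponential variogram model (Matérn with smoothness nu = 0.5).
            gamma = lambda h: nug + sill * (1 - np.exp(-h / rng_))
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = gamma(d)
            A[n, n] = 0.0                        # Lagrange-multiplier row/column
            b = np.ones(n + 1)
            b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
            w = np.linalg.solve(A, b)            # kriging weights + multiplier
            est = w[:n] @ z                      # water table estimate at xy0
            var = w @ b                          # kriging (prediction) variance
            return est, var

        xy = np.array([[0., 0.], [300., 50.], [120., 400.], [500., 500.]])  # wells (m)
        z = np.array([12.3, 11.7, 13.1, 10.9])                              # head (m)
        print(ordinary_kriging(xy, z, np.array([250., 250.])))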

  16. The effects of competition on premiums: using United Healthcare's 2015 entry into Affordable Care Act's marketplaces as an instrumental variable.

    Science.gov (United States)

    Agirdas, Cagdas; Krebs, Robert J; Yano, Masato

    2018-01-08

    One goal of the Affordable Care Act is to increase insurance coverage by improving competition and lowering premiums. To facilitate this goal, the federal government enacted online marketplaces in the 395 rating areas spanning 34 states that chose not to establish their own state-run marketplaces. Multivariate regression studies analyzing the effects of competition on premiums typically suffer from endogeneity, due to simultaneity and omitted-variable biases. However, United Healthcare's decision to enter these marketplaces in 2015 provides the researcher with an opportunity to address this endogeneity problem. Exploiting the variation caused by United Healthcare's entry decision as an instrument for competition, we study the impact of competition on premiums during the first 2 years of these marketplaces. Combining panel data from five different sources and controlling for 12 variables, we find that one more insurer in a rating area leads to a 6.97% reduction in the second-lowest-priced silver plan premium, which is larger than the estimated effects in the existing literature. Furthermore, we run a threshold analysis and find that competition's effects on premiums become statistically insignificant if there are four or more insurers in a rating area. These findings are robust to alternative measures of premiums, inclusion of a non-linear term in the regression models, and a county-level analysis.

  17. Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    Science.gov (United States)

    Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose

    2018-06-01

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
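
    As a sketch of category (4), hybrid retrieval, the snippet below trains a machine learning regressor on simulated spectra-variable pairs (standing in for RTM look-up-table simulations) and returns both estimates and retrieval uncertainties; the data, band count, and target variable are hypothetical.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical training set: simulated reflectance spectra paired with
        # the biophysical variable of interest, here a stand-in leaf area index.
        rng = np.random.default_rng(0)
        spectra = rng.uniform(0, 0.6, size=(300, 50))       # 50 spectral bands
        lai = 3 * spectra[:, 30] - 2 * spectra[:, 10] + rng.normal(0, 0.05, 300)

        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                       normalize_y=True).fit(spectra, lai)
        est, sd = gpr.predict(spectra[:5], return_std=True)  # estimate + uncertainty
        print(est.round(2), sd.round(3))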

  18. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
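
    A rough sketch of the jack-knife idea: refit PLSR on leave-one-out subsets and rank variables by the stability of their regression coefficients. The selection rule shown (a mean/standard-deviation ratio) is a simplification of the significance test used in Jack-knife PLSR, and the data are simulated.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 20))
        y = X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=60)  # two relevant variables

        n = len(y)
        coefs = np.empty((n, X.shape[1]))
        for i in range(n):
            keep = np.arange(n) != i            # leave one sample out and refit
            pls = PLSRegression(n_components=3).fit(X[keep], y[keep])
            coefs[i] = pls.coef_.ravel()

        t = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)  # coefficient stability
        print(np.argsort(t)[::-1][:5])   # variables 2 and 7 should rank at the top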

  19. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  20. Study of input variables in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2013-01-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on the underlying assumption that the data can be modeled using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and ANN methodologies and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing the GMDH- and ANN-calculated values with measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically. On the other hand, the results of the ANN methodology are strongly dependent on which variables are used as neural network inputs. (author)
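
    A minimal sketch of one GMDH layer: every pair of inputs feeds a quadratic Kolmogorov-Gabor polynomial, candidate nodes are scored on held-out data (the external criterion), and the best nodes become inputs to the next layer. The layer width and stopping rule here are arbitrary choices, not the paper's.

        import itertools
        import numpy as np

        def gmdh_layer(X, y, Xv, yv, width=4):
            # Fit a quadratic polynomial in every input pair; keep the best nodes.
            cands = []
            for i, j in itertools.combinations(range(X.shape[1]), 2):
                def design(M, i=i, j=j):
                    a, b = M[:, i], M[:, j]
                    return np.column_stack([np.ones(len(M)), a, b, a*b, a*a, b*b])
                beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
                err = ((yv - design(Xv) @ beta) ** 2).mean()  # external criterion
                cands.append((err, design, beta))
            cands.sort(key=lambda c: c[0])
            best = cands[:width]
            out = lambda M: np.column_stack([d(M) @ b for _, d, b in best])
            return out, best[0][0]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])
        layer, err = gmdh_layer(X[:150], y[:150], X[150:], y[150:])
        Z = layer(X)   # node outputs become candidate inputs for the next layer
        print("best node validation MSE:", err, "| next-layer inputs:", Z.shape)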

  1. CLOUDCLOUD : general-purpose instrument monitoring and data managing software

    Science.gov (United States)

    Dias, António; Amorim, António; Tomé, António

    2016-04-01

    monitored with parsing intervals as fast as milliseconds. This software (server+agents+interface+database) comes in easy and ready-to-use packages that can be installed in any operating system, including Android and iOS systems. This software is ideal for use in modular experiments or monitoring stations with large variability in instruments and measuring methods or in large collaborations, where data requires homogenization in order to be effectively transmitted to all involved parties. This work presents the software and provides performance comparison with previously used monitoring systems in the CLOUD experiment at CERN.

  2. Higuchi’s Method applied to detection of changes in timbre of digital sound synthesis of string instruments with the functional transformation method

    Science.gov (United States)

    Kanjanapen, Manorth; Kunsombat, Cherdsak; Chiangga, Surasak

    2017-09-01

    The functional transformation method (FTM) is a powerful tool for detailed investigation of digital sound synthesis by physical modeling, in which the resulting sound, or the vibrational characteristics measured at discretized points on real instruments, follows directly from solving the underlying partial differential equation (PDE). In this paper, we present Higuchi's method to examine differences between the timbres of tones and to estimate the fractal dimension of musical signals synthesized by the FTM, which carries information about their geometrical structure. With Higuchi's method the whole process is uncomplicated and fast, and the analysis requires no expertise in physics or virtuoso musicianship, offering an easy way for non-specialists to judge whether sounds are similar.
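
    Higuchi's fractal dimension itself is straightforward to compute; the sketch below follows the standard formulation (average curve length L(k) at delays k, with the dimension given by the slope of log L(k) against log(1/k)), applied to a hypothetical synthesized tone rather than actual FTM output.

        import numpy as np

        def higuchi_fd(x, kmax=10):
            # Higuchi's method: average normalized curve length at delays 1..kmax.
            x = np.asarray(x, dtype=float)
            N = len(x)
            L = []
            for k in range(1, kmax + 1):
                Lk = []
                for m in range(k):
                    idx = np.arange(m, N, k)
                    n_int = (N - m - 1) // k              # number of increments
                    if n_int < 1:
                        continue
                    length = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_int * k * k)
                    Lk.append(length)
                L.append(np.mean(Lk))
            k = np.arange(1, kmax + 1)
            slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
            return slope                                  # fractal dimension

        t = np.linspace(0, 1, 4000)
        tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)
        print(higuchi_fd(tone))   # compare values across synthesized timbres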

  3. Evaluation of multivariate calibration models transferred between spectroscopic instruments

    DEFF Research Database (Denmark)

    Eskildsen, Carl Emil Aae; Hansen, Per W.; Skov, Thomas

    2016-01-01

    In a setting where multiple spectroscopic instruments are used for the same measurements, it may be convenient to develop the calibration model on a single instrument and then transfer this model to the other instruments. In the ideal scenario, all instruments provide the same predictions for the same samples using the transferred model. However, sometimes the success of a model transfer is evaluated by comparing the transferred model predictions with the reference values. This is not optimal, as uncertainties in the reference method will impact the evaluation. This paper proposes a new method for calibration model transfer evaluation. The new method is based on comparing predictions from different instruments, rather than comparing predictions and reference values. A total of 75 flour samples were available for the study. All samples were measured on ten near infrared (NIR) instruments from two

  4. A comprehensive review of sensors and instrumentation methods in devices for musical expression.

    Science.gov (United States)

    Medeiros, Carolina Brum; Wanderley, Marcelo M

    2014-07-25

    Digital Musical Instruments (DMIs) are musical instruments typically composed of a control surface where user interaction is measured by sensors whose values are mapped to sound synthesis algorithms. These instruments have gained interest among skilled musicians and performers in the last decades, leading to artistic practices including musical performance, interactive installations and dance. The creation of DMIs typically involves several areas, among them: arts, design and engineering. The balance between these areas is an essential task in DMI design so that the resulting instruments are aesthetically appealing, robust, and allow responsive, accurate and repeatable sensing. In this paper, we review the use of sensors in the DMI community as manifested in the proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009-2013), focusing on the sensor technologies and signal conditioning techniques used by the NIME community. Although it has been claimed that specifications for artistic tools are harder than those for military applications, this study raises a paradox, showing that in most cases DMIs are based on a few basic sensor types and unsophisticated engineering solutions, not taking advantage of more advanced sensing, instrumentation and signal processing techniques that could dramatically improve their response. We aim to raise awareness of the limitations of any engineering solution and to assert the benefits of advanced electronic instrumentation design in DMIs. For this, we propose the use of specialized sensors such as strain gages, advanced conditioning circuits and signal processing tools such as sensor fusion. We believe that careful electronic instrumentation design may lead to more responsive instruments.

  5. Multimode laser beam analyzer instrument using electrically programmable optics.

    Science.gov (United States)

    Marraccini, Philip J; Riza, Nabeel A

    2011-12-01

    Presented is a novel design of a multimode laser beam analyzer using a digital micromirror device (DMD) and an electronically controlled variable focus lens (ECVFL) that serve as the digital and analog agile optics, respectively. The proposed analyzer is a broadband laser characterization instrument that uses the agile optics to smartly direct light to the required point photodetectors to enable beam measurements of minimum beam waist size, minimum waist location, divergence, and the beam propagation parameter M(2). Experimental results successfully demonstrate these measurements for a 500 mW multimode test laser beam with a wavelength of 532 nm. The minimum beam waist, divergence, and M(2) experimental results for the test laser are found to be 257.61 μm, 2.103 mrad, 1.600 and 326.67 μm, 2.682 mrad, 2.587 for the vertical and horizontal directions, respectively. These measurements are compared to a traditional scan method and the results of the beam waist are found to be within error tolerance of the demonstrated instrument.
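
    The propagation-parameter computation can be sketched with the usual hyperbolic fit of squared beam radius against distance (in the spirit of ISO 11146); the radii below are hypothetical, and in the instrument they would come from the DMD/ECVFL-scanned profile measurements.

        import numpy as np

        # Fit w(z)^2 = A + B z + C z^2; the coefficients give waist, divergence, M^2.
        lam = 532e-9                                      # wavelength (m)
        z = np.linspace(0.0, 0.5, 11)                     # measurement planes (m)
        w = np.array([520, 470, 430, 400, 385, 380,
                      388, 405, 432, 468, 515]) * 1e-6    # hypothetical radii (m)

        A, B, C = np.polynomial.polynomial.polyfit(z, w**2, 2)  # returns [A, B, C]
        z0 = -B / (2 * C)                                 # minimum-waist location
        w0 = np.sqrt(A - B**2 / (4 * C))                  # minimum beam waist
        theta = np.sqrt(C)                                # far-field half divergence
        M2 = (np.pi / lam) * np.sqrt(A * C - B**2 / 4)    # beam propagation parameter
        print(f"z0={z0:.3f} m, w0={w0*1e6:.0f} um, "
              f"theta={theta*1e3:.2f} mrad, M2={M2:.2f}")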

  6. Verification of the Indicating Measuring Instruments Taking into Account their Instrumental Measurement Uncertainty

    Directory of Open Access Journals (Sweden)

    Zakharov Igor

    2017-12-01

    Full Text Available The specific features of measuring instrument verification based on the results of calibration are considered. It is noted that, in contrast to the verification procedure used in legal metrology, the verification procedure for calibrated measuring instruments has to take the uncertainty of measurements into account. In this regard, a large number of measuring instruments considered to be in compliance after verification in legal metrology turn out to be not in compliance after calibration. In this case, it is necessary to evaluate the probability of compliance of indicating measuring instruments. A procedure for determining the compliance probability on the basis of the Monte Carlo method is considered. An example of the calibration of a Vernier caliper is given.
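
    A minimal Monte Carlo sketch of the compliance-probability evaluation: the unknown true error is modeled as the calibration result plus normally distributed calibration uncertainty, and the compliance probability is the fraction of draws inside the maximum permissible error. All numbers are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        measured_error = 0.08    # error found at calibration (e.g. mm)
        u = 0.05                 # standard uncertainty of the calibration
        mpe = 0.10               # maximum permissible error for the accuracy class

        # Model the unknown true error as measured_error + calibration noise.
        true_error = measured_error + u * rng.standard_normal(1_000_000)
        p_compliance = np.mean(np.abs(true_error) <= mpe)
        print(f"probability of compliance: {p_compliance:.3f}")  # compare to threshold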

  7. Application of Muskingum routing method with variable parameters in ungauged basin

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2011-03-01

    Full Text Available This paper describes a flood routing method applied in an ungauged basin, utilizing the Muskingum model with variable parameters of wave travel time K and weight coefficient of discharge x based on the physical characteristics of the river reach and flood, including the reach slope, length, width, and flood discharge. Three formulas for estimating the parameters of wide rectangular, triangular, and parabolic cross sections are proposed, and the influence of the flood on the channel flow routing parameters is taken into account. The HEC-HMS hydrological model and the geospatial hydrologic analysis module HEC-GeoHMS were used to extract channel and watershed characteristics and to divide sub-basins. In addition, the initial and constant-rate method, user synthetic unit hydrograph method, and exponential recession method were used to estimate runoff volumes, the direct runoff hydrograph, and the baseflow hydrograph, respectively. The Muskingum model with variable parameters was then applied in the Louzigou Basin in Henan Province, China; the percentages of flood events with a relative error of peak discharge less than 20% and a relative error of runoff volume less than 10% are both 100%. The results also show that the percentages of flood events with coefficients of determination greater than 0.8 are 83.33%, 91.67%, and 87.5%, respectively, for rectangular, triangular, and parabolic cross sections in 24 flood events. Therefore, this method is applicable to ungauged basins.
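
    The routing step itself follows the classic Muskingum recurrence; the sketch below allows K and x to vary per time step, in the spirit of the variable-parameter formulation, though the paper's formulas deriving K and x from reach geometry and discharge are not reproduced. The inflow hydrograph is hypothetical.

        import numpy as np

        def muskingum_route(inflow, K, x, dt):
            # O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t], with K (wave travel time)
            # and x (weight coefficient) allowed to vary per time step.
            K = np.broadcast_to(K, inflow.shape)
            x = np.broadcast_to(x, inflow.shape)
            out = np.empty_like(inflow, dtype=float)
            out[0] = inflow[0]
            for t in range(len(inflow) - 1):
                d = 2 * K[t] * (1 - x[t]) + dt
                c0 = (dt - 2 * K[t] * x[t]) / d
                c1 = (dt + 2 * K[t] * x[t]) / d
                c2 = (2 * K[t] * (1 - x[t]) - dt) / d    # c0 + c1 + c2 = 1
                out[t + 1] = c0 * inflow[t + 1] + c1 * inflow[t] + c2 * out[t]
            return out

        inflow = np.array([10, 30, 68, 50, 40, 31, 23, 17, 14, 12, 10.])
        print(muskingum_route(inflow, K=2.0, x=0.2, dt=1.0).round(1))  # hours, m^3/s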

  8. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
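
    A rough sketch of the backward elimination loop evaluated here, using scikit-learn's random forest and its internal OOB accuracy; note the paper's caution that OOB estimates taken inside the selection loop can be upwardly biased, so a final external cross-validation is still advisable. The data are simulated.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        # Repeatedly drop the least important fraction of predictors and track
        # out-of-bag (OOB) accuracy at each model size.
        X, y = make_classification(n_samples=1000, n_features=100,
                                   n_informative=8, random_state=0)
        keep = np.arange(X.shape[1])
        while len(keep) > 2:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=0, n_jobs=-1).fit(X[:, keep], y)
            print(f"{len(keep):3d} variables, OOB accuracy = {rf.oob_score_:.3f}")
            order = np.argsort(rf.feature_importances_)   # least important first
            keep = keep[order[len(keep) // 5:]]           # drop the bottom 20%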

  9. Einstein x-ray observations of cataclysmic variables

    International Nuclear Information System (INIS)

    Mason, K.O.; Cordova, F.A.

    1982-01-01

    Observations with the imaging x-ray detectors on the Einstein Observatory have led to a large increase in the number of low luminosity x-ray sources known to be associated with cataclysmic variable stars (CVs). The high sensitivity of the Einstein instrumentation has permitted study of their short timescale variability and spectra. The data are adding significantly to our knowledge of the accretion process in cataclysmic variables and forcing some revision in our ideas concerning the origin of the optical variability in these stars

  10. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping

    DEFF Research Database (Denmark)

    Wiuf, Carsten; Pallesen, Jonatan; Foldager, Leslie

    2016-01-01

    … and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method …
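
    Not the LandScape algorithm itself, but a generic illustration of why aggregation helps: combining each window of neighboring p-values with Fisher's method lets a run of individually weak signals surface as one strong aggregated signal. The window length and data are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        p = rng.uniform(size=200)                       # p-values along a sequence
        p[80:90] = rng.uniform(0, 0.05, size=10)        # a block of weak signals

        win = 10
        fisher = np.array([stats.combine_pvalues(p[i:i + win], method='fisher')[1]
                           for i in range(len(p) - win + 1)])
        print("strongest window starts at index", fisher.argmin(), fisher.min())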

  11. Comparison of real-time instruments and gravimetric method when measuring particulate matter in a residential building.

    Science.gov (United States)

    Wang, Zuocheng; Calderón, Leonardo; Patton, Allison P; Sorensen Allacci, MaryAnn; Senick, Jennifer; Wener, Richard; Andrews, Clinton J; Mainelis, Gediminas

    2016-11-01

    This study used several real-time and filter-based aerosol instruments to measure PM2.5 levels in a high-rise residential green building in the Northeastern US and compared the performance of those instruments. PM2.5 24-hr average concentrations were determined using a Personal Modular Impactor (PMI) with a 2.5 µm cut (SKC Inc., Eighty Four, PA) and a direct-reading pDR-1500 (Thermo Scientific, Franklin, MA) as well as its filter. 1-hr average PM2.5 concentrations were measured in the same apartments with an Aerotrak Optical Particle Counter (OPC) (model 8220, TSI, Inc., Shoreview, MN) and a DustTrak DRX mass monitor (model 8534, TSI, Inc., Shoreview, MN). OPC and DRX measurements were compared with concurrent 1-hr mass concentrations from the pDR-1500. The pDR-1500 direct reading showed approximately 40% higher particle mass concentration compared to its own filter (n = 41), and 25% higher PM2.5 mass concentration compared to the PMI2.5 filter. The pDR-1500 direct reading and PMI2.5 in non-smoking homes (self-reported) were not significantly different (n = 10, R² = 0.937), while the difference between measurements for smoking homes was 44% (n = 31, R² = 0.773). Both OPC and DRX data had substantial and significant systematic and proportional biases compared with pDR-1500 readings. However, these methods were highly correlated: R² = 0.936 for OPC versus the pDR-1500 reading and R² = 0.863 for DRX versus the pDR-1500 reading. The data suggest that the accuracy of aerosol mass concentrations from direct-reading instruments in indoor environments depends on the instrument, and that correction factors can be used to reduce biases of these real-time monitors in residential green buildings with similar aerosol properties.
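
    The correction-factor idea can be illustrated with a simple linear fit of filter mass on monitor readings; a hedged sketch with hypothetical co-located values, not the study's data:

        # Derive a site-specific correction for a direct-reading monitor.
        import numpy as np

        direct = np.array([35.1, 20.4, 55.8, 12.9, 41.0])   # monitor PM2.5, ug/m3
        filt = np.array([25.0, 15.2, 39.7, 10.1, 29.5])     # co-located filter mass
        slope, intercept = np.polyfit(direct, filt, 1)      # least-squares line
        corrected = slope * direct + intercept              # bias-corrected readings
        print(slope, intercept, corrected)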

  12. Variable Stars in the Field of V729 Aql

    Science.gov (United States)

    Cagaš, P.

    2017-04-01

    Wide field instruments can be used to acquire light curves of tens or even hundreds of variable stars per night, which increases the probability of new discoveries of interesting variable stars and generally increases the efficiency of observations. At the same time, wide field instruments produce a large amount of data, which must be processed using advanced software. The traditional approach, typically used by amateur astronomers, requires an unacceptable amount of time to process each data set. New functionality built into the SIPS software package can shorten the time needed to obtain light curves by several orders of magnitude. The newly introduced SILICUPS software is intended for post-processing of stored light curves. It can be used to visualize observations from many nights, to find variable star periods, to evaluate types of variability, etc. This work provides an overview of tools used to process data from the large field of view around the variable star V729 Aql and demonstrates the results.
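
    Period searching of the kind SILICUPS performs can be sketched with astropy's Lomb-Scargle implementation; a generic example on a synthetic light curve, not V729 Aql data:

        # Find the dominant period of an unevenly sampled light curve.
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 30.0, 400))          # observation times, days
        mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) \
              + 0.02 * rng.normal(size=400)               # 2.5-day variable + noise
        freq, power = LombScargle(t, mag).autopower()
        print("best period:", 1.0 / freq[np.argmax(power)], "days")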

  13. A method to standardize gait and balance variables for gait velocity.

    NARCIS (Netherlands)

    Iersel, M.B. van; Olde Rikkert, M.G.M.; Borm, G.F.

    2007-01-01

    Many gait and balance variables depend on gait velocity, which seriously hinders the interpretation of gait and balance data derived from walks at different velocities. However, as far as we know there is no widely accepted method to correct for effects of gait velocity on other gait and balance variables.
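
    One common way to remove a velocity dependence is to regress the variable of interest on gait velocity and work with the residuals; a hedged sketch with hypothetical values, not necessarily the authors' standardization:

        # Velocity-adjusted gait variable via regression residuals.
        import numpy as np

        velocity = np.array([0.8, 1.0, 1.2, 1.4, 1.1])        # m/s
        step_width = np.array([0.12, 0.11, 0.10, 0.09, 0.105])  # m
        b, a = np.polyfit(velocity, step_width, 1)            # slope, intercept
        adjusted = step_width - (a + b * velocity)            # velocity-free residuals
        print(adjusted)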

  14. Control Method for Variable Speed Wind Turbines to Support Temporary Primary Frequency Control

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2014-01-01

    This paper develops a control method for variable speed wind turbines (VSWTs) to support temporary primary frequency control of a power system. The control method contains two parts: (1) up-regulate support control when a frequency drop event occurs; (2) down-regulate support control when a frequency rise event occurs...

  15. Modeling the solute transport by particle-tracing method with variable weights

    Science.gov (United States)

    Jiang, J.

    2016-12-01

    Particle-tracing methods are usually used to simulate solute transport in fracture media. In this method, the concentration at one point is proportional to the number of particles visiting this point. However, this method is rather inefficient at points with small concentration: few particles visit these points, which leads to violent oscillation or gives zero concentration values. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at one point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillation and increase the accuracy by orders of magnitude.
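
    The splitting and termination rule described above can be stated compactly; a minimal sketch with generic names, not the authors' code:

        # Variable-weight adjustment for a tracked particle: split when the
        # weight W exceeds the local relative concentration C, otherwise play
        # Russian roulette with survival probability W/C.
        import random

        def adjust_weight(W, C, rng=random):
            """Return the list of particle weights to continue tracking."""
            if W > C:
                n = int(W / C)            # split into Int(W/C) copies
                return [W / n] * n        # each copy carries weight W/Int(W/C)
            if rng.random() < W / C:      # survive roulette with probability W/C
                return [C]                # survivor weight is raised to C
            return []                     # particle terminated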

  16. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    Science.gov (United States)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind-profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors, such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  17. Rorschach Inkblot Method data at baseline and after 2 years treatment of consecutively admitted patients with first-episode schizophrenia

    DEFF Research Database (Denmark)

    Rosenbaum, Bent; Andersen, Palle Bent; Knudsen, Per Bjerregaard

    2012-01-01

    Background: The Rorschach Inkblot Method is regarded as an important clinical instrument for detailed diagnostic description of the integrative capacities of individuals in psychotic states and as an instrument for measuring progression in the course of treatment. Aims: To describe relevant Rorschach variables at baseline in a group of consecutively admitted patients with first-episode schizophrenia; furthermore, to describe the changes in these variables from baseline to year 2 for the group of patients given psychiatric standard treatment, and to compare these changes with changes in other outcome measures [Positive and Negative Syndrome Scale (PANSS), Global Assessment of Functioning (GAF), Strauss-Carpenter and socio-demographic variables]. Methods: In a prospective study, 34 patients consecutively admitted to treatment for a first episode of schizophrenia were tested using Exner...

  18. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
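
    Two of the listed selectors are easy to sketch with scikit-learn on synthetic data (a stand-in for the event samples; the decision tree here plays the role of CART):

        # Rank variables by an F-test and by decision-tree importance.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import f_classif
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=1000, n_features=20,
                                   n_informative=5, random_state=1)
        F, _ = f_classif(X, y)
        tree = DecisionTreeClassifier(random_state=1).fit(X, y)
        print("F-test top 5:", sorted(np.argsort(F)[::-1][:5]))
        print("tree top 5:  ", sorted(np.argsort(tree.feature_importances_)[::-1][:5]))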

  19. Laboratory test for ice adhesion strength using commercial instrumentation.

    Science.gov (United States)

    Wang, Chenyu; Zhang, Wei; Siva, Adarsh; Tiea, Daniel; Wynne, Kenneth J

    2014-01-21

    A laboratory test method for evaluating ice adhesion has been developed employing a commercially available instrument normally used for dynamic mechanical analysis (TA RSA-III). This is the first laboratory ice adhesion test that does not require a custom-built apparatus. The upper grip range of ∼10 mm is an enabling feature that is essential for the test. The method involves removal of an ice cylinder from a polymer coating with a probe and the determination of peak removal force (Ps). To validate the test method, the strength of ice adhesion was determined for a prototypical glassy polymer, poly(methyl methacrylate). The distance of the probe from the PMMA surface has been identified as a critical variable for Ps. The new test provides a readily available platform for investigating fundamental surface characteristics affecting ice adhesion. In addition to the ice release test, PMMA coatings were characterized using DSC, DCA, and TM-AFM.

  20. A Comprehensive Review of Sensors and Instrumentation Methods in Devices for Musical Expression

    Directory of Open Access Journals (Sweden)

    Carolina Brum Medeiros

    2014-07-01

    Full Text Available Digital Musical Instruments (DMIs) are musical instruments typically composed of a control surface where user interaction is measured by sensors whose values are mapped to sound synthesis algorithms. These instruments have gained interest among skilled musicians and performers in the last decades, leading to artistic practices including musical performance, interactive installations and dance. The creation of DMIs typically involves several areas, among them arts, design and engineering. The balance between these areas is an essential task in DMI design so that the resulting instruments are aesthetically appealing, robust, and allow responsive, accurate and repeatable sensing. In this paper, we review the use of sensors in the DMI community as manifested in the proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009–2013), focusing on the sensor technologies and signal conditioning techniques used by the NIME community. Although it has been claimed that specifications for artistic tools are harder than those for military applications, this study raises a paradox, showing that in most cases DMIs are based on a few basic sensor types and unsophisticated engineering solutions, not taking advantage of more advanced sensing, instrumentation and signal processing techniques that could dramatically improve their response. We aim to raise awareness of the limitations of any engineering solution and to assert the benefits of advanced electronic instrumentation design in DMIs. For this, we propose the use of specialized sensors such as strain gages, advanced conditioning circuits and signal processing tools such as sensor fusion. We believe that careful electronic instrumentation design may lead to more responsive instruments.
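
    As one concrete instance of the sensor fusion the authors advocate, a complementary filter blending two imperfect tilt estimates can be sketched in a few lines (illustrative values, not a NIME design):

        # Fuse a drifting-but-smooth gyroscope rate with a noisy-but-absolute
        # accelerometer angle; alpha sets the crossover between the two.
        def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
            return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

        angle = 0.0
        for gyro_rate, accel_angle in [(0.10, 0.02), (0.12, 0.05), (0.08, 0.07)]:
            angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
        print(angle)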

  1. The instruments in the first psychological laboratory in Mexico: antecedents, influence, and methods.

    Science.gov (United States)

    Escobar, Rogelio

    2014-11-01

    Enrique O. Aragón established the first psychological laboratory in Mexico in 1916. This laboratory was inspired by Wundt's laboratory and by those created afterward in Germany and the United States. It was equipped with state-of-the-art instruments imported from Germany in 1902 from Ernst Zimmermann, who supplied instruments for Wundt's laboratory. Although previous authors have described the social events leading to the creation of the laboratory, there are limited descriptions of the instruments, their use, and their influence. With the aid of archival resources, the initial location of the laboratory was determined. The analysis of the instruments revealed a previously overlooked relation with an earlier laboratory of experimental physiology. The influence of the laboratory was traced by describing the careers of 4 students, 3 of them women, who worked with the instruments during the first 2 decades of the 20th century, each becoming an accomplished scholar. In addition, this article, by identifying and analyzing the instruments shown in photographs of the psychological laboratory and in 1 motion film, provides information about the class demonstrations and the experiments conducted in this laboratory.

  2. Interpreting two state instruments for intermediary values

    International Nuclear Information System (INIS)

    Good, R.R.

    1979-01-01

    Interpolating data from instruments which produce only two distinct binary outputs is discussed. The problem of determining void fraction, given an instrument which produces only a liquid or a no-liquid output, is used to demonstrate three different methods of data interpolation. The three methods involve a form of time averaging. The methods are signal amplitude histogram, signal derivative, and signal derivative discriminator. The advantages, disadvantages, and accuracies of each method are described.
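
    The underlying time-averaging idea is simple to state; a sketch with a hypothetical binary record (1 = liquid, 0 = no liquid):

        # Estimate void fraction as the fraction of time in the no-liquid state.
        import numpy as np

        signal = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])
        void_fraction = np.mean(signal == 0)
        print(f"estimated void fraction: {void_fraction:.2f}")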

  3. Reliability of the input admittance of bowed-string instruments measured by the hammer method.

    Science.gov (United States)

    Zhang, Ailin; Woodhouse, Jim

    2014-12-01

    The input admittance at the bridge, measured by hammer testing, is often regarded as the most useful and convenient measurement of the vibrational behavior of a bowed-string instrument. However, this method has been questioned, due especially to differences between human bowing and hammer impact. The goal of the research presented here is to investigate the reliability and accuracy of this classic hammer method. Experimental studies were carried out on cellos, with three different driving conditions and three different boundary conditions. Results suggest that there is nothing fundamentally different about the hammer method compared to other kinds of excitation. The third series of experiments offers an opportunity to explore the difference between the input admittance measured from one bridge corner to the other and that of single strings. The classic measurement is found to give a reasonable approximation to that of all four strings. Some possible differences between the hammer method and normal bowing, and implications of the acoustical results, are also discussed.
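
    A standard way to compute such an input admittance from hammer-test records is the H1 spectral estimate; a hedged sketch with placeholder signals, not the authors' processing chain:

        # H1 admittance estimate: cross-spectrum of force and velocity divided
        # by the force auto-spectrum, via Welch averaging.
        import numpy as np
        from scipy.signal import csd, welch

        fs = 48000
        t = np.arange(fs) / fs
        force = np.random.randn(fs)                       # hammer force (placeholder)
        velocity = np.convolve(force, np.exp(-200 * t[:512]), mode="same")

        f, S_ff = welch(force, fs=fs, nperseg=4096)
        _, S_fv = csd(force, velocity, fs=fs, nperseg=4096)
        H1 = S_fv / S_ff                                  # admittance vs. frequency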

  4. Instrumental Landing Using Audio Indication

    Science.gov (United States)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

    The paper proposes an audio indication method for presenting to a pilot the information regarding the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of such audio signal parameters as loudness, frequency and modulation is discussed. To confirm the operability of the audio indication channel, experiments using a modern aircraft simulation facility were carried out. The subjects performed instrument landings using the proposed audio method to indicate the aircraft deviations in relation to the glide path. The results proved comparable with simulated instrument landings using the traditional glideslope pointers. This encourages further development of the method to solve other precision piloting tasks.
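
    One possible deviation-to-audio mapping can be sketched as follows (an assumed illustration, not the authors' exact scheme): glideslope deviation controls tone frequency and localizer deviation controls the stereo balance:

        # Map glideslope/localizer deviations to tone frequency and loudness.
        import numpy as np

        def deviation_to_tone(glide_dev, loc_dev, fs=8000, dur=0.2):
            f0 = 440.0 * 2.0 ** glide_dev          # one-octave shift per unit deviation
            t = np.arange(int(fs * dur)) / fs
            tone = np.sin(2 * np.pi * f0 * t)
            left, right = (1 - loc_dev) / 2.0, (1 + loc_dev) / 2.0   # stereo pan
            return np.stack([left * tone, right * tone])

        stereo = deviation_to_tone(glide_dev=0.3, loc_dev=-0.5)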

  5. Variable selection in multivariate calibration based on clustering of variable concept.

    Science.gov (United States)

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently we proposed a new variable selection algorithm based on the clustering of variables concept (CLoVA) for classification problems. With the same idea, this concept has now been applied to a regression problem, and the results have been compared with conventional variable selection strategies for PLS. The basic idea behind clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, is proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can split the useful part from the redundant one, and that, based on the informative cluster, a stable model can be reached. Copyright © 2015 Elsevier B.V. All rights reserved.
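
    The clustering-of-variables idea can be sketched by clustering the columns of the data matrix and fitting a PLS model per cluster; a minimal illustration on synthetic spectra, not the CLoVA implementation itself:

        # Cluster instrument channels, then regress y on each channel cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 100))                 # 60 samples, 100 channels
        y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=60)

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X.T)
        for k in range(4):
            cols = labels == k
            pls = PLSRegression(n_components=2).fit(X[:, cols], y)
            print(f"cluster {k}: {cols.sum():3d} channels, "
                  f"R^2 = {pls.score(X[:, cols], y):.3f}")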

  6. Design and implementation of a wireless instrument adapter

    DEFF Research Database (Denmark)

    Laino, Kaori V.; Saathoff, Thore; Savarimuthu, Thiusius R.

    2018-01-01

    The evaluation of new methods for control and manipulation in minimally invasive robotic surgery requires a realistic setup. To decouple the evaluation of methods from overall clinical systems, we propose an instrument adapter for the S line EndoWrist© instruments of the da Vinci surgical system. The adapter is small and lightweight and can be mounted to any robot to mimic motion. We describe its design and implementation, as well as a setup to calibrate instruments to study precise motion control. Our results indicate that each instrument requires individual calibration...

  7. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.
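
    The switching law at the heart of variable-structure control can be written in two lines; a hedged sketch of a generic sliding-mode law, not the paper's spin-recovery controller:

        # Control flips sign across the sliding surface s = e_dot + lam * e.
        def vsc_control(error, error_rate, lam=2.0, gain=5.0):
            s = error_rate + lam * error
            return -gain if s > 0 else gain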

  8. DATA COLLECTION METHOD FOR PEDESTRIAN MOVEMENT VARIABLES

    Directory of Open Access Journals (Sweden)

    Hajime Inamura

    2000-01-01

    Full Text Available The need for tools for the design and evaluation of pedestrian areas, subway stations, entrance halls, shopping malls, escape routes, stadiums, etc. leads to the necessity of a pedestrian model. One such approach is the Microscopic Pedestrian Simulation Model. To be able to develop and calibrate a microscopic pedestrian simulation model, a number of variables need to be considered. As the first step of model development, data were collected using video, and the coordinates of the head paths were extracted through image processing. A number of variables can be gathered to describe the behavior of pedestrians from different points of view. This paper describes how to obtain, from video recording and simple image processing, variables that can represent the movement of pedestrians.
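
    A tiny sketch of the kind of movement variable such head-path coordinates yield (hypothetical trajectory; walking speed from successive positions):

        # Walking speed from head coordinates sampled at fixed intervals.
        import numpy as np

        xy = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.1], [1.3, 0.2]])  # m
        dt = 0.5                                                         # s
        speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
        print(speed.mean(), "m/s")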

  9. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in various fields of scientific research.
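
    For context, the core object-update step of the PIE family that vaPIE modifies can be sketched as follows (standard PIE form; the authors' exact update is not reproduced here):

        # One PIE update at a single illumination position.
        import numpy as np

        def pie_update(obj, probe, measured_amp, alpha=1.0):
            exit_wave = obj * probe
            far_field = np.fft.fft2(exit_wave)
            # impose the measured diffraction amplitude, keep the phase
            corrected = measured_amp * np.exp(1j * np.angle(far_field))
            revised = np.fft.ifft2(corrected)
            step = np.conj(probe) / (np.abs(probe).max() ** 2) * (revised - exit_wave)
            return obj + alpha * step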

  10. A comparison of methods to separate treatment from self-selection effects in an online banking setting

    NARCIS (Netherlands)

    Gensler, S.; Leeflang, P.S.H.; Skiera, B.

    The literature discusses several methods to control for self-selection effects but provides little guidance on which method to use in a setting with a limited number of variables. The authors theoretically compare and empirically assess the performance of different matching methods and instrumental variable methods.

  11. High-Frequency X-ray Variability Detection in A Black Hole Transient with USA.

    Energy Technology Data Exchange (ETDEWEB)

    Shabad, Gayane

    2000-10-16

    Studies of high-frequency variability (above ~100 Hz) in X-ray binaries provide a unique opportunity to explore the fundamental physics of spacetime and matter, since the orbital timescale on the order of several milliseconds is a timescale of the motion of matter through the region located in close proximity to a compact stellar object. The detection of weak high-frequency signals in X-ray binaries depends on how well we understand the level of Poisson noise due to the photon counting statistics, i.e. how well we can understand and model the detector deadtime and other instrumental systematic effects. We describe the preflight timing calibration work performed on the Unconventional Stellar Aspect (USA) X-ray detector to study deadtime and timing issues. We developed a Monte Carlo deadtime model and deadtime correction methods for the USA experiment. The instrumental noise power spectrum can be estimated within ~0.1% accuracy in the case when no energy-dependent instrumental effect is present. We also developed correction techniques to account for an energy-dependent instrumental effect. The developed methods were successfully tested on USA Cas A and Cygnus X-1 data. This work allowed us to make a detection of a weak signal in a black hole candidate (BHC) transient.
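
    A textbook example of the kind of correction such calibrations must get right is the non-paralyzable deadtime formula; a sketch with illustrative numbers, not the USA detector's actual deadtime model:

        # Recover the true count rate n from the measured rate m: n = m / (1 - m*tau).
        def true_rate(measured_rate, tau):
            return measured_rate / (1.0 - measured_rate * tau)

        print(true_rate(5000.0, 10e-6))   # 5 kcps measured, 10 us deadtime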

  12. Methods and instrumentation for investigating Hall sensors during their irradiation in nuclear research reactors

    International Nuclear Information System (INIS)

    Bolshakova, I.; Holyaka, R.; Makido, E.; Marusenkov, A.; Shurygin, F.; Yerashok, V.; Moreau, P. J.; Vayakis, G.; Duran, I.; Stockel, J.; Chekanov, V.; Konopleva, R.; Nazarkin, I.; Kulikov, S.; Leroy, C.

    2009-01-01

    The present work discusses the issues of creating instrumentation for testing semiconductor magnetic field sensors during their irradiation with neutrons in nuclear reactors, up to fluences similar to the neutron fluences at steady-state sensor locations in ITER. The novelty of the work lies in the Hall sensor parameters being investigated, first, directly during the irradiation (in real time) and, second, at high irradiation levels (fast neutron fluence > 10¹⁸ n/cm²). The developed instrumentation has been successfully tested and applied in research experiments on the radiation stability of magnetic sensors in the IBR-2 (JINR, Dubna) and VVR-M (PNPI, Saint-Petersburg) reactors. The 'Remote-Rad' bench consists of two heads (head 1 and head 2) bearing the investigated sensors in a ceramic setting, an electronic unit, a personal computer and signal lines. Each head contains 6 Hall sensors and a coil for generating a test magnetic field. Moreover, head 1 contains thermocouples for temperature measurement, while the temperature of head 2 is measured by a thermo-resistive method. The heads are placed in the reactor channel.

  13. In-house validation study of the DuPont Qualicon BAX system Q7 instrument with the BAX system PCR Assay for Salmonella (modification of AOAC Official Method 2003.09 and AOAC Research Institute Performance-Tested Method 100201).

    Science.gov (United States)

    Tice, George; Andaloro, Bridget; White, H Kirk; Bolton, Lance; Wang, Siqun; Davis, Eugene; Wallace, Morgan

    2009-01-01

    In 2006, DuPont Qualicon introduced the BAX system Q7 instrument for use with its assays. To demonstrate the equivalence of the new and old instruments, a validation study was conducted using the BAX system PCR Assay for Salmonella, AOAC Official Method 2003.09, on three food types. The foods were simultaneously analyzed with the BAX system Q7 instrument and either the U.S. Food and Drug Administration Bacteriological Analytical Manual or the U.S. Department of Agriculture-Food Safety and Inspection Service Microbiology Laboratory Guidebook reference method for detecting Salmonella. Comparable performance between the BAX system and the reference methods was observed. Of the 75 paired samples analyzed, 39 samples were positive by both the BAX system and reference methods, and 36 samples were negative by both the BAX system and reference methods, demonstrating 100% correlation. Inclusivity and exclusivity for the BAX system Q7 instrument were also established by testing 50 Salmonella strains and 20 non-Salmonella isolates. All Salmonella strains returned positive results, and all non-Salmonella isolates returned a negative response.

  14. Endotoxins in surgical instruments of hip arthroplasty

    Directory of Open Access Journals (Sweden)

    Vania Regina Goveia

    2016-06-01

    Full Text Available Abstract OBJECTIVE To investigate endotoxins in sterilized surgical instruments used in hip arthroplasties. METHOD A descriptive exploratory study conducted in a public teaching hospital. Six types of surgical instruments were selected, namely: acetabulum rasp, femoral rasp, femoral head remover, chisel box, flexible bone reamer and femoral head test. The selection was based on the analysis of the difficulty in removing bone and blood residues during cleaning. The sample was made up of 60 surgical instruments, which were tested for endotoxins in three different stages. The Endosafe™ Gel-Clot LAL (Limulus Amebocyte Lysate) method was used. RESULT There was consistent gel formation with positive analysis in eight instruments, corresponding to 13.3%: four femoral rasps and four flexible bone reamers. CONCLUSION Endotoxins in quantities ≥ 0.125 EU/mL were detected in 13.3% of the instruments tested.

  15. Is foreign direct investment good for health in low and middle income countries? An instrumental variable approach.

    Science.gov (United States)

    Burns, Darren K; Jones, Andrew P; Goryakin, Yevgeniy; Suhrcke, Marc

    2017-05-01

    There is a scarcity of quantitative research into the effect of FDI on population health in low and middle income countries (LMICs). This paper investigates the relationship using annual panel data from 85 LMICs between 1974 and 2012. When controlling for time trends, country fixed effects, correlation between repeated observations, relevant covariates, and endogeneity via a novel instrumental variable approach, we find FDI to have a beneficial effect on overall health, proxied by life expectancy. When investigating age-specific mortality rates, we find a stronger beneficial effect of FDI on adult mortality, yet no association with either infant or child mortality. Notably, FDI effects on health remain undetected in all models which do not control for endogeneity. Exploring the effect of sector-specific FDI on health in LMICs, we provide preliminary evidence of a weak inverse association between secondary (i.e. manufacturing) sector FDI and overall life expectancy. Our results thus suggest that FDI has provided an overall benefit to population health in LMICs, particularly in adults, yet investments into the secondary sector could be harmful to health. Copyright © 2017 Elsevier Ltd. All rights reserved.
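
    The two-stage least squares logic behind such an instrumental variable estimate is easy to sketch on simulated data (generic variable names, not the paper's panel setup or instrument):

        # 2SLS vs. OLS when the exposure is confounded but an instrument exists.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        z = rng.normal(size=n)                    # instrument
        u = rng.normal(size=n)                    # unobserved confounder
        x = 0.8 * z + u + rng.normal(size=n)      # endogenous exposure (e.g., FDI)
        y = 0.5 * x + u + rng.normal(size=n)      # outcome (e.g., life expectancy)

        Z = np.column_stack([np.ones(n), z])
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]          # stage 1
        Xh = np.column_stack([np.ones(n), x_hat])
        beta_iv = np.linalg.lstsq(Xh, y, rcond=None)[0]           # stage 2
        Xo = np.column_stack([np.ones(n), x])
        beta_ols = np.linalg.lstsq(Xo, y, rcond=None)[0]
        print(f"IV {beta_iv[1]:.2f} vs biased OLS {beta_ols[1]:.2f} (true 0.5)")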

  16. Comparison of different calibration methods suited for calibration problems with many variables

    DEFF Research Database (Denmark)

    Holst, Helle

    1992-01-01

    This paper describes and compares different kinds of statistical methods proposed in the literature as suited for solving calibration problems with many variables. These are: principal component regression, partial least-squares, and ridge regression. The statistical techniques themselves do...
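
    The three methods can be compared directly in scikit-learn on a synthetic many-variable calibration problem (a stand-in for the chemometric data the paper considers):

        # Cross-validated comparison of PCR, PLS and ridge regression.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression, Ridge
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 200))            # many more variables than samples
        y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=50)

        models = {
            "PCR": make_pipeline(PCA(n_components=5), LinearRegression()),
            "PLS": PLSRegression(n_components=5),
            "Ridge": Ridge(alpha=1.0),
        }
        for name, model in models.items():
            score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean CV R^2 = {score:.3f}")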

  17. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    International Nuclear Information System (INIS)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  18. Biological variables for the site survey of surface ecosystems - existing data and survey methods

    Energy Technology Data Exchange (ETDEWEB)

    Kylaekorpi, Lasse; Berggren, Jens; Larsson, Mats; Liberg, Maria; Rydgren, Bernt [SwedPower AB, Stockholm (Sweden)

    2000-06-01

    In the process of selecting a safe and environmentally acceptable location for the deep level repository of nuclear waste, site surveys will be carried out. These site surveys will also include studies of the biota at the site, in order to assure that the chosen site will not conflict with important ecological interests, and to establish a thorough baseline for future impact assessments and monitoring programmes. As a preparation to the site survey programme, a review of the variables that need to be surveyed is conducted. This report contains the review for some of those variables. For each variable, existing data sources and their characteristics are listed. For those variables for which existing data sources are inadequate, suggestions are made for appropriate methods that will enable the establishment of an acceptable baseline. In this report the following variables are reviewed: Fishery, Landscape, Vegetation types, Key biotopes, Species (flora and fauna), Red-listed species (flora and fauna), Biomass (flora and fauna), Water level, water retention time (incl. water body and flow), Nutrients/toxins, Oxygen concentration, Layering, stratification, Light conditions/transparency, Temperature, Sediment transport, (Marine environments are excluded from this review). For a major part of the variables, the existing data coverage is most likely insufficient. Both the temporal and/or the geographical resolution is often limited, which means that complementary surveys must be performed during (or before) the site surveys. It is, however, in general difficult to make exact judgements on the extent of existing data, and also to give suggestions for relevant methods to use in the site surveys. This can be finally decided only when the locations for the sites are decided upon. The relevance of the different variables also depends on the environmental characteristics of the sites. Therefore, we suggest that when the survey sites are selected, an additional review is

  19. Effects of thermal deformation on optical instruments for space application

    Science.gov (United States)

    Segato, E.; Da Deppo, V.; Debei, S.; Cremonese, G.

    2017-11-01

    Optical instruments for space missions work in a hostile environment, so it is necessary to study accurately the effects of ambient parameter variations on the equipment. Optical instruments are particularly sensitive to ambient conditions, especially temperature. This variable can cause dilatations and misalignments of the optical elements and can also give rise to dangerous stresses in the optics. The displacements and deformations degrade the quality of the sampled images. In this work a method for studying the effects of temperature variations on the performance of an imaging instrument is presented. The optics and their mountings are modeled and processed by a thermo-mechanical Finite Element Model (FEM) analysis; the output data, which describe the deformations of the optical element surfaces, are then processed using an ad hoc MATLAB routine: a non-linear least squares optimization algorithm is adopted to determine the surface equations (plane, spherical, nth-order polynomial) which best fit the data. The obtained mathematical surface representations are then directly imported into ZEMAX for sequential ray-tracing analysis. The results are the variations of the Spot Diagrams, of the MTF curves and of the Diffraction Ensquared Energy due to simulated thermal loads. This method has been successfully applied to the Stereo Camera for the BepiColombo mission, reproducing expected operative conditions. The results help to design and compare different optical housing systems for a feasible solution and show that it is preferable to use kinematic constraints on prisms and lenses to minimize the variation of the optical performance of the Stereo Camera.
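
    The surface-fitting step can be illustrated with a least-squares sphere fit to deformed-node coordinates; a hedged sketch using SciPy in place of the authors' MATLAB routine, with hypothetical points:

        # Fit center and radius of a sphere to FEM surface nodes.
        import numpy as np
        from scipy.optimize import least_squares

        pts = np.random.randn(200, 3) * 0.01
        pts[:, 2] += np.sqrt(100.0**2 - pts[:, 0]**2 - pts[:, 1]**2)  # ~R = 100 cap

        def residuals(p):
            cx, cy, cz, R = p
            return np.linalg.norm(pts - [cx, cy, cz], axis=1) - R

        fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 90.0])
        print("center and radius:", fit.x)   # feed the fitted surface to ray tracing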

  20. Critical Science Instrument Alignment of the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM)

    Science.gov (United States)

    Rohrbach, Scott O.; Kubalak, David A.; Gracey, Renee M.; Sabatke, Derek S.; Howard, Joseph M.; Telfer, Randal C.; Zielinski, Thomas P.

    2016-01-01

    This paper describes the critical instrument alignment terms associated with the six-degree-of-freedom alignment of each Science Instrument (SI) in the James Webb Space Telescope (JWST), including focus, pupil shear, pupil clocking, and boresight. We present the test methods used during cryogenic-vacuum tests to directly measure the performance of each parameter, the requirements levied on each, and the impact of any violations of these requirements at the instrument and Observatory level.

  1. Optimization and development of the instrumental parameters for a method of multielemental analysis through atomic emission spectroscopy, for the determination of Mg, Fe, Mn and Cr

    International Nuclear Information System (INIS)

    Lanzoni Vindas, E.

    1998-01-01

    This study optimized the instrumental parameters of a method of multielemental (sequential) analysis, through atomic emission, for the determination of Mg, Fe, Mn and Cr. It used a factorial design at two levels and the Simplex optimization method, which permitted the determination of the four cations under the same instrumental conditions. The author studied an analytic system in which the relationship between instrumental response and concentration was not linear, requiring adjustment of the calibration curves under homoscedastic and heteroscedastic conditions. (S. Grainger)
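
    The Simplex (Nelder-Mead) optimization named above is available off the shelf; a hedged sketch tuning two hypothetical instrumental parameters to maximize a toy response surface:

        # Nelder-Mead search over instrument settings (toy response surface).
        import numpy as np
        from scipy.optimize import minimize

        def negative_response(p):
            flow, power = p                     # hypothetical parameters
            return -np.exp(-((flow - 1.2)**2 + (power - 0.8)**2))

        best = minimize(negative_response, x0=[1.0, 1.0], method="Nelder-Mead")
        print(best.x)                           # settings that maximize the response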

  2. Approaches for developing a sizing method for stand-alone PV systems with variable demand

    Energy Technology Data Exchange (ETDEWEB)

    Posadillo, R. [Grupo de Investigacion en Energias y Recursos Renovables, Dpto. de Fisica Aplicada, E.P.S., Universidad de Cordoba, Avda. Menendez Pidal s/n, 14004 Cordoba (Spain); Lopez Luque, R. [Grupo de Investigacion de Fisica para las Energias y Recursos Renovables, Dpto. de Fisica Aplicada. Edificio C2 Campus de Rabanales, 14071 Cordoba (Spain)

    2008-05-15

    Accurate sizing is one of the most important aspects to take into consideration when designing a stand-alone photovoltaic system (SAPV). Various methods, which differ in terms of their simplicity or reliability, have been developed for this purpose. Analytical methods, which seek functional relationships between variables of interest to the sizing problem, are one of these approaches. A series of rational considerations are presented in this paper with the aim of shedding light upon the basic principles and results of various sizing methods proposed by different authors. These considerations set the basis for a new analytical method designed for systems with variable monthly energy demands. Following previous approaches, the method proposed is based on the concept of loss of load probability (LLP) - a parameter that is used to characterize system design. The method includes information on the standard deviation of loss of load probability (σ_LLP) and on two new parameters: the annual number of system failures (f) and the standard deviation of the annual number of failures (σ_f). The method proves useful for sizing a PV system in a reliable manner and serves to explain the discrepancies found in the research on systems with LLP < 10⁻². We demonstrate that reliability depends not only on the sizing variables and on the distribution function of solar radiation, but also on the minimum value that the total solar radiation on the receiver surface reaches in a given location with a given monthly average clearness index. (author)
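
    How a loss-of-load probability might be estimated by simulation can be sketched in a few lines (illustrative daily series and battery size, not the authors' formulation):

        # Count days with unmet load while stepping a battery state of charge.
        import numpy as np

        rng = np.random.default_rng(0)
        days = 3650
        pv = rng.gamma(shape=3.0, scale=1.2, size=days)     # daily PV output, kWh
        demand = 3.0 + 0.5 * rng.normal(size=days)          # variable daily demand
        capacity, soc, failures = 8.0, 8.0, 0               # battery size and charge

        for p, d in zip(pv, demand):
            soc = min(capacity, soc + p) - d
            if soc < 0:                                     # loss-of-load day
                failures += 1
                soc = 0.0
        print(f"LLP ~ {failures / days:.3f}")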

  3. New developments in radiation protection instrumentation via active electronic methods

    International Nuclear Information System (INIS)

    Umbarger, C.J.

    1981-01-01

    New developments in electronics and radiation detectors are improving on real-time data acquisition of radiation exposure and contamination conditions. Recent developments in low power circuit designs, hybrid and integrated circuits, and microcomputers have all contributed to smaller and lighter radiation detection instruments that are, at the same time, more sensitive and provide more information (e.g., radioisotope identification) than previous devices. New developments in radiation detectors, such as cadmium telluride, gas scintillation proportional counters, and imaging counters (both charged particle and photon) promise higher sensitivities and expanded uses over present instruments. These developments are being applied in such areas as health physics, waste management, environmental monitoring, in vivo measurements, and nuclear safeguards.

  4. Quantifying temporal glucose variability in diabetes via continuous glucose monitoring: mathematical methods and clinical application.

    Science.gov (United States)

    Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony

    2005-12-01

    Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and a Poincaré (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data from a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincaré plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the...
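
    Two of the listed analyses are straightforward to compute from a CGM series; a sketch with synthetic readings (mg/dL every 15 min, thresholds from the confidence intervals quoted above):

        # BG rate of change and Poincare (lag) plot coordinates.
        import numpy as np

        bg = np.array([110, 118, 130, 151, 149, 140, 128, 122], dtype=float)
        rate = np.diff(bg) / 15.0            # mg/dL/min
        lag_x, lag_y = bg[:-1], bg[1:]       # Poincare plot: BG(t) vs BG(t+1)
        print("rates:", np.round(rate, 2))
        print("outside [-4, 4] mg/dL/min:", np.abs(rate) > 4.0)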

  5. Polyphonic pitch detection and instrument separation

    Science.gov (United States)

    Bay, Mert; Beauchamp, James W.

    2005-09-01

    An algorithm for polyphonic pitch detection and musical instrument separation is presented. Each instrument is represented as a time-varying harmonic series. Spectral information is obtained from a monaural input signal using a spectral peak tracking method. Fundamental frequencies (F0s) for each time frame are estimated from the spectral data using an Expectation Maximization (EM) algorithm with a Gaussian mixture model representing the harmonic series. The method first estimates the most predominant F0, suppresses its series in the input, and then the EM algorithm is run iteratively to estimate each next F0. Collisions between instrument harmonics, which frequently occur, are predicted from the estimated F0s, and the resulting corrupted harmonics are ignored. The amplitudes of these corrupted harmonics are replaced by harmonics taken from a library of spectral envelopes for different instruments, where the spectrum which most closely matches the important characteristics of each extracted spectrum is chosen. Finally, each voice is separately resynthesized by additive synthesis. This algorithm is demonstrated for a trio piece that consists of 3 different instruments.
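
    A much-simplified stand-in for the predominant-F0 step (harmonic-sum scoring rather than the EM formulation described above) can be sketched as:

        # Score candidate F0s by summing spectral magnitude at their harmonics.
        import numpy as np

        def predominant_f0(mag, freqs, candidates, n_harm=8):
            scores = [sum(mag[np.argmin(np.abs(freqs - h * f0))]
                          for h in range(1, n_harm + 1)) for f0 in candidates]
            return candidates[int(np.argmax(scores))]

        freqs = np.linspace(0.0, 4000.0, 2048)
        mag = np.random.rand(2048)                       # placeholder spectrum
        print(predominant_f0(mag, freqs, [110.0, 220.0, 440.0]))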

  6. P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length

    DEFF Research Database (Denmark)

    Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny

    2014-01-01

    Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P-Link).

  7. Cost-effective design of economic instruments in nutrition policy

    Directory of Open Access Journals (Sweden)

    Smed Sinne

    2007-04-01

    Full Text Available Abstract This paper addresses the potential for using economic regulation, e.g. taxes or subsidies, as instruments to combat the increasing problems of inappropriate diets, leading to health problems such as obesity, type 2 diabetes, cardiovascular diseases etc. in most countries. Such policy measures may be considered as alternatives or supplements to other regulation instruments, including information campaigns, bans or enhancement of technological solutions to the problems of obesity or related diseases. Seven different food tax and subsidy instruments or combinations of instruments are analysed quantitatively. The analyses demonstrate that the average cost-effectiveness with regard to changing the intake of selected nutritional variables can be improved by 10–30 per cent if taxes/subsidies are targeted against these nutrients, compared with targeting selected food categories. Finally, the paper raises a range of issues which need to be investigated further before firm conclusions about the suitability of economic instruments in nutrition policy can be drawn.

  8. Application of instrumental neutron activation analysis and multivariate statistical methods to archaeological Syrian ceramics

    International Nuclear Information System (INIS)

    Bakraji, E. H.; Othman, I.; Sarhil, A.; Al-Somel, N.

    2002-01-01

    Instrumental neutron activation analysis (INAA) has been utilized in the analysis of thirty-seven archaeological ceramic fragment samples collected from the Tal Al-Wardiate site, Missiaf town, Hamma city, Syria. 36 chemical elements were determined. These elemental concentrations have been processed using two multivariate statistical methods, cluster and factor analysis, in order to determine similarities and correlations between the various samples. Factor analysis confirms that the samples were correctly classified by cluster analysis. The results showed that the samples can be considered to be manufactured using three different sources of raw material. (author)
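
    The cluster-analysis step can be sketched with hierarchical clustering on an element-concentration matrix (hypothetical compositions; rows are sherds, columns are elements):

        # Ward clustering of samples into three source groups.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)
        conc = np.vstack([rng.normal(m, 0.3, size=(12, 36)) for m in (0.0, 1.5, 3.0)])
        Z = linkage(conc, method="ward")
        groups = fcluster(Z, t=3, criterion="maxclust")
        print(groups)        # three groups ~ three raw-material sources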

  9. Cumulative Mass and NIOSH Variable Lifting Index Method for Risk Assessment: Possible Relations.

    Science.gov (United States)

    Stucchi, Giulia; Battevi, Natale; Pandolfi, Monica; Galinotti, Luca; Iodice, Simona; Favero, Chiara

    2018-02-01

    Objective The aim of this study was to explore whether the Variable Lifting Index (VLI) can be corrected for cumulative mass and thus test its efficacy in predicting the risk of low-back pain (LBP). Background A validation study of the VLI method was published in this journal reporting promising results. Although several studies highlighted a positive correlation between cumulative load and LBP, cumulative mass has never been considered in any of the studies investigating the relationship between manual material handling and LBP. Method Both VLI and cumulative mass were calculated for 2,374 exposed subjects using a systematic approach. Due to the high variability of cumulative mass values, a stratification within VLI categories was employed. Dummy variables (1-4) were assigned to each class and used as a multiplier factor for the VLI, resulting in a new index (VLI_CMM). Data on LBP were collected by occupational physicians at the study sites. Logistic regression was used to estimate the risk of acute LBP within levels of risk exposure when compared with a control group formed by 1,028 unexposed subjects. Results Data showed greatly variable values of cumulative mass across all VLI classes. The potential effect of cumulative mass on damage emerged as not significant (p = .6526). Conclusion When comparing VLI_CMM with raw VLI, the former failed to prove itself a better predictor of LBP risk. Application To recognize cumulative mass as a modifier, especially for lumbar degenerative spine diseases, authors of future studies should investigate potential associations between the VLI and other damage variables.

  10. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
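
    The ℓ1-regularized least-squares objective above is commonly solved with iterative shrinkage; a hedged sketch using plain ISTA rather than the authors' IVTCG scheme:

        # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding.
        import numpy as np

        def ista(A, b, lam, steps=200):
            x = np.zeros(A.shape[1])
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            for _ in range(steps):
                g = x - A.T @ (A @ x - b) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x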

  11. Variable separation solutions for the Nizhnik-Novikov-Veselov equation via the extended tanh-function method

    International Nuclear Information System (INIS)

    Zhang Jiefang; Dai Chaoqing; Zong Fengde

    2007-01-01

    In this paper, with the variable separation approach and based on the general reduction theory, we successfully generalize the extended tanh-function method to obtain new types of variable separation solutions for the Nizhnik-Novikov-Veselov (NNV) equation. Among the solutions, two are new types of variable separation solutions, while the last is similar to the solution given by the Darboux transformation in Hu et al 2003 Chin. Phys. Lett. 20 1413.

  12. Propulsion and launching analysis of variable-mass rockets by analytical methods

    Directory of Open Access Journals (Sweden)

    D.D. Ganji

    2013-09-01

    Full Text Available In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), the homotopy perturbation method (HPM) and the least squares method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical one is observed in the results, which reveals that the analytical methods are effective and convenient. Also, a parametric study is performed which includes the effects of exhaust velocity (Ce), burn rate (BR) of fuel and diameter of the cylindrical rocket (d) on the motion of a sample rocket, and contours showing the sensitivity to these parameters are plotted. The main results indicate that the rocket velocity and altitude increase with increasing Ce and BR and decrease with increasing rocket diameter and drag coefficient.
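
    The numerical baseline the analytical methods are compared against can be sketched with a standard ODE solver (illustrative parameter values, not the paper's cases):

        # Variable-mass launch: m(t)*dv/dt = Ce*BR - drag - m(t)*g.
        import numpy as np
        from scipy.integrate import solve_ivp

        Ce, BR, m0, g, k = 2000.0, 5.0, 500.0, 9.81, 0.4

        def rhs(t, s):
            v, h = s
            m = m0 - BR * t                    # mass decreases at the burn rate
            return [(Ce * BR - k * v * abs(v)) / m - g, v]

        sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], max_step=0.1)
        print(f"v = {sol.y[0, -1]:.1f} m/s, h = {sol.y[1, -1]:.0f} m")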

  13. The application of variable sampling method in the audit testing of insurance companies' premium income

    Directory of Open Access Journals (Sweden)

    Jovković Biljana

    2012-12-01

    Full Text Available The aim of this paper is to present the procedure of audit sampling using variable sampling methods for testing the income from insurance premiums in the insurance company 'Takovo'. Since income from vehicle insurance (VI) and third-party vehicle insurance (TPVI) premiums makes up the dominant share of the insurance company's income, the application of this method is shown in the audit examination of these incomes - the incomes from VI and TPVI premiums. To investigate the applicability of these methods in testing the income of other insurance companies, we also implement the method of variable sampling in the audit testing of the premium income of the three leading insurance companies in Serbia: 'Dunav', 'DDOR' and 'Delta Generali' Insurance.
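
    A classical variables-sampling projection can be sketched with a mean-per-unit estimate (hypothetical premium figures, not Takovo's records):

        # Project total premium income from a random sample of policies.
        import numpy as np

        rng = np.random.default_rng(0)
        population_size = 120_000
        sample = rng.gamma(shape=2.0, scale=150.0, size=400)   # audited premiums
        total = population_size * sample.mean()
        se = population_size * sample.std(ddof=1) / np.sqrt(len(sample))
        print(f"estimated total: {total:,.0f} +/- {1.96 * se:,.0f} (95% CI)")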

  14. ASPECT OF LANGUAGE ON A QUALITATIVE ANALYSIS OF STUDENT’S EVALUATION INSTRUMENT

    Directory of Open Access Journals (Sweden)

    Ismanto Ismanto

    2016-11-01

    Full Text Available This article examines the characteristics of a good student evaluation instrument. There are at least two requirements that must be met: the instrument must be valid and reliable. The validity of an instrument can be seen from its ability to measure what it is supposed to measure. Evidence for the validity of an instrument may come from item content, response processes, internal structure, relationships with other variables, and the consequences of administering the instrument. Analysis of the content is known as content validity, i.e. a rational analysis of the domain to be measured to determine how well each item on the instrument represents the ability being measured. Content validity involves submitting the blueprint and the items of the instrument to experts to be analyzed quantitatively and qualitatively.

  15. Qualitative to Quantitative and Spectrum to Report: An Instrument-Focused Research Methods Course for First-Year Students

    Science.gov (United States)

    Thomas, Alyssa C.; Boucher, Michelle A.; Pulliam, Curtis R.

    2015-01-01

    Our Introduction to Research Methods course is a first-year majors course built around the idea of helping students learn to work like chemists, write like chemists, and think like chemists. We have developed this course as a hybrid hands-on/lecture experience built around instrumentation use and report preparation. We take the product from one…

  16. Treatment of thoracolumbar burst fractures with variable screw placement or Isola instrumentation and arthrodesis: case series and literature review.

    Science.gov (United States)

    Alvine, Gregory F; Swain, James M; Asher, Marc A; Burton, Douglas C

    2004-08-01

    The controversy of burst fracture surgical management is addressed in this retrospective case study and literature review. The series consisted of 40 consecutive patients, index included, with 41 fractures treated with stiff, limited segment transpedicular bone-anchored instrumentation and arthrodesis from 1987 through 1994. No major acute complications such as death, paralysis, or infection occurred. For the 30 fractures with pre- and postoperative computed tomography studies, spinal canal compromise was 61% and 32%, respectively. Neurologic function improved in 7 of 14 patients (50%) and did not worsen in any. The principal problem encountered was screw breakage, which occurred in 16 of the 41 (39%) instrumented fractures. As we have previously reported, transpedicular anterior bone graft augmentation significantly decreased variable screw placement (VSP) implant breakage. However, it did not prevent Isola implant breakage in two-motion segment constructs. Compared with VSP, Isola provided better sagittal plane realignment and constructs that have been found to be significantly stiffer. Unplanned reoperation was necessary in 9 of the 40 patients (23%). At 1- and 2-year follow-up, 95% and 79% of patients were available for study, and a satisfactory outcome was achieved in 84% and 79%, respectively. These satisfaction and reoperation rates are consistent with the literature of the time. Based on these observations and the loads to which implant constructs are exposed following posterior realignment and stabilization of burst fractures, we recommend that three- or four-motion segment constructs, rather than two motion, be used. To save valuable motion segments, planned construct shortening can be used. An alternative is sequential or staged anterior corpectomy and structural grafting.

  17. Design and validation of a standards-based science teacher efficacy instrument

    Science.gov (United States)

    Kerr, Patricia Reda

    National standards for K-12 science education address all aspects of science education, with their main emphasis on curriculum, both science subject matter and the process involved in doing science. Standards for science teacher education programs have been developing along a parallel plane, as has self-efficacy research involving classroom teachers. Generally, studies about efficacy have been dichotomous, basing their theoretical underpinnings on either Rotter's Locus of Control theory or on Bandura's explanations of efficacy beliefs and outcome expectancy. This study brings all three threads together (K-12 science standards, teacher education standards, and efficacy beliefs) in an instrument designed to measure science teacher efficacy with items based on identified critical attributes of standards-based science teaching and learning. Based on Bandura's explanation of efficacy being task-specific and having outcome expectancy, a developmental, systematic progression from standards-based strategies and activities to tasks to critical attributes was used to craft items for a standards-based science teacher efficacy instrument. Demographic questions related to school characteristics, teacher characteristics, preservice background, science teaching experience, and post-certification professional development were included in the instrument. The instrument was completed by 102 middle level science teachers, with complete data for 87 teachers. A principal components analysis of the science teachers' responses to the instrument resulted in two components: Standards-Based Science Teacher Efficacy: Beliefs About Teaching (BAT, reliability = .92) and Standards-Based Science Teacher Efficacy: Beliefs About Student Achievement (BASA, reliability = .82). Variables characteristic of professional development activities, science content preparation, and school environment were identified as members of the sets of variables predicting the BAT and BASA scores.

  18. Quantitative Assessment of Blood Pressure Measurement Accuracy and Variability from Visual Auscultation Method by Observers without Receiving Medical Training

    Science.gov (United States)

    Feng, Yong; Chen, Aiqing

    2017-01-01

    This study aimed to quantify blood pressure (BP) measurement accuracy and variability with different techniques. Thirty video clips of BP recordings from the BHS training database were converted to Korotkoff sound waveforms. Ten observers without medical training were asked to determine BPs using (a) the traditional manual auscultatory method and (b) a visual auscultation method based on visualizing the Korotkoff sound waveform, each repeated three times on different days. Measurement error was calculated against the reference answers, and measurement variability was calculated from the SD of the three repeats. Statistical analysis showed that, in comparison with the auscultatory method, the visual method significantly reduced overall variability from 2.2 to 1.1 mmHg for SBP and from 1.9 to 0.9 mmHg for DBP (both differences statistically significant). In conclusion, the visual auscultation method had the ability to achieve an acceptable degree of BP measurement accuracy, with smaller variability than the traditional auscultatory method. PMID:29423405

  19. Energy conserving schemes for the simulation of musical instrument contact dynamics

    Science.gov (United States)

    Chatziioannou, Vasileios; van Walstijn, Maarten

    2015-03-01

    Collisions are an innate part of the function of many musical instruments. Due to the nonlinear nature of contact forces, special care has to be taken in the construction of numerical schemes for simulation and sound synthesis. Finite difference schemes and other time-stepping algorithms used for musical instrument modelling purposes are normally arrived at by discretising a Newtonian description of the system. However, because impact forces are non-analytic functions of the phase space variables, algorithm stability can rarely be established this way. This paper presents a systematic approach to deriving energy conserving schemes for frictionless impact modelling. The proposed numerical formulations follow from discretising Hamilton's equations of motion, generally leading to an implicit system of nonlinear equations that can be solved with Newton's method. The approach is first outlined for point mass collisions and then extended to distributed settings, such as vibrating strings and beams colliding with rigid obstacles. Stability and other relevant properties of the proposed approach are discussed and further demonstrated with simulation examples. The methodology is exemplified through a case study on tanpura string vibration, with the results confirming the main findings of previous studies on the role of the bridge in sound generation with this type of string instrument.
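    As a minimal sketch of this idea (not the authors' exact formulation), the code below discretises the Hamiltonian of a point mass striking a rigid barrier with a discrete-gradient scheme and solves the implicit update with Newton's method; the one-sided power-law contact potential and all parameter values are illustrative assumptions. The discrete energy H = p^2/(2m) + V(u) is conserved to solver tolerance by construction.

```python
# Hedged sketch: discrete-gradient (energy-conserving) scheme for a point
# mass hitting a rigid barrier at u = 0, with contact potential
# V(u) = k/(a+1) * max(-u, 0)**(a+1). Parameters are invented.
import numpy as np

m, k, a, dt = 1.0, 1e5, 1.5, 1e-4

def V(u):
    return k / (a + 1.0) * max(-u, 0.0) ** (a + 1.0)

def dV(u):
    return -k * max(-u, 0.0) ** a

def discrete_grad(u0, u1):
    # (V(u1) - V(u0)) / (u1 - u0), falling back to V'(midpoint) when u1 ~ u0
    if abs(u1 - u0) < 1e-12:
        return dV(0.5 * (u0 + u1))
    return (V(u1) - V(u0)) / (u1 - u0)

def step(u0, p0):
    # Solve g(u1) = u1 - u0 - dt*p0/m + dt^2/(2m)*dG(u0,u1) = 0 with Newton
    u1 = u0 + dt * p0 / m                        # explicit initial guess
    for _ in range(50):
        g = u1 - u0 - dt * p0 / m + dt**2 / (2 * m) * discrete_grad(u0, u1)
        h = 1e-8 * max(1.0, abs(u1))             # numerical Jacobian
        g2 = (u1 + h) - u0 - dt * p0 / m + dt**2 / (2 * m) * discrete_grad(u0, u1 + h)
        du = g / ((g2 - g) / h)
        u1 -= du
        if abs(du) < 1e-14:
            break
    p1 = p0 - dt * discrete_grad(u0, u1)         # momentum update
    return u1, p1

u, p = 1.0, -2.0                                 # start above barrier, moving down
H0 = p**2 / (2 * m) + V(u)
for _ in range(20000):
    u, p = step(u, p)
print("energy drift:", p**2 / (2 * m) + V(u) - H0)   # ~ solver tolerance
```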

  20. Field instrumentation for hydrofracturing stress measurements

    International Nuclear Information System (INIS)

    Bjarnason, Bjarni; Torikka, Arne.

    1989-08-01

    A recently developed system for rock stress measurements by the hydraulic fracturing method is documented in detail. The new equipment is intended for measurements in vertical or near-vertical boreholes, down to a maximum depth of 1000 m. The minimum borehole diameter required is 56 mm. Downhole instrumentation comprises a straddle packer assembly for borehole fracturing, equipment for determination of fracture orientations and a pressure transducer. The downhole tools are operated by means of a multihose system, which combines high-pressure hydraulic tubing, signal cable and carrying wire in one hose unit. The surface components of the equipment include a system for generation and control of water pressures up to approximately 75 MPa, a hydraulically operated drum for the multihose and a data acquisition system. All surface instrumentation is permanently mounted on a truck, which also serves as the power source for the instrumentation. In addition to the description of the instrumentation, the theoretical foundations and the testing procedures associated with the hydraulic fracturing method are briefly outlined.

  1. Performance specifications for health physics instrumentation: portable instrumentation for use in normal work environments. Part 2. Test results

    International Nuclear Information System (INIS)

    Kenoyer, J.L.; Swinth, K.L.; Stoetzel, G.A.; Selby, J.M.

    1986-09-01

    The Pacific Northwest Laboratory evaluated a draft American National Standards Institute Standard N42.17 (ANSI N42.17) on performance specifications for health physics instrumentation through a project jointly funded by the US Department of Energy and the US Nuclear Regulatory Commission. The evaluation involved testing a representative cross section of instruments against criteria in the standard. This report presents results of the testing program. A brief history of the project is included in the introduction. The instrumentation tested is described in general terms (i.e., types, ranges); however, no direct relationship between the results and a specific instrument model is made in this report. Testing requirements in ANSI N42.17D4, Revision 1 (May 1985) are summarized, and the methods by which the tests are performed are discussed. Brief descriptions of the testing equipment are included in the methods section of the report. More detailed information about the draft standard, testing requirements and procedures, and the test equipment is included in "Performance Specifications for Health Physics Instrumentation - Portable Instrumentation for Use in Normal Work Environments, Part 1: Manual of Testing Procedures." Results of testing are given in two formats: test-by-test and instrument-by-instrument. Discussion is included on significant and interesting findings, on comparisons of results from the same type of instruments from the same and different manufacturers, and on data grouped by manufacturer. Conclusions are made on the applicability and practicality of the proposed standard and on instrument performance. Changes that have been made to the proposed standard based on findings of the testing program are listed and discussed. 22 refs., 11 figs., 77 tabs

  2. APPLICATION OF THE SPECTROMETRIC METHOD FOR CALCULATING THE DOSE RATE FOR CREATING CALIBRATION HIGHLY SENSITIVE INSTRUMENTS BASED ON SCINTILLATION DETECTION UNITS

    Directory of Open Access Journals (Sweden)

    R. V. Lukashevich

    2017-01-01

    Full Text Available Devices based on scintillation detectors are highly sensitive to photon radiation and are widely used to measure the environmental dose rate. Modernization of the measuring path to minimize the error in measuring the detector's response to gamma radiation has already reached its technological ceiling and no longer yields much improvement; new methods of processing the acquired spectrometric information are more promising for this purpose. The purpose of this work is the development of highly sensitive instruments based on scintillation detection units using a spectrometric method for calculating dose rate. In this paper we consider the spectrometric method of gamma-radiation dosimetry based on a transformation of the measured instrumental spectrum. Using predetermined or measured functions of the detector response to gamma radiation of a given energy and flux density, a certain function of energy, G(E), is determined. Using this function as the kernel of the integral transformation from field characteristics to dose characteristics, the dose value can be obtained directly from the current instrumental spectrum. Applying the function G(E) to the energy distribution of the photon-radiation fluence in the environment, the total dose rate can be determined without information on the distribution of radioisotopes in the environment. To determine G(E), the instrumental response functions of the scintillation detector to monoenergetic photon sources, as well as other characteristics, are calculated by the Monte Carlo method. The whole energy range is then divided into sub-ranges for which the function G(E) is calculated using linear interpolation. The article considers this spectrometric method of dose calculation using the function G(E), which allows scintillation detection units to be used for a wide range of dosimetry applications, and describes the method of calculating this function by Monte Carlo methods.
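    A hedged sketch of the spectrum-to-dose idea described above: once G(E) has been tabulated (per the article, via Monte Carlo response calculations), the dose rate follows directly from the instrumental spectrum as D = sum_i n_i * G(E_i). All numerical values below are invented for illustration.

```python
# Illustrative spectrum-to-dose conversion with a tabulated G(E) function.
import numpy as np

E_grid = np.array([60., 200., 662., 1250.])              # keV tabulation points
G_grid = np.array([2.1e-9, 1.4e-9, 9.5e-10, 8.0e-10])    # dose per count (made up)

def dose_rate(channel_energies_keV, counts_per_second):
    """Apply G(E) to the instrumental spectrum: D = sum_i n_i * G(E_i)."""
    G = np.interp(channel_energies_keV, E_grid, G_grid)  # linear interpolation, as in the text
    return np.sum(counts_per_second * G)

spectrum_E = np.linspace(50., 1300., 1024)               # channel energies
spectrum_n = np.random.default_rng(0).poisson(5., size=1024)  # counts/s (synthetic)
print(dose_rate(spectrum_E, spectrum_n), "Sv/h (illustrative units)")
```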

  3. The method to Certify Performance of Long-Lived In-Core Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Roh, Kyung-ho; Cha, Kyoon-ho; Moon, Sang-rae [KHNP CRI, Daejeon (Korea, Republic of)

    2015-10-15

    The Rh ICI (In-Core Instrumentation) used in OPR1000 reactors generates a relatively large signal, but its lifetime is below 6 years. A Rh ICI consists of 5 detectors of the SPND (Self-Powered Neutron Detector) type, a couple of thermocouples, one background wire and several fillers. The short lifetime of the Rh detector, compounded by the long-cycle operation strategy, increases procurement costs, aggravates the space shortage in the spent fuel pool and exposes operators to more radiation. KHNP (Korea Hydro and Nuclear Power Co., Ltd.) CRI (Central Research Institute) is developing the LLICI (Long-Lived In-Core Instrumentation), a vanadium-based SPND with a lifetime of about 10 years, to solve these problems.

  4. System and method of modulating electrical signals using photoconductive wide bandgap semiconductors as variable resistors

    Science.gov (United States)

    Harris, John Richardson; Caporaso, George J; Sampayan, Stephen E

    2013-10-22

    A system and method for producing modulated electrical signals. The system uses a variable resistor having a photoconductive wide bandgap semiconductor material construction whose conduction response to changes in amplitude of incident radiation is substantially linear throughout a non-saturation region to enable operation in non-avalanche mode. The system also includes a modulated radiation source, such as a modulated laser, for producing amplitude-modulated radiation to direct upon the variable resistor and modulate its conduction response. A voltage source and an output port are both operably connected to the variable resistor so that an electrical signal may be produced at the output port by way of the variable resistor, either generated by activation of the variable resistor or propagating through the variable resistor. In this manner, the electrical signal is modulated by the variable resistor so as to have a waveform substantially similar to the amplitude-modulated radiation.

  5. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    Energy Technology Data Exchange (ETDEWEB)

    Ekechukwu, A

    2009-05-27

    Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  6. Instrumentation and method for measuring NIR light absorbed in tissue during MR imaging in medical NIRS measurements

    Science.gov (United States)

    Myllylä, Teemu S.; Sorvoja, Hannu S. S.; Nikkinen, Juha; Tervonen, Osmo; Kiviniemi, Vesa; Myllylä, Risto A.

    2011-07-01

    Our goal is to provide a cost-effective method for examining human tissue, particularly the brain, by the simultaneous use of functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). Due to its compatibility requirements, MRI poses a demanding challenge for NIRS measurements. This paper focuses particularly on presenting the instrumentation and a method for the non-invasive measurement of NIR light absorbed in human tissue during MR imaging. One practical method to avoid disturbances in MR imaging involves using long fibre bundles to enable conducting the measurements at some distance from the MRI scanner. This setup in fact serves a dual purpose, since the NIRS device is also less disturbed by the MRI scanner. However, measurements based on long fibre bundles suffer from light attenuation. Furthermore, because one of our primary goals was to make the measuring method as cost-effective as possible, we used high-power light emitting diodes instead of more expensive lasers. The use of LEDs, however, limits the maximum output power which can be extracted to illuminate the tissue. To meet these requirements, we improved methods of emitting light sufficiently deep into tissue. We also show how to measure NIR light of a very small power level that scatters from the tissue in the MRI environment, which is characterized by strong electromagnetic interference. In this paper, we present the implemented instrumentation and measuring method and report on test measurements conducted during MRI scanning. These measurements were performed in MRI operating rooms housing 1.5 Tesla-strength closed MRI scanners (manufactured by GE) in the Dept. of Diagnostic Radiology at the Oulu University Hospital.

  7. Instrumenting an upland research catchment in Canterbury, New Zealand to study controls on variability of soil moisture, shallow groundwater and streamflow

    Science.gov (United States)

    McMillan, Hilary; Srinivasan, Ms

    2015-04-01

    Hydrologists recognise the importance of vertical drainage and deep flow paths in runoff generation, even in headwater catchments. Both soil and groundwater stores are highly variable over multiple scales, and the distribution of water has a strong control on flow rates and timing. In this study, we instrumented an upland headwater catchment in New Zealand to measure the temporal and spatial variation in unsaturated and saturated-zone responses. In NZ, upland catchments are the source of much of the water used in lowland agriculture, but the hydrology of such catchments and their role in water partitioning, storage and transport is poorly understood. The study area is the Langs Gully catchment in the North Branch of the Waipara River, Canterbury: this catchment was chosen to be representative of the foothills environment, with lightly managed dryland pasture and native Matagouri shrub vegetation cover. Over a period of 16 months we measured continuous soil moisture at 32 locations and near-surface water table, with sites spanning near-stream versus hillslope locations and convergent versus divergent hillslopes. We found that temporal variability is strongly controlled by the climatic seasonal cycle, for both soil moisture and water table, and for both the mean and extremes of their distributions. Groundwater is a larger water storage component than soil moisture, and the difference increases with catchment wetness. The spatial standard deviation of both soil moisture and groundwater is larger in winter than in summer. It peaks during rainfall events due to partial saturation of the catchment, and also rises in spring as different locations dry out at different rates. The most important controls on spatial variability are aspect and distance from stream. South-facing and near-stream locations have higher water tables and more, larger soil moisture wetting events. Typical hydrological models do not explicitly account for aspect, but our results suggest that it is an important factor in hillslope hydrology.

  8. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposed a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), the hierarchy of which is a directed acyclic graph that is more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated by the GO.
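    The second-phase hierarchy adjustment can be illustrated with a small sketch (this is not the authors' full Bayesian-network first phase): predictions are capped so that no class probability exceeds that of any ancestor in the DAG. The toy hierarchy and probabilities below are assumptions.

```python
# Illustrative enforcement of the HMC hierarchy constraint on a DAG:
# a child's probability may never exceed that of any of its ancestors.
from graphlib import TopologicalSorter

def enforce_hierarchy(probs, parents):
    """probs: {node: p}; parents: {node: set of parent nodes} (a DAG)."""
    adjusted = dict(probs)
    for node in TopologicalSorter(parents).static_order():  # parents come first
        for parent in parents.get(node, ()):
            adjusted[node] = min(adjusted[node], adjusted[parent])
    return adjusted

# Toy GO-like hierarchy: 'c' has two parents, as in a directed acyclic graph.
parents = {"root": set(), "a": {"root"}, "b": {"root"}, "c": {"a", "b"}}
probs = {"root": 0.9, "a": 0.6, "b": 0.4, "c": 0.7}
print(enforce_hierarchy(probs, parents))   # 'c' is capped at min(a, b) = 0.4
```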

  9. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    Science.gov (United States)

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
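    A minimal hedged sketch of one EKF iteration with a constraining step, assuming simple box bounds as the constraint set; the patent's preemptive-constraining logic is more elaborate, and the IGCC plant model is replaced here by generic callables.

```python
# Hedged sketch: EKF predict/correct with state estimates clipped to bounds.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R, x_lo, x_hi):
    # --- predict ---
    x_pred = f(x, u)                         # nonlinear state propagation
    P_pred = F @ P @ F.T + Q                 # covariance propagation (F: Jacobian of f)
    # --- measurement correction ---
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    # --- constrain: keep estimates physically admissible (box bounds assumed) ---
    x_new = np.clip(x_new, x_lo, x_hi)
    return x_new, P_new

# Toy demo: 1-D system with a state bounded to [0, 1].
f = lambda x, u: x + u                       # state propagation
h = lambda x: x                              # direct measurement
F = H = np.array([[1.0]])
Q = R = np.array([[0.01]])
x, P = np.array([0.5]), np.array([[1.0]])
x, P = ekf_step(x, P, np.array([0.1]), np.array([0.7]),
                f, F, h, H, Q, R, x_lo=0.0, x_hi=1.0)
print(x, P)
```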

  10. Using cognitive pre-testing methods in the development of a new evidenced-based pressure ulcer risk assessment instrument

    Directory of Open Access Journals (Sweden)

    S. Coleman

    2016-11-01

    Full Text Available Abstract Background Variation in the development methods of Pressure Ulcer Risk Assessment Instruments has led to inconsistent inclusion of risk factors and concerns about content validity. A new evidence-based Risk Assessment Instrument, the Pressure Ulcer Risk Primary Or Secondary Evaluation Tool - PURPOSE-T, was developed as part of a National Institute for Health Research (NIHR) funded Pressure Ulcer Research Programme (PURPOSE: RP-PG-0407-10056). This paper reports the pre-test phase to assess and improve PURPOSE-T acceptability and usability and confirm content validity. Methods A descriptive study incorporating cognitive pre-testing methods and integration of service user views was undertaken over 3 cycles comprising PURPOSE-T training, a focus group and one-to-one think-aloud interviews. Clinical nurses from 2 acute and 2 community NHS Trusts were grouped according to job role. Focus group participants used 3 vignettes to complete PURPOSE-T assessments and then participated in the focus group. Think-aloud participants were interviewed during their completion of PURPOSE-T. After each pre-test cycle, analysis was undertaken and adjustments/improvements were made to PURPOSE-T in an iterative process. This incorporated the use of descriptive statistics for data completeness and decision rule compliance, and directed content analysis for interview and focus group data. Data were collected April 2012-June 2012. Results Thirty-four nurses participated in 3 pre-test cycles. Data from 3 focus groups and 12 think-aloud interviews incorporating 101 PURPOSE-T assessments led to changes that improved instrument content and design, flow and format, decision support and item-specific wording. Acceptability and usability were demonstrated by improved data completion and appropriate risk pathway allocation. The pre-test also confirmed content validity with clinical nurses. Conclusions The pre-test was an important step in the development of the preliminary PURPOSE-T and the…

  11. Instrumental performance of an etude after three methods of practice.

    Science.gov (United States)

    Vanden Ark, S

    1997-12-01

    For 80 fifth-grade students, three practice conditions (mental, mental with physical simulation, and physical with singing) produced significant mean differences in instrumental performance of an etude. No significant differences were found for traditional, physical practice.

  12. Aversive pavlovian responses affect human instrumental motor performance.

    Science.gov (United States)

    Rigoli, Francesco; Pavone, Enea Francesco; Pezzulo, Giovanni

    2012-01-01

    In neuroscience and psychology, an influential perspective distinguishes between two kinds of behavioral control: instrumental (habitual and goal-directed) and Pavlovian. Understanding the instrumental-Pavlovian interaction is fundamental for the comprehension of decision-making. Animal studies (as those using the negative auto-maintenance paradigm) have demonstrated that Pavlovian mechanisms can have maladaptive effects on instrumental performance. However, evidence for a similar effect in humans is scarce. In addition, the mechanisms modulating the impact of Pavlovian responses on instrumental performance are largely unknown, both in human and non-human animals. The present paper describes a behavioral experiment investigating the effects of Pavlovian conditioned responses on performance in humans, focusing on the aversive domain. Results showed that Pavlovian responses influenced human performance, and, similar to animal studies, could have maladaptive effects. In particular, Pavlovian responses either impaired or increased performance depending on modulator variables such as threat distance, task controllability, punishment history, amount of training, and explicit punishment expectancy. Overall, these findings help elucidate the computational mechanisms underlying the instrumental-Pavlovian interaction, which might be at the base of apparently irrational phenomena in economics, social behavior, and psychopathology.

  13. Aversive Pavlovian responses affect human instrumental motor performance

    Directory of Open Access Journals (Sweden)

    Francesco eRigoli

    2012-10-01

    Full Text Available In neuroscience and psychology, an influential perspective distinguishes between two kinds of behavioural control: instrumental (habitual and goal-directed) and Pavlovian. Understanding the instrumental-Pavlovian interaction is fundamental for the comprehension of decision-making. Animal studies (as those using the negative auto-maintenance paradigm) have demonstrated that Pavlovian mechanisms can have maladaptive effects on instrumental performance. However, evidence for a similar effect in humans is scarce. In addition, the mechanisms modulating the impact of Pavlovian responses on instrumental performance are largely unknown, both in human and non-human animals. The present paper describes a behavioural experiment investigating the effects of Pavlovian conditioned responses on performance in humans, focusing on the aversive domain. Results showed that Pavlovian responses influenced human performance, and, similar to animal studies, could have maladaptive effects. In particular, Pavlovian responses either impaired or increased performance depending on modulator variables such as threat distance, task controllability, punishment history, amount of training, and explicit punishment expectancy. Overall, these findings help elucidate the computational mechanisms underlying the instrumental-Pavlovian interaction, which might be at the base of apparently irrational phenomena in economics, social behaviour, and psychopathology.

  14. Design requirements for the SWIFT instrument

    International Nuclear Information System (INIS)

    Rahnama, P; McDade, I; Shepherd, G; Gault, W

    2013-01-01

    The Stratospheric Wind Interferometer for Transport studies (SWIFT) instrument is a proposed limb-viewing satellite instrument that employs the method of Doppler Michelson interferometry to measure stratospheric wind velocities and ozone densities in the altitude range of 15–45 km. The values of the main instrument parameters including filter system parameters and Michelson interferometer parameters are derived using simulations and analyses. The system design requirements for the instrument and spacecraft are presented and discussed. Some of the retrieval-imposed design requirements are also discussed. Critical design issues are identified. The design optimization process is described. The sensitivity of wind measurements to instrument characteristics is investigated including the impact on critical design issues. Using sensitivity analyses, the instrument parameters were iteratively optimized in order to meet the science objectives. It is shown that wind measurements are sensitive to the thermal sensitivity of the instrument components, especially the narrow filter and the Michelson interferometer. The optimized values of the main system parameters including Michelson interferometer optical path difference, instrument visibility, instrument responsivity and knowledge of spacecraft velocity are reported. This work also shows that the filter thermal drift and the Michelson thermal drift are two main technical risks. (paper)

  15. Investigation on Motorcyclist Riding Behaviour at Curve Entry Using Instrumented Motorcycle

    Science.gov (United States)

    Yuen, Choon Wah; Karim, Mohamed Rehan; Saifizul, Ahmad

    2014-01-01

    This paper details the study of changes in riding behaviour, such as changes in speed as well as the brake force and throttle force applied, when motorcyclists ride over a curved section of road, using an instrumented motorcycle. In this study, an instrumented motorcycle equipped with various types of sensors, on-board cameras, and data loggers was developed in order to collect riding data at the study site. Results from the statistical analysis showed that riding characteristics, such as changes in speed, brake force, and throttle force applied, are influenced by the distance from the curve entry, riding experience, and travel mileage of the riders. Structural equation modeling was used to study the impact of these variables on the change of riding behaviour in the curve entry section. Four regression equations were formed to study the relationship between the four dependent variables (speed, throttle force, front brake force, and rear brake force applied) and the independent variables. PMID:24523660

  16. An Assessment of the Need for Standard Variable Names for Airborne Field Campaigns

    Science.gov (United States)

    Beach, A. L., III; Chen, G.; Northup, E. A.; Kusterer, J.; Quam, B. M.

    2017-12-01

    The NASA Earth Venture Program has led to a dramatic increase in airborne observations, requiring updated data management practices with clearly defined data standards and protocols for metadata. An airborne field campaign can involve multiple aircraft and a variety of instruments. It is quite common to have different instruments/techniques measure the same parameter on one or more aircraft platforms. This creates a need to allow instrument Principal Investigators (PIs) to name their variables in a way that would distinguish them across various data sets. A lack of standardization of variable names presents a challenge for data search tools in enabling discovery of similar data across airborne studies, aircraft platforms, and instruments. This was also identified by data users as one of the top issues in data use. One effective approach for mitigating this problem is to enforce variable name standardization, which can effectively map the unique PI variable names to fixed standard names. In order to ensure consistency amongst the standard names, it will be necessary to choose them from a controlled list. However, no such list currently exists, despite a number of previous efforts to establish a sufficient list of atmospheric variable names. The Atmospheric Composition Variable Standard Name Working Group was established under the auspices of NASA's Earth Science Data Systems Working Group (ESDSWG) to solicit research community feedback and create a list of standard names that are acceptable to data providers and data users. This presentation will discuss the challenges and recommendations of standard variable names in an effort to demonstrate how airborne metadata curation/management can be improved to streamline data ingest and improve interoperability and discoverability for a broader user community.

  17. Regularized variable metric method versus the conjugate gradient method in solution of radiative boundary design problem

    International Nuclear Information System (INIS)

    Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.

    2007-01-01

    In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and is able to reach a higher-quality final solution than the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas.
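    Since the variable metric method is a quasi-Newton scheme, the regularized-estimator idea can be sketched with an off-the-shelf BFGS minimizer applied to a Tikhonov-regularized least-squares objective; the random linear forward model below is a stand-in, not a radiative-furnace computation.

```python
# Hedged sketch: variable metric (BFGS) minimization of a regularized
# sum-of-squares estimator for a linear inverse design problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.random((40, 10))            # stand-in forward operator: strengths -> surface flux
q_true = rng.random(10)
b = A @ q_true + 1e-3 * rng.standard_normal(40)   # noisy "design surface" data
lam = 1e-3                           # regularization weight (assumed)

def J(q):                            # regularized sum-of-squares objective
    r = A @ q - b
    return r @ r + lam * q @ q

res = minimize(J, x0=np.zeros(10), method="BFGS")  # variable metric method
print(np.linalg.norm(res.x - q_true))              # recovery error
```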

  18. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    Science.gov (United States)

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  19. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second feature is that the probability distribution of the functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology enables both the dependency between variables and their link to another variable, called a co-variate, to be modelled; the co-variate could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been proposed.
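    The metamodel idea can be sketched with a Gaussian-process surrogate trained on a small learning basis; the "expensive code" below is a cheap stand-in function, and the kernel choice is an assumption.

```python
# Illustrative surrogate (metamodel) for an expensive simulator: fit a
# Gaussian process to a small design of experiments, then predict cheaply.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_code(x):               # stand-in for the costly simulator
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(30, 2))        # small learning basis
y_train = expensive_code(X_train)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_train, y_train)
X_new = rng.uniform(0, 1, size=(5, 2))
y_hat, y_std = surrogate.predict(X_new, return_std=True)  # prediction + uncertainty
print(y_hat, y_std)
```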

  20. Pixe method as microanalytical instrument

    International Nuclear Information System (INIS)

    Tabacniks, M.H.

    1986-02-01

    The PIXE (Particle Induced X-Ray Emission) method is evaluated as an analytical method, covering its evolution, theoretical foundations, detection limits, and the optimization of operational conditions. Applications of the method to air pollution control and to aerosol studies in regions such as the Antarctic and the Amazon are analysed. (M.C.K.) [pt

  1. Tritium instrumentation for a fusion reactor power plant

    International Nuclear Information System (INIS)

    Shank, K.E.; Easterly, C.E.

    1976-09-01

    A review of tritium instrumentation is presented. This includes a discussion of currently available in-plant instrumentation and methods required for sampling stacks, monitoring process streams and reactor coolants, analyzing occupational work areas for air and surface contamination, and personnel monitoring. Off-site instrumentation and collection techniques are also presented. Conclusions are made concerning the adequacy of existing instrumentation in relation to the monitoring needs of fusion reactors.

  2. The application of seasonal latent variable in forecasting electricity demand as an alternative method

    International Nuclear Information System (INIS)

    Sumer, Kutluk Kagan; Goktas, Ozlem; Hepsag, Aycan

    2009-01-01

    In this study, we used ARIMA, seasonal ARIMA (SARIMA) and, alternatively, a regression model with a seasonal latent variable to forecast electricity demand, using data from the 'Kayseri and Vicinity Electricity Joint-Stock Company' over the 1997:1-2005:12 period. The study examines the relative advantages of forecasting with the ARIMA and SARIMA methods and with the model that has a seasonal latent variable. The results support that ARIMA and SARIMA models are unsuccessful in forecasting electricity demand. The regression model with a seasonal latent variable used in this study gives more successful results than the ARIMA and SARIMA models, because this model can also account for seasonal fluctuations and structural breaks.
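    For illustration, a hedged sketch of the two competing model families on a synthetic monthly series (the Kayseri data are not reproduced here): a SARIMA fit and a regression with seasonal dummy terms plus trend, compared in-sample by AIC. The model orders and the generated series are assumptions.

```python
# Illustrative comparison: SARIMA vs. regression with seasonal dummies.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("1997-01", periods=108, freq="MS")   # 1997:1-2005:12
rng = np.random.default_rng(0)
y = pd.Series(100 + 0.5 * np.arange(108)                 # trend
              + 10 * np.sin(2 * np.pi * np.arange(108) / 12)  # seasonality
              + rng.normal(0, 2, 108), index=idx)

sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

X = pd.get_dummies(y.index.month, prefix="m", drop_first=True).astype(float)
X["trend"] = np.arange(len(y))
ols = sm.OLS(y.values, sm.add_constant(X)).fit()         # seasonal-dummy regression

print(sarima.aic, ols.aic)                               # in-sample comparison only
```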

  3. A Method of MPPT Control Based on Power Variable Step-size in Photovoltaic Converter System

    Directory of Open Access Journals (Sweden)

    Xu Hui-xiang

    2016-01-01

    Full Text Available Given the disadvantages of traditional variable step-size MPPT algorithms, a power-based variable step-size tracking method is proposed that combines the advantages of the constant-voltage and perturb-and-observe (P&O) methods [1-3]. The control strategy mitigates the voltage fluctuation caused by the perturb-and-observe method while retaining the advantage of the constant-voltage method and simplifying the circuit topology. Following the theoretical derivation, the output power of the photovoltaic modules is controlled by changing the duty cycle of the main switch. This achieves stable maximum power output, effectively reduces energy loss due to power fluctuation, and improves inversion efficiency [3,4]. Experimental test results from a prototype, together with the measured MPPT curve, confirm the theoretical derivation.
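    A conceptual sketch of one perturb-and-observe iteration with a power-scaled variable step (an illustration of the strategy described above, not the authors' exact controller); the gain and duty-cycle limits are assumptions.

```python
# Hedged sketch: power-based variable step-size P&O duty-cycle update.
def mppt_update(duty, p_now, p_prev, d_prev, n=0.02, d_min=0.05, d_max=0.95):
    """One perturb-and-observe iteration with a |dP|-scaled step."""
    dp = p_now - p_prev
    dd = duty - d_prev
    step = n * abs(dp)                  # variable step: large when far from the MPP
    if dp == 0:
        new_duty = duty                 # at (or oscillating about) the MPP
    elif (dp > 0) == (dd > 0):
        new_duty = duty + step          # last perturbation helped: keep direction
    else:
        new_duty = duty - step          # last perturbation hurt: reverse direction
    return min(max(new_duty, d_min), d_max)   # clamp to admissible duty cycle
```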

  4. Short version of the “instrument for assessment of stress in nursing students” in the Brazilian reality

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Siqueira Costa

    2018-01-01

    Full Text Available ABSTRACT Goal: to validate a short version of the Instrument for assessment of stress in nursing students in the Brazilian reality. Method: methodological study conducted with 1047 nursing students from five Brazilian institutions, who answered the 30 items initially distributed in eight domains. Data were analyzed in the R statistical package using latent variable analysis, with exploratory and confirmatory factor analyses, Cronbach's alpha and item-total correlation. Results: the short version of the instrument had 19 items distributed into four domains: Environment, Professional Training, Theoretical Activities and Performance of Practical Activities. The confirmatory analysis showed absolute and parsimony fit of the proposed model with satisfactory residual levels. Alpha values per factor ranged from 0.736 (Environment) to 0.842 (Performance of Practical Activities). Conclusion: the short version of the instrument has construct validity and reliability for application to Brazilian nursing undergraduates at any stage of the course.

  5. PERFORMANCE CONFIRMATION IN-SITU INSTRUMENTATION

    International Nuclear Information System (INIS)

    N.T. Raczka

    2000-01-01

    The purpose of this document is to identify and analyze the types of in-situ instruments and methods that could be used in support of the data acquisition portion of the Performance Confirmation (PC) program at the potential nuclear waste repository at Yucca Mountain. The PC program will require geomechanical, geophysical, thermal, and hydrologic instrumentation of several kinds. This analysis is being prepared to document the technical issues associated with each type of measurement during the PC period. This analysis utilizes the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a) as its starting point. The scope of this analysis is primarily the period after the start of waste package emplacement and before permanent closure of the repository, a period lasting between 15 and 300 years after last package emplacement (Stroupe 2000, Attachment 1, p. 1). The primary objectives of this analysis are to: (1) review the design criteria as presented in the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a), with the scope limited to instrumentation related to parameters that require continuous monitoring of the conditions underground; (2) preliminarily identify and list the data requirements and parameters as related to the current repository layout in support of PC monitoring; and (3) preliminarily identify methods and instrumentation for the acquisition of the required data. Although the "Performance Confirmation Input Criteria" (CRWMS M&O 1999a) defines a broad range of data that must be obtained from a variety of methods, the focus of this analysis is on instrumentation related to the performance of the rock mass and the formation of water in the repository environment that is obtainable from in-situ observation, testing, and monitoring.

  6. Sharp or broad pulse peak for high resolution instruments? Choice of moderator performance

    International Nuclear Information System (INIS)

    Arai, M.; Watanabe, N.; Teshigawara, M.

    2001-01-01

    We demonstrate a concept of how moderator performance should be chosen to realize the required performance of instruments. A neutron burst pulse can be characterized by its peak intensity, peak width and tail. These can be controlled through moderator design, i.e. material, temperature, shape, decoupling, poisoning and the use of a premoderator. Hence there is a large number of variable parameters to be determined. Here we discuss the required moderator performance for some typical examples, i.e. a high resolution powder instrument, a chopper instrument, and a high resolution backscattering machine. (author)

  7. Eddy Covariance Method for CO2 Emission Measurements: CCS Applications, Principles, Instrumentation and Software

    Science.gov (United States)

    Burba, George; Madsen, Rod; Feese, Kristin

    2013-04-01

    The Eddy Covariance method is a micrometeorological technique for direct high-speed measurements of the transport of gases, heat, and momentum between the earth's surface and the atmosphere. Gas fluxes, emission and exchange rates are carefully characterized from single-point in-situ measurements using permanent or mobile towers, or moving platforms such as automobiles, helicopters, airplanes, etc. Since the early 1990s, this technique has been widely used by micrometeorologists across the globe for quantifying CO2 emission rates from various natural, urban and agricultural ecosystems [1,2], including areas of agricultural carbon sequestration. Presently, over 600 eddy covariance stations are in operation in over 120 countries. In the last 3-5 years, advancements in instrumentation and software have reached the point where they can be effectively used outside the area of micrometeorology, and can prove valuable for geological carbon capture and sequestration, landfill emission measurements, high-precision agriculture and other non-micrometeorological industrial and regulatory applications. In the field of geological carbon capture and sequestration, the magnitude of CO2 seepage fluxes depends on a variety of factors. Emerging projects utilize eddy covariance measurement to monitor large areas where CO2 may escape from the subsurface, to detect and quantify CO2 leakage, and to assure the efficiency of CO2 geological storage [3,4,5,6,7,8]. Although Eddy Covariance is one of the most direct and defensible ways to measure and calculate turbulent fluxes, the method is mathematically complex, and requires careful setup, execution and data processing tailor-fit to a specific site and project. With this in mind, step-by-step instructions were created to introduce a novice to the conventional Eddy Covariance technique [9], and to assist in further understanding the method through more advanced references such as graduate-level textbooks, flux network guidelines, and journals.
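    At its core, the eddy covariance flux is the time-averaged product of the fluctuations of vertical wind speed and gas concentration; the minimal sketch below shows only that covariance calculation on synthetic data, omitting the despiking, coordinate rotation and density (WPL) corrections a real workflow requires.

```python
# Minimal illustration of the core eddy covariance calculation: F = mean(w'c').
import numpy as np

def ec_flux(w, c):
    """w: vertical wind (m/s), c: CO2 density (mg/m^3), sampled at e.g. 10-20 Hz."""
    w_prime = w - w.mean()              # fluctuation about the averaging-period mean
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)   # flux in mg m^-2 s^-1

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 18000)                  # 30 min at 10 Hz (synthetic)
c = 700 + 0.5 * w + rng.normal(0, 5, 18000)      # correlated tracer (synthetic)
print(ec_flux(w, c))                             # ~ 0.5 * var(w) = 0.045
```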

  8. Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites

    Science.gov (United States)

    Yong, A.; Boatwright, J.; Martin, A. J.

    2015-12-01

    The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
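    For reference, VS30 is computed from a layered velocity profile as the travel-time average over the top 30 m, VS30 = 30 / sum(h_i / v_i); the layer thicknesses and velocities below are invented.

```python
# Standard VS30 computation from a layered shear-wave velocity profile.
def vs30(thicknesses_m, velocities_mps):
    """VS30 = 30 / sum(h_i / v_i) over layers truncated at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        h_used = min(h, 30.0 - depth)      # clip the layer at 30 m
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    assert depth >= 30.0, "profile shallower than 30 m"
    return 30.0 / travel_time

print(vs30([5, 10, 25], [180, 300, 450]))   # ~ 318 m/s (illustrative)
```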

  9. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  10. Influence of different manufacturing methods on the cyclic fatigue of rotary nickel-titanium endodontic instruments.

    Science.gov (United States)

    Rodrigues, Renata C V; Lopes, Hélio P; Elias, Carlos N; Amaral, Georgiana; Vieira, Victor T L; De Martin, Alexandre S

    2011-11-01

    The aim of this study was to evaluate, by static and dynamic cyclic fatigue tests, the number of cycles to fracture (NCF) of 2 types of rotary NiTi instruments: Twisted File (SybronEndo, Orange, CA), which is manufactured by a proprietary twisting process, and RaCe files (FKG Dentaire, La Chaux-de-Fonds, Switzerland), which are manufactured by grinding. Twenty Twisted Files (TFs) and 20 RaCe #25/.006 taper instruments were allowed to rotate freely in an artificial curved canal at 310 rpm in a static or a dynamic model until fracture occurred. Measurements of the fractured fragments showed that fracture occurred at the point of maximum flexure in the midpoint of the curved segment. The NCF was significantly lower for RaCe instruments compared with TFs. The NCF was also lower for instruments subjected to the static test compared with the dynamic model in both groups. Scanning electron microscopic analysis revealed ductile morphologic characteristics on the fractured surfaces of all instruments and no plastic deformation in their helical shafts. Rotary NiTi endodontic instruments manufactured by twisting present greater resistance to cyclic fatigue compared with instruments manufactured by grinding. The fracture mode observed in all instruments was of the ductile type. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  11. Instrument surveillance and calibration verification through plant wide monitoring using autoassociative neural networks

    International Nuclear Information System (INIS)

    Wrest, D.J.; Hines, J.W.; Uhrig, R.E.

    1996-01-01

    The approach to instrument surveillance and calibration verification (ISCV) through plant wide monitoring proposed in this paper is an autoassociative neural network (AANN) which will utilize digitized data presently available in the Safety Parameter Display computer system from Florida Power Corporation's Crystal River Unit 3 nuclear power plant. An autoassociative neural network is one in which the outputs are trained to emulate the inputs over an appropriate dynamic range. The relationships between the different variables are embedded in the weights by the training process. As a result, the output can be a correct version of an input pattern that has been distorted by noise, missing data, or non-linearities. Plant variables that have some degree of coherence with each other constitute the inputs to the network. Once the network has been trained with normal operational data, it has been shown to successfully monitor the selected plant variables to detect sensor drift or failure by simply comparing the network inputs with the outputs. The AANN method of monitoring many variables not only indicates that there is a sensor failure, it clearly indicates the signal channel in which the signal error has occurred. (author). 11 refs, 8 figs, 2 tabs
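    A hedged sketch of the AANN monitoring idea using an off-the-shelf bottleneck network (the paper's architecture and training details are not reproduced): train the network to reproduce its own inputs on normal data, then compare inputs with outputs to localize a drifting channel.

```python
# Illustrative autoassociative (input-reproducing) network for drift detection.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.normal(size=(2000, 1))
X_normal = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(2000, 3))  # 3 coherent "sensors"

aann = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=3000, random_state=0)
aann.fit(X_normal, X_normal)            # autoassociative: outputs emulate inputs

X_test = X_normal[:200].copy()
X_test[:, 1] += 0.5                     # inject a calibration drift into channel 1
residual = np.abs(X_test - aann.predict(X_test)).mean(axis=0)
print(residual)                         # channel 1 stands out -> suspected drift
```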

  12. Instrument surveillance and calibration verification through plant wide monitoring using autoassociative neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Wrest, D J; Hines, J W; Uhrig, R E [Tennessee Univ., Knoxville, TN (United States). Dept. of Nuclear Engineering

    1997-12-31

    The approach to instrument surveillance and calibration verification (ISCV) through plant wide monitoring proposed in this paper is an autoassociative neural network (AANN) which will utilize digitized data presently available in the Safety Parameter Display computer system from Florida Power Corporation's Crystal River Unit 3 nuclear power plant. An autoassociative neural network is one in which the outputs are trained to emulate the inputs over an appropriate dynamic range. The relationships between the different variables are embedded in the weights by the training process. As a result, the output can be a correct version of an input pattern that has been distorted by noise, missing data, or non-linearities. Plant variables that have some degree of coherence with each other constitute the inputs to the network. Once the network has been trained with normal operational data, it has been shown to successfully monitor the selected plant variables to detect sensor drift or failure by simply comparing the network inputs with the outputs. The AANN method of monitoring many variables not only indicates that there is a sensor failure, it clearly indicates the signal channel in which the signal error has occurred. (author). 11 refs, 8 figs, 2 tabs.

  13. Variability of floods, droughts and windstorms over the past 500 years in Central Europe based on documentary and instrumental data

    Science.gov (United States)

    Brazdil, Rudolf

    2016-04-01

    Hydrological and meteorological extremes (HMEs) in Central Europe during the past 500 years can be reconstructed from instrumental and documentary data. Documentary data about weather and related phenomena represent the basic source of information for historical climatology and hydrology, dealing with the reconstruction of past climate and HMEs, their perception and their impacts on human society. The paper presents the basic division of documentary data into (i) direct descriptions of HMEs and their proxies on the one hand and (ii) individual and institutional data sources on the other. Several groups of documentary evidence, such as narrative written records (annals, chronicles, memoirs), visual daily weather records, official and personal correspondence, special prints, financial and economic records (with particular attention to taxation data), newspapers, pictorial documentation, chronograms, epigraphic data, early instrumental observations, early scientific papers and communications, are discussed with respect to the extraction of information about HMEs, which usually concerns their occurrence, severity, seasonality, meteorological causes, perception and human impacts. The paper further presents the analysis of 500-year variability of floods, droughts and windstorms on the basis of series created by combining documentary and instrumental data. Results, advantages and drawbacks of this approach are documented with examples from the Czech Lands. The analysis of floods concentrates on the River Vltava (Prague) and the River Elbe (Děčín), which show the highest frequency of floods in the 19th century (mainly of the winter synoptic type) and in the second half of the 16th century (summer synoptic type). Also reported are the most disastrous floods (August 1501, March and August 1598, February 1655, June 1675, February 1784, March 1845, February 1862, September 1890, August 2002) and the European context of floods in the severe winter 1783/84. Drought…

  14. Marketing instruments of foreign trade promotion

    Directory of Open Access Journals (Sweden)

    Bjelić Predrag

    2011-01-01

    Full Text Available Instruments of promotion, as a part of the marketing mix, are usually associated with companies, but more and more countries use these instruments to boost their exports. Foreign trade promotion instruments are now popular in many countries around the world, since their use is not opposed to any World Trade Organization rules. Marketing instruments of trade promotion are the most important; they include national exhibitions and national labels of origin and quality. To coordinate the application of these instruments, countries have established national bodies for trade promotion. Many studies in the past argued that national agencies established to promote exports did not have any real success, but recent studies indicate that they can have a significant impact on a country's export promotion. This rise in the impact of national export promotion agencies is due to an international effort spearheaded by the International Trade Center. The aim of this paper is to point out the types and methods of applying marketing instruments in trade promotion and to present the effectiveness of their application.

  15. Transient response of level instruments in a research reactor

    International Nuclear Information System (INIS)

    Cheng, Lap Y.

    1989-01-01

    A numerical model has been developed to simulate the dynamics of water level instruments in a research nuclear reactor. A bubbler device, with helium gas as the working fluid, is used to monitor liquid level by sensing the static head pressure due to the height of liquid in the reactor vessel. A finite-difference model is constructed to study the transient response of the water level instruments to pressure perturbations. The field equations which describe the hydraulics of the helium gas in the bubbler device are arranged in the form of a tridiagonal matrix, and the field variables are solved at each time step by the Thomas algorithm. Simulation results indicate that the dynamic response of the helium gas depends mainly on the volume and the inertia of the gas in the level instrument tubing. The anomalies in the simulated level indication are attributed to the inherent lag in the level instrument due to the hydraulics of the system. 1 ref., 5 figs
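
    The Thomas algorithm mentioned above is the standard O(n) forward-elimination/back-substitution solver for tridiagonal systems. Below is a generic sketch with a small self-check; the coefficients are illustrative and are not the bubbler model's actual field equations.

    ```python
    # Generic Thomas-algorithm tridiagonal solver (illustrative coefficients).
    import numpy as np

    def thomas(a, b, c, d):
        """Solve A x = d where A has sub-diagonal a, diagonal b, super-diagonal c.
        a[0] and c[-1] are unused."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                       # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    n = 6
    a = np.r_[0.0, -np.ones(n - 1)]
    b = 2.0 * np.ones(n)
    c = np.r_[-np.ones(n - 1), 0.0]
    d = np.ones(n)
    x = thomas(a, b, c, d)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(A @ x, d)                    # sanity check against dense form
    ```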

  16. The relationship between glass ceiling and power distance as a cultural variable by a new method

    OpenAIRE

    Naide Jahangirov; Guler Saglam Ari; Seymur Jahangirov; Nuray Guneri Tosunoglu

    2015-01-01

    Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and the power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate ...

  17. The Place of Nailfold Capillaroscopy Among Instrumental Methods for Assessment of Some Peripheral Ischaemic Syndromes in Rheumatology.

    Science.gov (United States)

    Lambova, Sevdalina N

    2016-01-01

    Micro- and macrovascular pathology is a frequent finding in a number of common rheumatic diseases. Secondary Raynaud's phenomenon (RP) is among the most common symptoms in systemic sclerosis and several other systemic autoimmune diseases, and it entails a broad differential diagnosis. It should also be differentiated from other peripheral vascular syndromes such as embolism and thrombosis, some of which lead to the clinical manifestation of the blue toe syndrome. The current review discusses the instrumental methods for vascular assessment. Nailfold capillaroscopy is the only imaging technique that can be used for morphological assessment of the nutritive capillaries in the nailfold area. Laser-Doppler flowmetry and laser-Doppler imaging are methods for functional assessment of the microcirculation, while thermography and plethysmography reflect both blood flow in peripheral arteries and the microcirculation. Doppler ultrasound and angiography visualize the peripheral arteries. The choice of the appropriate instrumental method is guided by the clinical presentation. The main role of capillaroscopy is to provide differential diagnosis between primary and secondary RP. In rheumatology, capillaroscopic changes in systemic sclerosis have recently been defined as diagnostic. The appearance of an abnormal capillaroscopic pattern carries a high positive predictive value for the development of a connective tissue disease, higher than the predictive value of antinuclear antibodies. In cases of abrupt onset of peripheral ischaemia, clinical signs of critical ischaemia, or unilateral or lower-limb involvement, Doppler ultrasound and angiography are indicated. The most common causes of such a clinical picture that may be referred for rheumatologic consultation are the antiphospholipid syndrome, mimickers of vasculitides such as atherosclerosis with cholesterol emboli, and neoplasms.

  18. Calibration of solar radiation measuring instruments. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bahm, R J; Nakos, J C

    1979-11-01

    A review of solar radiation measuring instruments and some types of errors is given, and procedures for calibrating solar radiation measuring instruments are detailed. An appendix contains a description of the various agencies that perform calibration of solar instruments and of the methods they used at the time this report was prepared. (WHK)

  19. Inspector-instrument interface in portable NDA instrumentation

    International Nuclear Information System (INIS)

    Halbig, J.K.; Klosterbuer, S.F.

    1981-01-01

    Recent electronics technology advances make it possible to design sophisticated instruments in small packages for convenient field implementation. An inspector-instrument interface design that allows communication of procedures, responses, and results between the instrument and user is presented. This capability has been incorporated into new spent-fuel instrumentation and a battery-powered multichannel analyzer

  20. Inspector-instrument interface in portable NDA instrumentation

    International Nuclear Information System (INIS)

    Halbig, J.K.; Klosterbuer, S.F.

    1981-01-01

    Recent electronics technology advances make it possible to design sophisticated instruments in small packages for convenient field implementation. This report describes an inspector-instrument interface design which allows communication of procedures, responses, and results between the instrument and user. The interface has been incorporated into new spent-fuel instrumentation and a battery-powered multichannel analyzer

  1. Efficient Method for Calculating the Composite Stiffness of Parabolic Leaf Springs with Variable Stiffness for Vehicle Rear Suspension

    Directory of Open Access Journals (Sweden)

    Wen-ku Shi

    2016-01-01

    Full Text Available The composite stiffness of parabolic leaf springs with variable stiffness is difficult to calculate using traditional integral equations. Numerical integration or FEA may be used, but these require computer-aided software and long calculation times. An efficient method for calculating the composite stiffness of parabolic leaf springs with variable stiffness is developed and evaluated to reduce the complexity of the calculation and shorten the calculation time. A simplified model for double-leaf springs with variable stiffness is built, and a composite stiffness calculation method for the model is derived using displacement superposition and material deformation continuity. The proposed method can also be applied to triple-leaf and multileaf springs. The accuracy of the calculation method is verified by a rig test and FEA. Finally, several parameters that should be considered during the design process of springs are discussed. The rig-test and FEA results indicate that the calculated results are acceptable. The proposed method can provide guidance for the design and production of parabolic leaf springs with variable stiffness: the composite stiffness of the leaf spring can be calculated quickly and accurately when the basic parameters of the leaf spring are known.
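
    The paper's own derivation is not reproduced in this record. As a loose numerical reference for what such a method must match, the sketch below integrates the unit-load (Mohr) integral for a single parabolic-thickness cantilever leaf and combines two leaves in parallel for the second stage of a variable-stiffness double-leaf spring. All dimensions and material values are invented.

    ```python
    # Toy numerical reference: unit-load (Mohr) integration of one parabolic-
    # thickness cantilever leaf; two leaves act in parallel once the gap closes.
    # All dimensions/material values are invented (not from the paper).
    import numpy as np

    E, L, w = 206e9, 0.6, 0.07          # Young's modulus (Pa), length (m), width (m)

    def leaf_stiffness(t_root, t_tip, n=4000):
        x = np.linspace(0.0, L, n)                       # x measured from loaded tip
        t = np.maximum(t_tip, t_root * np.sqrt(x / L))   # parabolic thickness profile
        I = w * t ** 3 / 12.0
        f = x ** 2 / (E * I)                             # unit-load integrand M*m/(EI)
        delta = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0
        return 1.0 / delta                               # tip stiffness, N/m

    k1 = leaf_stiffness(0.014, 0.007)   # main leaf
    k2 = leaf_stiffness(0.012, 0.006)   # auxiliary leaf
    print(f"first stage (main leaf only): {k1 / 1e3:.1f} kN/m")
    print(f"second stage (leaves in parallel): {(k1 + k2) / 1e3:.1f} kN/m")
    ```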

  2. Stress Intensity Factor for Interface Cracks in Bimaterials Using Complex Variable Meshless Manifold Method

    Directory of Open Access Journals (Sweden)

    Hongfen Gao

    2014-01-01

    Full Text Available This paper describes the application of the complex variable meshless manifold method (CVMMM to stress intensity factor analyses of structures containing interface cracks between dissimilar materials. A discontinuous function and the near-tip asymptotic displacement functions are added to the CVMMM approximation using the framework of complex variable moving least-squares (CVMLS approximation. This enables the domain to be modeled by CVMMM without explicitly meshing the crack surfaces. The enriched crack-tip functions are chosen as those that span the asymptotic displacement fields for an interfacial crack. The complex stress intensity factors for bimaterial interfacial cracks were numerically evaluated using the method. Good agreement between the numerical results and the reference solutions for benchmark interfacial crack problems is realized.

  3. Discrete curved ray-tracing method for radiative transfer in an absorbing-emitting semitransparent slab with variable spatial refractive index

    International Nuclear Information System (INIS)

    Liu, L.H.

    2004-01-01

    A discrete curved ray-tracing method is developed to analyze radiative transfer in a one-dimensional absorbing-emitting semitransparent slab with a variable spatial refractive index. The curved ray trajectory is locally treated as a straight line, so the complicated and time-consuming computation of the ray trajectory is reduced. A problem of radiative equilibrium with a linearly varying spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions are determined by the proposed method and compared with data in the references, which were obtained by different methods. The results show that the discrete curved ray-tracing method has good accuracy in solving radiative transfer in a one-dimensional semitransparent slab with a variable spatial refractive index.
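
    A generic illustration of the "locally straight" idea (not the paper's code): discretize the slab into thin layers, propagate the ray as a straight segment within each layer, and apply Snell's law at the interfaces. The index profile below is an arbitrary example.

    ```python
    # Piecewise-straight ray tracing through a slab with refractive index n(y).
    import numpy as np

    def trace(n_of_y, y0=0.0, y1=1.0, theta0=np.deg2rad(60.0), layers=2000):
        ys = np.linspace(y0, y1, layers + 1)
        xs = [0.0]
        theta = theta0                                # angle from the y-axis
        for i in range(layers):
            xs.append(xs[-1] + (ys[i + 1] - ys[i]) * np.tan(theta))  # straight segment
            s = n_of_y(ys[i]) * np.sin(theta) / n_of_y(ys[i + 1])
            if abs(s) >= 1.0:                         # total internal reflection
                break
            theta = np.arcsin(s)                      # Snell's law at the interface
        return np.array(xs), ys[: len(xs)]

    xs, ys = trace(lambda y: 1.2 + 0.6 * y)           # linearly varying index
    print(f"ray exits at x = {xs[-1]:.4f} after {len(xs) - 1} segments")
    ```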

  4. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable. It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a "user-friendly" style such that even the "novice" data analyst can easily apply the techniques. Each chapter features: a minimum discussion of mathematical detail; an empirical examp

  5. Validation of a Job Satisfaction Instrument for Residential-Care Employees.

    Science.gov (United States)

    Sluyter, Gary V.; Mukherjee, Ajit K.

    1986-01-01

    A new job satisfaction instrument for employees of a residential care facility for mentally retarded persons effectively measures the employees' satisfaction with 12 work-related variables: salary, company policies, supervision, working conditions, interpersonal relations, security, advancement, recognition, achievement, work responsibility, and…

  6. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas - a review

    Science.gov (United States)

    Cristiano, Elena; ten Veldhuis, Marie-claire; van de Giesen, Nick

    2017-07-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.

  7. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    Science.gov (United States)

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
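
    A small Monte Carlo in the same spirit (not the paper's 9600-run design): the confounder is median-split before adjustment, and the rejection rate for a truly null exposure effect is counted. Effect sizes and the sample size are arbitrary.

    ```python
    # Monte Carlo sketch of Type-I error inflation from dichotomizing a confounder.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n, sims, alpha = 500, 1000, 0.05
    rejections = 0
    for _ in range(sims):
        c = rng.normal(size=n)                        # continuous confounder
        x = 0.7 * c + rng.normal(size=n)              # exposure, linked to c
        y = 0.7 * c + rng.normal(size=n)              # outcome: true x-effect is zero
        c_cat = (c > np.median(c)).astype(float)      # median split of the confounder
        X = sm.add_constant(np.column_stack([x, c_cat]))
        rejections += sm.OLS(y, X).fit().pvalues[1] < alpha
    print(f"empirical Type-I error: {rejections / sims:.3f} (nominal {alpha})")
    ```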

  8. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array, because the read margin depends on the number of word lines and bit lines. However, excessively long CPU times are required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array while considering statistical variations in the cell characteristics.
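
    As a toy stand-in for such a simulator (written in Python rather than MATLAB), the sketch below Monte Carlos a read margin under an idealized V/2 biasing scheme with ideal drivers, neglected line resistance and an assumed lognormal cell-resistance spread; none of these settings come from the paper.

    ```python
    # Toy variability-aware read-margin Monte Carlo for an N x N crossbar.
    import numpy as np

    rng = np.random.default_rng(7)
    V, R_LRS, R_HRS, sigma = 0.5, 1e4, 1e6, 0.25

    def read_current(N, selected_lrs):
        """Bit-line current for one read; half-selected cells leak at V/2."""
        r_sel = (R_LRS if selected_lrs else R_HRS) * rng.lognormal(0.0, sigma)
        r_half = R_LRS * rng.lognormal(0.0, sigma, size=N - 1)  # worst case: all LRS
        return V / r_sel + (V / 2 / r_half).sum()

    for N in (8, 32, 128):
        vals = []
        for _ in range(2000):
            i_lrs, i_hrs = read_current(N, True), read_current(N, False)
            vals.append((i_lrs - i_hrs) / i_lrs)
        print(f"N={N:3d}: mean margin {np.mean(vals):.3f}, "
              f"5th percentile {np.percentile(vals, 5):.3f}")
    ```

    Under these assumptions the margin shrinks as the array grows, which is the size dependence the record describes.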

  9. Application of a primitive variable Newton's method for the calculation of an axisymmetric laminar diffusion flame

    International Nuclear Information System (INIS)

    Xu, Yuenong; Smooke, M.D.

    1993-01-01

    In this paper we present a primitive-variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame-sheet approximation. The numerical solution agrees well with experimental data for the major chemical species. The comparison of the three sets of numerical results indicates that the stream function-vorticity solution, using the approximate boundary conditions reported in previous calculations, predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive-variable and stream function-vorticity solutions. The primitive-variable Newton's method converges much faster than the other two methods. Because the block-line tridiagonal solver requires much less computer memory than a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation.

  10. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods, and in particular random forests (RFs), are a promising alternative to standard single-SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, the recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score over several runs of RF, and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects, it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified, while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
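
    A loose sketch of the r2VIM idea using scikit-learn (not the authors' implementation): permutation importances from several RF runs are scaled by the magnitude of the most negative importance in each run, taken as a noise yardstick, and only variables clearing a threshold in every run are kept. The data, threshold and use of permutation importance are assumptions.

    ```python
    # r2VIM-style selection sketch on toy "SNP" data (assumptions noted above).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    n, p = 400, 200
    X = rng.integers(0, 3, size=(n, p)).astype(float)       # toy SNP dosages 0/1/2
    y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 1.0, n) > 3).astype(int)  # 2 causal SNPs

    threshold, selected = 3.0, []
    for seed in range(5):                                   # several RF runs
        rf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X, y)
        imp = permutation_importance(rf, X, y, n_repeats=5,
                                     random_state=seed).importances_mean
        noise = imp[imp < 0]                                # negative importances = noise
        scale = abs(noise.min()) if noise.size else imp[imp > 0].min()
        selected.append(set(np.where(imp / scale >= threshold)[0]))
    print("selected in all runs:", sorted(set.intersection(*selected)))
    ```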

  11. An overview of process instrumentation, protective safety interlocks and alarm system at the JET facilities active gas handling system

    International Nuclear Information System (INIS)

    Skinner, N.; Brennan, P.; Brown, K.; Gibbons, C.; Jones, G.; Knipe, S.; Manning, C.; Perevezentsev, A.; Stagg, R.; Thomas, R.; Yorkshades, J.

    2003-01-01

    The Joint European Torus (JET) Facilities Active Gas Handling System (AGHS) comprises ten interconnected processing sub-systems that supply, process and recover tritium from gases used in the JET machine. Operations require a diverse range of process instrumentation to carry out a multiplicity of monitoring and control tasks, and approximately 500 process variables are measured. The different types and applications of process instruments are presented, with specially adapted or custom-built versions highlighted. Forming part of the safety case for tritium operations, a dedicated hardwired interlock and alarm system provides an essential safety function. In the event of failure modes, each hardwired interlock will back up the software interlocks and shut down areas of the plant to a failsafe condition. The design of the interlock and alarm system is outlined and the general methodology described. Practical experience gained during plant operations is summarised, and the methods employed for routine functional testing of essential instrument systems are explained.

  12. Authentication of nuclear-material assays made with in-plant instruments

    International Nuclear Information System (INIS)

    Hatcher, C.R.; Hsue, S.T.; Russo, P.A.

    1982-01-01

    This paper develops a general approach for International Atomic Energy Agency (IAEA) authentication of nuclear material assays made with in-plant instruments under facility operator control. The IAEA is evaluating the use of in-plant instruments as a part of international safeguards at large bulk-handling facilities, such as reprocessing plants, fuel fabrication plants, and enrichment plants. One of the major technical problems associated with IAEA use of data from in-plant instruments is the need to show that there has been no tampering with the measurements. Two fundamentally different methods are discussed that can be used by IAEA inspectors to independently verify (or authenticate) measurements made with in-plant instruments. Method 1, called external authentication, uses a protected IAEA measurement technique to compare in-plant instrument results with IAEA results. Method 2, called internal authentication, uses protected IAEA standards, known physical constants, and special test procedures to determine the performance characteristics of the in-plant instrument. The importance of measurement control programs to detect normally expected instrument failures and procedural errors is also addressed. The paper concludes with a brief discussion of factors that should be considered by the designers of new in-plant instruments in order to facilitate IAEA authentication procedures

  13. Psychological variables involved in teacher’s job performance

    OpenAIRE

    Torres Valladares, Manuel; Lajo Lazo, Rosario

    2014-01-01

    The purpose of this study is to analyze the causal relations that can exist between some psychological variables (Type A personality, stress coping and burnout syndrome) and the job performance of university teachers from five faculties of medicine of Lima Metropolitana. The instruments used were: Blumenthal's self-report inventory of Type A behaviour, COPE, Maslach's Burnout Inventory and the teacher job performance scale developed by Manuel Fernández Arata. All these instruments were subj...

  14. Instrumentation and methods evaluations for shallow land burial of waste materials: water erosion

    International Nuclear Information System (INIS)

    Hostetler, D.D.; Murphy, E.M.; Childs, S.W.

    1981-08-01

    The erosion of geologic materials by water at shallow-land hazardous waste disposal sites can compromise waste containment. Erosion of protective soil from these sites may enhance waste transport to the biosphere through water, air, and biologic pathways. The purpose of this study was to review current methods of evaluating soil erosion and to recommend methods for use at shallow-land hazardous waste burial sites. The basic principles of erosion control are: minimize raindrop impact on the soil surface; minimize runoff quantity; minimize runoff velocity; and maximize the soil's resistance to erosion. Generally, soil erosion can be controlled when these principles are successfully applied at waste disposal sites. However, these erosion control practices may also jeopardize waste containment: typical practices may enhance waste transport by increasing subsurface moisture movement and biologic uptake of hazardous wastes. A two-part monitoring program is recommended for US Department of Energy (DOE) hazardous waste disposal sites. The monitoring programs and associated measurement methods are designed to provide baseline data permitting analysis and prediction of long-term erosion hazards at disposal sites. The two monitoring programs are: (1) site reconnaissance and tracking; and (2) site instrumentation. Some potential waste transport problems arising from erosion control practices are identified. This report summarizes current literature regarding water erosion prediction and control.

  15. Infectious complications in head and neck cancer patients treated with cetuximab: propensity score and instrumental variable analysis.

    Directory of Open Access Journals (Sweden)

    Ching-Chih Lee

    Full Text Available BACKGROUND: To compare infection rates between cetuximab-treated patients with head and neck cancers (HNC) and untreated patients. METHODOLOGY: A national cohort of 1083 HNC patients identified in 2010 from the Taiwan National Health Insurance Research Database was established. After patients were followed for one year, propensity score analysis and instrumental variable analysis were performed to assess the association between cetuximab therapy and infection rates. RESULTS: HNC patients receiving cetuximab (n = 158) were older, had lower socioeconomic status, and resided more frequently in rural areas compared with those without cetuximab therapy. In total, 125 patients presented infections: 32 (20.3%) in the group using cetuximab and 93 (10.1%) in the group not using it. The propensity score analysis revealed a 2.3-fold (adjusted odds ratio [OR] = 2.27; 95% CI, 1.46-3.54; P = 0.001) increased risk of infection in HNC patients treated with cetuximab. However, using instrumental variable analysis, the average treatment effect of cetuximab was not statistically associated with an increased risk of infection (OR, 0.87; 95% CI, 0.61-1.14). CONCLUSIONS: Cetuximab therapy was not statistically associated with the infection rate in HNC patients. However, older HNC patients using cetuximab may incur an infection rate of up to 33% within one year. Particular attention should be given to older HNC patients treated with cetuximab.
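
    For readers unfamiliar with the instrumental variable step, the sketch below runs a generic just-identified two-stage least-squares estimate on synthetic data with an unmeasured confounder; the instrument, effect sizes and data are invented and bear no relation to the cohort above.

    ```python
    # Generic just-identified 2SLS on synthetic data (all values invented).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 20_000
    u = rng.normal(size=n)                          # unmeasured severity (confounder)
    z = rng.binomial(1, 0.5, size=n).astype(float)  # instrument, e.g. provider preference
    d = (0.8 * z - 0.6 * u + rng.normal(size=n) > 0).astype(float)  # treatment
    y = 0.3 * d + u + rng.normal(size=n)            # true treatment effect = 0.3

    X = np.column_stack([np.ones(n), d])
    Z = np.column_stack([np.ones(n), z])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # naive OLS, biased by u
    beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)      # 2SLS: (Z'X)^(-1) Z'y
    print(f"OLS estimate: {beta_ols[1]:+.3f} (confounded)")
    print(f"IV  estimate: {beta_iv[1]:+.3f} (targets 0.3)")
    ```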

  16. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    Science.gov (United States)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for the qualitative identification of solid-state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper method can identify solid-state fermentation degree more accurately.
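
    A minimal PLS-DA sketch with scikit-learn, assuming the usual one-hot coding of class labels and argmax class assignment; the synthetic spectra stand in for FT-NIR data, and no CARS/SCARS selection is performed here.

    ```python
    # PLS-DA sketch: PLS regression onto one-hot labels, class picked by argmax.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n_per, p = 40, 60                          # samples per class, "wavelengths"
    centers = rng.normal(size=(3, p))          # three fermentation degrees
    X = np.vstack([c + 0.8 * rng.normal(size=(n_per, p)) for c in centers])
    y = np.repeat(np.arange(3), n_per)
    Y = np.eye(3)[y]                           # one-hot coding for PLS-DA

    pls = PLSRegression(n_components=5).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)
    print(f"identification rate: {(pred == y).mean():.2%}")
    ```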

  17. Domestic violence on children: development and validation of an instrument to evaluate knowledge of health professionals

    Directory of Open Access Journals (Sweden)

    Lanuza Borges Oliveira

    Full Text Available ABSTRACT Objective: to develop and validate an instrument to evaluate the knowledge of health professionals about domestic violence on children. Method: this was a study conducted with 194 physicians, nurses and dentists. A literature review was performed for the preparation of the items and the identification of the dimensions. Face and content validation was performed through analysis by three experts and 27 professors of the pediatric health discipline. For construct validation, Cronbach's alpha was used, and the Kappa test was applied to verify reproducibility. Criterion validation was conducted using Student's t-test. Results: the final instrument included 56 items; Cronbach's alpha was 0.734, the Kappa test showed a correlation greater than 0.6 for most items, and Student's t-test showed statistical significance at the 5% level for the two selected variables: years of education and use of the Family Health Strategy. Conclusion: the instrument is valid and can be used as a promising tool to develop or direct actions in public health and to evaluate knowledge about domestic violence on children.

  18. Intercomparison of different instruments for measuring radon concentration in air

    International Nuclear Information System (INIS)

    Shimo, Michikuni; Iida, Takao

    1990-01-01

    An intercomparison of different instruments for the measurement of radon concentration was carried out. The instruments include an ionization chamber, the charcoal-trap method, a flow-type ionization chamber (pulse-counting method), a two-filter method, an electrostatic collection method and a passive integrating radon monitor. All instruments except the passive radon monitor had been calibrated independently. Measurements were performed over a concentration range from about 3.5 Bq·m⁻³ (in outdoor air) to 110 Bq·m⁻³ (in indoor air). The results obtained by these techniques, except the two-filter technique, are comparable. The radon daughter concentration measured using a filter-sampling method was about 52% of the radon concentration. (author)

  19. Radon-Instrumentation

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.

    2003-01-01

    The objectives of this work are the presentation of the active and passive methods for radon identification and measurement, together with the corresponding instrumentation and its characteristics. Active detectors: the Alpha Cam Continuous Air Monitor, Model 758, of Victoreen, and the Model CMR-510 Continuous Radon Monitor of the Signature Femto-Tech. Passive detectors: solid-state nuclear track detectors (SSNTD), measurement using charcoal canisters (a disk of activated charcoal deposited in a metallic box) and the electret methodology. (Author)

  20. Instrumentation development

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    Areas being investigated for instrumentation improvement during low-level pollution monitoring include laser opto-acoustic spectroscopy, x-ray fluorescence spectroscopy, optical fluorescence spectroscopy, liquid crystal gas detectors, advanced forms of atomic absorption spectroscopy, electro-analytical chemistry, and mass spectroscopy. Emphasis is also directed toward development of physical methods, as opposed to conventional chemical analysis techniques for monitoring these trace amounts of pollution related to energy development and utilization

  1. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves progressively generating the decoded image by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence, the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, an extremely large number of iterations is then needed to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., from some chosen iteration onwards). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
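
    The flavour of the scheme can be seen in a generic damped fixed-point iteration x_{k+1} = (1 - a_k) x_k + a_k T(x_k), comparing a constant parameter with an iteration-dependent schedule; T below is a stand-in contraction, not an actual fractal transform.

    ```python
    # Damped fixed-point iteration with constant vs. iteration-dependent parameter.
    def T(x):
        return 0.9 * x + 1.0                   # toy contraction, fixed point x* = 10

    def decode(alpha, steps=60):
        x = 0.0
        for k in range(steps):
            a = alpha(k) if callable(alpha) else alpha
            x = (1 - a) * x + a * T(x)         # interpolation between x and T(x)
        return x

    print("constant a = 0.3      :", round(decode(0.3), 4))
    print("ramped a = 0.2 + k/40 :", round(decode(lambda k: min(1.0, 0.2 + k / 40)), 4))
    ```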

  2. The relationship between glass ceiling and power distance as a cultural variable by a new method

    Directory of Open Access Journals (Sweden)

    Naide Jahangirov

    2015-12-01

    Full Text Available Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and the power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operated as part of a non-profit private university in Ankara, Turkey. The relationship between the variables was revealed by a new method developed as a complement to the correlation analysis of the survey. The analysis revealed that the female staff perceived the glass ceiling and the power distance more intensely than the male staff. In addition, a medium-level relationship was found between the power distance and the glass ceiling perception among female staff.

  3. Asymptotics of diagonal elements of projection matrices under many instruments/regressors

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Yaskov, P.

    2017-01-01

    Vol. 33, No. 3 (2017), pp. 717-738 ISSN 0266-4666 Institutional support: RVO:67985998 Keywords: instrumental variable estimation * inference * models Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 1.011, year: 2016

  4. The Possibility Using the Power Production Function of Complex Variable for Economic Forecasting

    Directory of Open Access Journals (Sweden)

    Sergey Gennadyevich Svetunkov

    2016-09-01

    Full Text Available The possibility of dynamic analysis and forecasting of production results using power production functions of a complex variable with real coefficients is considered. This model expands the arsenal of instrumental methods and allows multivariate production forecasts which are unattainable by real-variable methods, since functions of complex variables model production differently from real-variable models. The values of the coefficients of the power production function of a complex variable can be calculated for each statistical observation. This makes it possible to consider the change of the coefficients over time, to analyze this trend and to predict the values of the coefficients for a given term, thereby predicting the form of the production function, which in turn forecasts the operating results. Thus, a model of the production function with variable coefficients is introduced into scientific circulation. With this model, the inverse problem of forecasting can also be solved, such as the determination of the quantities of labor and capital necessary to achieve the desired operating results. The study is based on the principles of the modern methodology of complex-valued economics, one section of which concerns complex-valued models of production functions. In the article, the possibility of economic forecasting is tested on the example of the UK economy. The results of this prediction are compared with forecasts obtained by other methods, leading to the conclusion that the proposed approach and forecasting method are effective at the macro level of production systems. A complex-valued power model of the production function is recommended for the multivariate prediction of sustainable production systems: the global economy, the economies of individual countries, and major industries and regions.

  5. MANU. Instrumentation of Buffer Demo. Preliminary Study

    International Nuclear Information System (INIS)

    Laaksonen, R.

    2010-01-01

    The purpose of this work is to describe feasible measuring and monitoring alternatives which can be used, if needed, in medium- to full-scale nuclear waste repository deposition hole mock-up tests. The focus of the work was to determine which variables can actually be measured, how the measurements can be achieved, and what kinds of demands come from the modelling, scientific and technical points of view. This project includes a review of previous waste repository mock-up tests carried out in several European countries such as Belgium, the Czech Republic, Spain and Sweden. Information was also gathered by interviewing domestic and foreign scientists specialized in the fields of measurement instrumentation and related in-situ and laboratory work. On the basis of this review, recommendations were developed for the actions needed from the instrumentation point of view for future tests. It is possible to measure and monitor the processes going on in a deposition hole under in-situ conditions. The data received during a test in real repository conditions make it possible to follow the processes and to verify the hypotheses made on the behaviour of the various components of the repository: buffer, canister, rock and backfill. Because full-scale testing is expensive, the objectives and hypotheses must be carefully set, and the test itself, with its instrumentation, must serve very specific objectives. A whole mock-up test and demonstration process requires a lot of time and effort. The instrumentation part of the work must therefore start at an early stage to ensure that the instrumentation itself does not become a bottleneck or suffer from low-quality solutions. The planning of the instrumentation work could be done in collaboration with foreign scientists who have participated in previous instrumentation projects. (orig.)

  6. Guide on Economic Instruments & Non-market Valuation Methods

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Bartczak, Anna; Czajkowski, Mikołaj

    The aim of this guidance document is to provide forest practitioners, decision makers and forest owners with insights into the various economic instruments available to enhance the non-market ecosystem provision of forests, such as high-quality biodiversity, enhanced carbon sequestration and improved... with ecosystem degradation and iii) by recognising the substantial economic and welfare benefits of better management of ecosystems in forests. Ecosystem services contribute to economic welfare in two ways: • by contributing to the generation of income and wellbeing; and • by preventing damages that inflict... initiatives; it is therefore essential to consider trade-offs and synergies in the complex interplay between ecosystem goods and services within an ecosystem,...

  7. Instrumentation

    International Nuclear Information System (INIS)

    Prieur, G.; Nadi, M.; Hedjiedj, A.; Weber, S.

    1995-01-01

    This second chapter on instrumentation gives some general considerations on the history and classification of instrumentation, followed by two specific state-of-the-art reviews. The first concerns NMR (block diagram of the instrumentation chain with details on the magnets, gradients, probes and reception unit). The second concerns precision instrumentation (optical fibre gyrometer and scanning electron microscope) and its data processing tools (programmability, the VXI standard and its history). The chapter ends with future trends in smart sensors and Field Emission Displays. (D.L.). Refs., figs

  8. Supermathematics and its applications in statistical physics Grassmann variables and the method of supersymmetry

    CERN Document Server

    Wegner, Franz

    2016-01-01

    This text presents the mathematical concepts of Grassmann variables and the method of supersymmetry to a broad audience of physicists interested in applying these tools to disordered and critical systems, as well as related topics in statistical physics. Based on many courses and seminars held by the author, one of the pioneers in this field, the book gives the reader a systematic and tutorial introduction to the subject matter. The algebra and analysis of Grassmann variables are presented in part I. The mathematics of these variables is applied to a random matrix model, path integrals for fermions, dimer models and the Ising model in two dimensions. Supermathematics - the use of commuting and anticommuting variables on an equal footing - is the subject of part II. The properties of supervectors and supermatrices, which contain both commuting and Grassmann components, are treated in great detail, including the derivation of integral theorems. In part III, supersymmetric physical models are considered. While supersym...

  9. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
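
    A sketch of the principal-component representation referred to above: an ensemble of effective-area curves is summarized by a mean and a few leading components, and plausible calibration draws are generated as the mean plus normally weighted components. The curve ensemble here is synthetic, not a real telescope calibration product.

    ```python
    # PCA representation of calibration uncertainty (synthetic curve ensemble).
    import numpy as np

    rng = np.random.default_rng(9)
    energies = np.linspace(0.3, 8.0, 200)                       # keV grid
    base = 500.0 * np.exp(-0.5 * ((energies - 1.5) / 1.2) ** 2) # nominal area, cm^2
    reps = base * (1.0 + 0.05 * rng.normal(size=(100, 1))
                       + 0.03 * rng.normal(size=(100, 1)) * (energies - 4.0) / 4.0)

    mean = reps.mean(axis=0)
    U, s, Vt = np.linalg.svd(reps - mean, full_matrices=False)
    k = 2                                                       # keep a few components
    pcs = (s[:k, None] / np.sqrt(len(reps) - 1)) * Vt[:k]       # std-scaled components

    draw = mean + rng.normal(size=k) @ pcs                      # one plausible curve
    print("max fractional deviation of draw:", np.abs(draw / mean - 1.0).max())
    ```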

  10. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas – a review

    Directory of Open Access Journals (Sweden)

    E. Cristiano

    2017-07-01

    Full Text Available In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.

  11. A framework for assessing Health Economic Evaluation (HEE) quality appraisal instruments.

    Science.gov (United States)

    Langer, Astrid

    2012-08-16

    Health economic evaluations support the health care decision-making process by providing information on the costs and consequences of health interventions. The quality of such studies is assessed by health economic evaluation (HEE) quality appraisal instruments. At present, there is no instrument for measuring and improving the quality of such HEE quality appraisal instruments. Therefore, the objectives of this study are to establish a framework for assessing the quality of HEE quality appraisal instruments, in order to support and improve their quality, and to apply this framework to those HEE quality appraisal instruments which have been subject to more scrutiny than others, in order to test the framework and to demonstrate the shortcomings of existing HEE quality appraisal instruments. To develop the quality assessment framework for HEE quality appraisal instruments, experience with appraisal tools for clinical guidelines is used. Based on a deductive iterative process, clinical guideline appraisal instruments identified through a literature search are reviewed, consolidated, and adapted to produce the final quality assessment framework for HEE quality appraisal instruments. The final quality assessment framework consists of 36 items organized within 7 dimensions, each of which captures a specific domain of quality. Applying the framework to four existing HEE quality appraisal instruments, it is found that these four instruments are of variable quality. The framework described in this study should be regarded as a starting point for appraising the quality of HEE quality appraisal instruments. It can be used by producers of HEE quality appraisal instruments to support and improve the quality and acceptance of existing and future instruments. By applying the framework, users of HEE quality appraisal instruments can become aware of methodological deficiencies.

  12. Method of collective variables with reference system for the grand canonical ensemble

    International Nuclear Information System (INIS)

    Yukhnovskii, I.R.

    1989-01-01

    A method of collective variables with a special reference system for the grand canonical ensemble is presented. An explicit form is obtained for the basis sixth-degree measure density needed to describe the liquid-gas phase transition. The author presents the fundamentals of the method, which are as follows: (1) the functional form of the partition function in the grand canonical ensemble; (2) derivation of thermodynamic relations for the coefficients of the Jacobian; (3) transition to the problem on an adequate lattice; and (4) obtaining the explicit form of the functional of the partition function.

  13. Variation in posture quality across musical instruments and its impact during performances.

    Science.gov (United States)

    Blanco-Piñeiro, Patricia; Díaz-Pereira, M Pino; Martínez Vidal, Aurora

    2018-06-01

    Bad posture increases the risk that a musician may suffer from musculoskeletal disorders. This study compared the posture quality required by different instruments or families of instruments. Using an ad-hoc postural observation instrument embracing 11 postural variables, four experts evaluated the postures of 100 students attending a Spanish higher conservatory of music. The agreement of the experts' evaluations was statistically confirmed by Cohen's κ values between 0.855 and 1.000 and Kendall values between 0.709 and 1.000 (p < 0.05). The analysis showed an association between instrument families and seated posture with respect to pelvic attitude, dorsal curvature and head alignment in both the sagittal and frontal planes. It also showed an association between instrument families and standing posture with respect to the frontal plane of the axis of gravity, pelvic attitude, head alignment in the frontal plane, the sagittal plane of the shoulders and overall posture. While certain postural defects appear to be common to all families of instruments, others are more characteristic of some families than others. The instrument associated with the best posture quality was the bagpipe, followed by percussion and strings.

  14. Developing and testing an instrument to measure the presence of conditions for successful implementation of quality improvement collaboratives

    Directory of Open Access Journals (Sweden)

    Wagner Cordula

    2008-08-01

    Full Text Available Abstract. Background: In quality improvement collaboratives (QICs), teams of practitioners from different health care organizations are brought together to systematically improve an aspect of patient care. Teams take part in a series of meetings to learn about relevant best practices, quality methods and change ideas, and share experiences in making changes in their own local setting. The purpose of this study was to develop an instrument for measuring team organization, external change agent support and support from the team's home institution in a Dutch national improvement and dissemination programme for hospitals based on several QICs. Methods: The exploratory methodological design included two phases: (a) content development and assessment, resulting in an instrument with 15 items, and (b) field testing (N = 165). Internal consistency reliability was tested via Cronbach's alpha coefficient. Principal component analyses were used to identify underlying constructs. Tests of scaling assumptions according to the multitrait/multi-item matrix were used to confirm the component structure. Results: Three components were revealed, explaining 65% of the variability. The components were labelled 'organizational support', 'team organization' and 'external change agent support'. One item not meeting item-scale criteria was removed, resulting in a 14-item instrument. Scale reliability ranged from 0.77 to 0.91. Internal item consistency and divergent validity were satisfactory. Conclusion: On the whole, the instrument appears to be a promising tool for assessing team organization and internal and external support during QIC implementation. The psychometric properties were good and warrant application of the instrument for the evaluation of the national programme and similar improvement programmes.

  15. Beam Instrumentation and Diagnostics

    CERN Document Server

    Strehl, Peter

    2006-01-01

    This treatise covers all aspects of the design and daily operation of a beam diagnostic system for a large particle accelerator. A very interdisciplinary field, it involves contributions from physicists, electrical and mechanical engineers and computer experts alike, so as to satisfy the ever-increasing demands for beam parameter variability over a vast range of operation modes and particles. The author draws upon 40 years of research and work, most of them spent as the head of the beam diagnostics group at GSI. He has illustrated the more theoretical aspects with many real-life examples that will provide beam instrumentation designers with ideas and tools for their work.

  16. Asymptotics of diagonal elements of projection matrices under many instruments/regressors

    Czech Academy of Sciences Publication Activity Database

    Anatolyev, Stanislav; Yaskov, P.

    2017-01-01

    Vol. 33, No. 3 (2017), pp. 717-738 ISSN 0266-4666 Institutional support: Progres-Q24 Keywords: instrumental variable estimation * inference * models Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 1.011, year: 2016

  17. The space instrument SOVAP of the PICARD mission

    Science.gov (United States)

    Conscience, C.; Meftah, M.; Chevalier, A.; Dewitte, S.; Crommelynck, D.

    2011-09-01

    PICARD is a satellite dedicated to the simultaneous measurement of the absolute total and spectral solar irradiance, the diameter and solar shape, and the Sun's interior as probed by helioseismology. Its objectives are the study of the origin of solar variability and of the relations between the Sun and the Earth's climate. PICARD was launched on June 15, 2010, and placed into a sun-synchronous orbit at 735 km with an inclination of 98.28 degrees. The payload consists of two absolute radiometers measuring the TSI (Total Solar Irradiance) and an imaging telescope to determine the solar diameter, limb shape and asphericity. SOVAP (SOlar VAriability Picard) is an experiment developed by the Belgian STCE (Solar Terrestrial Center of Excellence), with a contribution of the CNRS (Centre National de la Recherche Scientifique), composed of an absolute radiometer provided by the RMIB (Royal Meteorological Institute of Belgium) to measure the TSI and a bolometer provided by the ROB (Royal Observatory of Belgium). The continuous observation of the solar irradiance at the highest possible precision and accuracy is an important objective for Earth climate-change research and requires high-quality metrology in the space environment. In this article, we describe the SOVAP instrument, its performance and the uncertainties in its TSI measurements.

  18. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2001-04-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor.

  19. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2001-01-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor

  20. 99Tc atom counting by quadrupole ICP-MS. Optimisation of the instrumental response

    International Nuclear Information System (INIS)

    Mas, Jose L.; Garcia-Leon, Manuel; Bolivar, Juan P.

    2002-01-01

    In this paper, extensive work is reported on the specific tuning of a conventional ICP-MS for 99Tc atom counting. Two methods were used and compared: the partial variable control method and the 5-D Simplex method. Instrumental limits of detection of 0.2 and 0.8 ppt, respectively, were obtained. They are noticeably lower than that found with the automatic tune method of the spectrometer, 47 ppt, which shows the need for specific tuning when very low levels of 99Tc have to be determined. A study is presented on the mass interferences for 99Tc. Our experiments show that the formation of polyatomic species or refractory oxides, as well as of 98Mo hydrides, seems to be irrelevant for 99Tc atom counting. The opposite occurs with the presence of isobaric interferences, i.e. 99Ru, and with the effect of abundance sensitivity, or low-mass resolution, which can modify the response at m/z = 99 to a non-negligible extent.
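
    As an illustration of how a simplex search over tuning parameters can be automated, the sketch below runs a Nelder-Mead (downhill simplex) optimisation over a five-dimensional response surface. The surface, the parameter names and the starting values are hypothetical stand-ins; the authors' actual tuning variables and spectrometer response are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical stand-in for the spectrometer response: count rate at
        # m/z = 99 as a function of five tuning parameters (e.g. lens voltage,
        # nebulizer gas flow, RF power, torch x/y position).
        def negative_sensitivity(params):
            optimum = np.array([5.0, 0.9, 1350.0, 0.0, 0.0])  # assumed optimum
            scales = np.array([2.0, 0.2, 150.0, 1.0, 1.0])
            return -np.exp(-np.sum(((params - optimum) / scales) ** 2))

        start = np.array([4.0, 1.0, 1200.0, 0.5, -0.5])  # arbitrary initial tune
        result = minimize(negative_sensitivity, start, method="Nelder-Mead",
                          options={"xatol": 1e-3, "fatol": 1e-9})
        print("tuned parameters:", result.x)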

  1. Optical wavelength selection for portable hemoglobin determination by near-infrared spectroscopy method

    Science.gov (United States)

    Tian, Han; Li, Ming; Wang, Yue; Sheng, Dinggao; Liu, Jun; Zhang, Linna

    2017-11-01

    Hemoglobin concentration is commonly used in clinical medicine to diagnose anemia, identify bleeding, and manage red blood cell transfusions. The gold-standard method for determining hemoglobin concentration in blood requires a reagent. Spectral methods offer fast, reagent-free measurement; however, model calibration with the full spectrum is time-consuming. Moreover, it is desirable to use only a few variables given the size and cost constraints of instrumentation, especially for a portable biomedical instrument. This study compares different wavelength-selection methods for choosing optical wavelengths for total hemoglobin concentration determination in whole blood. The results showed that a model using only a two-wavelength combination (1143 nm, 1298 nm) retains the predictive ability of the full spectrum. It appears that proper selection of optical wavelengths can be more effective than using the whole spectrum for determining hemoglobin in whole blood. We also discuss the influence of water absorptivity on the wavelength selection. This research provides valuable references for designing portable NIR instruments for determining hemoglobin concentration, and may inform noninvasive hemoglobin measurement by NIR methods.
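
    A minimal sketch of the kind of comparison reported here: a two-wavelength linear model against a full-spectrum model. The synthetic spectra, band shapes and noise level below are invented for illustration and do not reproduce the paper's data or calibration procedure; only the two selected wavelengths (1143 nm, 1298 nm) come from the abstract.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        wavelengths = np.arange(1100, 1350)            # nm, hypothetical grid
        hb = rng.uniform(80, 180, size=200)            # g/L, synthetic samples
        # Synthetic spectra: two hemoglobin-sensitive bands plus baseline noise.
        bands = (np.exp(-((wavelengths - 1143) / 30) ** 2)
                 + 0.5 * np.exp(-((wavelengths - 1298) / 40) ** 2))
        X = np.outer(hb, bands) + rng.normal(0, 1.0, (200, wavelengths.size))
        Xtr, Xte, ytr, yte = train_test_split(X, hb, random_state=0)

        full = LinearRegression().fit(Xtr, ytr)
        idx = [np.argmin(abs(wavelengths - w)) for w in (1143, 1298)]
        two = LinearRegression().fit(Xtr[:, idx], ytr)
        print("full-spectrum R^2:  ", full.score(Xte, yte))
        print("two-wavelength R^2: ", two.score(Xte[:, idx], yte))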

  2. Manufacturing challenge: An employee perception of the impact of BEM variables on motivation

    Science.gov (United States)

    Nyaude, Alaster

    The study examines the impact of Thomas F. Gilbert's Behavior Engineering Model (BEM) variables on employee perception of motivation at an aerospace equipment manufacturing plant in Georgia. The research process involved a literature review and the determination of an appropriate survey instrument for the study. The Hersey-Chevalier modified PROBE instrument (Appendix C) was used with Dr Roger Chevalier's validation. The participants' responses were further examined to determine the influence of the demographic control variables of age, gender, length of service with the company, and education on employee perception of motivation. The results indicated that the three most highly motivating variables were, in order, knowledge and skills, capacity, and resources; interestingly, information ranked fourth, motives fifth, and incentives sixth. The results also showed that the demographic control variables had no influence on employee perception of motivation. Further research may be required to understand to what extent these BEM variables impact employee perceptions of motivation.

  3. Nuclear medicine and imaging research. Instrumentation and quantitative methods of evaluation. Progress report, January 15, 1985-January 14, 1986

    International Nuclear Information System (INIS)

    Beck, R.N.; Cooper, M.D.

    1985-09-01

    This program of research addresses problems involving the basic science and technology of radioactive tracer methods as they relate to nuclear medicine and imaging. The broad goal is to develop new instruments and methods for image formation, processing, quantitation, and display, so as to maximize the diagnostic information per unit of absorbed radiation dose to the patient. These developments are designed to meet the needs imposed by new radiopharmaceuticals developed to solve specific biomedical problems, as well as to meet the instrumentation needs associated with radiopharmaceutical production and quantitative clinical feasibility studies of the brain with PET VI. Project I addresses problems associated with the quantitative imaging of single-photon emitters; Project II addresses similar problems associated with the quantitative imaging of positron emitters; Project III addresses methodological problems associated with the quantitative evaluation of the efficacy of diagnostic imaging procedures. The original proposal covered work to be carried out over the three-year contract period. This report covers progress made during Year Three. 36 refs., 1 tab

  4. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation …
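
    Among the features listed, the instrumental variable estimators connect this record to the broader topic of this collection. Below is a minimal two-stage least squares sketch on simulated data (in Python rather than R, with an invented data-generating process) to illustrate what such an estimator computes.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000
        z = rng.normal(size=n)                      # instrument
        u = rng.normal(size=n)                      # unobserved confounder
        x = 0.8 * z + u + rng.normal(size=n)        # exposure, confounded by u
        y = 1.5 * x + 2.0 * u + rng.normal(size=n)  # outcome; true effect = 1.5

        # Stage 1: regress exposure on instrument; Stage 2: regress outcome
        # on the fitted exposure.
        Z = np.column_stack([np.ones(n), z])
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        X_hat = np.column_stack([np.ones(n), x_hat])
        beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
        ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
        print("2SLS estimate:", beta[1], " naive OLS (biased):", ols[1])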

  5. Predicting Falls in Parkinson Disease: What Is the Value of Instrumented Testing in OFF Medication State?

    Directory of Open Access Journals (Sweden)

    Martina Hoskovcová

    Full Text Available Falls are a common complication of advancing Parkinson's disease (PD). Although numerous risk factors are known, reliable predictors of future falls are still lacking. The objective of this prospective study was to investigate clinical and instrumented tests of balance and gait in both OFF and ON medication states and to verify their utility in the prediction of future falls in PD patients. Forty-five patients with idiopathic PD were examined in defined OFF and ON medication states within one examination day, including PD-specific clinical tests, the instrumented Timed Up and Go test (iTUG) and computerized dynamic posturography. The same gait and balance tests were performed in 22 control subjects of comparable age and sex. Participants were then followed up for 6 months using monthly fall diaries and phone calls. During the follow-up period, 27/45 PD patients and 4/22 control subjects fell one or more times. Previous falls, fear of falling, more severe motor impairment in the OFF state, higher PD stage, more pronounced depressive symptoms, higher daily levodopa dose and stride time variability in the OFF state were significant risk factors for future falls in PD patients. Increased stride time variability in the OFF state in combination with faster walking cadence appears to be the most significant predictor of future falls, superior to clinical predictors. Incorporating instrumented gait measures into the baseline assessment battery as well as accounting for both OFF and ON medication states might improve future fall prediction in PD patients. However, instrumented testing in the OFF state is not routinely performed in clinical practice and has not been used in the development of fall prevention programs in PD. New assessment methods for daylong monitoring of gait, balance and falls are thus required to more effectively address the risk of falling in PD patients.

  6. The Instrumentation Channel for the MUCOOL Experiment

    International Nuclear Information System (INIS)

    Kahn, S. A.; Guler, H.; Lu, C.; McDonald, K. T.; Prebys, E. J.; Vahsen, S. E.

    1999-01-01

    The MUCOOL facility is proposed to examine cooling techniques that could be used in a muon collider. The solenoidal beam channels before and after the cooling test section are instrumented to measure the beam emittance. This instrumentation channel includes a bent solenoid to provide dispersion and time projection chambers to measure the beam variables before and after the bend. The momentum of the muons is obtained from a measurement of the drift of the muon trajectory in the bent solenoid. The timing measurement is made by determining the phase from the momentum of the muon before and after it traverses the RF cavities, or by the use of a fast Cherenkov chamber. A computer simulation of the muon solenoidal channel is performed using GEANT. This study evaluates the resolution of the beam emittance measurement for MUCOOL.

  7. “You Can’t Play a Sad Song on the Banjo:” Acoustic Factors in the Judgment of Instrument Capacity to Convey Sadness

    Directory of Open Access Journals (Sweden)

    David Huron

    2014-05-01

    Full Text Available Forty-four Western-enculturated musicians completed two studies. The first group was asked to judge the relative sadness of forty-four familiar Western instruments. An independent group was asked to assess a number of acoustical properties for those same instruments. Using the estimated acoustical properties as predictor variables in a multiple regression analysis, a significant correlation was found between those properties known to contribute to sad prosody in speech and the judged sadness of the instruments. The best predictor variable was the ability of the instrument to make small pitch movements. Other variables investigated included the darkness of the timbre, the ability to play low pitches, the ability to play quietly, and the capacity of the instrument to "mumble." Four of the acoustical factors were found to exhibit a considerable amount of shared variance, suggesting that they may originate in a common underlying factor. It is suggested that the shared proximal cause of these acoustical features may be low physical energy.
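
    A minimal sketch of the analysis design described: judged sadness regressed on several estimated acoustic properties via ordinary least squares. The ratings and coefficients below are random placeholders; the study's actual data and effect sizes are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)
        n_instruments = 44
        # Placeholder predictor ratings (pitch-bend ability, timbral darkness,
        # low-register access, ability to play quietly, "mumble" capacity).
        X = rng.normal(size=(n_instruments, 5))
        sadness = (X @ np.array([0.6, 0.3, 0.2, 0.25, 0.15])
                   + rng.normal(0, 0.5, n_instruments))

        # Ordinary least squares with an intercept, plus an R^2 summary.
        A = np.column_stack([np.ones(n_instruments), X])
        coef, *_ = np.linalg.lstsq(A, sadness, rcond=None)
        pred = A @ coef
        r2 = 1 - np.sum((sadness - pred) ** 2) / np.sum((sadness - sadness.mean()) ** 2)
        print("coefficients:", coef[1:], " R^2:", r2)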

  8. Environmental Variables and Pupils' Academic Performance in ...

    African Journals Online (AJOL)

    This causal-comparative study was carried out to investigate the influence of environmental variables on pupils' academic performance in primary science in Cross River State, Nigeria. Three hypotheses were formulated to guide the study. Two instruments were used to collect data for the study namely: environmental ...

  9. Investigation of Music Student Efficacy as Influenced by Age, Experience, Gender, Ethnicity, and Type of Instrument Played in South Carolina

    Science.gov (United States)

    White, Norman

    2010-01-01

    The purpose of this research study was to quantitatively examine South Carolina high school instrumental music students' self-efficacy as measured by the Generalized Self-Efficacy (GSE) instrument (Schwarzer & Jerusalem, 1993). The independent variables of age, experience, gender, ethnicity, and type of instrument played were correlated with…

  10. REFERQUAL: a pilot study of a new service quality assessment instrument in the GP exercise referral scheme setting

    Science.gov (United States)

    Cock, Don; Adams, Iain C; Ibbetson, Adrian B; Baugh, Phil

    2006-01-01

    Background The development of an instrument accurately assessing service quality in the GP Exercise Referral Scheme (ERS) industry could potentially inform scheme organisers of the factors that affect adherence rates, leading to the implementation of strategic interventions aimed at reducing client drop-out. Methods A modified version of the SERVQUAL instrument was designed for use in the ERS setting and subsequently piloted amongst 27 ERS clients. Results Test-retest correlations were calculated via Pearson's 'r' or Spearman's 'rho', depending on whether the variables were normally distributed, and showed a significant (mean r = 0.957, SD = 0.02, p < 0.05; mean rho = 0.934, SD = 0.03, p < 0.05) relationship between all items within the questionnaire. In addition, satisfactory internal consistency was demonstrated via Cronbach's 'α'. Furthermore, clients responded favourably towards the usability, wording and applicability of the instrument's items. Conclusion REFERQUAL shows promise as a suitable tool for future evaluation of service quality within the ERS community. Future research should further assess the validity and reliability of this instrument through the use of confirmatory factor analysis to scrutinise the proposed dimensional structure. PMID:16725021
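
    A minimal sketch of the two reliability statistics reported: per-item test-retest correlations and Cronbach's α. The responses below are simulated and the item count is an assumption; only the sample size of 27 clients comes from the abstract.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(3)
        n_clients, n_items = 27, 10                       # item count assumed
        test = rng.normal(4, 1, (n_clients, n_items))
        retest = test + rng.normal(0, 0.2, test.shape)    # stable re-administration

        for i in range(n_items):
            r, _ = pearsonr(test[:, i], retest[:, i])     # spearmanr(...) if non-normal
            print(f"item {i + 1}: test-retest r = {r:.3f}")

        def cronbach_alpha(items):
            # Classic item-variance formula for internal consistency.
            k = items.shape[1]
            return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        print("Cronbach's alpha:", cronbach_alpha(test))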

  11. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    Science.gov (United States)

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is progressing quickly. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation - to account for the variability of the reference test results, here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and we provide estimates thereof.
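
    A minimal sketch of how false positive and false negative rates can be derived from repeat classifications, using a handful of invented records (an EC3 value marks a positive test; NaN marks a non-sensitizing result). The NICEATM database and the paper's actual analysis are not reproduced.

        import numpy as np
        import pandas as pd

        # Invented repeat-test records: an EC3 value (%) marks a positive call.
        repeats = pd.DataFrame({
            "substance": ["A", "A", "A", "B", "B", "C", "C", "C"],
            "ec3":       [1.2, 2.5, np.nan, np.nan, np.nan, 9.0, 14.0, 11.0],
        })
        repeats["positive"] = repeats["ec3"].notna()

        # Take the majority call per substance as the reference classification
        # and count how often individual repeats disagree with it.
        ref = repeats.groupby("substance")["positive"].agg(lambda s: s.mode()[0])
        merged = repeats.join(ref.rename("reference"), on="substance")
        fp_rate = merged.loc[~merged["reference"], "positive"].mean()
        fn_rate = (~merged.loc[merged["reference"], "positive"]).mean()
        print(f"false positive rate: {fp_rate:.2f}, false negative rate: {fn_rate:.2f}")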

  12. Instrumentation Cables Test Plan

    Energy Technology Data Exchange (ETDEWEB)

    Muna, Alice Baca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Chris Bensdotter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    A fire at a nuclear power plant (NPP) has the potential to damage structures, systems, and components important to safety, if not promptly detected and suppressed. At Browns Ferry Nuclear Power Plant on March 22, 1975, a fire in the reactor building damaged electrical power and control systems. Damage to instrumentation cables impeded the function of both normal and standby reactor coolant systems, and degraded the operators' plant monitoring capability. This event resulted in additional NRC involvement with utilities to ensure that NPPs are properly protected from fire as intended by the NRC principal design criteria (i.e., General Design Criterion 3, Fire Protection). Current guidance and methods for both deterministic and performance-based approaches typically make conservative (bounding) assumptions regarding the fire-induced failure modes of instrumentation cables and those failure modes' effects on component and system response. Numerous fire testing programs have been conducted in the past to evaluate the failure modes and effects of electrical cables exposed to severe thermal conditions. However, that testing has primarily focused on control circuits, with only a limited number of tests performed on instrumentation circuits. In 2001, the Nuclear Energy Institute (NEI) and the Electric Power Research Institute (EPRI) conducted a series of cable fire tests designed to address specific aspects of the cable failure and circuit fault issues of concern [1]. The NRC was invited to observe and participate in that program. The NRC sponsored Sandia National Laboratories to support this participation, who, among other things, added a 4-20 mA instrumentation circuit and instrumentation cabling to six of the tests. Although limited, one insight drawn from those instrumentation circuit tests was that the failure characteristics appeared to depend on the cable insulation material. The results showed that for thermoset insulated cables, the instrument reading tended to drift

  13. Ergonomic investigation of weight distribution of laparoscopic instruments.

    Science.gov (United States)

    Lin, Chiuhsiang Joe; Chen, Hung-Jen; Lo, Ying-Chu

    2011-06-01

    Laparoscopic surgery procedures require highly specialized, visually controlled movements. Investigations of industrial applications indicate that the length as well as the weight of hand-held tools substantially affects movement time (MT). Different weight distributions may have similar effects on long-shafted laparoscopic instruments when performing surgical procedures. For this reason, the current experiment aimed at finding direct evidence of the weight distribution effect in an accurate task. Ten right-handed subjects performed continuous Fitts' pointing tasks using a long laparoscopic instrument. The factors and levels were target width (2.5, 3, 3.5, and 4 cm), target distance (14, 23, and 37 cm), and weight distribution (uniform, front, middle, and rear). Weight distribution was varied by attaching lead chips to the laparoscopic instrument. MT, error rate, and throughput (TP) were recorded as dependent variables. There were significant differences between the weight distributions in MT and in TP. The middle position was found to require the least time to manipulate the laparoscopic instrument in pointing tasks and also obtained the highest TP. These analyses and findings point to a design direction for the ergonomics and usability of long hand-held tools such as the laparoscopic instrument in this study. To optimize efficiency in using these tools, consideration of a better weight design is important and should not be neglected.
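
    For context, the sketch below computes a Fitts'-law index of difficulty and throughput for each distance x width condition of the experiment. The Shannon formulation and the placeholder movement-time model are assumptions; only the distances and widths come from the abstract.

        import numpy as np

        # Shannon formulation of the index of difficulty (one common convention).
        def index_of_difficulty(distance_cm, width_cm):
            return np.log2(distance_cm / width_cm + 1.0)

        distances = np.array([14.0, 23.0, 37.0])     # cm, from the abstract
        widths = np.array([2.5, 3.0, 3.5, 4.0])      # cm, from the abstract
        rng = np.random.default_rng(4)
        for d in distances:
            for w in widths:
                id_bits = index_of_difficulty(d, w)
                mt = 0.3 + 0.25 * id_bits + rng.normal(0, 0.02)  # placeholder MT
                print(f"D={d:4.0f} W={w:3.1f}  ID={id_bits:4.2f} bits"
                      f"  TP={id_bits / mt:4.2f} bits/s")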

  14. A Comparison of seismic instrument noise coherence analysis techniques

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.

    2011-01-01

    The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
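
    A minimal sketch of a two-sensor coherence-based incoherent-noise estimate (in the spirit of Holcomb, 1989) on synthetic data: two co-located sensors share a common ground-motion signal, and the incoherent residual approximates the self-noise. The signal model and noise levels are invented.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(5)
        fs, n = 100.0, 2 ** 16
        ground = rng.normal(size=n)                 # shared ground motion (synthetic)
        x1 = ground + 0.1 * rng.normal(size=n)      # sensor 1 = signal + self-noise
        x2 = ground + 0.1 * rng.normal(size=n)      # co-located sensor 2

        f, p11 = signal.welch(x1, fs=fs, nperseg=4096)
        _, coh = signal.coherence(x1, x2, fs=fs, nperseg=4096)  # magnitude-squared
        self_noise = p11 * (1.0 - coh)              # incoherent-noise estimate
        print("median self-noise PSD estimate:", np.median(self_noise))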

  15. Development of α and/or β activity aerosol instrumentation

    International Nuclear Information System (INIS)

    Lu Zhengyong; Li Aiwu; Gou Quanlu

    1996-01-01

    A radioactive aerosol instrumentation was recently developed for measuring the α and/or β activity of artificial radioactive aerosols produced in nuclear facilities. The instrumentation can discriminate against natural radioactive aerosols resulting from radon and thoron daughters, and it measures, in real time and without delay, the artificial α and β activity collected on a filter by pumping aerosols through it. An energy discrimination and compensation method is used to eliminate the influence of natural α-radioactivity aerosols. To minimize the influence of natural β-radioactivity aerosols, a method measuring the α/β ratio of natural aerosols is also used in the instrument. With these improved methods for eliminating the influence of the natural α and β aerosol background, both α and β artificial activities in aerosol filter samples can be monitored simultaneously. The instrumentation is appropriate for monitoring α and/or β artificial radioactive aerosols.
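
    A minimal sketch of the α/β-ratio compensation idea, with invented count values: the natural β contribution is inferred from the natural α count and a previously measured natural α/β ratio, then subtracted from the totals.

        # All count values and the ratio below are hypothetical illustrations.
        total_alpha = 1500.0          # total alpha counts from filter sample
        total_beta = 5200.0           # total beta counts from filter sample
        natural_alpha = 1400.0        # natural component from energy window
        natural_ab_ratio = 0.45       # measured alpha/beta ratio of natural aerosols

        natural_beta = natural_alpha / natural_ab_ratio
        artificial_alpha = total_alpha - natural_alpha
        artificial_beta = total_beta - natural_beta
        print(f"artificial alpha: {artificial_alpha:.0f}, beta: {artificial_beta:.0f}")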

  16. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied to Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate the parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolution is limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noise and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurements by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
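
    The sketch below illustrates the instrumental variable idea on a simulated first-order equivalent-circuit response in discrete time: plain least squares is biased by output noise entering the regressor, while instruments built from a noise-free auxiliary model output reduce that bias. The model, noise levels and two-step scheme are simplifications for illustration, not the paper's continuous-time estimator.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 3000
        a_true, b_true = 0.95, 0.05          # discretized 1st-order RC dynamics
        u = rng.normal(size=n)               # input current (synthetic)
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = a_true * x[k - 1] + b_true * u[k - 1]
        y = x + 0.05 * rng.normal(size=n)    # measured voltage with sensor noise

        phi = np.column_stack([y[:-1], u[:-1]])       # regressor with noisy output
        ls = np.linalg.lstsq(phi, y[1:], rcond=None)[0]

        # Instrumental variables from a noise-free auxiliary model output.
        x_aux = np.zeros(n)
        for k in range(1, n):
            x_aux[k] = ls[0] * x_aux[k - 1] + ls[1] * u[k - 1]
        zeta = np.column_stack([x_aux[:-1], u[:-1]])
        iv = np.linalg.solve(zeta.T @ phi, zeta.T @ y[1:])
        print("LS:", ls, " IV:", iv, " truth:", [a_true, b_true])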

  17. Parameter Estimation of a Closed Loop Coupled Tank Time Varying System using Recursive Methods

    International Nuclear Information System (INIS)

    Basir, Siti Nora; Yussof, Hanafiah; Shamsuddin, Syamimi; Selamat, Hazlina; Zahari, Nur Ismarrubie

    2013-01-01

    This project investigates the direct identification of a closed-loop plant using a discrete-time approach. The use of Recursive Least Squares (RLS), Recursive Instrumental Variable (RIV) and Recursive Instrumental Variable with Centre-Of-Triangle (RIV + COT) in the parameter estimation of a closed-loop time-varying system has been considered. The algorithms were applied to a coupled tank system employing a covariance resetting technique, where the times at which parameter changes occur are unknown. The performances of all the parameter estimation methods, RLS, RIV and RIV + COT, were compared, and the estimation of the system with output corrupted by white and coloured noise was investigated. The covariance resetting technique executed successfully when the parameters changed. RIV + COT gives better estimates than RLS and RIV in terms of convergence and maximum overshoot.
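
    A minimal sketch of recursive least squares with covariance resetting on a simulated plant whose parameters jump mid-run. The plant, noise level and reset trigger are invented; the RIV and RIV + COT variants used in the study are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 2000
        u = rng.normal(size=n)
        y = np.zeros(n)
        theta_hat = np.zeros(2)
        P = 1000.0 * np.eye(2)                    # large initial covariance

        for k in range(1, n):
            a, b = (0.9, 0.1) if k < n // 2 else (0.7, 0.3)   # parameter jump
            y[k] = a * y[k - 1] + b * u[k - 1] + 0.01 * rng.normal()

            phi = np.array([y[k - 1], u[k - 1]])              # regressor
            gain = P @ phi / (1.0 + phi @ P @ phi)
            eps = y[k] - phi @ theta_hat                      # prediction error
            theta_hat = theta_hat + gain * eps
            P = P - np.outer(gain, phi @ P)

            if abs(eps) > 0.1:                    # crude change detector -> reset
                P = 1000.0 * np.eye(2)

        print("final estimates:", theta_hat, " true final values:", (0.7, 0.3))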

  18. Conceptual design of safety instrumentation for PFBR

    International Nuclear Information System (INIS)

    Muralikrishna, G.; Seshadri, U.; Raghavan, K.

    1996-01-01

    Instrumentation systems enable monitoring of the process, which in turn enables control and shutdown of the process as per the requirements. Safety instrumentation, because of its vital importance, has a stringent role and needs to be designed methodically. This paper presents the details of the conceptual design for PFBR. (author). 4 figs, 3 tabs

  19. Injection Methods and Instrumentation for Serial X-ray Free Electron Laser Experiments

    Science.gov (United States)

    James, Daniel

    Scientists have used X-rays to study biological molecules for nearly a century. Now with the X-ray free electron laser (XFEL), new methods have been developed to advance structural biology. These new methods include serial femtosecond crystallography, single particle imaging, solution scattering, and time resolved techniques. The XFEL is characterized by high intensity pulses, which are only about 50 femtoseconds in duration. The intensity allows for scattering from microscopic particles, while the short pulses offer a way to outrun radiation damage. XFELs are powerful enough to obliterate most samples in a single pulse. While this allows for a "diffract and destroy" methodology, it also requires instrumentation that can position microscopic particles into the X-ray beam (which may also be microscopic), continuously renew the sample after each pulse, and maintain sample viability during data collection. Typically these experiments have used liquid microjets to continuously renew sample. The high flow rate associated with liquid microjets requires large amounts of sample, most of which runs to waste between pulses. An injector designed to stream a viscous gel-like material called lipidic cubic phase (LCP) was developed to address this problem. LCP, commonly used as a growth medium for membrane protein crystals, lends itself to low flow rate jetting and so reduces the amount of sample wasted significantly. This work discusses sample delivery and injection for XFEL experiments. It reviews the liquid microjet method extensively, and presents the LCP injector as a novel device for serial crystallography, including detailed protocols for the LCP injector and anti-settler operation.

  20. An overview of the laser ranging method of space laser altimeter

    Science.gov (United States)

    Zhou, Hui; Chen, Yuwei; Hyyppä, Juha; Li, Song

    2017-11-01

    A space laser altimeter is an active remote sensing instrument used to measure the topography of the Earth, the Moon and planets. The space laser altimeter determines the range between the instrument and the laser footprint by measuring the round-trip time of a laser pulse. The return pulse reflected from the ground surface is gathered by the receiver of the altimeter; its pulse width and amplitude vary with the ground relief. Meanwhile, several kinds of noise superimposed on the return pulse signal degrade its signal-to-noise ratio. To eliminate the influence of these factors, which cause range walk and range uncertainty, reliable laser ranging methods need to be implemented to obtain high-precision range results. Based on typical space laser altimeters of the past few decades, various ranging methods are expounded in detail according to the operational principle of the instruments and the timing method. By illustrating the concrete procedure of determining the time of flight of the laser pulse, this overview compares the technologies employed in previous and ongoing research programs and anticipates innovative technology for future space laser altimeters.
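
    As a concrete illustration of the round-trip timing principle, the sketch below simulates a noisy Gaussian return pulse and recovers the range with centroid timing, one approach for reducing range walk. The pulse shape, noise level and 500 km range are invented for the example.

        import numpy as np

        C = 299_792_458.0                                 # speed of light (m/s)
        rng = np.random.default_rng(8)

        # Simulate a 1 ns-sampled window around the expected return of a pulse
        # reflected from ~500 km away; all values are illustrative.
        t0 = 2 * 500_000.0 / C                            # true round-trip time (s)
        t = t0 + np.arange(-2e-6, 2e-6, 1e-9)             # 4 us window, 1 ns bins
        pulse = np.exp(-0.5 * ((t - t0) / 5e-9) ** 2)     # 5 ns Gaussian return
        waveform = pulse + 0.02 * rng.normal(size=t.size) # additive detector noise

        # Centroid (centre-of-mass) timing above a ~3-sigma noise threshold
        # reduces the range walk caused by amplitude variations.
        w = np.clip(waveform - 3 * 0.02, 0, None)
        t_est = np.sum(w * t) / np.sum(w)
        print(f"range = {C * t_est / 2:.3f} m (truth {C * t0 / 2:.1f} m)")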